Journal articles on the topic "Mitigation des biais"

Below are the top 50 journal articles on research into the topic "Mitigation des biais".

Where a publication's metadata includes an abstract, it is shown below the citation.

Journal articles from many scientific fields are represented, so you can compile an accurate bibliography.

1. Philipps, Nathalia, Pierre P. Kastendeuch, and Georges Najjar. "Analyse de la variabilité spatio-temporelle de l’îlot de chaleur urbain à Strasbourg (France)". Climatologie 17 (2020): 10. http://dx.doi.org/10.1051/climat/202017010.

Abstract:
An analysis of the temporal dynamics and spatial distribution of Strasbourg's urban heat island (UHI) was carried out using a network of weather stations spread across the Strasbourg metropolitan area. The UHI's strong temporal variability shows up not only in its daily thermal behaviour but also in large differences in intensity across seasons and weather types. Favoured by weak winds and strong sunshine, the UHI is particularly intense on fine summer days, with mean values locally reaching a gain of five degrees at the nocturnal peak. Spatially, the disparities between stations reveal a heterogeneity of the UHI essentially linked to the intrinsic variability of the urban environment. Statistical analysis highlighted the role of several morphological and land-use parameters, which fully justifies classifying the Strasbourg Eurometropolis into Local Climate Zones (LCZ). Vegetation emerges as a preeminent mitigation factor, especially where it is present in significant amounts within densely built, highly mineralized areas. As for urban-geometry parameters, the highest mean UHI intensities are systematically measured in the most densely built areas. A new UHI mapping methodology based on the parameters underlying the LCZs is proposed; the resulting map yields meaningful UHI intensity values at any point of the territory.

2. Rahmawati, Fitriana, and Fitri Santi. "A Literature Review on the Influence of Availability Bias and Overconfidence Bias on Investor Decisions". East Asian Journal of Multidisciplinary Research 2, no. 12 (December 30, 2023): 4961–76. http://dx.doi.org/10.55927/eajmr.v2i12.6896.

Abstract:
This research examines the impact of Availability Bias and Overconfidence Bias on investment decisions. Utilizing a literature review approach and VOSviewer analysis, this study explores how these biases affect investor decision-making processes and potential mitigation strategies. The objective is to highlight the significance of understanding and mitigating these biases in achieving more rational investment decisions. The findings underscore the potential negative effects of both biases, leading to overconfident and less rational investment decisions. Awareness of their interplay is crucial, as they reinforce each other's negative effects on investment decision-making. Overcoming these cognitive biases is essential for more effective investment decision-making. This research contributes insights into mitigating biases, aiding in a more balanced and rational approach to investment decision-making.

3. Djebrouni, Yasmine, Nawel Benarba, Ousmane Touat, Pasquale De Rosa, Sara Bouchenak, Angela Bonifati, Pascal Felber, Vania Marangozova, and Valerio Schiavoni. "Bias Mitigation in Federated Learning for Edge Computing". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 4 (December 19, 2023): 1–35. http://dx.doi.org/10.1145/3631455.

Abstract:
Federated learning (FL) is a distributed machine learning paradigm that enables data owners to collaborate on training models while preserving data privacy. As FL effectively leverages decentralized and sensitive data sources, it is increasingly used in ubiquitous computing including remote healthcare, activity recognition, and mobile applications. However, FL raises ethical and social concerns as it may introduce bias with regard to sensitive attributes such as race, gender, and location. Mitigating FL bias is thus a major research challenge. In this paper, we propose Astral, a novel bias mitigation system for FL. Astral provides a novel model aggregation approach to select the most effective aggregation weights to combine FL clients' models. It guarantees a predefined fairness objective by constraining bias below a given threshold while keeping model accuracy as high as possible. Astral handles the bias of single and multiple sensitive attributes and supports all bias metrics. Our comprehensive evaluation on seven real-world datasets with three popular bias metrics shows that Astral outperforms state-of-the-art FL bias mitigation techniques in terms of bias mitigation and model accuracy. Moreover, we show that Astral is robust against data heterogeneity and scalable in terms of data size and number of FL clients. Astral's code base is publicly available.
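
To make the aggregation-weight idea concrete, here is a minimal sketch of fairness-constrained aggregation (this is not Astral's published algorithm; the bias metric, the grid search, and all names are illustrative assumptions): search the client weight simplex for the weights that maximize validation accuracy while keeping a demographic parity gap below a threshold.

```python
# Illustrative sketch of fairness-constrained aggregation-weight selection.
import itertools
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def select_weights(client_probs, y_true, group, bias_threshold=0.05, steps=10):
    """client_probs: (n_clients, n_samples), each row a client's P(y=1).
    Coarse grid over the weight simplex; fine for a handful of clients."""
    n = client_probs.shape[0]
    best_w, best_acc = None, -1.0
    for combo in itertools.product(range(steps + 1), repeat=n):
        if sum(combo) != steps:
            continue
        w = np.array(combo) / steps          # weights sum to 1
        y_pred = (w @ client_probs > 0.5).astype(int)
        acc = (y_pred == y_true).mean()
        if demographic_parity_gap(y_pred, group) <= bias_threshold and acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc
```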

4. Gallaher, Joshua P., Alexander J. Kamrud, and Brett J. Borghetti. "Detection and Mitigation of Inefficient Visual Searching". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 47–51. http://dx.doi.org/10.1177/1071181320641015.

Abstract:
A commonly known cognitive bias is confirmation bias: the overweighting of evidence supporting a hypothesis and the underweighting of evidence countering that hypothesis. Due to high-stress and fast-paced operations, military decisions can be affected by confirmation bias. One military decision task prone to confirmation bias is a visual search. During a visual search, the operator scans an environment to locate a specific target. If confirmation bias causes the operator to scan the wrong portion of the environment first, the search is inefficient. This study has two primary goals: 1) detect inefficient visual search using machine learning and Electroencephalography (EEG) signals, and 2) apply various mitigation techniques in an effort to improve the efficiency of searches. Early findings are presented showing how machine learning models can use EEG signals to detect when a person might be performing an inefficient visual search. Four mitigation techniques were evaluated: a nudge which indirectly slows search speed, a hint on how to search efficiently, an explanation for why the participant was receiving a nudge, and direct instructions to search efficiently. Evaluation revealed the nudge and hint techniques to be the most effective mitigations.

5. Lee, Yu-Hao, Norah E. Dunbar, Claude H. Miller, Brianna L. Lane, Matthew L. Jensen, Elena Bessarabova, Judee K. Burgoon, et al. "Training Anchoring and Representativeness Bias Mitigation Through a Digital Game". Simulation & Gaming 47, no. 6 (August 20, 2016): 751–79. http://dx.doi.org/10.1177/1046878116662955.

Abstract:
Objective. Humans systematically make poor decisions because of cognitive biases. Can digital games train people to avoid cognitive biases? The goal of this study is to investigate the affordance of different educational media in training people about cognitive biases and to mitigate cognitive biases within their decision-making processes. Method. A between-subject experiment was conducted to compare a digital game, a traditional slideshow, and a combined condition in mitigating two types of cognitive biases: anchoring bias and representativeness bias. We measured both immediate effects and delayed effects after four weeks. Results. The digital game and slideshow conditions were effective in mitigating cognitive biases immediately after the training, but the effects decayed after four weeks. By providing the basic knowledge through the slideshow, then allowing learners to practice bias-mitigation techniques in the digital game, the combined condition was most effective at mitigating the cognitive biases both immediately and after four weeks.

6. Devasenapathy, K., and Arun Padmanabhan. "Uncovering Bias: Exploring Machine Learning Techniques for Detecting and Mitigating Bias in Data – A Literature Review". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 30, 2023): 776–81. http://dx.doi.org/10.17762/ijritcc.v11i9.8965.

Abstract:
The presence of Bias in models developed using machine learning algorithms has emerged as a critical issue. This literature review explores the topic of uncovering the existence of bias in data and the application of techniques for detecting and mitigating Bias. The review provides a comprehensive analysis of the existing literature, focusing on pre-processing techniques, post-pre-processing techniques, and fairness constraints employed to uncover and address the existence of Bias in machine learning models. The effectiveness, limitations, and trade-offs of these techniques are examined, highlighting their impact on advocating fairness and equity in decision-making processes. The methodology consists of two key steps: data preparation and bias analysis, followed by machine learning model development and evaluation. In the data preparation phase, the dataset is analyzed for biases and pre-processed using techniques like reweighting or relabeling to reduce bias. In the model development phase, suitable algorithms are selected, and fairness metrics are defined and optimized during the training process. The models are then evaluated using performance and fairness measures and the best-performing model is chosen. The methodology ensures a systematic exploration of machine learning techniques to detect and mitigate bias, leading to more equitable decision-making. The review begins by examining the techniques of pre-processing, which involve cleaning the data, selecting the features, feature engineering, and sampling. These techniques play an important role in preparing the data to reduce bias and promote fairness in machine learning models. The analysis highlights various studies that have explored the effectiveness of these techniques in uncovering and mitigating bias in data, contributing to the development of more equitable and unbiased machine learning models. Next, the review delves into post-pre-processing techniques that focus on detecting and mitigating bias after the initial data preparation steps. These techniques include bias detection methods that assess the disparate impact or disparate treatment in model predictions, as well as bias mitigation techniques that modify model outputs to achieve fairness across different groups. The evaluation of these techniques, their performance metrics, and potential trade-offs between fairness and accuracy are discussed, providing insights into the challenges and advancements in bias mitigation. Lastly, the review examines fairness constraints, which involve the imposition of rules or guidelines on machine learning algorithms to ensure fairness in predictions or decision-making processes. The analysis explores different fairness constraints, such as demographic parity, equalized odds, and predictive parity, and their effectiveness in reducing bias and advocating fairness in machine learning models. Overall, this literature review provides a comprehensive understanding of the techniques employed to uncover and mitigate the existence of bias in machine learning models. By examining pre-processing techniques, post-pre-processing techniques, and fairness constraints, the review contributes to the development of more fair and unbiased machine learning models, fostering equity and ethical decision-making in various domains. 
By examining relevant studies, this review provides insights into the effectiveness and limitations of various pre-processing techniques for bias detection and mitigation via Pre-processing, Adversarial learning, Fairness Constraints, and Post-processing techniques.
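
As a concrete illustration of the reweighting pre-processing technique mentioned above, here is a minimal sketch following the classic Kamiran–Calders scheme (one of many variants the review covers): each (group, label) cell is weighted by P(group)·P(label)/P(group, label), so that the sensitive attribute and the label look statistically independent under the weighted distribution.

```python
import numpy as np

def reweighting_weights(group, label):
    """Kamiran-Calders reweighing: weight each (group, label) cell by
    P(group) * P(label) / P(group, label)."""
    group, label = np.asarray(group), np.asarray(label)
    w = np.ones(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            if cell.any():
                expected = (group == g).mean() * (label == y).mean()
                w[cell] = expected / cell.mean()  # >1 for under-represented cells
    return w

# The weights can be passed to most learners, e.g.
# LogisticRegression().fit(X, y, sample_weight=reweighting_weights(g, y))
```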

7. Chu, Charlene, Simon Donato-Woodger, Shehroz Khan, Kathleen Leslie, Tianyu Shi, Rune Nyrup, and Amanda Grenier. "STRATEGIES TO MITIGATE MACHINE LEARNING BIAS AFFECTING OLDER ADULTS: RESULTS FROM A SCOPING REVIEW". Innovation in Aging 7, Supplement_1 (December 1, 2023): 717–18. http://dx.doi.org/10.1093/geroni/igad104.2325.

Abstract:
Digital ageism, defined as age-related bias in artificial intelligence (AI) and technological systems, has emerged as a significant concern for its potential impact on society, health, equity, and older people’s well-being. This scoping review aims to identify mitigation strategies used in research studies to address age-related bias in machine learning literature. We conducted a scoping review following Arksey & O’Malley’s methodology, and completed a comprehensive search strategy of five databases (Web of Science, CINAHL, EMBASE, IEEE Xplore, and ACM digital library). Articles were included if there was an AI application, age-related bias, and the use of a mitigation strategy. Efforts to mitigate digital ageism were sparse: our search generated 7595 articles, but only a limited number of them met the inclusion criteria. Upon screening, we identified only nine papers which attempted to mitigate digital ageism. Of these, eight involved computer vision models (facial, age prediction, brain age) while one predicted activity based on accelerometer and vital sign measurements. Three broad categories of approaches to mitigating bias in AI were identified: i) sample modification: creating a smaller, more balanced sample from the existing dataset; ii) data augmentation: modifying images to create more training data from the existing datasets without adding additional images; and iii) application of statistical or algorithmic techniques to reduce bias. Digital ageism is a newly-established topic of research, and can affect machine learning models through multiple pathways. Our results advance research on digital ageism by presenting the challenges and possibilities for mitigating digital ageism in machine learning models.

8. Featherston, Rebecca Jean, Aron Shlonsky, Courtney Lewis, My-Linh Luong, Laura E. Downie, Adam P. Vogel, Catherine Granger, Bridget Hamilton, and Karyn Galvin. "Interventions to Mitigate Bias in Social Work Decision-Making: A Systematic Review". Research on Social Work Practice 29, no. 7 (December 23, 2018): 741–52. http://dx.doi.org/10.1177/1049731518819160.

Abstract:
Purpose: This systematic review synthesized evidence supporting interventions aimed at mitigating cognitive bias associated with the decision-making of social work professionals. Methods: A systematic search was conducted within 10 social services and health-care databases. Review authors independently screened studies in duplicate against prespecified inclusion criteria, and two review authors undertook data extraction and quality assessment. Results: Four relevant studies were identified. Because these studies were too heterogeneous to conduct meta-analyses, results are reported narratively. Three studies focused on diagnostic decisions within mental health and one considered family reunification decisions. Two strategies were reportedly effective in mitigating error: a nomogram tool and a specially designed online training course. One study assessing a consider-the-opposite approach reported no effect on decision outcomes. Conclusions: Cognitive bias can impact the accuracy of clinical reasoning. This review highlights the need for research into cognitive bias mitigation within the context of social work practice decision-making.

9. Erkmen, Cherie Parungo, Lauren Kane, and David T. Cooke. "Bias Mitigation in Cardiothoracic Recruitment". Annals of Thoracic Surgery 111, no. 1 (January 2021): 12–15. http://dx.doi.org/10.1016/j.athoracsur.2020.07.005.

10. Vejsbjerg, Inge, Elizabeth M. Daly, Rahul Nair, and Svetoslav Nizhnichenkov. "Interactive Human-Centric Bias Mitigation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23838–40. http://dx.doi.org/10.1609/aaai.v38i21.30582.

Abstract:
Bias mitigation algorithms differ in their definition of bias and how they go about achieving that objective. Bias mitigation algorithms impact different cohorts differently and allowing end users and data scientists to understand the impact of these differences in order to make informed choices is a relatively unexplored domain. This demonstration presents an interactive bias mitigation pipeline that allows users to understand the cohorts impacted by their algorithm choice and provide feedback in order to provide a bias mitigated pipeline that most aligns with their goals.

11. Bulut, Solmaz, Mehdi Rostami, Shahla Shokatpour Lotfi, Naser Jafarzadeh, Sefa Bulut, Baidi Bukhori, Seyed Hadi Seyed Alitabar, Zohreh Zadhasn, and Farzaneh Mardani. "The Impact of Counselor Bias in Assessment: A Comprehensive Review and Best Practices". Journal of Assessment and Research in Applied Counseling 5, no. 4 (2023): 89–103. http://dx.doi.org/10.61838/kman.jarac.5.4.11.

Abstract:
Objective: This review article aims to comprehensively explore the impact of counselor bias on assessment processes within the counseling profession. It seeks to identify the types and manifestations of biases, assess their implications on counseling outcomes, and recommend best practices for mitigating these biases to promote more equitable counseling practices. Methods and Materials: A systematic literature review was conducted, examining peer-reviewed articles, books, and conference proceedings published between 1997 and 2023. Databases such as PsycINFO, PubMed, ERIC, and Google Scholar were searched using keywords related to counselor bias, psychological assessment, and best practices in bias mitigation. The selection criteria focused on studies that explicitly addressed counselor biases in the context of assessment practices. Theoretical frameworks relevant to understanding and addressing counselor bias, such as Implicit Association Theory, Social Cognition Theory, and the Multicultural Counseling Competency Framework, were also reviewed to provide a conceptual backdrop for the analysis. Findings: The review reveals that counselor bias—spanning from pre-assessment and in-assessment to post-assessment phases—significantly undermines the objectivity and fairness of psychological assessments. These biases, deeply rooted in societal stereotypes and personal prejudices, manifest in various forms, including racial, ethnic, gender, and socioeconomic biases. Theoretical frameworks highlight the complexity of counselor biases and underscore the importance of self-awareness, reflective practice, and multicultural competencies in mitigating their impact. Best practices identified include enhancing counselor self-awareness, integrating comprehensive bias-awareness training in counselor education, and implementing systemic changes to support equity in counseling practices. Conclusion: Counselor bias presents a pervasive challenge within the counseling profession, impacting the validity and efficacy of psychological assessments. Addressing this issue requires a concerted effort that encompasses individual, educational, and systemic interventions. By adopting best practices focused on bias mitigation and promoting cultural sensitivity, the counseling profession can move towards more equitable and effective practices. Future research should aim to explore the effectiveness of specific interventions and expand the understanding of biases beyond the traditionally examined dimensions.

12. Sripathi, Madhavi. "Mitigating Data Bias in Healthcare AI: Strategies and Impact on Patient Outcomes". Journal of Advanced Research in Quality Control & Management 08, no. 02 (November 10, 2023): 01–05. http://dx.doi.org/10.24321/2582.3280.202302.

13. Singh, Richa, Puspita Majumdar, Surbhi Mittal, and Mayank Vatsa. "Anatomizing Bias in Facial Analysis". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12351–58. http://dx.doi.org/10.1609/aaai.v36i11.21500.

Abstract:
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups. Due to its impact on society, it has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals. This has led to research in the identification and mitigation of bias in AI systems. In this paper, we encapsulate bias detection/estimation and mitigation algorithms for facial analysis. Our main contributions include a systematic review of algorithms proposed for understanding bias, along with a taxonomy and extensive overview of existing bias mitigation algorithms. We also discuss open challenges in the field of biased facial analysis.

14. Gill, Michael J., and Alexandra Pizzuto. "Unwilling to Un-Blame: Whites Who Dismiss Historical Causes of Societal Disparities Also Dismiss Personal Mitigating Information for Black Offenders". Social Cognition 40, no. 1 (February 2022): 55–87. http://dx.doi.org/10.1521/soco.2022.40.1.55.

Abstract:
When will racial bias in blame and punishment emerge? Here, we focus on White people's willingness to “un-blame” Black and White offenders upon learning of their unfortunate life histories or biological impairments. We predicted that personal mitigating narratives of Black (but not White) offenders would be ignored by Whites who are societal-level anti-historicists. Societal-level anti-historicists deny that a history of oppression by Whites has shaped current societal-level intergroup disparities. Thus, our prediction centers on how societal-level beliefs relate to bias against individuals. Our predictions were confirmed in three studies. In one of those studies, we also showed how racial bias in willingness to un-blame can be removed: Societal-level anti-historicists became open to mitigation for Black offenders if they were reminded that the offender began as an innocent baby. Results are discussed in terms of how the rich literature on blame and moral psychology could enrich the study of racial bias.

15. Kurmi, Vinod K., Rishabh Sharma, Yash Vardhan Sharma, and Vinay P. Namboodiri. "Gradient Based Activations for Accurate Bias-Free Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7255–62. http://dx.doi.org/10.1609/aaai.v36i7.20687.

Abstract:
Bias mitigation in machine learning models is imperative, yet challenging. While several approaches have been proposed, one view towards mitigating bias is through adversarial learning. A discriminator is used to identify the bias attributes such as gender, age or race in question. This discriminator is used adversarially to ensure that it cannot distinguish the bias attributes. The main drawback in such a model is that it directly introduces a trade-off with accuracy, as the features that the discriminator deems to be sensitive for discrimination of bias could be correlated with classification. In this work we address this trade-off. We show that a biased discriminator can actually be used to improve this bias-accuracy tradeoff. Specifically, this is achieved by using a feature masking approach using the discriminator's gradients. We ensure that the features favoured for the bias discrimination are de-emphasized and the unbiased features are enhanced during classification. We show that this simple approach works well to reduce bias as well as improve accuracy significantly. We evaluate the proposed model on standard benchmarks. We improve the accuracy of the adversarial methods while maintaining or even improving the unbiasedness and also outperform several other recent methods.
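
A simplified PyTorch sketch of the general idea, de-emphasizing the feature dimensions a bias discriminator is most sensitive to as measured by its gradients; this is an illustrative reading, not the paper's exact formulation:

```python
import torch

def gradient_mask(features, discriminator):
    """Down-weight feature dimensions with large discriminator gradients."""
    feats = features.detach().requires_grad_(True)
    discriminator(feats).sum().backward()   # gradients w.r.t. features
    sensitivity = feats.grad.abs()
    # Dimensions the bias discriminator relies on most get the smallest
    # mask values; the mask itself is treated as a constant.
    mask = 1.0 - sensitivity / (sensitivity.max() + 1e-8)
    return features * mask.detach()
```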

16. Chu, Charlene, Simon Donato-Woodger, Shehroz S. Khan, Tianyu Shi, Kathleen Leslie, Samira Abbasgholizadeh-Rahimi, Rune Nyrup, and Amanda Grenier. "Strategies to Mitigate Age-Related Bias in Machine Learning: Scoping Review". JMIR Aging 7 (March 22, 2024): e53564. http://dx.doi.org/10.2196/53564.

Abstract:
Background: Research suggests that digital ageism, that is, age-related bias, is present in the development and deployment of machine learning (ML) models. Despite the recognition of the importance of this problem, there is a lack of research that specifically examines the strategies used to mitigate age-related bias in ML models and the effectiveness of these strategies. Objective: To address this gap, we conducted a scoping review of mitigation strategies to reduce age-related bias in ML. Methods: We followed a scoping review methodology framework developed by Arksey and O’Malley. The search was developed in conjunction with an information specialist and conducted in 6 electronic databases (IEEE Xplore, Scopus, Web of Science, CINAHL, EMBASE, and the ACM digital library), as well as 2 additional gray literature databases (OpenGrey and Grey Literature Report). Results: We identified 8 publications that attempted to mitigate age-related bias in ML approaches. Age-related bias was introduced primarily due to a lack of representation of older adults in the data. Efforts to mitigate bias were categorized into one of three approaches: (1) creating a more balanced data set, (2) augmenting and supplementing their data, and (3) modifying the algorithm directly to achieve a more balanced result. Conclusions: Identifying and mitigating related biases in ML models is critical to fostering fairness, equity, inclusion, and social benefits. Our analysis underscores the ongoing need for rigorous research and the development of effective mitigation approaches to address digital ageism, ensuring that ML systems are used in a way that upholds the interests of all individuals. Trial Registration: Open Science Framework AMG5P; https://osf.io/amg5p

17. Patil, Pranita, and Kevin Purcell. "Decorrelation-Based Deep Learning for Bias Mitigation". Future Internet 14, no. 4 (March 29, 2022): 110. http://dx.doi.org/10.3390/fi14040110.

Abstract:
Although deep learning has proven to be tremendously successful, the main issue is the dependency of its performance on the quality and quantity of training datasets. Since the quality of data can be affected by biases, a novel deep learning method based on decorrelation is presented in this study. The decorrelation specifically learns bias invariant features by reducing the non-linear statistical dependency between features and bias itself. This makes the deep learning models less prone to biased decisions by addressing data bias issues. We introduce Decorrelated Deep Neural Networks (DcDNN) or Decorrelated Convolutional Neural Networks (DcCNN) and Decorrelated Artificial Neural Networks (DcANN) by applying decorrelation-based optimization to Deep Neural Networks (DNN) and Artificial Neural Networks (ANN), respectively. Previous bias mitigation methods result in a drastic loss in accuracy at the cost of bias reduction. Our study aims to resolve this by controlling how strongly the decorrelation function for bias reduction and loss function for accuracy affect the network objective function. The detailed analysis of the hyperparameter shows that for the optimal value of hyperparameter, our model is capable of maintaining accuracy while being bias invariant. The proposed method is evaluated on several benchmark datasets with different types of biases such as age, gender, and color. Additionally, we test our approach along with traditional approaches to analyze the bias mitigation in deep learning. Using simulated datasets, the results of t-distributed stochastic neighbor embedding (t-SNE) of the proposed model validated the effective removal of bias. An analysis of fairness metrics and accuracy comparisons shows that using our proposed models reduces the biases without compromising accuracy significantly. Furthermore, the comparison of our method with existing methods shows the superior performance of our model in terms of bias mitigation, as well as simplicity of training.
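
In the same spirit, the decorrelation idea can be sketched as a penalty added to the task loss. The paper reduces non-linear statistical dependency; this minimal PyTorch illustration penalizes only the linear correlation between each learned feature and the bias variable, with the hyperparameter lam playing the accuracy-versus-invariance role discussed above.

```python
import torch

def pearson_corr(x, y, eps=1e-8):
    """Linear correlation between two 1-D tensors."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).mean() / (x.std() * y.std() + eps)

def decorrelated_loss(task_loss, features, bias_attr, lam=1.0):
    """features: (batch, d); bias_attr: (batch,). lam controls how strongly
    the decorrelation penalty competes with the accuracy objective."""
    penalty = sum(pearson_corr(features[:, j], bias_attr) ** 2
                  for j in range(features.shape[1]))
    return task_loss + lam * penalty
```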

18. Kim, Hyo-eun. "Fairness Criteria and Mitigation of AI Bias". Korean Journal of Psychology: General 40, no. 4 (December 25, 2021): 459–85. http://dx.doi.org/10.22257/kjp.2021.12.40.4.459.

19. Park, Souneil, Seungwoo Kang, Sangyoung Chung, and Junehwa Song. "A Computational Framework for Media Bias Mitigation". ACM Transactions on Interactive Intelligent Systems 2, no. 2 (June 2012): 1–32. http://dx.doi.org/10.1145/2209310.2209311.

20. Hopkins, Taylor. "Bias Mitigation: Identifying Barriers and Finding Solutions". Forensic Science International: Synergy 6 (2023): 100420. http://dx.doi.org/10.1016/j.fsisyn.2023.100420.

21. Rotenberg, Wendy. "Mitigation of U.S. Home Bias in the Valuation of Canadian Natural Resource Firms: Choice of Reporting and Transaction Currency". Multinational Finance Journal 17, no. 3/4 (December 1, 2013): 201–41. http://dx.doi.org/10.17578/17-3/4-4.

22. Hudson, P., W. J. W. Botzen, H. Kreibich, P. Bubeck, and J. C. J. H. Aerts. "Evaluating the effectiveness of flood damage mitigation measures by the application of propensity score matching". Natural Hazards and Earth System Sciences 14, no. 7 (July 15, 2014): 1731–47. http://dx.doi.org/10.5194/nhess-14-1731-2014.

Abstract:
The employment of damage mitigation measures (DMMs) by individuals is an important component of integrated flood risk management. In order to promote efficient damage mitigation measures, accurate estimates of their damage mitigation potential are required. That is, for correctly assessing the damage mitigation measures' effectiveness from survey data, one needs to control for sources of bias. A biased estimate can occur if risk characteristics differ between individuals who have, or have not, implemented mitigation measures. This study removed this bias by applying an econometric evaluation technique called propensity score matching (PSM) to a survey of German households along three major rivers that were flooded in 2002, 2005, and 2006. The application of this method detected substantial overestimates of mitigation measures' effectiveness if bias is not controlled for, ranging from nearly EUR 1700 to 15 000 per measure. Bias-corrected effectiveness estimates of several mitigation measures show that these measures are still very effective since they prevent between EUR 6700 and 14 000 of flood damage per flood event. This study concludes with four main recommendations regarding how to better apply propensity score matching in future studies, and makes several policy recommendations.
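
A minimal sketch of propensity score matching as used above, assuming a pandas DataFrame with a binary treatment column (whether a household implemented a measure), an outcome column (flood damage), and risk covariates; the column names are hypothetical and the study's actual estimator details are in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_via_psm(df, covariates, treatment="has_measure", outcome="damage"):
    # 1. Propensity score: estimated P(treated | risk covariates).
    ps = LogisticRegression(max_iter=1000).fit(
        df[covariates], df[treatment]).predict_proba(df[covariates])[:, 1]
    treated = (df[treatment] == 1).to_numpy()
    y = df[outcome].to_numpy()
    # 2. Match each treated unit to the control unit with the closest
    #    propensity score (1-nearest-neighbour, with replacement).
    nn = np.abs(ps[treated][:, None] - ps[~treated][None, :]).argmin(axis=1)
    # 3. Average treatment effect on the treated: the damage difference
    #    between treated units and their matched controls.
    return (y[treated] - y[~treated][nn]).mean()
```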

23. Hudson, P., W. J. W. Botzen, H. Kreibich, P. Bubeck, and J. C. J. H. Aerts. "Evaluating the effectiveness of flood damage mitigation measures by the application of Propensity Score Matching". Natural Hazards and Earth System Sciences Discussions 2, no. 1 (January 22, 2014): 681–723. http://dx.doi.org/10.5194/nhessd-2-681-2014.

Abstract:
The employment of damage mitigation measures by individuals is an important component of integrated flood risk management. In order to promote efficient damage mitigation measures, accurate estimates of their damage mitigation potential are required. That is, for correctly assessing the damage mitigation measures' effectiveness from survey data, one needs to control for sources of bias. A biased estimate can occur if risk characteristics differ between individuals who have, or have not, implemented mitigation measures. This study removed this bias by applying an econometric evaluation technique called Propensity Score Matching to a survey of German households along two major rivers that were flooded in 2002, 2005 and 2006. The application of this method detected substantial overestimates of mitigation measures' effectiveness if bias is not controlled for, ranging from nearly € 1700 to € 15 000 per measure. Bias-corrected effectiveness estimates of several mitigation measures show that these measures are still very effective since they prevent between € 6700–14 000 of flood damage. This study concludes with four main recommendations regarding how to better apply Propensity Score Matching in future studies, and makes several policy recommendations.

24. Cai, Zhenyu. "Quantum Error Mitigation using Symmetry Expansion". Quantum 5 (September 21, 2021): 548. http://dx.doi.org/10.22331/q-2021-09-21-548.

Abstract:
Even with the recent rapid developments in quantum hardware, noise remains the biggest challenge for the practical applications of any near-term quantum devices. Full quantum error correction cannot be implemented in these devices due to their limited scale. Therefore instead of relying on engineered code symmetry, symmetry verification was developed which uses the inherent symmetry within the physical problem we try to solve. In this article, we develop a general framework named symmetry expansion which provides a wide spectrum of symmetry-based error mitigation schemes beyond symmetry verification, enabling us to achieve different balances between the estimation bias and the sampling cost of the scheme. We show that certain symmetry expansion schemes can achieve a smaller estimation bias than symmetry verification through cancellation between the biases due to the detectable and undetectable noise components. A practical way to search for such a small-bias scheme is introduced. By numerically simulating the Fermi-Hubbard model for energy estimation, the small-bias symmetry expansion we found can achieve an estimation bias 6 to 9 times below what is achievable by symmetry verification when the average number of circuit errors is between 1 to 2. The corresponding sampling cost for random shot noise reduction is just 2 to 6 times higher than symmetry verification. Beyond symmetries inherent to the physical problem, our formalism is also applicable to engineered symmetries. For example, the recent scheme for exponential error suppression using multiple noisy copies of the quantum device is just a special case of symmetry expansion using the permutation symmetry among the copies.

25. Pagano, Tiago P., Rafael B. Loureiro, Fernanda V. N. Lisboa, Rodrigo M. Peixoto, Guilherme A. S. Guimarães, Gustavo O. R. Cruz, Maira M. Araujo, et al. "Bias and Unfairness in Machine Learning Models: A Systematic Review on Datasets, Tools, Fairness Metrics, and Identification and Mitigation Methods". Big Data and Cognitive Computing 7, no. 1 (January 13, 2023): 15. http://dx.doi.org/10.3390/bdcc7010015.

Abstract:
One of the difficulties of artificial intelligence is to ensure that model decisions are fair and free of bias. In research, datasets, metrics, techniques, and tools are applied to detect and mitigate algorithmic unfairness and bias. This study examines the current knowledge on bias and unfairness in machine learning models. The systematic review followed the PRISMA guidelines and is registered on the OSF platform. The search was carried out between 2021 and early 2022 in the Scopus, IEEE Xplore, Web of Science, and Google Scholar knowledge bases and found 128 articles published between 2017 and 2022, of which 45 were chosen based on search string optimization and inclusion and exclusion criteria. We discovered that the majority of retrieved works focus on bias and unfairness identification and mitigation techniques, offering tools, statistical approaches, important metrics, and datasets typically used for bias experiments. In terms of the primary forms of bias, data, algorithm, and user interaction were addressed in connection to the preprocessing, in-processing, and postprocessing mitigation methods. The use of Equalized Odds, Opportunity Equality, and Demographic Parity as primary fairness metrics emphasizes the crucial role of sensitive attributes in mitigating bias. The 25 datasets chosen span a wide range of areas, including criminal justice, image enhancement, finance, education, product pricing, and health, with the majority including sensitive attributes. In terms of tools, Aequitas is the most often referenced, yet many of the tools were not employed in empirical experiments. A limitation of current research is the lack of multiclass and multimetric studies, which are found in just a few works and constrain the investigation to binary-focused methods. Furthermore, the results indicate that different fairness metrics do not present uniform results for a given use case, and that more research with varied model architectures is necessary to standardize which ones are more appropriate for a given context. We also observed that all research addressed the transparency of the algorithm, or its capacity to explain how decisions are taken.
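
For reference, the fairness metrics the review highlights can be computed directly from binary predictions. A minimal sketch, where y is the true label, y_hat the prediction, and g the sensitive attribute (NumPy arrays of 0s and 1s, assuming both groups and both labels are present):

```python
import numpy as np

def demographic_parity_diff(y_hat, g):
    """Difference in positive-prediction rates between groups."""
    return y_hat[g == 1].mean() - y_hat[g == 0].mean()

def equal_opportunity_diff(y, y_hat, g):
    """Difference in true positive rates between groups."""
    return (y_hat[(g == 1) & (y == 1)].mean()
            - y_hat[(g == 0) & (y == 1)].mean())

def equalized_odds_diff(y, y_hat, g):
    """Worst of the TPR and FPR differences between groups."""
    tpr_diff = equal_opportunity_diff(y, y_hat, g)
    fpr_diff = (y_hat[(g == 1) & (y == 0)].mean()
                - y_hat[(g == 0) & (y == 0)].mean())
    return max(abs(tpr_diff), abs(fpr_diff))
```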

26. Ande, Janaki Rama Phanendra Kumar. "AI-Powered Decentralized Recruitment System on the Blockchain". Global Disclosure of Economics and Business 10, no. 2 (August 1, 2021): 91–104. http://dx.doi.org/10.18034/gdeb.v10i2.734.

Abstract:
By combining artificial intelligence (AI) algorithms with blockchain technology, the AI-powered decentralized Recruitment System on the Blockchain (ADRSoB) offers a cutting-edge method for completely changing the hiring process. This study aims to assess the effectiveness and impact of ADRSoB in several areas, such as user satisfaction, bias mitigation, candidate selection quality, efficiency, and transparency. From a methodological standpoint, case studies, research papers, and current literature pertinent to ADRSoB were compiled using a secondary data-based review article approach. Principal results show that ADRSoB increases user happiness while promoting transparency, mitigating biases, improving applicant selection quality, and streamlining recruiting processes. However, policy implications such as encouraging technology use, guaranteeing data privacy legislation, and fostering fairness in algorithmic systems are required due to obstacles to technology adoption, data privacy issues, and bias in algorithmic decision-making. ADRSoB has enormous potential to change hiring procedures while addressing critical issues and encouraging moral and responsible hiring in the digital era.

27. Christensen-Branum, Lezlie, Ashley Strong, and Cindy D'On Jones. "Mitigating Myside Bias in Argumentation". Journal of Adolescent & Adult Literacy 62, no. 4 (September 26, 2018): 435–45. http://dx.doi.org/10.1002/jaal.915.

28. Siddique, Sunzida, Mohd Ariful Haque, Roy George, Kishor Datta Gupta, Debashis Gupta, and Md Jobair Hossain Faruk. "Survey on Machine Learning Biases and Mitigation Techniques". Digital 4, no. 1 (December 20, 2023): 1–68. http://dx.doi.org/10.3390/digital4010001.

Abstract:
Machine learning (ML) has become increasingly prevalent in various domains. However, ML algorithms sometimes produce unfair outcomes that discriminate against certain groups; bias occurs when a model's decisions are systematically incorrect. At various phases of the ML pipeline, such as data collection, pre-processing, model selection, and evaluation, these biases appear. Bias reduction methods for ML have been suggested using a variety of techniques. By changing the data or the model itself, adding more fairness constraints, or both, these methods try to lessen bias. The best technique relies on the particular context and application because each technique has advantages and disadvantages. Therefore, in this paper, we present a comprehensive survey of bias mitigation techniques in machine learning (ML) with a focus on in-depth exploration of methods, including adversarial training. We examine the diverse types of bias that can afflict ML systems, elucidate current research trends, and address future challenges. Our discussion encompasses a detailed analysis of pre-processing, in-processing, and post-processing methods, including their respective pros and cons. Moreover, we go beyond qualitative assessments by quantifying the strategies for bias reduction and providing empirical evidence and performance metrics. This paper serves as an invaluable resource for researchers, practitioners, and policymakers seeking to navigate the intricate landscape of bias in ML, offering both a profound understanding of the issue and actionable insights for responsible and effective bias mitigation.

29. Dunbar, Norah E., Matthew L. Jensen, Claude H. Miller, Elena Bessarabova, Yu-Hao Lee, Scott N. Wilson, Javier Elizondo, et al. "Mitigation of Cognitive Bias with a Serious Game". International Journal of Game-Based Learning 7, no. 4 (October 2017): 86–100. http://dx.doi.org/10.4018/ijgbl.2017100105.

Abstract:
One of the benefits of using digital games for education is that games can provide feedback for learners to assess their situation and correct their mistakes. We conducted two studies to examine the effectiveness of different feedback designs (timing, duration, repeats, and feedback source) in a serious game designed to teach learners about cognitive biases. We also compared the digital game-based learning condition to a professional training video. Overall, the digital game was significantly more effective than the video condition. Longer durations and repeats improved the bias-mitigation effects. Surprisingly, there was no significant difference between just-in-time feedback and delayed feedback, and computer-generated feedback was more effective than feedback from other players.

30. Guan, Maime, and Joachim Vandekerckhove. "A Bayesian approach to mitigation of publication bias". Psychonomic Bulletin & Review 23, no. 1 (July 1, 2015): 74–86. http://dx.doi.org/10.3758/s13423-015-0868-6.

31. Davison, Robert M. "Editorial - Cultural Bias in Reviews and Mitigation Options". Information Systems Journal 24, no. 6 (August 5, 2014): 475–77. http://dx.doi.org/10.1111/isj.12046.

32. Korzhenevych, I. P., and O. V. Gots. "THE MITIGATION STEERING BIAS CURVES FOR INDUSTRIAL TRANSPORT". Science and Transport Progress, no. 16 (June 25, 2007): 26–28. http://dx.doi.org/10.15802/stp2007/17607.

33. Penn, Jerrod M., Daniel R. Petrolia, and J. Matthew Fannin. "Hypothetical bias mitigation in representative and convenience samples". Applied Economic Perspectives and Policy 45, no. 2 (May 18, 2023): 721–43. http://dx.doi.org/10.1002/aepp.13374.

34. Li, Xuesong, Dajun Sun, and Zhongyi Cao. "Mitigation method of acoustic doppler velocity measurement bias". Ocean Engineering 306 (August 2024): 118082. http://dx.doi.org/10.1016/j.oceaneng.2024.118082.

35. Tjia, Jennifer, Michele Pugnaire, Joanne Calista, Ethan Eisdorfer, Janet Hale, Jill Terrien, Olga Valdman, et al. "Using Simulation-Based Learning with Standardized Patients (SP) in an Implicit Bias Mitigation Clinician Training Program". Journal of Medical Education and Curricular Development 10 (January 2023): 238212052311750. http://dx.doi.org/10.1177/23821205231175033.

Abstract:
Objectives: To describe the development and refinement of an implicit bias recognition and management training program for clinical trainees. Methods: In the context of an NIH-funded clinical trial to address healthcare disparities in hypertension management, research and education faculty at an academic medical center used a participatory action research approach to engage local community members to develop and refine a “knowledge, awareness, and skill-building” bias recognition and mitigation program. The program targeted medical residents and Doctor of Nursing Practice students. The content of the two-session training included: didactics about healthcare disparities, racism and implicit bias; implicit association test (IAT) administration to raise awareness of personal implicit bias; skill building for bias-mitigating communication; and case scenarios for skill practice in simulation-based encounters with standardized patients (SPs) from the local community. Results: The initial trial year enrolled n = 65 interprofessional participants. Community partners and SPs who engaged throughout the design and implementation process reported overall positive experiences, but SPs expressed need for greater faculty support during in-person debriefings following simulation encounters to balance power dynamics. Initial year trainee participants reported discomfort with intensive sequencing of in-person didactics, IATs, and SP simulations in each of the two training sessions. In response, authors refined the training program to separate didactic sessions from IAT administration and SP simulations, and to increase safe space, and trainee and SP empowerment. The final program includes more interactive discussions focused on identity, race and ethnicity, and strategies to address local health system challenges related to structural racism. Conclusion: It is possible to develop and implement a bias awareness and mitigation skills training program that uses simulation-based learning with SPs, and to engage with local community members to tailor the content to address the experience of local patient populations. Further research is needed to measure the success and impact of replicating this approach elsewhere.

36. Prater, James, Konstantinos Kirytopoulos, and Tony Ma. "Optimism bias within the project management context". International Journal of Managing Projects in Business 10, no. 2 (April 4, 2017): 370–85. http://dx.doi.org/10.1108/ijmpb-07-2016-0063.

Abstract:
Purpose: One of the major challenges for any project is to prepare and develop an achievable baseline schedule and thus set the project up for success, rather than failure. The purpose of this paper is to explore and investigate research outputs in one of the major causes, optimism bias, to identify problems with developing baseline schedules and analyse mitigation techniques and their effectiveness recommended by research to minimise the impact of this bias. Design/methodology/approach: A systematic quantitative literature review was followed, examining Project Management Journals, documenting the mitigation approaches recommended and then reviewing whether these approaches were validated by research. Findings: Optimism bias proved to be widely accepted as a major cause of unrealistic scheduling for projects, and there is a common understanding as to what it is and the effects that it has on original baseline schedules. Based upon this review, the most recommended mitigation method is Flyvbjerg’s “Reference class,” which has been developed based upon Kahneman’s “Outside View”. Both of these mitigation techniques are based upon using an independent third party to review the estimate. However, within the papers reviewed, apart from the engineering projects, there has been no experimental and statistically validated research into the effectiveness of this method. The majority of authors who have published on this topic are based in Europe. Research limitations/implications: The short-listed papers for this review referred mainly to non-engineering projects which included information technology focussed ones. Thus, on one hand, empirical research is needed for engineering projects, while on the other hand, the lack of tangible evidence for the effectiveness of methods related to the alleviation of optimism bias issues calls for greater research into the effectiveness of mitigation techniques for not only engineering projects, but for all projects. Originality/value: This paper documents the growth within the project management research literature over time on the topic of optimism bias. Specifically, it documents the various methods recommended to mitigate the phenomenon and highlights quantitatively the research undertaken on the subject. Moreover, it introduces paths for further research.

37. Wagoner, Erika L., Eduardo Rozo, Xiao Fang, Martín Crocce, Jack Elvin-Poole, and Noah Weaverdyck. "Linear systematics mitigation in galaxy clustering in the Dark Energy Survey Year 1 Data". Monthly Notices of the Royal Astronomical Society 503, no. 3 (March 10, 2021): 4349–62. http://dx.doi.org/10.1093/mnras/stab717.

Abstract:
We implement a linear model for mitigating the effect of observing conditions and other sources of contamination in galaxy clustering analyses. Our treatment improves upon the fiducial systematics treatment of the Dark Energy Survey (DES) Year 1 (Y1) cosmology analysis in four crucial ways. Specifically, our treatment (1) does not require decisions as to which observable systematics are significant and which are not, allowing for the possibility of multiple maps adding coherently to give rise to significant bias even if no single map leads to a significant bias by itself, (2) characterizes both the statistical and systematic uncertainty in our mitigation procedure, allowing us to propagate said uncertainties into the reported cosmological constraints, (3) explicitly exploits the full spatial structure of the galaxy density field to differentiate between cosmology-sourced and systematics-sourced fluctuations within the galaxy density field, and (4) is fully automated, and can therefore be trivially applied to any data set. The updated correlation function for the DES Y1 redMaGiC catalogue minimally impacts the cosmological posteriors from that analysis. Encouragingly, our analysis does improve the goodness-of-fit statistic of the DES Y1 3 × 2pt data set (Δχ2 = −6.5 with no additional parameters). This improvement is due in nearly equal parts to both the change in the correlation function and the added statistical and systematic uncertainties associated with our method. We expect the difference in mitigation techniques to become more important in future work as the size of cosmological data sets grows.
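
Schematically, a linear systematics model of this kind amounts to regressing the observed galaxy overdensity on standardized survey-property maps and subtracting the fit. A toy sketch of that core step (the DES treatment additionally propagates the statistical and systematic uncertainty of the fit, which this omits):

```python
import numpy as np

def clean_overdensity(delta_obs, templates):
    """delta_obs: (n_pix,) observed galaxy overdensity per sky pixel;
    templates: (n_pix, k) standardized survey-property maps
    (e.g. depth, seeing, stellar density)."""
    coeffs, *_ = np.linalg.lstsq(templates, delta_obs, rcond=None)
    # Remove the part of the density field explained by the templates.
    return delta_obs - templates @ coeffs
```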

38. Frachtenberg, Eitan, and Kelly S. McConville. "Metrics and methods in the evaluation of prestige bias in peer review: A case study in computer systems conferences". PLOS ONE 17, no. 2 (February 25, 2022): e0264131. http://dx.doi.org/10.1371/journal.pone.0264131.

Abstract:
The integrity of peer review is essential for modern science. Numerous studies have therefore focused on identifying, quantifying, and mitigating biases in peer review. One of these better-known biases is prestige bias, where the recognition of a famous author or affiliation leads reviewers to subconsciously treat their submissions preferentially. A common mitigation approach for prestige bias is double-blind reviewing, where the identity of authors is hidden from reviewers. However, studies on the effectiveness of this mitigation are mixed and are rarely directly comparable to each other, leading to difficulty in generalization of their results. In this paper, we explore the design space for such studies in an attempt to reach common ground. Using an observational approach with a large dataset of peer-reviewed papers in computer systems, we systematically evaluate the effects of different prestige metrics, aggregation methods, control variables, and outlier treatments. We show that depending on these choices, the data can lead to contradictory conclusions with high statistical significance. For example, authors with higher h-index often preferred to publish in competitive conferences which are also typically double-blind, whereas authors with higher paper counts often preferred the single-blind conferences. The main practical implication of our analyses is that a narrow evaluation may lead to unreliable results. A thorough evaluation of prestige bias requires a careful inventory of assumptions, metrics, and methodology, often requiring a more detailed sensitivity analysis than is normally undertaken. Importantly, two of the most commonly used metrics for prestige evaluation, past publication count and h-index, are not independent from the choice of publishing venue, which must be accounted for when comparing authors' prestige across conferences.

39. Clegg, Benjamin A., Brian McKernan, Rosa M. Martey, Sarah M. Taylor, Jennifer Stromer-Galley, Kate Kenski, E. Tobi Saulnier, et al. "Effective Mitigation of Anchoring Bias, Projection Bias, and Representativeness Bias from Serious Game-based Training". Procedia Manufacturing 3 (2015): 1558–65. http://dx.doi.org/10.1016/j.promfg.2015.07.438.

40. Mosteiro, Pablo, Jesse Kuiper, Judith Masthoff, Floortje Scheepers, and Marco Spruit. "Bias Discovery in Machine Learning Models for Mental Health". Information 13, no. 5 (May 5, 2022): 237. http://dx.doi.org/10.3390/info13050237.

Abstract:
Fairness and bias are crucial concepts in artificial intelligence, yet they are relatively ignored in machine learning applications in clinical psychiatry. We computed fairness metrics and present bias mitigation strategies using a model trained on clinical mental health data. We collected structured data related to the admission, diagnosis, and treatment of patients in the psychiatry department of the University Medical Center Utrecht. We trained a machine learning model to predict future administrations of benzodiazepines on the basis of past data. We found that gender plays an unexpected role in the predictions—this constitutes bias. Using the AI Fairness 360 package, we implemented reweighing and discrimination-aware regularization as bias mitigation strategies, and we explored their implications for model performance. This is the first application of bias exploration and mitigation in a machine learning model trained on real clinical psychiatry data.
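
Since the study uses the AI Fairness 360 package, the reweighing step looks roughly like the following sketch. The toy columns are hypothetical stand-ins for the clinical features; the package's Reweighing class produces per-sample training weights.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical stand-in data; the study's features come from clinical records.
df = pd.DataFrame({
    "gender": [0, 0, 1, 1],      # protected attribute
    "past_benzo": [1, 0, 1, 0],  # example clinical feature
    "label": [1, 0, 1, 1],       # future benzodiazepine administration
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["gender"])
rw = Reweighing(unprivileged_groups=[{"gender": 0}],
                privileged_groups=[{"gender": 1}])
reweighed = rw.fit_transform(dataset)
print(reweighed.instance_weights)  # per-sample weights for model training
```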
41

Kempf, Arlo, e Preeti Nayak. "Practicing Professional Discomfort as Self-Location: White Teacher Experiences With Race Bias Mitigation". Journal of the Canadian Association for Curriculum Studies 18, n. 1 (27 giugno 2020): 51–52. http://dx.doi.org/10.25071/1916-4467.40584.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
This study is among the first in Canada to research implicit race bias mitigation in secondary teacher practice. The findings emerge from data collected from a ten-month engagement period with 12 Ontario teachers who, alongside the research team, codesigned a race bias mitigation plan based on four to six varied mitigation strategies. These included technical and dialogical activities and a required reading of one anti-racist and/or anti-colonial book. Throughout the project, teachers engaged in ongoing reflection, journaling, email exchanges and an in-person interview. A thematic analysis of this data was completed (Ryan & Bernard, 2003). The design of this study was underpinned by a braiding of social psychology with critical race theory, second wave White teacher identity studies and other approaches. This multimodal approach brings a critical and dynamic reading of whiteness in education. Three broad preliminary findings have emerged from this study. First, teacher perceptions of efficacy of implicit race bias mitigation strategies relied on their noticing of conscious changes in their perceptions of and experiences with race, racism and Black, Indigenous and People of Colour (BIPOC) students. Second, the concurrent use of critical anti-racist strategies, alongside implicit race bias mitigation strategies, seemed to instigate participants’ deepest reflections on race. Finally, this synergy and the long duration of the project contributed to the participants’ evolving understandings of racism in education as a phenomenon that goes beyond the domain of the individual. The results may deepen our understandings of the challenges and opportunities surrounding implicit race bias mitigation work in terms of teacher practices and theoretical considerations.
42

De Biasio, Francesco, and Stefano Zecchetto. "Tuning the Model Winds in Perspective of Operational Storm Surge Prediction in the Adriatic Sea". Journal of Marine Science and Engineering 11, no. 3 (March 3, 2023): 544. http://dx.doi.org/10.3390/jmse11030544.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
In the Adriatic Sea, sea surface wind forecasts are often underestimated, with detrimental effects on the accuracy of sea level and storm surge predictions. Among the various causes, this mainly depends on the meteorological wind forcing. In this paper, we try to improve an existing numerical method, called "wind bias mitigation", which relies on scatterometer wind observations to determine a multiplicative factor Δw whose application to the model wind reduces its inaccuracy with respect to the scatterometer wind. Following four different mathematical approaches, we formulate and discuss seven new expressions of the multiplicative factor. The eight expressions of the bias mitigation factor, the original one and the seven formulated in this study, are assessed with the aid of four datasets of real sea surface wind events covering a variety of sea level conditions in the northern Adriatic Sea, several of which gave rise to high water events in the Venice Lagoon. The statistical analysis shows that some of the seven new formulations lower the model-scatterometer bias with respect to the original formulation. For some of them, the absolute bias of the mitigated model wind field with respect to the scatterometer is lower than that of the unmodified model wind field in 81% of the considered storm surge events in the area of interest, against 73% for the original formulation: a relative improvement of about 11% in the bias mitigation process. The best performing of the seven new factors, the one based on the linear least squares regression of the squared wind speed (LLSRE), has been implemented in the operational sea level forecast chain of the Tide Forecast and Early Warning Centre of the Venice Municipality (CPSM) to support the operation of the MO.SE. barriers in Venice.
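The abstract names the best-performing factor (LLSRE, a linear least squares regression of the squared wind speed) without reproducing its formula. As a rough illustration only, a multiplicative factor Δw can be derived as a least-squares slope through the origin on squared wind speeds from collocated model and scatterometer data; the synthetic data below stand in for real collocations.

import numpy as np

rng = np.random.default_rng(0)
w_model = rng.uniform(2.0, 20.0, 500)               # model wind speeds (m/s)
w_scat = 1.1 * w_model + rng.normal(0.0, 1.0, 500)  # scatterometer winds (m/s)

# Least-squares slope through the origin on the squared wind speeds:
# minimize sum((w_scat**2 - k * w_model**2)**2) over k.
k = np.sum(w_scat**2 * w_model**2) / np.sum(w_model**4)
delta_w = np.sqrt(k)  # multiplicative factor applied to the model wind

w_model_mitigated = delta_w * w_model
print(f"delta_w = {delta_w:.3f}")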
43

Kohn, Rachel. "Eliminating Bias in Survival Estimation: Statistical Bias Mitigation Is the First Step Forward*". Critical Care Medicine 52, no. 3 (February 21, 2024): 506–9. http://dx.doi.org/10.1097/ccm.0000000000006110.

Full text
APA, Harvard, Vancouver, ISO and other styles
44

l, l., and l. l. "Mitigating Attentional Bias: The Impact of Perceived Social Self-Efficacy in Individuals with MMO Games Addiction Tendency". Korean Data Analysis Society 26, no. 1 (February 29, 2024): 15–33. http://dx.doi.org/10.37727/jkdas.2024.26.1.15.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Low self-efficacy in interpersonal relationships, which is linked to MMO game addiction, worsens the inclination towards addiction as individuals seek social interaction within the game, leading to attentional bias towards game stimuli. This study investigated whether manipulating perceived social self-efficacy levels could reduce attentional bias in MMO game addiction compared to non-addicted gamers. A total of 503 undergraduates participated; an MMO addiction group (n=60) and a control group (n=60) were identified through the Korean version of the Internet Game Disorder Scale. Participants were assigned to high or low perceived social self-efficacy conditions through false feedback on a "social intelligence test." Dot probe tasks assessed changes in attentional bias before and after the manipulated feedback. The attentional bias score, initially higher in the addiction group, decreased after the intervention that increased social self-efficacy. No significant changes were observed in the control groups or in the addiction group whose social self-efficacy was decreased. These findings confirm that boosting perceived social self-efficacy in MMO addiction can reduce attentional bias towards game stimuli, pointing to useful interventions for alleviating addictive behaviors.
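The attentional bias score in a dot probe task is conventionally the mean reaction-time difference between incongruent and congruent trials. A minimal sketch of that computation follows; the file and column names are hypothetical and not taken from the study.

import pandas as pd

trials = pd.read_csv("dot_probe_trials.csv")  # hypothetical per-trial records

# Congruent trials: the probe appears where the game-related cue was shown;
# incongruent trials: the probe appears where the neutral cue was shown.
mean_rt = trials.groupby("condition")["reaction_time_ms"].mean()
bias_score = mean_rt["incongruent"] - mean_rt["congruent"]

# A positive score means responses are faster when the probe replaces the
# game cue, i.e. attention was already drawn to the game-related stimulus.
print(f"attentional bias score: {bias_score:.1f} ms")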
45

Sethi, Rahul, Vedang Ratan Vatsa and Parth Chhaparwal. "Identification and Mitigation of Algorithmic Bias through Policy Instruments". International Journal of Advanced Research 8, no. 7 (July 31, 2020): 1515–22. http://dx.doi.org/10.21474/ijar01/11418.

Full text
APA, Harvard, Vancouver, ISO and other styles
46

Somasundaram, Ananthi, David Vállez García, Elisabeth Pfaehler, Joyce van Sluis, Rudi A. J. O. Dierckx, Elisabeth G. E. de Vries and Ronald Boellaard. "Mitigation of noise-induced bias of PET radiomic features". PLOS ONE 17, no. 8 (August 25, 2022): e0272643. http://dx.doi.org/10.1371/journal.pone.0272643.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Introduction: One major challenge in PET radiomics is its sensitivity to noise. A low signal-to-noise ratio (SNR) affects not only the precision but also the accuracy of quantitative metrics extracted from the images, resulting in noise-induced bias. This phantom study aims to identify the radiomic features that are robust to noise in terms of precision and accuracy, and to explore some methods that might help to correct noise-induced bias.

Methods: A phantom containing three 18F-FDG-filled 3D printed inserts, reflecting heterogeneous tracer uptake and realistic tumor shapes, was used in the study. The three phantom inserts were filled and scanned with three different tumor-to-background ratios, simulating a total of nine different tumors. From the 40-minute list-mode data, ten frames each of 5 s, 10 s, 30 s, and 120 s frame duration were reconstructed to generate images with different noise levels. Under these noise conditions, the precision and accuracy of the radiomic features were analyzed using the intraclass correlation coefficient (ICC) and the similarity distance metric (SDM), respectively. Based on the ICC and SDM values, the radiomic features were categorized into four groups: poor, moderate, good, and excellent precision and accuracy. A "difference image", created by subtracting two statistically equivalent replicate images, was used to develop a model to correct the noise-induced bias. Several regression methods (e.g., linear, exponential, sigmoid, and power-law) were tested, and the best fitting model was chosen based on the Akaike information criterion.

Results: Several radiomic features derived from low-SNR images have high repeatability, with 68% of radiomic features having ICC ≥ 0.9 for images with a frame duration of 5 s. However, most features show a systematic bias that correlates with the increase in noise level. Out of 143 features with noise-induced bias, the SDM values were improved based on a regression model (53 features to excellent and 67 to good), indicating that the noise-induced bias of these features can be, at least partially, corrected.

Conclusion: To have a predictive value, radiomic features should reflect tumor characteristics and be minimally affected by noise. The present study has shown that it is possible to correct for noise-induced bias, at least in a subset of the features, using a regression model based on the local image noise estimates.
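The model-selection step described in the Methods, fitting several candidate regression forms and choosing by the Akaike information criterion, can be sketched as follows. The data here are synthetic stand-ins for the phantom-derived noise estimates and feature biases; only the candidate forms are taken from the abstract.

import numpy as np
from scipy.optimize import curve_fit

def linear(x, a, b):
    return a * x + b

def exponential(x, a, b):
    return a * np.exp(b * x)

def power_law(x, a, b):
    return a * np.power(x, b)

def sigmoid(x, a, b, c):
    return a / (1.0 + np.exp(-b * (x - c)))

rng = np.random.default_rng(1)
noise = np.linspace(0.05, 0.5, 40)                   # local image noise estimates
bias = 2.0 * noise**1.5 + rng.normal(0.0, 0.02, 40)  # synthetic feature bias

best = None
for model in (linear, exponential, power_law, sigmoid):
    try:
        params, _ = curve_fit(model, noise, bias, maxfev=10000)
    except RuntimeError:
        continue  # skip candidates that fail to converge
    rss = np.sum((bias - model(noise, *params)) ** 2)
    n, n_params = len(noise), len(params)
    aic = n * np.log(rss / n) + 2 * n_params  # AIC under Gaussian errors
    if best is None or aic < best[0]:
        best = (aic, model.__name__)

print("best model by AIC:", best[1])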
47

Ashokan, Ashwathy, and Christian Haas. "Fairness metrics and bias mitigation strategies for rating predictions". Information Processing & Management 58, no. 5 (September 2021): 102646. http://dx.doi.org/10.1016/j.ipm.2021.102646.

Full text
APA, Harvard, Vancouver, ISO and other styles
48

Fortunato, S., A. Flammini, F. Menczer and A. Vespignani. "Topical interests and the mitigation of search engine bias". Proceedings of the National Academy of Sciences 103, no. 34 (August 10, 2006): 12684–89. http://dx.doi.org/10.1073/pnas.0605525103.

Full text
APA, Harvard, Vancouver, ISO and other styles
49

Rohbani, Nezam, Mojtaba Ebrahimi, Seyed-Ghassem Miremadi and Mehdi B. Tahoori. "Bias Temperature Instability Mitigation via Adaptive Cache Size Management". IEEE Transactions on Very Large Scale Integration (VLSI) Systems 25, no. 3 (March 2017): 1012–22. http://dx.doi.org/10.1109/tvlsi.2016.2606579.

Full text
APA, Harvard, Vancouver, ISO and other styles
50

Nazer, Lama H., Razan Zatarah, Shai Waldrip, Janny Xue Chen Ke, Mira Moukheiber, Ashish K. Khanna, Rachel S. Hicklen et al. "Bias in artificial intelligence algorithms and recommendations for mitigation". PLOS Digital Health 2, no. 6 (June 22, 2023): e0000278. http://dx.doi.org/10.1371/journal.pdig.0000278.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
The adoption of artificial intelligence (AI) algorithms is rapidly increasing in healthcare. Such algorithms may be shaped by various factors, such as social determinants of health, that can influence health outcomes. While AI algorithms have been proposed as a tool to expand the reach of quality healthcare to underserved communities and improve health equity, recent literature has raised concerns about the propagation of biases and healthcare disparities through the implementation of these algorithms. It is therefore critical to understand the sources of bias inherent in AI-based algorithms. This review aims to highlight the potential sources of bias within each step of developing AI algorithms in healthcare, from framing the problem through data collection, preprocessing, development, and validation, to full implementation. For each of these steps, we also discuss strategies to mitigate the bias and disparities. A checklist was developed with recommendations for reducing bias during the development and implementation stages. It is important for developers and users of AI-based algorithms to keep these considerations in mind in order to advance health equity for all populations.
