Selected scientific literature on the topic "Mitigation des biais"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Choose the source type:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Mitigation des biais".

Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the abstract of the work online if it is included in the metadata.

Journal articles on the topic "Mitigation des biais":

1

Philipps, Nathalia, Pierre P. Kastendeuch, and Georges Najjar. "Analyse de la variabilité spatio-temporelle de l’îlot de chaleur urbain à Strasbourg (France)". Climatologie 17 (2020): 10. http://dx.doi.org/10.1051/climat/202017010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An analysis of the temporal dynamics and spatial distribution of the Strasbourg urban heat island (UHI) was carried out using a network of weather stations distributed across the territory of the Strasbourg agglomeration. The strong temporal variability of the UHI is illustrated not only through its daily thermal behaviour but also through the large differences in intensity across seasons and weather types. Favoured by weak winds and strong sunshine, the UHI is particularly intense on fine summer days, with local mean gains of up to five degrees at the nocturnal peak. Regarding the spatial aspect, the disparities between stations highlight a heterogeneity of the UHI that is essentially linked to the intrinsic variability of the urban environment. The statistical analysis thus highlighted the role of several morphological and land-use parameters, and therefore fully justifies the establishment of a Local Climate Zone (LCZ) classification of the Eurométropole de Strasbourg. Vegetation appears to be a pre-eminent mitigation factor, particularly when it is present to a notable extent in densely built, highly mineralized areas. As for the parameters relating to urban geometry, the highest mean UHI intensities are systematically measured in the most densely built-up areas. A new UHI mapping methodology based on the parameters underlying the LCZs is proposed. This map yields relevant UHI intensity values at any point of the territory.
2

Rahmawati, Fitriana, and Fitri Santi. "A Literature Review on the Influence of Availability Bias and Overconfidence Bias on Investor Decisions". East Asian Journal of Multidisciplinary Research 2, no. 12 (December 30, 2023): 4961–76. http://dx.doi.org/10.55927/eajmr.v2i12.6896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This research examines the impact of Availability Bias and Overconfidence Bias on investment decisions. Utilizing a literature review approach and VOSviewer analysis, this study explores how these biases affect investor decision-making processes and potential mitigation strategies. The objective is to highlight the significance of understanding and mitigating these biases in achieving more rational investment decisions. The findings underscore the potential negative effects of both biases, leading to overconfident and less rational investment decisions. Awareness of their interplay is crucial, as they reinforce each other's negative effects on investment decision-making. Overcoming these cognitive biases is essential for more effective investment decision-making. This research contributes insights into mitigating biases, aiding in a more balanced and rational approach to investment decision-making.
3

Djebrouni, Yasmine, Nawel Benarba, Ousmane Touat, Pasquale De Rosa, Sara Bouchenak, Angela Bonifati, Pascal Felber, Vania Marangozova, and Valerio Schiavoni. "Bias Mitigation in Federated Learning for Edge Computing". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 4 (December 19, 2023): 1–35. http://dx.doi.org/10.1145/3631455.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Federated learning (FL) is a distributed machine learning paradigm that enables data owners to collaborate on training models while preserving data privacy. As FL effectively leverages decentralized and sensitive data sources, it is increasingly used in ubiquitous computing including remote healthcare, activity recognition, and mobile applications. However, FL raises ethical and social concerns as it may introduce bias with regard to sensitive attributes such as race, gender, and location. Mitigating FL bias is thus a major research challenge. In this paper, we propose Astral, a novel bias mitigation system for FL. Astral provides a novel model aggregation approach to select the most effective aggregation weights to combine FL clients' models. It guarantees a predefined fairness objective by constraining bias below a given threshold while keeping model accuracy as high as possible. Astral handles the bias of single and multiple sensitive attributes and supports all bias metrics. Our comprehensive evaluation on seven real-world datasets with three popular bias metrics shows that Astral outperforms state-of-the-art FL bias mitigation techniques in terms of bias mitigation and model accuracy. Moreover, we show that Astral is robust against data heterogeneity and scalable in terms of data size and number of FL clients. Astral's code base is publicly available.
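The aggregation idea summarized above can be made concrete with a minimal sketch. This is not Astral's code base: the demographic-parity metric, the coarse weight grid, and the model/prediction interfaces are illustrative assumptions. Candidate aggregation weights are scored on validation data, and the most accurate combination whose bias stays below a chosen threshold is selected.

```python
# Minimal sketch of fairness-constrained aggregation of federated clients' models.
# Not the Astral implementation: metric, weight grid, and interfaces are assumed.
import itertools
import numpy as np

def demographic_parity_gap(y_pred, s):
    """|P(y_hat=1 | s=0) - P(y_hat=1 | s=1)| for a binary sensitive attribute."""
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

def select_aggregation_weights(client_params, predict_fn, X_val, y_val, s_val,
                               bias_threshold=0.05, step=0.25):
    """Pick weights (summing to 1) that maximize accuracy subject to a bias cap."""
    n = len(client_params)
    best_weights, best_acc = None, -1.0
    for w in itertools.product(np.arange(0.0, 1.0 + 1e-9, step), repeat=n):
        if abs(sum(w) - 1.0) > 1e-9:
            continue
        merged = sum(wi * p for wi, p in zip(w, client_params))  # weighted average
        y_pred = predict_fn(merged, X_val)                       # user-supplied
        acc = float((y_pred == y_val).mean())
        if demographic_parity_gap(y_pred, s_val) <= bias_threshold and acc > best_acc:
            best_weights, best_acc = w, acc
    return best_weights, best_acc
```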
4

Gallaher, Joshua P., Alexander J. Kamrud, and Brett J. Borghetti. "Detection and Mitigation of Inefficient Visual Searching". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 47–51. http://dx.doi.org/10.1177/1071181320641015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A commonly known cognitive bias is a confirmation bias: the overweighting of evidence supporting a hypothesis and underweighting evidence countering that hypothesis. Due to high-stress and fast-paced operations, military decisions can be affected by confirmation bias. One military decision task prone to confirmation bias is a visual search. During a visual search, the operator scans an environment to locate a specific target. If confirmation bias causes the operator to scan the wrong portion of the environment first, the search is inefficient. This study has two primary goals: 1) detect inefficient visual search using machine learning and Electroencephalography (EEG) signals, and 2) apply various mitigation techniques in an effort to improve the efficiency of searches. Early findings are presented showing how machine learning models can use EEG signals to detect when a person might be performing an inefficient visual search. Four mitigation techniques were evaluated: a nudge which indirectly slows search speed, a hint on how to search efficiently, an explanation for why the participant was receiving a nudge, and instructions to instruct the participant to search efficiently. These mitigation techniques are evaluated, revealing the most effective mitigations found to be the nudge and hint techniques.
5

Lee, Yu-Hao, Norah E. Dunbar, Claude H. Miller, Brianna L. Lane, Matthew L. Jensen, Elena Bessarabova, Judee K. Burgoon et al. "Training Anchoring and Representativeness Bias Mitigation Through a Digital Game". Simulation & Gaming 47, no. 6 (August 20, 2016): 751–79. http://dx.doi.org/10.1177/1046878116662955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Objective. Humans systematically make poor decisions because of cognitive biases. Can digital games train people to avoid cognitive biases? The goal of this study is to investigate the affordance of different educational media in training people about cognitive biases and to mitigate cognitive biases within their decision-making processes. Method. A between-subject experiment was conducted to compare a digital game, a traditional slideshow, and a combined condition in mitigating two types of cognitive biases: anchoring bias and representativeness bias. We measured both immediate effects and delayed effects after four weeks. Results. The digital game and slideshow conditions were effective in mitigating cognitive biases immediately after the training, but the effects decayed after four weeks. By providing the basic knowledge through the slideshow, then allowing learners to practice bias-mitigation techniques in the digital game, the combined condition was most effective at mitigating the cognitive biases both immediately and after four weeks.
6

Devasenapathy, K., and Arun Padmanabhan. "Uncovering Bias: Exploring Machine Learning Techniques for Detecting and Mitigating Bias in Data – A Literature Review". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 30, 2023): 776–81. http://dx.doi.org/10.17762/ijritcc.v11i9.8965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The presence of Bias in models developed using machine learning algorithms has emerged as a critical issue. This literature review explores the topic of uncovering the existence of bias in data and the application of techniques for detecting and mitigating Bias. The review provides a comprehensive analysis of the existing literature, focusing on pre-processing techniques, post-pre-processing techniques, and fairness constraints employed to uncover and address the existence of Bias in machine learning models. The effectiveness, limitations, and trade-offs of these techniques are examined, highlighting their impact on advocating fairness and equity in decision-making processes. The methodology consists of two key steps: data preparation and bias analysis, followed by machine learning model development and evaluation. In the data preparation phase, the dataset is analyzed for biases and pre-processed using techniques like reweighting or relabeling to reduce bias. In the model development phase, suitable algorithms are selected, and fairness metrics are defined and optimized during the training process. The models are then evaluated using performance and fairness measures and the best-performing model is chosen. The methodology ensures a systematic exploration of machine learning techniques to detect and mitigate bias, leading to more equitable decision-making. The review begins by examining the techniques of pre-processing, which involve cleaning the data, selecting the features, feature engineering, and sampling. These techniques play an important role in preparing the data to reduce bias and promote fairness in machine learning models. The analysis highlights various studies that have explored the effectiveness of these techniques in uncovering and mitigating bias in data, contributing to the development of more equitable and unbiased machine learning models. Next, the review delves into post-pre-processing techniques that focus on detecting and mitigating bias after the initial data preparation steps. These techniques include bias detection methods that assess the disparate impact or disparate treatment in model predictions, as well as bias mitigation techniques that modify model outputs to achieve fairness across different groups. The evaluation of these techniques, their performance metrics, and potential trade-offs between fairness and accuracy are discussed, providing insights into the challenges and advancements in bias mitigation. Lastly, the review examines fairness constraints, which involve the imposition of rules or guidelines on machine learning algorithms to ensure fairness in predictions or decision-making processes. The analysis explores different fairness constraints, such as demographic parity, equalized odds, and predictive parity, and their effectiveness in reducing bias and advocating fairness in machine learning models. Overall, this literature review provides a comprehensive understanding of the techniques employed to uncover and mitigate the existence of bias in machine learning models. By examining pre-processing techniques, post-pre-processing techniques, and fairness constraints, the review contributes to the development of more fair and unbiased machine learning models, fostering equity and ethical decision-making in various domains. 
By examining relevant studies, this review provides insights into the effectiveness and limitations of various pre-processing techniques for bias detection and mitigation via Pre-processing, Adversarial learning, Fairness Constraints, and Post-processing techniques.
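To make two of the fairness notions named in this abstract concrete, the sketch below computes demographic-parity and equalized-odds gaps, together with the kind of instance reweighting mentioned in the data-preparation step (in the spirit of Kamiran and Calders' reweighing). The binary-attribute assumption and the variable names are mine, not the authors'.

```python
# Illustrative group-fairness metrics and pre-processing reweighting.
# Assumes binary labels, binary predictions, and a binary sensitive attribute.
import numpy as np

def demographic_parity_gap(y_pred, s):
    """Difference in positive-prediction rates between the two groups."""
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

def equalized_odds_gap(y_true, y_pred, s):
    """Largest gap in true-positive or false-positive rates between groups."""
    def rate(group, label):
        mask = (s == group) & (y_true == label)
        return y_pred[mask].mean() if mask.any() else 0.0
    return max(abs(rate(1, 1) - rate(0, 1)), abs(rate(1, 0) - rate(0, 0)))

def reweighing_weights(y_true, s):
    """Instance weights = expected joint frequency / observed joint frequency."""
    w = np.ones(len(y_true), dtype=float)
    for g in (0, 1):
        for label in (0, 1):
            mask = (s == g) & (y_true == label)
            observed = mask.mean()
            expected = (s == g).mean() * (y_true == label).mean()
            if observed > 0:
                w[mask] = expected / observed
    return w
```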
7

Chu, Charlene, Simon Donato-Woodger, Shehroz Khan, Kathleen Leslie, Tianyu Shi, Rune Nyrup, and Amanda Grenier. "STRATEGIES TO MITIGATE MACHINE LEARNING BIAS AFFECTING OLDER ADULTS: RESULTS FROM A SCOPING REVIEW". Innovation in Aging 7, Supplement_1 (December 1, 2023): 717–18. http://dx.doi.org/10.1093/geroni/igad104.2325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract Digital ageism, defined as age-related bias in artificial intelligence (AI) and technological systems, has emerged as a significant concern for its potential impact on society, health, equity, and older people’s well-being. This scoping review aims to identify mitigation strategies used in research studies to address age-related bias in machine learning literature. We conducted a scoping review following Arksey & O’Malley’s methodology, and completed a comprehensive search strategy of five databases (Web of Science, CINAHL, EMBASE, IEEE Xplore, and ACM digital library). Articles were included if there was an AI application, age-related bias, and the use of a mitigation strategy. Efforts to mitigate digital ageism were sparse: our search generated 7595 articles, but only a limited number of them met the inclusion criteria. Upon screening, we identified only nine papers which attempted to mitigate digital ageism. Of these, eight involved computer vision models (facial, age prediction, brain age) while one predicted activity based on accelerometer and vital sign measurements. Three broad categories of approaches to mitigating bias in AI were identified: i) sample modification: creating a smaller, more balanced sample from the existing dataset; ii) data augmentation: modifying images to create more training data from the existing datasets without adding additional images; and iii) application of statistical or algorithmic techniques to reduce bias. Digital ageism is a newly-established topic of research, and can affect machine learning models through multiple pathways. Our results advance research on digital ageism by presenting the challenges and possibilities for mitigating digital ageism in machine learning models.
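The first of the three mitigation categories identified above, sample modification, is simple enough to sketch: the example below downsamples every age group to the size of the smallest group. The column name and the use of pandas are illustrative assumptions, not the reviewed studies' code.

```python
# Illustrative "sample modification": balance a dataset across age groups by
# downsampling each group to the size of the smallest one.
import pandas as pd

def balance_by_age_group(df: pd.DataFrame, group_col: str = "age_group",
                         seed: int = 0) -> pd.DataFrame:
    smallest = df[group_col].value_counts().min()
    return (df.groupby(group_col, group_keys=False)
              .apply(lambda g: g.sample(n=smallest, random_state=seed)))
```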
8

Featherston, Rebecca Jean, Aron Shlonsky, Courtney Lewis, My-Linh Luong, Laura E. Downie, Adam P. Vogel, Catherine Granger, Bridget Hamilton, and Karyn Galvin. "Interventions to Mitigate Bias in Social Work Decision-Making: A Systematic Review". Research on Social Work Practice 29, no. 7 (December 23, 2018): 741–52. http://dx.doi.org/10.1177/1049731518819160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Purpose: This systematic review synthesized evidence supporting interventions aimed at mitigating cognitive bias associated with the decision-making of social work professionals. Methods: A systematic search was conducted within 10 social services and health-care databases. Review authors independently screened studies in duplicate against prespecified inclusion criteria, and two review authors undertook data extraction and quality assessment. Results: Four relevant studies were identified. Because these studies were too heterogeneous to conduct meta-analyses, results are reported narratively. Three studies focused on diagnostic decisions within mental health and one considered family reunification decisions. Two strategies were reportedly effective in mitigating error: a nomogram tool and a specially designed online training course. One study assessing a consider-the-opposite approach reported no effect on decision outcomes. Conclusions: Cognitive bias can impact the accuracy of clinical reasoning. This review highlights the need for research into cognitive bias mitigation within the context of social work practice decision-making.
9

Erkmen, Cherie Parungo, Lauren Kane, and David T. Cooke. "Bias Mitigation in Cardiothoracic Recruitment". Annals of Thoracic Surgery 111, no. 1 (January 2021): 12–15. http://dx.doi.org/10.1016/j.athoracsur.2020.07.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Vejsbjerg, Inge, Elizabeth M. Daly, Rahul Nair, and Svetoslav Nizhnichenkov. "Interactive Human-Centric Bias Mitigation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23838–40. http://dx.doi.org/10.1609/aaai.v38i21.30582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Bias mitigation algorithms differ in their definition of bias and how they go about achieving that objective. Bias mitigation algorithms impact different cohorts differently and allowing end users and data scientists to understand the impact of these differences in order to make informed choices is a relatively unexplored domain. This demonstration presents an interactive bias mitigation pipeline that allows users to understand the cohorts impacted by their algorithm choice and provide feedback in order to provide a bias mitigated pipeline that most aligns with their goals.

Theses on the topic "Mitigation des biais":

1

Le Berre, Guillaume. "Vers la mitigation des biais en traitement neuronal des langues". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
It is well known that deep learning models are sensitive to biases that may be present in the data used for training. These biases, which can be defined as useless or detrimental information for the task in question, can be of different kinds: one can, for example, find biases in the writing styles used, but also much more problematic biases relating to the sex or ethnic origin of individuals. These biases can come from different sources, such as the annotators who created the databases, or from the annotation process itself. My thesis deals with the study of these biases and, in particular, is organized around the mitigation of the effects of biases on the training of Natural Language Processing (NLP) models. In particular, I have worked a lot with pre-trained models such as BERT, RoBERTa or UnifiedQA, which have become essential in recent years in all areas of NLP and which, despite their extensive pre-training, are very sensitive to these bias problems. My thesis is organized in three parts, each presenting a different way of managing the biases present in the data. The first part presents a method that uses the biases present in an automatic summarization dataset to increase the variability and controllability of the generated summaries. Then, in the second part, I am interested in the automatic generation of a training dataset for the multiple-choice question-answering task. The advantage of such a generation method is that it makes it possible not to call on annotators and therefore to eliminate the biases coming from them in the data. Finally, I am interested in training a multitasking model for optical text recognition. I show in this last part that it is possible to increase the performance of our models by using different types of data (handwritten and typed) during their training.
2

Gadala, M. "Automation bias : exploring causal mechanisms and potential mitigation strategies". Thesis, City, University of London, 2017. http://openaccess.city.ac.uk/17889/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Automated decision support tools are designed to aid users and improve their performance in certain tasks by providing advice in the form of prompts, alarms, assessments, or recommendations. However, recent evidence suggests that sometimes use of such tools introduces decision errors that are not made without the tool. We refer to this phenomenon as “automation bias” (AB), resulting in a broader definition of this term than used by many authors. Sometimes, such automation-induced errors can even result in overall performance (in terms of correct decisions) which is actually worse with the tool than without it. Our literature review reveals an emphasis on mediators affecting automation bias and some mitigation strategies aimed at reducing it. However, there is a lack of research on the cognitive causal explanations for automation bias and on adaptive mitigation strategies that result in tools that adapt to the needs and characteristics of individual users. This thesis aims to address some of these gaps in the literature and focuses on systems consisting of a human and an automated tool which does not replace, but instead supports the human towards making a decision, with the overall responsibility lying with the human user. The overall goal of this thesis is to help reduce the rate of automation bias through a better understanding of its causes and the proposal of innovative, adaptive mitigation strategies. To achieve this, we begin with an extensive literature review on automation bias including examples, mediators, explanations, and mitigations while identifying areas for further research. This review is followed by the presentation of three experiments aimed at reducing the rate of AB in different ways: (1) an experiment to explore causal mechanisms of automation bias, the effect of the mere presence of tool advice before its presentation and the effect of the sequence of tool advice in a glaucoma risk calculator environment, (2) simulations that apply concepts of diversity to human + human systems to improve system performance in a breast cancer double reading programme, and (3) an experiment to study the possibility of improving system performance by tailoring tool setting (sensitivity / specificity combination) for groups of similarly skilled users and cases of similar difficulty level using a spellchecking tool. Results from the glaucoma experiment provide evidence of the effect of the presence of tool advice on user decisions - even before its presentation, as well as evidence of a newly introduced cognitive mechanism (users’ strategic change in decision threshold) which may account for some automation bias errors previously observed but unexplained in the literature. Results from the double reading experiment provide evidence of the benefits of diversity in improving system performance. Finally, results from the spell checker experiment provide evidence that groups of similarly skilled users perform better at different tool settings, that the same group of users perform better using a different tool setting in difficult versus easy tasks, and that use of simple models of user behaviour may allow the prediction, among a subset of tool settings for a certain tool, the tool setting that would be most appropriate for each user ability group and class of case difficulty.
3

Fyrvald, Johanna. "Mitigating algorithmic bias in Artificial Intelligence systems". Thesis, Uppsala universitet, Matematiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Artificial Intelligence (AI) systems are increasingly used in society to make decisions that can have direct implications on human lives; credit risk assessments, employment decisions and criminal suspects predictions. As public attention has been drawn towards examples of discriminating and biased AI systems, concerns have been raised about the fairness of these systems. Face recognition systems, in particular, are often trained on non-diverse data sets where certain groups often are underrepresented in the data. The focus of this thesis is to provide insights regarding different aspects that are important to consider in order to mitigate algorithmic bias as well as to investigate the practical implications of bias in AI systems. To fulfil this objective, qualitative interviews with academics and practitioners with different roles in the field of AI and a quantitative online survey is conducted. A practical scenario covering face recognition and gender bias is also applied in order to understand how people reason about this issue in a practical context. The main conclusion of the study is that despite high levels of awareness and understanding about challenges and technical solutions, the academics and practitioners showed little or no awareness of legal aspects regarding bias in AI systems. The implication of this finding is that AI can be seen as a disruptive technology, where organizations tend to develop their own mitigation tools and frameworks as well as use their own moral judgement and understanding of the area instead of turning to legal authorities.
4

Salomon, Sophie. "Bias Mitigation Techniques and a Cost-Aware Framework for Boosted Ranking Algorithms". Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1586450345426827.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Frick, Eric Christopher. "Mitigation of magnetic interference and compensation of bias drift in inertial sensors". Thesis, University of Iowa, 2015. https://ir.uiowa.edu/etd/5472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Magnetic interference in the motion capture environment is caused primarily by ferromagnetic objects and current-carrying devices disturbing the ambient, geomagnetic field. Inertial sensors gather magnetic data to determine and stabilize their global heading estimates, and such magnetic field disturbances alter heading estimates. This decreases orientation accuracy and therefore decreases motion capture accuracy. The often used Kalman Filter approach deals with magnetic interference by ignoring the magnetic data during periods when interference is encountered, but this method is only effective when the disturbances are ephemeral, and it cannot retroactively repair data from disturbed time periods. The objective of this research is to develop a method of magnetic interference mitigation for environments where magnetic interference is the norm rather than the exception. To the knowledge of this author, the ability to use inertial and magnetic sensors to capture accurate, global, and drift-free orientation data in magnetically disturbed areas has yet to be developed. Furthermore, there are no methods known to this author that are able to use data from undisturbed time periods to retroactively repair data from disturbed time periods. The investigation begins by exploring the use of magnetic shielding, with the reasoning that applying shielding so as to impede disturbed fields from affecting the inertial sensors would increase orientation accuracy. It was concluded that while shielding can mitigate the effect of magnetic interference, its application requires tedious trial-and-error testing that is not guaranteed to improve results. Furthermore, shielding works by redirecting magnetic field lines, increasing field complexity, and thus has a high potential to exacerbate magnetic interference. Shielding was determined to be an impractical approach, and development of a magnetic interference mitigation algorithm began. The algorithm was constructed such that magnetic data would be filtered before inclusion in the orientation estimate, with the result that exposure to an undisturbed environment would improve estimation, but exposure to a disturbed environment would have no effect. The algorithm was designed for post-processing, rather than real-time use as Kalman Filters are, which enabled magnetic data gathered both before and after a time point to affect estimation. The algorithm was evaluated by comparing it with the Kalman Filter approach of the company XSENS, using the gold standard of optical motion capture as the reference point. Under the tested conditions of stationary periods and smooth planar motion, the developed algorithm was resistant to magnetic interference for the duration of testing, while the Kalman Filter began to degrade after approximately 15 seconds. In a 190-second test, of which 180 seconds were spent in a disturbed environment, the developed algorithm resulted in 0.4 degrees of absolute error, compared to the Kalman Filter's 78.8 degrees. The developed algorithm shows the potential for inertial systems to be used effectively in situations of consistent magnetic interference. As the benefits of inertial motion capture make it a more attractive option than optical motion capture, immunity to magnetic interference significantly expands the usable range of motion capture environments.
Such expansion would be beneficial for motion capture studies as a whole, allowing the cheaper, more practical inertial approach to motion capture to supplant the more expensive and time-consuming optical option.
6

Taylor, Stephen Luke. "Analyzing methods of mitigating initialization bias in transportation simulation models". Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
All computer simulation models require some form of initialization before their outputs can be considered meaningful. Simulation models are typically initialized in a particular, often "empty" state and therefore must be "warmed up" for an unknown amount of simulation time before reaching a "quasi-steady state" representative of the system's performance. The portion of the output series that is influenced by the arbitrary initialization is referred to as the initial transient and is a widely recognized problem in simulation analysis. Although several methods exist for removing the initial transient, no method performs well in all applications. This research evaluates the effectiveness of several techniques for reducing initialization bias in simulations using the commercial transportation simulation model VISSIM®. The three methods ultimately selected for evaluation are Welch's Method, the Marginal Standard Error Rule (MSER), and the Volume Balancing Method currently used by the CORSIM model. Three model instances - a single intersection, a corridor, and a large network - were created to analyze the length of the initial transient under high- and low-demand scenarios. After presenting the results of each initialization method, advantages and criticisms of each are discussed, as well as issues that arose during implementation. The estimates of the extent of the initial transient are compared across each method and across the varying model sizes and volume levels. Based on the results of this study, Welch's Method is recommended for its consistency and ease of implementation.
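Of the truncation rules compared in this thesis, the Marginal Standard Error Rule (MSER) is compact enough to sketch. The following is a generic textbook-style formulation, not the exact procedure or parameters used in the thesis: the warm-up cut-off is the index that minimizes the sum of squared deviations of the retained observations divided by the square of their count.

```python
# Generic sketch of the Marginal Standard Error Rule (MSER) for choosing the
# warm-up (initial transient) truncation point of a simulation output series.
import numpy as np

def mser_truncation_point(series, max_fraction=0.5):
    """Return the index d minimizing sum((x[d:] - mean(x[d:]))**2) / (n - d)**2."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    best_d, best_score = 0, np.inf
    # Restrict the search to the first part of the run so the retained sample
    # never becomes degenerately small.
    for d in range(int(n * max_fraction)):
        tail = x[d:]
        score = np.sum((tail - tail.mean()) ** 2) / (n - d) ** 2
        if score < best_score:
            best_d, best_score = d, score
    return best_d
```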
7

Sweeney, Christopher (Christopher J.), M. Eng., Massachusetts Institute of Technology. "Understanding and mitigating unintended demographic bias in machine learning systems". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 81-84).
Machine Learning is becoming more and more influential in our society. Algorithms that learn from data are streamlining tasks in domains like employment, banking, education, heath care, social media, etc. Unfortunately, machine learning models are very susceptible to unintended bias, resulting in unfair and discriminatory algorithms with the power to adversely impact society. This unintended bias is usually subtle, emanating from many different sources and taking on many forms. This thesis will focus on understanding how unfair biases with respect to various demographic groups show up in machine learning systems. Furthermore, we develop multiple techniques to mitigate unintended demographic bias at various stages of typical machine learning pipelines. Using Natural Language Processing as a framework, we show substantial improvements in fairness for standard machine learning systems, when using our bias mitigation techniques.
by Christopher Sweeney.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
8

Isumbingabo, Emma Francoise. "Evaluation and mitigation of the undesired effect of DC bias on inverter power transformer". Master's thesis, University of Cape Town, 2009. http://hdl.handle.net/11427/5202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Inverters have traditionally been used mostly in standalone (non-grid-connected) systems, Uninterruptible Power Supplies (UPS) and, more recently, in distributed generation systems (DGs). DG systems are based on grid-connected inverters and are increasingly being connected to utility grids to deliver power from renewable energy sources. Such sources are likely to have a significant impact in the future in meeting the electricity demands of industry and domestic consumption. Common DGs utilize DC power sources such as fuel cells, batteries, photovoltaic (solar) power, and wind power. Most domestic and industrial loads are AC power consumers and, for this reason, the DC power has to be converted to meet this requirement. There are two main causes of DC current in an inverter power transformer: 1) non-linearity and asymmetry in the inverter's switching mechanism, which may result in an undesired DC current at the transformer's input; this DC current distorts the transformer's magnetic flux and in some cases causes magnetic saturation, which in turn results in asymmetrical primary currents in the transformer (inverter side) because of the non-linear characteristics of the transformer magnetic flux; and 2) the connection of asymmetrical loads (e.g. an asymmetrical rectifier) to the inverter output, which produces the same effects. The result in both cases is an asymmetrical magnetic flux in the transformer, manifested as even and odd current harmonics as well as an increase in the reactive power requirement from the inverter. To remedy this situation, it is therefore necessary to incorporate into the inverter's control system a mechanism for cancelling the DC magnetomotive force (mmf) that causes the magnetic flux distortion. This thesis presents a method of introducing a DC voltage component into the inverter's voltage output so as to inject the DC current needed on the primary side of the inverter's transformer to cancel the total DC mmf that the transformer is subjected to (supply and load side). The project consists of three main parts: modeling, simulation, and laboratory experiment. Activities undertaken under modeling and simulation were: determining the effects of DC current on a power transformer; investigating the likely occurrence of saturation of the power transformer incorporated in inverter systems; and mitigating the effects that can be caused by the presence of a DC component in the windings of a power transformer. After reviewing the literature on the subject, MATLAB SIMULINK and MATLAB m-files were used to simulate the behavior of the power transformer under three situations: the transformer under linear load; the transformer subjected to asymmetrical loading; and the inverter system with a power transformer on its output. These models were designed in MATLAB and used to simulate each case. To validate the theory and simulation results, experimental work was carried out as follows: investigation of the effects that DC current injection can have on a 6 kVA power transformer; investigation of the performance of a 6 kVA power transformer under linear loading; investigation of the performance of a 6 kVA power transformer under non-linear loads; investigation of the likely occurrence of DC offset in an inverter system; mitigation of the effect of DC bias on the power transformer using extra windings; and mitigation of the effects of DC offset in the power inverter transformer by using the second-harmonic content of the primary current as a feedback signal. The results obtained showed a successful implementation of the proposed method; however, limitations in the controller's performance were encountered and will require future work. It was concluded that total removal of the undesired effects of DC bias is achievable, and that total removal of the DC offset in the power inverter transformer is possible if the limitations of the controller are overcome.
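The closing point above, using the second-harmonic content of the primary current as a feedback signal for removing the DC offset, can be illustrated with a simple perturb-and-observe loop. This is only a sketch under my own assumptions (single-bin DFT harmonic estimate, fixed step size), not the controller designed in the thesis.

```python
# Illustrative perturb-and-observe loop: adjust the injected DC voltage offset so
# that the 2nd-harmonic magnitude of the transformer primary current (a symptom
# of DC-bias saturation) decreases. Assumed interfaces, not the thesis design.
import numpy as np

def harmonic_magnitude(samples, fs, f0, k=2):
    """Magnitude of the k-th harmonic of fundamental f0 from one analysis window."""
    n = len(samples)
    t = np.arange(n) / fs
    return 2.0 / n * abs(np.sum(samples * np.exp(-2j * np.pi * k * f0 * t)))

def next_dc_offset(dc_offset, step, h2_now, h2_prev):
    """Keep stepping the offset in the same direction while the 2nd harmonic
    falls; reverse the direction when it rises."""
    if h2_now > h2_prev:
        step = -step
    return dc_offset + step, step
```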
9

Ashton, Christie. "A critical review of approaches to mitigating bias in fingerprint identification". Masters by Coursework thesis, Murdoch University, 2018. https://researchrepository.murdoch.edu.au/id/eprint/41502/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Fingerprint identification is a discipline within forensic science which assists in criminal investigations. The process of fingerprint identification involves the comparison of crime scene evidence with known exemplars. This form of examination is heavily reliant on human examiners and their conclusions as to whether there is an identification, an exclusion, or insufficient information to identify. This form of forensic identification has become a focus of attention due to concern about the effects of cognitive bias on examiners' conclusions. These concerns have prompted research into approaches to mitigate bias throughout forensic fingerprint protocols. Research into the common sources of bias during a fingerprint examination was conducted to gain an understanding of how bias may potentially be reduced. Throughout this dissertation the psychological and forensic approaches to bias were reviewed, and the international and Australian approaches to bias mitigation were discussed. This found that there is evidence of a widespread issue regarding human cognitive bias in fingerprint examiners; however, there were no uniform mitigation strategies in place. Limitations of recommended approaches and currently implemented strategies were reviewed, identifying that there is still a need for further research into the theoretical approaches to overcoming bias. This led to the formulation of a proposed study that aims to identify the theoretical approaches suggested by the literature and critically review the effectiveness of these methods in controlling and reducing bias. The potential outcome of the suggested study is a useful document that will provide the practical field of forensic science with a comprehensive and critical review of approaches to assist in the development of standardised protocols.
10

Lowery, Meghan Rachelle. "MITIGATING SEX BIAS IN COMPENSATION DECISIONS: THE ROLE OF COMPARATIVE DATA". OpenSIUC, 2010. https://opensiuc.lib.siu.edu/dissertations/231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Gender differences in salaries are prominent in most fields. Several laws exist to decrease the amount of pay discrimination and provide remedies for discriminatory organizational behaviors, yet these laws have proven insufficient to eradicate pay inequities. One source for such discrimination in pay stems from the evaluation of employee performance. Performance appraisal systems can be biased in very small ways that yield larger negative effects on later employment-related decisions, such as compensation. The goal of this study was to examine decision-making processes and conclusions raters make during the evaluation of employees. It was expected that the type of presentation and the content of the ratings of performance sub-dimensions would affect gender differences in composite ratings, salary increases, and merit bonuses. Specifically, women were expected to be rated lower when employee performance information was presented sequentially, where it would be harder to directly compare one employee with another and thus not mitigate sex bias. Comparatively, when employee performance information was presented in aggregate form, where comparisons among employees would be easier, no sex bias was expected. It was also hypothesized that in the sequential condition, participants would provide casuistry-based reasoning for their decisions such that explanations for men's better performance would be justified by their performance on the sub-dimension on which the male candidate was rated highly. No effect was found for target gender on any of the outcomes. There was a significant difference for participant gender in the amount of salary increases and merit bonuses assigned. Male participants assigned significantly higher raises and bonuses than female participants to employees. There was also a strong main effect for task-related skills across all outcomes. Employees who were higher in the task dimension were rated higher, awarded higher pay, and given larger bonuses. There were no gender differences in the task ratings. Qualitative data analyses support these conclusions. Although participants provided lengthy reasons for their decisions, none showed or explained a gender difference. Limitations and recommendations for future studies are discussed.

Books on the topic "Mitigation des biais":

1

Whitesmith, Martha. Cognitive Bias in Intelligence Analysis. Edinburgh University Press, 2020. http://dx.doi.org/10.3366/edinburgh/9781474466349.001.0001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Belief, Bias and Intelligence outlines an approach for reducing the risk of cognitive biases impacting intelligence analysis that draws from experimental research in the social sciences. It critiques the reliance of Western intelligence agencies on a method for intelligence analysis developed by the CIA in the 1990s, the Analysis of Competing Hypotheses (ACH). The book shows that the theoretical basis of the ACH method is significantly flawed, and that there is no empirical basis for the use of ACH in mitigating cognitive biases. It puts ACH to the test in an experimental setting against two key cognitive biases, with unique empirical research facilitated by the UK's Professional Heads of Intelligence Analysis unit at the Cabinet Office, includes a meta-analysis of which analytical factors increase and reduce the risk of cognitive bias, and recommends an alternative approach to risk mitigation for intelligence communities. Finally, it proposes alternative models for explaining the underlying causes of cognitive biases, challenging current leading theories in the social sciences.
2

Kenski, Kate. Overcoming Confirmation and Blind Spot Biases When Communicating Science. Edited by Kathleen Hall Jamieson, Dan M. Kahan, and Dietram A. Scheufele. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780190497620.013.40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This chapter focuses on two biases that lead people away from evaluating evidence and scientific studies impartially—confirmation bias and bias blind spot. The chapter first discusses different ways in which people process information and reviews the costs and benefits of utilizing cognitive shortcuts in decision making. Next, two common cognitive biases, confirmation bias and bias blind spot, are explained. Then the literature on “debiasing” is explored. Finally, the implications of confirmation bias and bias blind spot in the context of communicating about science are examined, and an agenda for future research on understanding and mitigating these biases is offered.
3

Mink, John. Forecasting with Out-Liars: Mitigating Blame, Bias, and Apathy in Your Planning Process to Drive Meaningful and Sustainable Financial Improvements. Mindstir Media, 2021.

Search for the full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sandis, Elizabeth. Early Modern Drama at the Universities. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192857132.001.0001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This is the first history of Oxford and Cambridge drama in the Tudor and Stuart period. It guides the reader through the theatrical experiences of students at university in early modern England, following them on the journey from schoolboys to scholars to graduates in the workplace. Early Modern Drama at the Universities is structured to make the subject as accessible as possible, mitigating the difficulties of this sizeable and complex body of evidence. The hundreds of plays we have inherited from Oxford and Cambridge are steeped in Classical culture, and the academic establishment’s bias against print culture means that most evidence remains in manuscript form. Opening up these plays to a wider readership, this study carves three main roads into the corpus, introducing key institutions, intertexts, and individuals. For the first time we can see the extent to which institutional culture made the drama what it is: pedagogically-inspired, homosocial, and self-reflexive. Early Modern Drama at the Universities argues that it was primarily on a college level that students lived, worked, and proved themselves to the community, and that if we are to understand university drama as a whole, we must create it from the building blocks of individual college histories.

Book chapters on the topic "Mitigation des biais":

1

Formanek, Kay. "Surfacing and Mitigating Bias". In Beyond D&I, 137–62. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75336-8_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Croskerry, Pat. "Cognitive Bias Mitigation: Becoming Better Diagnosticians". In Diagnosis, 257–87. Boca Raton: CRC Press, 2017. http://dx.doi.org/10.1201/9781315116334-15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Guanchu, Mengnan Du, Ninghao Liu, Na Zou, and Xia Hu. "Mitigating Algorithmic Bias with Limited Annotations". In Machine Learning and Knowledge Discovery in Databases: Research Track, 241–58. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43415-0_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tarallo, Mark. "Dancing with Myself: Self-Management and Bias Mitigation". In Modern Management and Leadership, 27–34. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003095620-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wu, Ye, Yuanjing Feng, Dinggang Shen, and Pew-Thian Yap. "Penalized Geodesic Tractography for Mitigating Gyral Bias". In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 12–19. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00931-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Yinxiao. "Mitigating Position Bias in Hotels Recommender Systems". In Communications in Computer and Information Science, 74–84. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37249-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sharma, Ashish, Rajeshwar Mehrotra, and Fiona Johnson. "A New Framework for Modeling Future Hydrologic Extremes: Nested Bias Correction as a Precursor to Stochastic Rainfall Downscaling". In Climate Change Modeling, Mitigation, and Adaptation, 357–86. Reston, VA: American Society of Civil Engineers, 2013. http://dx.doi.org/10.1061/9780784412718.ch13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Corliss, David J. "Designing Against Bias: Identifying and Mitigating Bias in Machine Learning and AI". In Lecture Notes in Networks and Systems, 411–18. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-47715-7_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shi, Sheng, Shanshan Wei, Zhongchao Shi, Yangzhou Du, Wei Fan, Jianping Fan, Yolanda Conyers, and Feiyu Xu. "Algorithm Bias Detection and Mitigation in Lenovo Face Recognition Engine". In Natural Language Processing and Chinese Computing, 442–53. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60457-8_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Xing, Guoqiang Zhao, Feng Zhang, and Yongan Yang. "Characterization and Mitigation of BeiDou Triple-Frequency Code Multipath Bias". In Lecture Notes in Electrical Engineering, 467–80. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0014-1_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Mitigation des biais":

1

Calegari, Roberta, Gabriel G. Castañé, Michela Milano, and Barry O'Sullivan. "Assessing and Enforcing Fairness in the AI Lifecycle". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/735.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A significant challenge in detecting and mitigating bias is creating a mindset amongst AI developers to address unfairness. The current literature on fairness is broad, and the learning curve to distinguish where to use existing metrics and techniques for bias detection or mitigation is difficult. This survey systematises the state-of-the-art about distinct notions of fairness and relative techniques for bias mitigation according to the AI lifecycle. Gaps and challenges identified during the development of this work are also discussed.
2

Cheong, Jiaee, Selim Kuzucu, Sinan Kalkan, and Hatice Gunes. "Towards Gender Fairness for Mental Health Prediction". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Mental health is becoming an increasingly prominent health challenge. Despite a plethora of studies analysing and mitigating bias for a variety of tasks such as face recognition and credit scoring, research on machine learning (ML) fairness for mental health has been sparse to date. In this work, we focus on gender bias in mental health and make the following contributions. First, we examine whether bias exists in existing mental health datasets and algorithms. Our experiments were conducted using Depresjon, Psykose and D-Vlog. We identify that both data and algorithmic bias exist. Second, we analyse strategies that can be deployed at the pre-processing, in-processing and post-processing stages to mitigate for bias and evaluate their effectiveness. Third, we investigate factors that impact the efficacy of existing bias mitigation strategies and outline recommendations to achieve greater gender fairness for mental health. Upon obtaining counter-intuitive results on D-Vlog dataset, we undertake further experiments and analyses, and provide practical suggestions to avoid hampering bias mitigation efforts in ML for mental health.
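Of the pre-, in- and post-processing stages analysed in this paper, post-processing is the easiest to show in a few lines: the sketch below chooses a separate decision threshold per group so that selection rates roughly match a common target. The names, the quantile rule, and the equal-selection-rate target are my own illustrative assumptions, not the authors' procedure.

```python
# Illustrative post-processing mitigation: per-group decision thresholds chosen so
# that each group's positive-prediction rate approximates a common target rate.
import numpy as np

def group_thresholds(scores, groups, target_rate=0.5):
    """Return {group: threshold} so roughly target_rate of each group is selected."""
    thresholds = {}
    for g in np.unique(groups):
        gs = scores[groups == g]
        # The (1 - target_rate) quantile selects about target_rate of the group.
        thresholds[g] = np.quantile(gs, 1.0 - target_rate)
    return thresholds

def apply_thresholds(scores, groups, thresholds):
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])
```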
3

Grari, Vincent, Sylvain Lamprier, and Marcin Detyniecki. "Fairness without the Sensitive Attribute via Causal Variational Autoencoder". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/98.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In recent years, most fairness strategies in machine learning have focused on mitigating unwanted biases by assuming that the sensitive information is available. However, in practice this is not always the case: due to privacy purposes and regulations such as RGPD in EU, many personal sensitive attributes are frequently not collected. Yet, only a few prior works address the issue of mitigating bias in such a difficult setting, in particular to meet classical fairness objectives such as Demographic Parity and Equalized Odds. By leveraging recent developments for approximate inference, we propose in this paper an approach to fill this gap. To infer a sensitive information proxy, we introduce a new variational auto-encoding-based framework named SRCVAE that relies on knowledge of the underlying causal graph. The bias mitigation is then done in an adversarial fairness approach. Our proposed method empirically achieves significant improvements over existing works in the field. We observe that the generated proxy’s latent space correctly recovers sensitive information and that our approach achieves a higher accuracy while obtaining the same level of fairness on two real datasets.
4

Park, Souneil, Seungwoo Kang, Sangjeong Lee, Sangyoung Chung, and Junehwa Song. "Mitigating media bias". In the hypertext 2008 workshop. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1379157.1379169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Qraitem, Maan, Kate Saenko e Bryan A. Plummer. "Bias Mimicking: A Simple Sampling Approach for Bias Mitigation". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01945.

6

Jiang, Jian, Viswonathan Manoranjan, Hanan Salam, and Oya Celiktutan. "Generalised Bias Mitigation for Personality Computing". In MM '23: The 31st ACM International Conference on Multimedia. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3607865.3616175.

7

Jeon, Eojin, Mingyu Lee, Juhyeong Park, Yeachan Kim, Wing-Lam Mok, and SangKeun Lee. "Improving Bias Mitigation through Bias Experts in Natural Language Understanding". In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.emnlp-main.681.

8

Akl, Naeem, and Ahmed Tewfik. "Optimal information sequencing for cognitive bias mitigation". In 2014 6th International Symposium on Communications, Control and Signal Processing (ISCCSP). IEEE, 2014. http://dx.doi.org/10.1109/isccsp.2014.6877806.

9

Heuss, Maria, Daniel Cohen, Masoud Mansoury, Maarten de Rijke, and Carsten Eickhoff. "Predictive Uncertainty-based Bias Mitigation in Ranking". In CIKM '23: The 32nd ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583780.3615011.

10

Dervişoğlu, Havvanur, and Mehmet Fatih Amasyali. "Racial Bias Mitigation with Federated Learning Approach". In 2023 8th International Conference on Computer Science and Engineering (UBMK). IEEE, 2023. http://dx.doi.org/10.1109/ubmk59864.2023.10286618.


Reports by organizations on the topic "Mitigation des biais":

1

Serakos, Demetrios, John E. Gray, and Hazim Youssef. Topics in Mitigating Radar Bias. Fort Belvoir, VA: Defense Technical Information Center, January 2012. http://dx.doi.org/10.21236/ada604137.

2

Dolabella, Marcelo, and Mauricio Mesquita Moreira. Fighting Global Warming: Is Trade Policy in Latin America and the Caribbean a Help or a Hindrance? Inter-American Development Bank, August 2022. http://dx.doi.org/10.18235/0004426.

Abstract:
The dire prospects of global warming have been increasing the pressure on policymakers to use trade policy as a mitigation tool, challenging trade economists’ canonical “targeting principle.” Even though the justifications for this stance remain as valid as ever, it no longer seems feasible in a world that is already engaging actively in using trade policy for climate purposes. However, the search for second-best solutions remains warranted. In this paper, we focus on the climate benefits of tariff reform for a broad sample of Latin American and Caribbean countries, drawing on Shapiro’s (2021) insights about the environmental bias of trade policy. Using a partial equilibrium approach and GTAP 10-MRIO data for 2014, we show that even though there is evidence of a negative bias toward “dirty goods” in half of the countries studied, translating this into actionable tariff reforms is plagued by interpretation and implementation difficulties, as well as by jurisdictional and efficiency trade-offs. There are also questions about their efficacy in curbing greenhouse gas emissions.
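As a purely illustrative reading of the "environmental bias" idea referenced above (a toy calculation with made-up goods, tariffs, and emission intensities, not Shapiro's estimator or the paper's GTAP-based computation): a tariff schedule leans toward dirty goods when the import-weighted tariff on emission-intensive goods is lower than the one on cleaner goods, which works like an implicit subsidy to embodied emissions.

```python
# Toy example of a tariff schedule's "environmental bias": hypothetical goods,
# tariffs, import values, and emission intensities (tCO2 per $ of imports).
goods = {
    #  name           tariff  imports($M)  emission_intensity
    "steel":         (0.02,   500,         1.8),
    "cement":        (0.01,   200,         2.5),
    "electronics":   (0.08,   700,         0.3),
    "apparel":       (0.10,   400,         0.4),
}

def weighted_tariff(selected):
    """Import-weighted average tariff over a list of (tariff, imports, intensity) tuples."""
    total = sum(imp for _, imp, _ in selected)
    return sum(t * imp for t, imp, _ in selected) / total

dirty = [v for v in goods.values() if v[2] >= 1.0]   # emission-intensive goods
clean = [v for v in goods.values() if v[2] < 1.0]

bias = weighted_tariff(dirty) - weighted_tariff(clean)
print(f"import-weighted tariff, dirty goods: {weighted_tariff(dirty):.3f}")
print(f"import-weighted tariff, clean goods: {weighted_tariff(clean):.3f}")
print(f"environmental bias of the schedule:  {bias:+.3f}")   # negative -> schedule favors dirty goods
```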
3

Tipton, Kelley, Brian F. Leas, Emilia Flores, Christopher Jepson, Jaya Aysola, Jordana Cohen, Michael Harhay et al. Impact of Healthcare Algorithms on Racial and Ethnic Disparities in Health and Healthcare. Agency for Healthcare Research and Quality (AHRQ), December 2023. http://dx.doi.org/10.23970/ahrqepccer268.

Abstract:
Objectives. To examine the evidence on whether and how healthcare algorithms (including algorithm-informed decision tools) exacerbate, perpetuate, or reduce racial and ethnic disparities in access to healthcare, quality of care, and health outcomes, and examine strategies that mitigate racial and ethnic bias in the development and use of algorithms.

Data sources. We searched published and grey literature for relevant studies published between January 2011 and February 2023. Based on expert guidance, we determined that earlier articles are unlikely to reflect current algorithms. We also hand-searched reference lists of relevant studies and reviewed suggestions from experts and stakeholders.

Review methods. Searches identified 11,500 unique records. Using predefined criteria and dual review, we screened and selected studies to assess one or both Key Questions (KQs): (1) the effect of algorithms on racial and ethnic disparities in health and healthcare outcomes and (2) the effect of strategies or approaches to mitigate racial and ethnic bias in the development, validation, dissemination, and implementation of algorithms. Outcomes of interest included access to healthcare, quality of care, and health outcomes. We assessed studies’ methodologic risk of bias (ROB) using the ROBINS-I tool and piloted an appraisal supplement to assess racial and ethnic equity-related ROB. We completed a narrative synthesis and cataloged study characteristics and outcome data. We also examined four Contextual Questions (CQs) designed to explore the context and capture insights on practical aspects of potential algorithmic bias. CQ 1 examines the problem’s scope within healthcare. CQ 2 describes recently emerging standards and guidance on how racial and ethnic bias can be prevented or mitigated during algorithm development and deployment. CQ 3 explores stakeholder awareness and perspectives about the interaction of algorithms and racial and ethnic disparities in health and healthcare. We addressed these CQs through supplemental literature reviews and conversations with experts and key stakeholders. For CQ 4, we conducted an in-depth analysis of a sample of six algorithms that have not been widely evaluated before in the published literature to better understand how their design and implementation might contribute to disparities.

Results. Fifty-eight studies met inclusion criteria, of which three were included for both KQs. One study was a randomized controlled trial, and all others used cohort, pre-post, or modeling approaches. The studies included numerous types of clinical assessments: need for intensive care or high-risk care management; measurement of kidney or lung function; suitability for kidney or lung transplant; risk of cardiovascular disease, stroke, lung cancer, prostate cancer, postpartum depression, or opioid misuse; and warfarin dosing. We found evidence suggesting that algorithms may: (a) reduce disparities (i.e., revised Kidney Allocation System, prostate cancer screening tools); (b) perpetuate or exacerbate disparities (e.g., estimated glomerular filtration rate [eGFR] for kidney function measurement, cardiovascular disease risk assessments); and/or (c) have no effect on racial or ethnic disparities. Algorithms for which mitigation strategies were identified are included in KQ 2. We identified six types of strategies often used to mitigate the potential of algorithms to contribute to disparities: removing an input variable; replacing a variable; adding one or more variables; changing or diversifying the racial and ethnic composition of the patient population used to train or validate a model; creating separate algorithms or thresholds for different populations; and modifying the statistical or analytic techniques used by an algorithm. Most mitigation efforts improved proximal outcomes (e.g., algorithmic calibration) for targeted populations, but it is more challenging to infer or extrapolate effects on longer term outcomes, such as racial and ethnic disparities. The scope of racial and ethnic bias related to algorithms and their application is difficult to quantify, but it clearly extends across the spectrum of medicine. Regulatory, professional, and corporate stakeholders are undertaking numerous efforts to develop standards for algorithms, often emphasizing the need for transparency, accountability, and representativeness.

Conclusions. Algorithms have been shown to potentially perpetuate, exacerbate, and sometimes reduce racial and ethnic disparities. Disparities were reduced when race and ethnicity were incorporated into an algorithm to intentionally tackle known racial and ethnic disparities in resource allocation (e.g., kidney transplant allocation) or disparities in care (e.g., prostate cancer screening that historically led to Black men receiving more low-yield biopsies). It is important to note that in such cases the rationale for using race and ethnicity was clearly delineated and did not conflate race and ethnicity with ancestry and/or genetic predisposition. However, when algorithms include race and ethnicity without clear rationale, they may perpetuate the incorrect notion that race is a biologic construct and contribute to disparities. Finally, some algorithms may reduce or perpetuate disparities without containing race and ethnicity as an input. Several modeling studies showed that applying algorithms out of context of original development (e.g., illness severity scores used for crisis standards of care) could perpetuate or exacerbate disparities. On the other hand, algorithms may also reduce disparities by standardizing care and reducing opportunities for implicit bias (e.g., Lung Allocation Score for lung transplantation). Several mitigation strategies have been shown to potentially reduce the contribution of algorithms to racial and ethnic disparities. Results of mitigation efforts are highly context specific, relating to unique combinations of algorithm, clinical condition, population, setting, and outcomes. Important future steps include increasing transparency in algorithm development and implementation, increasing diversity of research and leadership teams, engaging diverse patient and community groups in the development to implementation lifecycle, promoting stakeholder awareness (including patients) of potential algorithmic risk, and investing in further research to assess the real-world effect of algorithms on racial and ethnic disparities before widespread implementation.
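The first two strategy types listed above (removing or replacing an input variable) can be made concrete with a small sketch. The snippet below fits a hypothetical risk model with and without a race indicator and compares group-level predicted risk against observed rates; the synthetic data, variable names, and use of scikit-learn are illustrative assumptions, not any algorithm reviewed in the report.

```python
# Illustrative sketch: effect of removing a race input from a hypothetical risk model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

race = rng.integers(0, 2, size=n)                     # hypothetical binary indicator
clinical = rng.normal(size=(n, 3))                    # hypothetical clinical measurements
# In this toy setup the outcome depends only on clinical variables, not on race itself.
outcome = (clinical[:, 0] + 0.5 * clinical[:, 1] + rng.normal(size=n) > 0.5).astype(int)

X_with_race = np.column_stack([clinical, race])
X_without_race = clinical

def group_mean_risk(model, X, race):
    """Average predicted risk per group, to compare against observed outcome rates."""
    p = model.predict_proba(X)[:, 1]
    return p[race == 0].mean(), p[race == 1].mean()

m_with = LogisticRegression(max_iter=1000).fit(X_with_race, outcome)
m_without = LogisticRegression(max_iter=1000).fit(X_without_race, outcome)

print("observed outcome rate by group:          ",
      outcome[race == 0].mean(), outcome[race == 1].mean())
print("mean predicted risk, model WITH race:    ", group_mean_risk(m_with, X_with_race, race))
print("mean predicted risk, model WITHOUT race: ", group_mean_risk(m_without, X_without_race, race))
```

Whether removing the variable helps or harms depends entirely on whether it encoded a legitimate clinical signal or a proxy for unequal treatment, which is why the report stresses that results are context specific.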
4

Panek, Krol, and Huth. PR-312-12208-R03 USEPA AERMOD Plume Rise and Volume Formulations and Implications for Existing RICE. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 2016. http://dx.doi.org/10.55274/r0010858.

Abstract:
AERMOD is the EPA-recommended dispersion modeling tool for evaluating impacts from typical compressor station engine sources. This is a companion document to two previous PRCI reports that addressed AERMOD Fortran compiler issues and a subsequent report that examined AERMOD Plume Volume Molar Ratio Method (PVMRM) issues that lead to conservative model over-predictions. This report further explores AERMOD plume rise and volume estimates as a possible cause or contributor of model over-prediction and resulting plume chemistry concerns. AERMOD over-prediction bias has significant negative implications for permitting new sources, permit renewal for existing sources, and NAAQS compliance analyses, where modeled impacts are compared to the NO2 NAAQS at or beyond the facility fenceline. AERMOD conservatism also impacts state agency State Implementation Plans and resulting control strategies. Permitting requirements associated with the new 1-hour standard could impose unnecessary controls, overly stringent controls, and a significant compliance burden. Where mitigation may be warranted, costs will escalate due to "over-control" in response to model conservatism and deficiencies in model performance.
5

Carter, Sara, Jane Griffin, Samantha Lako, Cheryl Harewood, Lisa Kessler, and Elizabeth Parish. The impacts of COVID-19 on schools’ willingness to participate in research. RTI Press, January 2024. http://dx.doi.org/10.3768/rtipress.2024.rb.0036.2401.

Abstract:
COVID-19 had significant impacts on the field of education and, in turn, on school-based research. During this unprecedented time, nearly all schools closed, disrupting learning as schools shifted to a virtual format. Addressing the lasting effects of school closures is a major challenge in the post-pandemic education climate. Educators indicate these challenges have limited their willingness or ability to participate in research. We analyzed over 700 reasons for refusal in four recent education studies to examine the effects of COVID-19 on school-based research. About 4% of education leaders cited COVID-19 as the primary factor impacting their unwillingness to participate, while related factors such as learning loss, instructional time, or teacher shortages were cited approximately 16% of the time. Over 40% of schools declined because of required testing and surveys. Given the voluntary nature of participation, the remaining schools declined for various reasons not necessarily related to COVID-19. Insufficient participation can be detrimental to research by impacting the quantity and quality of data collected and possibly introducing bias into the data, thus skewing findings. In the post-pandemic era, school-based researchers must be mindful of the challenges schools face and develop mitigation strategies to contend with the reluctance to participate in external research.
6

Bray, Jonathan, Ross Boulanger, Misko Cubrinovski, Kohji Tokimatsu, Steven Kramer, Thomas O'Rourke, Ellen Rathje, Russell Green, Peter Robertson, and Christine Beyzaei. U.S.–New Zealand–Japan International Workshop, Liquefaction-Induced Ground Movement Effects, University of California, Berkeley, California, 2–4 November 2016. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, March 2017. http://dx.doi.org/10.55461/gzzx9906.

Abstract:
There is much to learn from the recent New Zealand and Japan earthquakes. These earthquakes produced differing levels of liquefaction-induced ground movements that damaged buildings, bridges, and buried utilities. Along with the often spectacular observations of infrastructure damage, there were many cases where well-built facilities located in areas of liquefaction-induced ground failure were not damaged. Researchers are working on characterizing and learning from these observations of both poor and good performance. The “Liquefaction-Induced Ground Movements Effects” workshop provided an opportunity to take advantage of recent research investments following these earthquake events to develop a path forward for an integrated understanding of how infrastructure performs with various levels of liquefaction. Fifty-five researchers in the field, two-thirds from the U.S. and one-third from New Zealand and Japan, convened in Berkeley, California, in November 2016. The objective of the workshop was to identify research thrusts offering the greatest potential for advancing our capabilities for understanding, evaluating, and mitigating the effects of liquefaction-induced ground movements on structures and lifelines. The workshop also advanced the development of younger researchers by identifying promising research opportunities and approaches, and promoting future collaborations among participants. During the workshop, participants identified five cross-cutting research priorities that need to be addressed to advance our scientific understanding of and engineering procedures for soil liquefaction effects during earthquakes. Accordingly, this report was organized to address five research themes: (1) case history data; (2) integrated site characterization; (3) numerical analysis; (4) challenging soils; and (5) effects and mitigation of liquefaction in the built environment and communities. These research themes provide an integrated approach toward transformative advances in addressing liquefaction hazards worldwide. The archival documentation of liquefaction case history datasets in electronic data repositories for use by the broader research community is critical to accelerating advances in liquefaction research. Many of the available liquefaction case history datasets are not fully documented, published, or shared. Developing and sharing well-documented liquefaction datasets reflect significant research efforts. Therefore, datasets should be published with a permanent DOI, with appropriate citation language for proper acknowledgment in publications that use the data. Integrated site characterization procedures that incorporate qualitative geologic information about the soil deposits at a site and the quantitative information from in situ and laboratory engineering tests of these soils are essential for quantifying and minimizing the uncertainties associated site characterization. Such information is vitally important to help identify potential failure modes and guide in situ testing. At the site scale, one potential way to do this is to use proxies for depositional environments. At the fabric and microstructure scale, the use of multiple in situ tests that induce different levels of strain should be used to characterize soil properties. The development of new in situ testing tools and methods that are more sensitive to soil fabric and microstructure should be continued. 
The development of robust, validated analytical procedures for evaluating the effects of liquefaction on civil infrastructure persists as a critical research topic. Robust validated analytical procedures would translate into more reliable evaluations of critical civil infrastructure performance, support the development of mechanics-based, practice-oriented engineering models, help eliminate suspected biases in our current engineering practices, and facilitate greater integration with structural, hydraulic, and wind engineering analysis capabilities for addressing multi-hazard problems. Effective collaboration across countries and disciplines is essential for developing analytical procedures that are robust across the full spectrum of geologic, infrastructure, and natural hazard loading conditions encountered in practice.

There are soils that are challenging to characterize, to model, and to evaluate, because their responses differ significantly from those of clean sands: they cannot be sampled and tested effectively using existing procedures, their properties cannot be estimated confidently using existing in situ testing methods, or constitutive models to describe their responses have not yet been developed or validated. Challenging soils include but are not limited to: interbedded soil deposits, intermediate (silty) soils, mine tailings, gravelly soils, crushable soils, aged soils, and cemented soils. New field and laboratory test procedures are required to characterize the responses of these materials to earthquake loadings, physical experiments are required to explore mechanisms, and new soil constitutive models tailored to describe the behavior of such soils are required. Well-documented case histories involving challenging soils where both the poor and good performance of engineered systems are documented are also of high priority.

Characterizing and mitigating the effects of liquefaction on the built environment requires understanding its components and interactions as a system, including residential housing, commercial and industrial buildings, public buildings and facilities, and spatially distributed infrastructure, such as electric power, gas and liquid fuel, telecommunication, transportation, water supply, wastewater conveyance/treatment, and flood protection systems. Research to improve the characterization and mitigation of liquefaction effects on the built environment is essential for achieving resiliency. For example, the complex mechanisms of ground deformation caused by liquefaction and building response need to be clarified and the potential bias and dispersion in practice-oriented procedures for quantifying building response to liquefaction need to be quantified. Component-focused and system-performance research on lifeline response to liquefaction is required. Research on component behavior can be advanced by numerical simulations in combination with centrifuge and large-scale soil–structure interaction testing. System response requires advanced network analysis that accounts for the propagation of uncertainty in assessing the effects of liquefaction on large, geographically distributed systems. Lastly, research on liquefaction mitigation strategies, including aspects of ground improvement, structural modification, system health monitoring, and rapid recovery planning, is needed to identify the most effective, cost-efficient, and sustainable measures to improve the response and resiliency of the built environment.
7

Eslava, Marcela, Alessandro Maffioli, and Marcela Meléndez Arjona. Second-tier Government Banks and Access to Credit: Micro-Evidence from Colombia. Inter-American Development Bank, March 2012. http://dx.doi.org/10.18235/0011364.

Abstract:
Government-owned development banks have often been justified by the need to respond to financial market imperfections that hinder the establishment and growth of promising businesses, and as a result, stifle economic development more generally. However, evidence on the effectiveness of these banks in mitigating financial constraints is still lacking. To fill this gap, this paper analyzes the impact of Bancoldex, Colombia's publicly owned development bank, on access to credit. It uses a unique dataset that contains key characteristics of all loans issued to businesses in Colombia, including the financial intermediary through which the loan was granted and whether the loan was funded with Bancoldex resources. The paper assesses effects on access to credit by comparing Bancoldex loans to loans from other sources and by studying the impact of receiving credit from Bancoldex on a firm's subsequent credit history. To address concerns about selection bias, it uses a combination of models that control for fixed effects and matching techniques. The findings herein show that credit relationships involving Bancoldex funding are characterized by lower interest rates, larger loans, and loans with longer terms. These characteristics translated into lower average interest rates and larger average loans for firms that used Bancoldex credit. Average loans of Bancoldex's beneficiaries also exhibit longer terms, although this effect can take two years to materialize. Finally, the findings show evidence of a demonstration effect of Bancoldex: beneficiary firms that have accessed Bancoldex credit are able to significantly expand the number of intermediaries with whom they have credit relationships.
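A minimal sketch of the matching idea mentioned above (illustrative only: synthetic data and simple nearest-neighbour propensity-score matching, not the paper's fixed-effects specification or its actual dataset):

```python
# Illustrative propensity-score matching sketch for a selection-bias problem:
# "treated" = loans funded with (hypothetical) development-bank resources.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 3000

firm_size = rng.normal(size=n)                 # hypothetical observable firm characteristics
firm_age = rng.normal(size=n)
X = np.column_stack([firm_size, firm_age])

# Selection into treatment depends on observables (the source of selection bias).
treated = (rng.random(n) < 1 / (1 + np.exp(-(0.8 * firm_size + 0.3 * firm_age)))).astype(int)
# Hypothetical outcome: interest rate (in percent), lower for treated loans.
interest = 12 - 1.5 * treated - 0.8 * firm_size + rng.normal(scale=0.5, size=n)

# 1. Estimate propensity scores from observables.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated loan to its nearest untreated neighbour on the propensity score.
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
_, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
matched_controls = c_idx[match.ravel()]

naive = interest[t_idx].mean() - interest[c_idx].mean()
matched = interest[t_idx].mean() - interest[matched_controls].mean()
print(f"naive treated-control gap in interest rates:   {naive:+.2f} pp")
print(f"matched treated-control gap in interest rates: {matched:+.2f} pp")
```

The matched comparison strips out the part of the naive gap that is driven by observable differences between firms that do and do not obtain development-bank funding; fixed effects play the analogous role for unobserved, time-invariant firm characteristics.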
8

Avis, William. Refugee and Mixed Migration Displacement from Afghanistan. Institute of Development Studies (IDS), August 2021. http://dx.doi.org/10.19088/k4d.2022.002.

Abstract:
This rapid literature review summarises evidence and key lessons regarding previous refugee and mixed migration displacement from Afghanistan to surrounding countries. The review identified a diverse literature that explored past refugee and mixed migration, with a range of quantitative and qualitative studies identified. A complex and fluid picture is presented, with waves of mixed migration (both outflow and inflow) associated with key events including: the Soviet–Afghan War (1979–1989); the Afghan Civil War (1992–96); Taliban Rule (1996–2001); and the War in Afghanistan (2001–2021). A contextual picture emerges of Afghans having a long history of using mobility as a survival strategy or as social, economic and political insurance for improving livelihoods or to escape conflict and natural disasters. Whilst violence has been a principal driver of population movements among Afghans, it is not the only cause. Migration has also been associated with natural disasters (primarily drought), which is considered a particular issue across much of the country – this is associated primarily with internal displacement. Further to this, COVID-19 is impacting upon and prompting migration to and from Afghanistan. Data on refugee and mixed migration movement is diverse and at times contradictory, given the fluidity and the blurring of boundaries between types of movements. Various estimates exist for the number of Afghan refugees globally. It is also important to note that migratory flows are often fluid, involving settlement in neighbouring countries as well as return to Afghanistan. In many countries, Afghan migrants and refugees face uncertain political situations and have, in recent years, been ‘coerced’ into returning to Afghanistan, with much discussion of a ‘return bias’ being evident in official policies. The literature identified in this report (a mix of academic, humanitarian agency, and NGO sources) focuses predominantly on Pakistan and Iran, with a less established evidence base on the scale of Afghan refugee and migrant communities in other countries in the region. Whilst conflict has been a primary driver of displacement, it has intersected with drought conditions and poor adherence to COVID-19 mitigation protocols. Past efforts to address displacement internationally have affirmed return as the primary objective in relation to durable solutions; practically, efforts promoted improved programming interventions towards creating conditions for sustainable return and achieving improved reintegration prospects for those already returned to Afghanistan.
