Journal articles on the topic 'Unbiased Learning'

Consult the top 50 journal articles for your research on the topic 'Unbiased Learning.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and the bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Ai, Qingyao, Tao Yang, Huazheng Wang, and Jiaxin Mao. "Unbiased Learning to Rank." ACM Transactions on Information Systems 39, no. 2 (March 2021): 1–29. http://dx.doi.org/10.1145/3439861.

Abstract:
How to obtain an unbiased ranking model by learning to rank with biased user feedback is an important research question for IR. Existing work on unbiased learning to rank (ULTR) can be broadly categorized into two groups: studies on unbiased learning algorithms with logged data, namely offline unbiased learning, and studies on unbiased parameter estimation with real-time user interactions, namely online learning to rank. While their definitions of unbiasedness differ, these two types of ULTR algorithms share the same goal: to find the best models that rank documents based on their intrinsic relevance or utility. However, most studies on offline and online unbiased learning to rank have been carried out in parallel, without detailed comparison of their background theories and empirical performance. In this article, we formalize the task of unbiased learning to rank and show that existing algorithms for offline unbiased learning and online learning to rank are just two sides of the same coin. We evaluate eight state-of-the-art ULTR algorithms and find that many of them can be used in both offline settings and online environments with no or only minor modifications. Further, we analyze how different offline and online learning paradigms affect the theoretical foundation and empirical effectiveness of each algorithm on both synthetic and real search data. Our findings provide important insights and guidelines for choosing and deploying ULTR algorithms in practice.
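As a concrete illustration of the offline family described in this abstract, the sketch below computes an inverse-propensity-scored (IPS) pointwise loss, a common building block of offline ULTR. It assumes the examination propensities are already known or separately estimated, and it is not necessarily one of the eight algorithms evaluated in the article.

    import numpy as np

    def ips_pointwise_loss(scores, clicks, propensities):
        """Inverse-propensity-scored loss over logged click data.

        scores       : model scores for the logged documents
        clicks       : 0/1 click labels observed in the log
        propensities : assumed examination probabilities of the logged positions
        Weighting each clicked document by 1/propensity removes position
        bias in expectation, which is the usual IPS argument.
        """
        logistic_loss = np.log1p(np.exp(-scores))
        return np.sum(clicks / propensities * logistic_loss) / scores.size

    # toy usage with made-up numbers
    scores = np.array([2.0, 0.5, -1.0])
    clicks = np.array([1, 0, 1])
    propensities = np.array([0.9, 0.5, 0.2])  # top positions are examined more often
    print(ips_pointwise_loss(scores, clicks, propensities))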
2

Vydiswaran, V. G. Vinod, ChengXiang Zhai, Dan Roth, and Peter Pirolli. "Unbiased learning of controversial topics." Proceedings of the American Society for Information Science and Technology 49, no. 1 (2012): 1–4. http://dx.doi.org/10.1002/meet.14504901291.

3

Backus, B. T. "Optimal learning rates for unbiased perception." Journal of Vision 3, no. 9 (March 16, 2010): 175. http://dx.doi.org/10.1167/3.9.175.

4

Breeden, Joseph L., and Eugenia Leonova. "Creating Unbiased Machine Learning Models by Design." Journal of Risk and Financial Management 14, no. 11 (November 22, 2021): 565. http://dx.doi.org/10.3390/jrfm14110565.

Abstract:
Unintended bias against protected groups has become a key obstacle to the widespread adoption of machine learning methods. This work presents a modeling procedure that carefully builds models around protected class information in order to make sure that the final machine learning model is independent of protected class status, even in a nonlinear sense. This procedure works for any machine learning method. The procedure was tested on subprime credit card data combined with demographic data by zip code from the US Census. The census data is an imperfect proxy for borrower demographics but serves to illustrate the procedure.
5

Li, Ming, Shuo Zhu, Chunxu Li, and Wencang Zhao. "Target unbiased meta-learning for graph classification." Journal of Computational Design and Engineering 8, no. 5 (September 15, 2021): 1355–66. http://dx.doi.org/10.1093/jcde/qwab050.

Abstract:
Even though numerous works address the few-shot learning issue by combining it with meta-learning, traditional graph classification problems still face limits. Earlier algorithms directly extract features from the samples and do not take into account the preference of the trained model for previously “seen” targets. To overcome these issues, this paper develops an effective strategy for training an unbiased meta-learning algorithm, which sorts out the problems of target preference and few-shot learning under the meta-learning paradigm. First, an interactive attention extraction module is employed as a supplement to feature extraction; it improves the separability of feature vectors, reduces the preference of the model for a certain target, and remarkably improves the generalization ability of the model on new tasks. Second, a graph neural network is used to fully mine the relationships between samples, constituting graph structures and completing image classification tasks at the node level, which greatly enhances classification accuracy. A series of experimental studies validates the proposed methodology, showing that the few-shot and semisupervised learning problems are effectively solved and that the model achieves better accuracy than traditional classification methods on real-world datasets.
6

Jia, Zhen, Zhang Zhang, Liang Wang, Caifeng Shan, and Tieniu Tan. "Deep Unbiased Embedding Transfer for Zero-Shot Learning." IEEE Transactions on Image Processing 29 (2020): 1958–71. http://dx.doi.org/10.1109/tip.2019.2947780.

7

Wang, Zong-Hui, Zi-Qian Lu, and Zhe-Ming Lu. "Unbiased hybrid generation network for zero-shot learning." Electronics Letters 56, no. 18 (September 3, 2020): 929–31. http://dx.doi.org/10.1049/el.2020.1594.

8

Shamsi, Zahra, and Diwakar Shukla. "Efficient Unbiased Sampling of Protein Dynamics using Reinforcement Learning." Biophysical Journal 114, no. 3 (February 2018): 673a. http://dx.doi.org/10.1016/j.bpj.2017.11.3630.

9

Premo, L. S., and Jonathan B. Scholnick. "The Spatial Scale of Social Learning Affects Cultural Diversity." American Antiquity 76, no. 1 (January 2011): 163–76. http://dx.doi.org/10.7183/0002-7316.76.1.163.

Abstract:
Sewall Wright's (1943) concept of isolation by distance is as germane to cultural transmission as genetic transmission. Yet there has been little research on how the spatial scale of social learning—the geographic extent of cultural transmission—affects cultural diversity. Here, we employ agent-based simulation to study how the spatial scale of unbiased social learning affects selectively neutral cultural diversity over a range of population sizes and densities. We show that highly localized unbiased cultural transmission may be easily confused with a form of biased cultural transmission, especially in low-density populations. Our results have important implications for how archaeologists infer mechanisms of cultural transmission from diversity estimates that depart from the expectations of neutral theory.
10

Guo, Fan, Weiqing Li, Ziqi Shen, and Xiangyu Shi. "MTCLF: A multitask curriculum learning framework for unbiased glaucoma screenings." Computer Methods and Programs in Biomedicine 221 (June 2022): 106910. http://dx.doi.org/10.1016/j.cmpb.2022.106910.

11

Lv, Fengmao, Haiyang Liu, Yichen Wang, Jiayi Zhao, and Guowu Yang. "Learning Unbiased Zero-Shot Semantic Segmentation Networks Via Transductive Transfer." IEEE Signal Processing Letters 27 (2020): 1640–44. http://dx.doi.org/10.1109/lsp.2020.3023340.

12

Dimitrakakis, Christos, Guangliang Li, and Nikoalos Tziortziotis. "The Reinforcement Learning Competition 2014." AI Magazine 35, no. 3 (September 19, 2014): 61–65. http://dx.doi.org/10.1609/aimag.v35i3.2548.

Abstract:
Reinforcement learning is one of the most general problems in artificial intelligence. It has been used to model problems in automated experiment design, control, economics, game playing, scheduling and telecommunications. The aim of the reinforcement learning competition is to encourage the development of very general learning agents for arbitrary reinforcement learning problems and to provide a test-bed for the unbiased evaluation of algorithms.
13

Pan, Yonghua, Zechao Li, Liyan Zhang, and Jinhui Tang. "Causal Inference with Knowledge Distilling and Curriculum Learning for Unbiased VQA." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 3 (August 31, 2022): 1–23. http://dx.doi.org/10.1145/3487042.

Abstract:
Many recent Visual Question Answering (VQA) models rely on the correlations between questions and answers yet neglect those between the visual information and the textual information. They perform badly if the data at hand are distributed differently from the training data (i.e., out-of-distribution (OOD) data). Towards this end, we propose a two-stage unbiased VQA approach that addresses the bias issue from a causal perspective. In the causal inference stage, we mark the spurious correlation on the causal graph, explore the counterfactual causality, and devise a causal target based on the inherent correlations between the conventional and counterfactual VQA models. In the distillation stage, we introduce the causal target into the training process and leverage distillation as well as curriculum learning to capture the unbiased model. Since Causal Inference with Knowledge Distilling and Curriculum Learning (CKCL) reinforces the contribution of the visual information and eliminates the impact of the spurious correlation by distilling the knowledge from causal inference into the VQA model, it achieves good performance on both standard and out-of-distribution data. Extensive experimental results on the VQA-CP v2 dataset demonstrate the superior performance of the proposed method compared to state-of-the-art (SotA) methods.
14

Lu, Alex, Oren Kraus, Sam Cooper, and Alan Moses. "Learning Biology Through Puzzle-solving: Unbiased Automatic Understanding of Microscopy Images with Self-supervised Learning." Microscopy and Microanalysis 26, S2 (July 30, 2020): 690–92. http://dx.doi.org/10.1017/s1431927620015548.

15

Nguyen, Thanh-Tung, Joshua Zhexue Huang, and Thuy Thi Nguyen. "Unbiased Feature Selection in Learning Random Forests for High-Dimensional Data." Scientific World Journal 2015 (2015): 1–18. http://dx.doi.org/10.1155/2015/471371.

Abstract:
Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting. This gives RFs poor accuracy when working with high-dimensional data. Besides that, RFs are biased in the feature selection process, favoring multivalued features. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets, including image datasets. The experimental results show that RFs with the proposed approach outperform existing random forests on both accuracy and AUC measures.
16

Murari, A., J. Vega, G. A. Rattá, G. Vagliasindi, M. F. Johnson, and S. H. Hong. "Unbiased and non-supervised learning methods for disruption prediction at JET." Nuclear Fusion 49, no. 5 (April 29, 2009): 055028. http://dx.doi.org/10.1088/0029-5515/49/5/055028.

17

Bey, Romain, Romain Goussault, François Grolleau, Mehdi Benchoufi, and Raphaël Porcher. "Fold-stratified cross-validation for unbiased and privacy-preserving federated learning." Journal of the American Medical Informatics Association 27, no. 8 (July 4, 2020): 1244–51. http://dx.doi.org/10.1093/jamia/ocaa096.

Abstract:
Objective: We introduce fold-stratified cross-validation, a validation methodology that is compatible with privacy-preserving federated learning and that prevents data leakage caused by duplicates of electronic health records (EHRs). Materials and Methods: Fold-stratified cross-validation complements cross-validation with an initial stratification of EHRs in folds containing patients with similar characteristics, thus ensuring that duplicates of a record are jointly present either in training or in validation folds. Monte Carlo simulations are performed to investigate the properties of fold-stratified cross-validation in the case of a model data analysis using both synthetic data and MIMIC-III (Medical Information Mart for Intensive Care-III) medical records. Results: In situations in which duplicated EHRs could induce overoptimistic estimations of accuracy, applying fold-stratified cross-validation prevented this bias, while not requiring full deduplication. However, a pessimistic bias might appear if the covariate used for the stratification was strongly associated with the outcome. Discussion: Although fold-stratified cross-validation presents low computational overhead, to be efficient it requires the preliminary identification of a covariate that is both shared by duplicated records and weakly associated with the outcome. When available, the hash of a personal identifier or a patient’s date of birth provides such a covariate. On the contrary, pseudonymization interferes with fold-stratified cross-validation, as it may break the equality of the stratifying covariate among duplicates. Conclusion: Fold-stratified cross-validation is an easy-to-implement methodology that prevents data leakage when a model is trained on distributed EHRs that contain duplicates, while preserving privacy.
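The stratification step described above can be approximated with a grouped split that keeps all records sharing the stratifying covariate (for example, a hashed date of birth) on the same side of each split. The sketch below uses scikit-learn's GroupKFold for this purpose; the records and the choice of covariate are hypothetical and only illustrate the idea.

    import hashlib
    import numpy as np
    from sklearn.model_selection import GroupKFold

    def stratum(value):
        """Hash the stratifying covariate (e.g., a date of birth) to a stable stratum label."""
        return hashlib.sha256(str(value).encode()).hexdigest()[:8]

    # hypothetical records: duplicated patients share the same date of birth
    X = np.random.rand(8, 3)
    y = np.random.randint(0, 2, size=8)
    dob = ["1950-01-02", "1950-01-02", "1961-07-14", "1972-03-09",
           "1961-07-14", "1983-11-30", "1972-03-09", "1983-11-30"]
    groups = [stratum(d) for d in dob]

    # records sharing a stratum never straddle the training/validation boundary
    for train_idx, valid_idx in GroupKFold(n_splits=3).split(X, y, groups=groups):
        print(sorted({groups[i] for i in valid_idx}))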
18

Rizhinashvili, Davit, Abdallah Hussein Sham, and Gholamreza Anbarjafari. "Gender Neutralisation for Unbiased Speech Synthesising." Electronics 11, no. 10 (May 17, 2022): 1594. http://dx.doi.org/10.3390/electronics11101594.

Abstract:
Machine learning can encode and amplify negative biases or stereotypes already present in humans, resulting in high-profile cases. There can be multiple sources encoding the negative bias in these algorithms, such as errors from human labelling, inaccurate representation of different population groups in training datasets, and the chosen model structures and optimization methods. Our paper proposes a novel approach to speech processing that can resolve the gender bias problem by eliminating the gender parameter. We devised a system that transforms the input sound (the speech of a person) into a neutralized voice to the point where the gender of the speaker becomes indistinguishable by both humans and AI. A Wav2Vec-based network was utilised to conduct speech gender recognition and validate the main claim of this research work, which is the neutralisation of gender from speech. Such a system can be used as a batch pre-processing layer for training models, thus making associated gender bias irrelevant. Further, such a system can also find application where speaker gender bias by human listeners is prominent, as the listener will not be able to judge the gender from the speech.
19

Gu, Jeongmin, Jose A. Iglesias-Guitian, and Bochang Moon. "Neural James-Stein Combiner for Unbiased and Biased Renderings." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–14. http://dx.doi.org/10.1145/3550454.3555496.

Abstract:
Unbiased rendering algorithms such as path tracing produce accurate images given a huge number of samples, but in practice, the techniques often leave visually distracting artifacts (i.e., noise) in their rendered images due to a limited time budget. A favored approach for mitigating the noise problem is applying learning-based denoisers to unbiased but noisy rendered images and suppressing the noise while preserving image details. However, such denoising techniques typically introduce a systematic error, i.e., the denoising bias, which does not decline as rapidly when increasing the sample size, unlike the other type of error, i.e., variance. It can technically lead to slow numerical convergence of the denoising techniques. We propose a new combination framework built upon the James-Stein (JS) estimator, which merges a pair of unbiased and biased rendering images, e.g., a path-traced image and its denoised result. Unlike existing post-correction techniques for image denoising, our framework helps an input denoiser have lower errors than its unbiased input without relying on accurate estimation of per-pixel denoising errors. We demonstrate that our framework based on the well-established JS theories allows us to improve the error reduction rates of state-of-the-art learning-based denoisers more robustly than recent post-denoisers.
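For background on the estimator the combiner is named after, the sketch below applies the classical positive-part James-Stein shrinkage of an unbiased but noisy estimate toward a biased anchor, assuming the noise variance is known. The neural combiner proposed in the article is considerably more elaborate; this is only the textbook formula it builds upon.

    import numpy as np

    def james_stein_combine(unbiased, biased, noise_var):
        """Positive-part James-Stein shrinkage of an unbiased estimate toward a biased anchor.

        unbiased  : noisy but unbiased values (e.g., per-pixel path-traced colors)
        biased    : biased but low-variance values (e.g., denoised colors)
        noise_var : assumed (scalar) variance of the unbiased estimate
        """
        diff = unbiased - biased
        n = diff.size
        # shrinkage factor, clipped at zero for the positive-part estimator
        shrink = max(0.0, 1.0 - (n - 2) * noise_var / np.sum(diff ** 2))
        return biased + shrink * diff

    noisy = np.array([0.9, 1.4, 0.2, 0.7])
    denoised = np.array([1.0, 1.2, 0.3, 0.6])
    print(james_stein_combine(noisy, denoised, noise_var=0.05))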
20

Javeed, Ashir, Ana Luiza Dallora, Johan Sanmartin Berglund, and Peter Anderberg. "An Intelligent Learning System for Unbiased Prediction of Dementia Based on Autoencoder and Adaboost Ensemble Learning." Life 12, no. 7 (July 21, 2022): 1097. http://dx.doi.org/10.3390/life12071097.

Abstract:
Dementia is a neurological condition that primarily affects older adults, and there is still no cure or therapy available for it. The symptoms of dementia can appear as early as 10 years before an actual dementia diagnosis. Hence, machine learning (ML) researchers have presented several methods for early detection of dementia based on symptoms. However, these techniques suffer from two major flaws. The first issue is the bias of ML models caused by imbalanced classes in the dataset. Past research did not address this issue well and did not take preventative precautions; different ML models were developed to illustrate this bias. To alleviate the problem of bias, we deployed a synthetic minority oversampling technique (SMOTE) to balance the training process of the proposed ML model. The second issue is the poor classification accuracy of ML models, which limits their clinical significance. To improve dementia prediction accuracy, we proposed an intelligent learning system that is a hybrid of an autoencoder and an adaptive boosting (Adaboost) model. The autoencoder is used to extract relevant features from the feature space, and the Adaboost model is deployed for the classification of dementia using the extracted subset of features. The hyperparameters of the Adaboost model are fine-tuned using a grid search algorithm. Experimental findings reveal that the suggested learning system outperforms eleven similar systems proposed in the literature. It was also observed that the proposed learning system improves the strength of the conventional Adaboost model by 9.8% and reduces its time complexity. Lastly, the proposed learning system achieved a classification accuracy of 90.23%, a sensitivity of 98.00%, and a specificity of 96.65%.
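A minimal sketch of the class-balancing and boosting portion of such a pipeline is given below: SMOTE oversampling of the training split followed by a grid-searched AdaBoost classifier. The autoencoder feature-extraction stage described in the abstract is omitted here, and the data are synthetic placeholders rather than the dementia dataset used in the study.

    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split

    # synthetic, imbalanced stand-in for the real feature matrix
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    y = (rng.random(500) < 0.1).astype(int)  # roughly 10% positive class

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # balance only the training split, as the abstract describes
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

    # grid search over AdaBoost hyperparameters
    grid = GridSearchCV(
        AdaBoostClassifier(random_state=0),
        {"n_estimators": [50, 100, 200], "learning_rate": [0.1, 0.5, 1.0]},
        cv=5,
    )
    grid.fit(X_bal, y_bal)
    print(grid.best_params_, grid.score(X_te, y_te))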
21

ZANIN, DIEGO, and RICCARDO ZECCHINA. "LEARNING INTERFERENCE REDUCTION IN NEURAL NETWORKS." Modern Physics Letters B 09, no. 18 (August 10, 1995): 1165–74. http://dx.doi.org/10.1142/s0217984995001169.

Abstract:
The learning and generalization properties of a modified learning cost function for Neural Networks models are discussed. We show that the introduction of a “cross-talk” term allows for an improvement of performance based on the control of the convergence subspaces of the network outputs. In the case of an unbiased distribution of binary patterns, we derive analytically the learning performance of the single layer architecture whereas we investigate numerically the generalization capabilities. An enhancement of computational performance is observed for multi-classification purposes, and also for imperfectly classified training sets.
22

Xue, Tianfang, and Haibin Yu. "Unbiased Model-Agnostic Metalearning Algorithm for Learning Target-Driven Visual Navigation Policy." Computational Intelligence and Neuroscience 2021 (December 8, 2021): 1–12. http://dx.doi.org/10.1155/2021/5620751.

Abstract:
As deep reinforcement learning methods have made great progress in the visual navigation field, metalearning-based algorithms are gaining more attention since they greatly improve the extensibility of moving agents. Under the metatraining mechanism, an initial model is typically trained as a metalearner on existing navigation tasks and becomes well performing in new scenes through relatively few recursive trials. However, if a metalearner is overtrained on the former tasks, it may hardly generalize to navigation in unfamiliar environments, as the initial model turns out to be quite biased towards the former environment configuration. In order to train an impartial navigation model and enhance its generalization capability, we propose an Unbiased Model-Agnostic Metalearning (UMAML) algorithm for target-driven visual navigation. Inspired by entropy-based methods, which maximize the uncertainty over output labels in classification tasks, we adopt inequality measures used in economics as a concise metric to calculate the loss deviation across unfamiliar tasks. By minimizing the inequality of task losses, an unbiased navigation model that does not over-perform on particular scene types can be learnt based on the Model-Agnostic Metalearning mechanism. The exploring agent complies with a more balanced update rule and is able to gather navigation experience from training environments. Several experiments have been conducted, and the results demonstrate that our approach outperforms other state-of-the-art metalearning navigation methods in generalization ability.
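The inequality measures mentioned in this abstract can be made concrete with a small sketch: here the Gini coefficient of the per-task losses is added to their mean to form a meta-objective that discourages uneven performance across tasks. The choice of the Gini coefficient and of the weighting factor are illustrative assumptions; the article may use other inequality measures.

    import numpy as np

    def gini(values):
        """Gini coefficient of non-negative values (0 means perfectly equal)."""
        v = np.sort(np.asarray(values, dtype=float))
        n = v.size
        cum = np.cumsum(v)
        # standard formula derived from the Lorenz curve
        return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

    def unbiased_meta_objective(task_losses, weight=0.5):
        """Mean task loss plus an inequality penalty across tasks."""
        task_losses = np.asarray(task_losses, dtype=float)
        return task_losses.mean() + weight * gini(task_losses)

    print(unbiased_meta_objective([0.9, 1.1, 1.0]))  # nearly equal losses, small penalty
    print(unbiased_meta_objective([0.1, 0.2, 2.7]))  # unequal losses, larger penalty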
23

Hatch, Nile W. "Biased and Unbiased Specification of the Learning Curve: Coping with Unobserved History." Academy of Management Proceedings 2016, no. 1 (January 2016): 18177. http://dx.doi.org/10.5465/ambpp.2016.18177abstract.

24

Danishvar, Morad, Alireza Mousavi, and Peter Broomhead. "EventiC: A Real-Time Unbiased Event-Based Learning Technique for Complex Systems." IEEE Transactions on Systems, Man, and Cybernetics: Systems 50, no. 5 (May 2020): 1649–62. http://dx.doi.org/10.1109/tsmc.2017.2775666.

25

Tran-Nguyen, Viet-Khoa, Célien Jacquemard, and Didier Rognan. "LIT-PCBA: An Unbiased Data Set for Machine Learning and Virtual Screening." Journal of Chemical Information and Modeling 60, no. 9 (April 13, 2020): 4263–73. http://dx.doi.org/10.1021/acs.jcim.0c00155.

26

Alahmari, Saeed S., Dmitry Goldgof, Lawrence Hall, Hady Ahmady Phoulady, Raj H. Patel, and Peter R. Mouton. "Automated Cell Counts on Tissue Sections by Deep Learning and Unbiased Stereology." Journal of Chemical Neuroanatomy 96 (March 2019): 94–101. http://dx.doi.org/10.1016/j.jchemneu.2018.12.010.

27

Sugiyama, Masashi, and Hidemitsu Ogawa. "Subspace Information Criterion for Model Selection." Neural Computation 13, no. 8 (August 1, 2001): 1863–89. http://dx.doi.org/10.1162/08997660152469387.

Abstract:
The problem of model selection is considerably important for acquiring higher levels of generalization capability in supervised learning. In this article, we propose a new criterion for model selection, the subspace information criterion (SIC), which is a generalization of Mallows's CL. It is assumed that the learning target function belongs to a specified functional Hilbert space and the generalization error is defined as the Hilbert space squared norm of the difference between the learning result function and target function. SIC gives an unbiased estimate of the generalization error so defined. SIC assumes the availability of an unbiased estimate of the target function and the noise covariance matrix, which are generally unknown. A practical calculation method of SIC for least-mean-squares learning is provided under the assumption that the dimension of the Hilbert space is less than the number of training examples. Finally, computer simulations in two examples show that SIC works well even when the number of training examples is small.
28

Hulle, Marc M. Van. "Differential Log Likelihood for Evaluating and Learning Gaussian Mixtures." Neural Computation 18, no. 2 (February 1, 2006): 430–45. http://dx.doi.org/10.1162/089976606775093873.

Abstract:
We introduce a new unbiased metric for assessing the quality of density estimation based on gaussian mixtures, called differential log likelihood. As an application, we determine the optimal smoothness and the optimal number of kernels in gaussian mixtures. Furthermore, we suggest a learning strategy for gaussian mixture density estimation and compare its performance with log likelihood maximization for a wide range of real-world data sets.
29

Pritchett, Lant, and Justin Sandefur. "Learning from Experiments when Context Matters." American Economic Review 105, no. 5 (May 1, 2015): 471–75. http://dx.doi.org/10.1257/aer.p20151016.

Abstract:
Suppose a policymaker is interested in the impact of an existing social program. Impact estimates using observational data suffer potential bias, while unbiased experimental estimates are often limited to other contexts. This creates a practical trade-off between internal and external validity for evidence-based policymaking. We explore this trade-off empirically for several common policies analyzed in development economics, including microcredit, migration, and education interventions. Based on mean-squared error, non-experimental evidence within context outperforms experimental evidence from another context. This advantage declines, but may not reverse, with experimental replication. We offer four reasons these findings are of general relevance to policy evaluation.
30

Jain, Shantanu, Justin Delano, Himanshu Sharma, and Predrag Radivojac. "Class Prior Estimation with Biased Positives and Unlabeled Examples." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4255–63. http://dx.doi.org/10.1609/aaai.v34i04.5848.

Abstract:
Positive-unlabeled learning is often studied under the assumption that the labeled positive sample is drawn randomly from the true distribution of positives. In many application domains, however, certain regions in the support of the positive class-conditional distribution are over-represented while others are under-represented in the positive sample. Although this introduces problems in all aspects of positive-unlabeled learning, we begin to address this challenge by focusing on the estimation of class priors, quantities central to the estimation of posterior probabilities and the recovery of true classification performance. We start by making a set of assumptions to model the sampling bias. We then extend the identifiability theory of class priors from the unbiased to the biased setting. Finally, we derive an algorithm for estimating the class priors that relies on clustering to decompose the original problem into subproblems of unbiased positive-unlabeled learning. Our empirical investigation suggests feasibility of the correction strategy and overall good performance.
31

Allen, Angier, Samson Mataraso, Anna Siefkas, Hoyt Burdick, Gregory Braden, R. Phillip Dellinger, Andrea McCoy, et al. "A Racially Unbiased, Machine Learning Approach to Prediction of Mortality: Algorithm Development Study." JMIR Public Health and Surveillance 6, no. 4 (October 22, 2020): e22400. http://dx.doi.org/10.2196/22400.

Abstract:
Background: Racial disparities in health care are well documented in the United States. As machine learning methods become more common in health care settings, it is important to ensure that these methods do not contribute to racial disparities through biased predictions or differential accuracy across racial groups. Objective: The goal of the research was to assess a machine learning algorithm intentionally developed to minimize bias in in-hospital mortality predictions between white and nonwhite patient groups. Methods: Bias was minimized through preprocessing of algorithm training data. We performed a retrospective analysis of electronic health record data from patients admitted to the intensive care unit (ICU) at a large academic health center between 2001 and 2012, drawing data from the Medical Information Mart for Intensive Care–III database. Patients were included if they had at least 10 hours of available measurements after ICU admission, had at least one of every measurement used for model prediction, and had recorded race/ethnicity data. Bias was assessed through the equal opportunity difference. Model performance in terms of bias and accuracy was compared with the Modified Early Warning Score (MEWS), the Simplified Acute Physiology Score II (SAPS II), and the Acute Physiologic Assessment and Chronic Health Evaluation (APACHE). Results: The machine learning algorithm was found to be more accurate than all comparators, with a higher sensitivity, specificity, and area under the receiver operating characteristic. The machine learning algorithm was found to be unbiased (equal opportunity difference 0.016, P=.20). APACHE was also found to be unbiased (equal opportunity difference 0.019, P=.11), while SAPS II and MEWS were found to have significant bias (equal opportunity difference 0.038, P=.006 and equal opportunity difference 0.074, P<.001, respectively). Conclusions: This study indicates there may be significant racial bias in commonly used severity scoring systems and that machine learning algorithms may reduce bias while improving on the accuracy of these methods.
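The fairness metric reported in the study, the equal opportunity difference, is the gap in true-positive rates between two groups; the sketch below computes it on hypothetical labels, predictions, and group labels.

    import numpy as np

    def true_positive_rate(y_true, y_pred):
        positives = y_true == 1
        return np.mean(y_pred[positives] == 1) if positives.any() else np.nan

    def equal_opportunity_difference(y_true, y_pred, group):
        """Difference in true-positive rates between groups 0 and 1; 0 means no gap."""
        tpr_0 = true_positive_rate(y_true[group == 0], y_pred[group == 0])
        tpr_1 = true_positive_rate(y_true[group == 1], y_pred[group == 1])
        return tpr_0 - tpr_1

    # hypothetical labels, predictions, and group membership
    y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(equal_opportunity_difference(y_true, y_pred, group))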
32

Abedin, Md Joynul, Titon Barua, Mahdokht Shaibani, and Mainak Majumder. "A High Throughput and Unbiased Machine Learning Approach for Classification of Graphene Dispersions." Advanced Science 7, no. 20 (August 25, 2020): 2001600. http://dx.doi.org/10.1002/advs.202001600.

33

Liu, Zhe, Yun Li, Lina Yao, Xianzhi Wang, and Guodong Long. "Task Aligned Generative Meta-learning for Zero-shot Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8723–31. http://dx.doi.org/10.1609/aaai.v35i10.17057.

Abstract:
Zero-shot learning (ZSL) refers to the problem of learning to classify instances from novel (unseen) classes that are absent from the training set of (seen) classes. Most ZSL methods infer the correlation between visual features and attributes to train the classifier for unseen classes. They may have a strong bias towards seen classes during training. Meta-learning has been introduced to mitigate the bias, but meta-ZSL methods are inapplicable when tasks used for training are sampled from diverse distributions. In this regard, we propose a novel Task-aligned Generative Meta-learning model for Zero-shot learning (TGMZ), aiming to mitigate the potentially biased training and to enable meta-ZSL to accommodate real-world datasets that contain diverse distributions. Specifically, TGMZ incorporates an attribute-conditioned task-wise distribution alignment network that projects tasks into a unified distribution to deliver an unbiased model. Our experiments show TGMZ achieves a relative improvement of 2.1%, 3.0%, 2.5%, and 7.6% over state-of-the-art algorithms on the AWA1, AWA2, CUB, and aPY datasets, respectively. Overall, TGMZ outperforms competitors by 3.6% in the generalized zero-shot learning (GZSL) setting and 7.9% in our proposed fusion-ZSL setting.
34

Nagao, Yukiko, Mika Sakamoto, Takumi Chinen, Yasushi Okada, and Daisuke Takao. "Robust classification of cell cycle phase and biological feature extraction by image-based deep learning." Molecular Biology of the Cell 31, no. 13 (June 15, 2020): 1346–54. http://dx.doi.org/10.1091/mbc.e20-03-0187.

Abstract:
By applying convolutional neural network-based classifiers, we demonstrate that cell images can be robustly classified according to cell cycle phases. Combined with Grad-CAM analysis, our approach enables us to extract biological features underlying cellular phenomena of interest in an unbiased and data-driven manner.
35

Pereira, João P. B., Erik S. G. Stroes, Aeilko H. Zwinderman, and Evgeni Levin. "Covered Information Disentanglement: Model Transparency via Unbiased Permutation Importance." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7984–92. http://dx.doi.org/10.1609/aaai.v36i7.20769.

Abstract:
Model transparency is a prerequisite in many domains and an increasingly popular area in machine learning research. In the medical domain, for instance, unveiling the mechanisms behind a disease often has higher priority than the diagnosis itself, since it might dictate or guide potential treatments and research directions. One of the most popular approaches to explaining global model predictions is permutation importance, where the performance on permuted data is benchmarked against the baseline. However, this method and other related approaches will undervalue the importance of a feature in the presence of covariates, since these cover part of the information it provides. To address this issue, we propose Covered Information Disentanglement (CID), a framework that considers all feature information overlap to correct the values provided by permutation importance. We further show how to compute CID efficiently when coupled with Markov random fields. We demonstrate its efficacy in adjusting permutation importance, first on a controlled toy dataset, and discuss its effect on real-world medical data.
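For context, the sketch below implements the plain permutation importance that CID sets out to correct: each feature is shuffled in turn and the resulting drop in performance is recorded. It is the standard baseline procedure, not the CID adjustment itself, and the toy data are made up.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def permutation_importance(model, X, y, n_repeats=10, seed=None):
        """Average drop in accuracy when each feature is shuffled."""
        rng = np.random.default_rng(seed)
        baseline = model.score(X, y)
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the feature-target link
                drops.append(baseline - model.score(X_perm, y))
            importances[j] = np.mean(drops)
        return importances

    # toy data where only the first feature is informative
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))
    y = (X[:, 0] > 0).astype(int)
    model = RandomForestClassifier(random_state=0).fit(X, y)
    print(permutation_importance(model, X, y, seed=0))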
36

Wang, Yu-Chen, Yu-Ting Wu, Tzu-Mao Li, and Yung-Yu Chuang. "Learning to cluster for rendering with many lights." ACM Transactions on Graphics 40, no. 6 (December 2021): 1–10. http://dx.doi.org/10.1145/3478513.3480561.

Abstract:
We present an unbiased online Monte Carlo method for rendering with many lights. Our method adapts both the hierarchical light clustering and the sampling distribution to our collected samples. Designing such a method requires us to make clustering decisions under noisy observation, and making sure that the sampling distribution adapts to our target. Our method is based on two key ideas: a coarse-to-fine clustering scheme that can find good clustering configurations even with noisy samples, and a discrete stochastic successive approximation method that starts from a prior distribution and provably converges to a target distribution. We compare to other state-of-the-art light sampling methods, and show better results both numerically and visually.
37

Pearl, Lisa S. "When Unbiased Probabilistic Learning Is Not Enough: Acquiring a Parametric System of Metrical Phonology." Language Acquisition 18, no. 2 (April 6, 2011): 87–120. http://dx.doi.org/10.1080/10489223.2011.554261.

38

Xu, Chundong, Qinglin Li, and Dongwen Ying. "An Effective Adaptive Combination Strategy for Distributed Learning Network." Applied Sciences 11, no. 12 (June 20, 2021): 5723. http://dx.doi.org/10.3390/app11125723.

Abstract:
In this paper, we develop a modified adaptive combination strategy for the distributed estimation problem over diffusion networks. We still consider the online adaptive combiners estimation problem from the perspective of minimum variance unbiased estimation. In contrast with the classic adaptive combination strategy which exploits orthogonal projection technology, we formulate a non-constrained mean-square deviation (MSD) cost function by introducing Lagrange multipliers. Based on the Karush–Kuhn–Tucker (KKT) conditions, we derive the fixed-point iteration scheme of adaptive combiners. Illustrative simulations validate the improved transient and steady-state performance of the diffusion least-mean-square LMS algorithm incorporated with the proposed adaptive combination strategy.
39

Christensen, Anders S., and O. Anatole von Lilienfeld. "Operator Quantum Machine Learning: Navigating the Chemical Space of Response Properties." CHIMIA International Journal for Chemistry 73, no. 12 (December 18, 2019): 1028–31. http://dx.doi.org/10.2533/chimia.2019.1028.

Abstract:
The identification and use of structure–property relationships lies at the heart of the chemical sciences. Quantum mechanics forms the basis for the unbiased virtual exploration of chemical compound space (CCS), imposing substantial compute needs if chemical accuracy is to be reached. In order to accelerate predictions of quantum properties without compromising accuracy, our lab has been developing quantum machine learning (QML) based models which can be applied throughout CCS. Here, we briefly explain, review, and discuss the recently introduced operator formalism which substantially improves the data efficiency for QML models of common response properties.
40

Yang, QW. "A new biased estimation method based on Neumann series for solving ill-posed problems." International Journal of Advanced Robotic Systems 16, no. 4 (July 2019): 172988141987205. http://dx.doi.org/10.1177/1729881419872058.

Abstract:
Ill-posed least squares problems often arise in many engineering applications, such as machine learning, intelligent navigation algorithms, surveying and mapping adjustment models, and linear regression models. A new biased estimation (BE) method based on the Neumann series is proposed in this article to solve ill-posed problems more effectively. Using the Neumann series expansion, the unbiased estimate can be expressed as the sum of infinitely many terms. When all the high-order terms are omitted, the proposed method degenerates into the ridge estimation or generalized ridge estimation method, whereas a series of new biased estimates can be obtained by including some high-order terms. Using comparative analysis, the optimal biased estimate can be found with less computation. The developed theory establishes the essential relationship between biased and unbiased estimation and can unify the existing unbiased and biased estimate formulas. Moreover, the proposed algorithm is suitable not only for ill-conditioned equations but also for rank-deficient equations. Numerical results show that the proposed BE method improves accuracy over existing robust estimation methods to a certain extent.
41

Villarreal, Micah N., Alexander J. Kamrud, and Brett J. Borghetti. "Confirmation Bias Estimation from Electroencephalography with Machine Learning." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 73–77. http://dx.doi.org/10.1177/1071181319631208.

Abstract:
Cognitive biases are known to affect human decision making and can have disastrous effects in the fast-paced environments of military operators. Traditionally, post-hoc behavioral analysis is used to measure the level of bias in a decision. However, these techniques can be hindered by subjective factors and cannot be collected in real-time. This pilot study collects behavior patterns and physiological signals present during biased and unbiased decision-making. Supervised machine learning models are trained to find the relationship between Electroencephalography (EEG) signals and behavioral evidence of cognitive bias. Once trained, the models should infer the presence of confirmation bias during decision-making using only EEG - without the interruptions or the subjective nature of traditional confirmation bias estimation techniques.
42

WENDEMUTH, A., and D. SHERRINGTON. "FAST LEARNING OF BIASED PATTERNS IN NEURAL NETWORKS." International Journal of Neural Systems 04, no. 03 (September 1993): 223–30. http://dx.doi.org/10.1142/s0129065793000183.

Abstract:
Usual neural network gradient descent training algorithms require training times of the same order as the number of neurons N if the patterns are biased. In this paper, modified algorithms are presented which require training times equal to those in unbiased cases which are of order 1. Exact convergence proofs are given. Gain parameters which produce minimal learning times in large networks are computed by replica methods. It is demonstrated how these modified algorithms are applied in order to produce four types of solutions to the learning problem: 1. a solution with all internal fields equal to the desired output, 2. the Adaline (or pseudo-inverse) solution, 3. the perceptron of optimal stability without threshold and 4. the perceptron of optimal stability with threshold.
43

Freed, Benjamin, Guillaume Sartoretti, Jiaheng Hu, and Howie Choset. "Communication Learning via Backpropagation in Discrete Channels with Unknown Noise." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7160–68. http://dx.doi.org/10.1609/aaai.v34i05.6205.

Abstract:
This work focuses on multi-agent reinforcement learning (RL) with inter-agent communication, in which communication is differentiable and optimized through backpropagation. Such differentiable approaches tend to converge more quickly to higher-quality policies compared to techniques that treat communication as actions in a traditional RL framework. However, modern communication networks (e.g., Wi-Fi or Bluetooth) rely on discrete communication channels, for which existing differentiable approaches that consider real-valued messages cannot be directly applied, or require biased gradient estimators. Some works have overcome this problem by treating the message space as an extension of the action space, and use standard RL to optimize message selection, but these methods tend to converge slower and to inferior policies. In this paper, we propose a stochastic message encoding/decoding procedure that makes a discrete communication channel mathematically equivalent to an analog channel with additive noise, through which gradients can be backpropagated. Additionally, we introduce an encryption step for use in noisy channels that forces channel noise to be message-independent, allowing us to compute unbiased derivative estimates even in the presence of unknown channel noise. To the best of our knowledge, this work presents the first differentiable communication learning approach that can compute unbiased derivatives through channels with unknown noise. We demonstrate the effectiveness of our approach in two example multi-robot tasks: a path finding and a collaborative search problem. There, we show that our approach achieves learning speed and performance similar to differentiable communication learning with real-valued messages (i.e., unlimited communication bandwidth), while naturally handling more realistic real-world communication constraints. Content Areas: Multi-Agent Communication, Reinforcement Learning.
44

Sollich, Peter, and David Barber. "Online Learning from Finite Training Sets and Robustness to Input Bias." Neural Computation 10, no. 8 (November 1, 1998): 2201–17. http://dx.doi.org/10.1162/089976698300017034.

Abstract:
We analyze online gradient descent learning from finite training sets at noninfinitesimal learning rates η. Exact results are obtained for the time-dependent generalization error of a simple model system: a linear network with a large number of weights N, trained on p = αN examples. This allows us to study in detail the effects of finite training set size α on, for example, the optimal choice of learning rate η. We also compare online and offline learning, for respective optimal settings of η at given final learning time. Online learning turns out to be much more robust to input bias and actually outperforms offline learning when such bias is present; for unbiased inputs, online and offline learning perform almost equally well.
45

Zhu, Beier, Yulei Niu, Xian-Sheng Hua, and Hanwang Zhang. "Cross-Domain Empirical Risk Minimization for Unbiased Long-Tailed Classification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3589–97. http://dx.doi.org/10.1609/aaai.v36i3.20271.

Abstract:
We address the overlooked unbiasedness in existing long-tailed classification methods: we find that their overall improvement is mostly attributed to the biased preference of "tail" over "head", as the test distribution is assumed to be balanced; however, when the test is as imbalanced as the long-tailed training data---let the test respect Zipf's law of nature---the "tail" bias is no longer beneficial overall because it hurts the "head" majorities. In this paper, we propose Cross-Domain Empirical Risk Minimization (xERM) for training an unbiased test-agnostic model to achieve strong performances on both test distributions, which empirically demonstrates that xERM fundamentally improves the classification by learning better feature representation rather than the "head vs. tail" game. Based on causality, we further theoretically explain why xERM achieves unbiasedness: the bias caused by the domain selection is removed by adjusting the empirical risks on the imbalanced domain and the balanced but unseen domain.
46

Zhang, Yikai, Hui Qu, Dimitris Metaxas, and Chao Chen. "Local Regularizer Improves Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6861–68. http://dx.doi.org/10.1609/aaai.v34i04.6167.

Abstract:
Regularization plays an important role in the generalization of deep learning. In this paper, we study the generalization power of an unbiased regularizer for training algorithms in deep learning. We focus on training methods called Locally Regularized Stochastic Gradient Descent (LRSGD). LRSGD leverages a proximal-type penalty in gradient descent steps to regularize SGD during training. We show that by carefully choosing the relevant parameters, LRSGD generalizes better than SGD. Our thorough theoretical analysis is supported by experimental evidence. It advances our theoretical understanding of deep learning and provides new perspectives on designing training algorithms. The code is available at https://github.com/huiqu18/LRSGD.
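To make the proximal-type penalty concrete, the sketch below runs gradient steps on a toy objective with an added quadratic pull toward a periodically refreshed anchor point. The exact penalty form, its weight, and the refresh schedule here are assumptions for illustration rather than the precise LRSGD procedure analyzed in the paper.

    import numpy as np

    def locally_regularized_step(w, grad_fn, anchor, lr=0.1, lam=0.5):
        """One gradient step on f(w) + (lam / 2) * ||w - anchor||^2."""
        grad = grad_fn(w) + lam * (w - anchor)
        return w - lr * grad

    # toy quadratic objective f(w) = 0.5 * ||w - target||^2
    target = np.array([3.0, -2.0])
    grad_fn = lambda w: w - target

    w = np.zeros(2)
    anchor = w.copy()
    for t in range(100):
        w = locally_regularized_step(w, grad_fn, anchor)
        if (t + 1) % 10 == 0:
            anchor = w.copy()  # refresh the local anchor periodically
    print(w)  # approaches the unregularized minimizer [3.0, -2.0]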
47

Hochberg, Judith G. "First steps in the acquisition of Spanish stress." Journal of Child Language 15, no. 2 (June 1988): 273–92. http://dx.doi.org/10.1017/s030500090001237x.

Abstract:
This article uses longitudinal data from four Mexican-American children to explore two aspects of the acquisition of Spanish word stress that precede and accompany learning of the stress system itself. First, contrary to Allen & Hawkins' proposed universal ‘trochaic bias’ (Allen 1982, Allen & Hawkins 1977, 1979, 1980), it is shown that children have a ‘neutral start’ in stress learning: they approach the task of stress learning unbiased towards any particular stress type. Secondly, several examples are found in which children's attention to phonetic or semantic aspects of normatively unstressed syllables leads them to shift stress to that syllable.
48

Sugiyama, Masashi, Motoaki Kawanabe, and Klaus-Robert Müller. "Trading Variance Reduction with Unbiasedness: The Regularized Subspace Information Criterion for Robust Model Selection in Kernel Regression." Neural Computation 16, no. 5 (May 1, 2004): 1077–104. http://dx.doi.org/10.1162/089976604773135113.

Abstract:
A well-known result by Stein (1956) shows that in particular situations, biased estimators can yield better parameter estimates than their generally preferred unbiased counterparts. This letter follows the same spirit, as we will stabilize the unbiased generalization error estimates by regularization and finally obtain more robust model selection criteria for learning. We trade a small bias against a larger variance reduction, which has the beneficial effect of being more precise on a single training set. We focus on the subspace information criterion (SIC), which is an unbiased estimator of the expected generalization error measured by the reproducing kernel Hilbert space norm. SIC can be applied to kernel regression, and it was shown in earlier experiments that a small regularization of SIC has a stabilization effect. However, it remained open how to appropriately determine the degree of regularization in SIC. In this article, we derive an unbiased estimator of the expected squared error between SIC and the expected generalization error, and propose determining the degree of regularization of SIC such that the estimator of the expected squared error is minimized. Computer simulations with artificial and real data sets illustrate that the proposed method works effectively for improving the precision of SIC, especially in high-noise-level cases. We furthermore compare the proposed method to the original SIC, cross-validation, and an empirical Bayesian method in ridge parameter selection, with good results.
49

Wilson, Scott R., Murray E. Close, Phillip Abraham, Theo S. Sarris, Laura Banasiak, Roland Stenger, and John Hadfield. "Achieving unbiased predictions of national-scale groundwater redox conditions via data oversampling and statistical learning." Science of The Total Environment 705 (February 2020): 135877. http://dx.doi.org/10.1016/j.scitotenv.2019.135877.

50

Huang, Yiyan, Cheuk Hang Leung, Xing Yan, Qi Wu, Nanbo Peng, Dongdong Wang, and Zhixiang Huang. "The Causal Learning of Retail Delinquency." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 204–12. http://dx.doi.org/10.1609/aaai.v35i1.16094.

Abstract:
This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence the estimation error can be considerable. As such, we propose another approach to constructing the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of estimating the causal quantities between the classical estimators and the proposed estimators. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction of estimation error is strikingly substantial if the causal effects are accounted for correctly.