Journal articles on the topic 'Empirical risk minimization'

To see the other types of publications on this topic, follow the link: Empirical risk minimization.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Empirical risk minimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Clémençon, Stephan, Patrice Bertail, and Emilie Chautru. "Sampling and empirical risk minimization." Statistics 51, no. 1 (December 14, 2016): 30–42. http://dx.doi.org/10.1080/02331888.2016.1259810.

2

Lecué, Guillaume, and Shahar Mendelson. "Aggregation via empirical risk minimization." Probability Theory and Related Fields 145, no. 3-4 (November 12, 2008): 591–613. http://dx.doi.org/10.1007/s00440-008-0180-8.

3

Lugosi, G., and K. Zeger. "Nonparametric estimation via empirical risk minimization." IEEE Transactions on Information Theory 41, no. 3 (May 1995): 677–87. http://dx.doi.org/10.1109/18.382014.

4

Koltchinskii, Vladimir. "Sparsity in penalized empirical risk minimization." Annales de l'Institut Henri Poincaré, Probabilités et Statistiques 45, no. 1 (February 2009): 7–57. http://dx.doi.org/10.1214/07-aihp146.

5

Klemelä, Jussi, and Enno Mammen. "Empirical risk minimization in inverse problems." Annals of Statistics 38, no. 1 (February 2010): 482–511. http://dx.doi.org/10.1214/09-aos726.

6

Liu, Liyuan, Biqin Song, Zhibin Pan, Chuanwu Yang, Chi Xiao, and Weifu Li. "Gradient Learning under Tilted Empirical Risk Minimization." Entropy 24, no. 7 (July 9, 2022): 956. http://dx.doi.org/10.3390/e24070956.

Abstract:
Gradient Learning (GL), which aims to estimate the gradient of a target function, has attracted much attention in variable selection problems due to its mild structural requirements and wide applicability. Despite rapid progress, the majority of existing GL works are based on the empirical risk minimization (ERM) principle, whose performance may degrade in complex data environments, e.g., under non-Gaussian noise. To alleviate this sensitivity, we propose a new GL model built on the tilted ERM criterion and establish its theoretical support from the function approximation viewpoint. Specifically, an operator approximation technique plays the crucial role in our analysis. To solve the proposed learning objective, a gradient descent method is given and its convergence analysis is provided. Finally, simulated experimental results validate the effectiveness of our approach when the input variables are correlated.
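A minimal sketch of the tilted criterion itself, not of the paper's gradient-learning model: tilted ERM replaces the average loss with (1/t) * log((1/n) * sum_i exp(t * l_i)), so a negative tilt t downweights outlying losses. The regression data and the helper names (tilted_risk, risk_grad) below are invented for illustration.

```python
# Sketch of tilted ERM on a toy robust regression problem (assumptions ours).
import numpy as np
from scipy.special import logsumexp

def tilted_risk(losses, t):
    """Tilted empirical risk; reduces to the plain mean as t -> 0."""
    if abs(t) < 1e-12:
        return losses.mean()
    return (logsumexp(t * losses) - np.log(len(losses))) / t

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.standard_t(df=2, size=200)   # heavy-tailed noise

def risk_grad(w, t):
    """Gradient of the tilted risk of squared losses l_i = (x_i.w - y_i)^2."""
    r = X @ w - y
    l = r ** 2
    p = np.exp(t * l - logsumexp(t * l))          # tilted weights, sum to 1
    return (p * 2 * r) @ X                        # sum_i p_i * dl_i/dw

w = np.zeros(3)
for _ in range(1000):                             # plain gradient descent
    w -= 0.02 * risk_grad(w, t=-0.5)              # negative tilt: robust fit
print("estimate:", np.round(w, 2), "true:", w_true)
print("tilted risk at estimate:", tilted_risk((X @ w - y) ** 2, -0.5))
```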
7

Perez-Cruz, F., A. Navia-Vazquez, A. R. Figueiras-Vidal, and A. Artes-Rodriguez. "Empirical risk minimization for support vector classifiers." IEEE Transactions on Neural Networks 14, no. 2 (March 2003): 296–303. http://dx.doi.org/10.1109/tnn.2003.809399.

8

Golubev, G. K. "On a Method of Empirical Risk Minimization." Problems of Information Transmission 40, no. 3 (July 2004): 202–11. http://dx.doi.org/10.1023/b:prit.0000044256.20595.e6.

9

Brownlees, Christian, Emilien Joly, and Gábor Lugosi. "Empirical risk minimization for heavy-tailed losses." Annals of Statistics 43, no. 6 (December 2015): 2507–36. http://dx.doi.org/10.1214/15-aos1350.

10

Loustau, Sébastien. "Penalized empirical risk minimization over Besov spaces." Electronic Journal of Statistics 3 (2009): 824–50. http://dx.doi.org/10.1214/08-ejs316.

11

van de Geer, Sara, and Martin J. Wainwright. "On Concentration for (Regularized) Empirical Risk Minimization." Sankhya A 79, no. 2 (August 2017): 159–200. http://dx.doi.org/10.1007/s13171-017-0111-9.

12

Liu, Changxin, Karl H. Johansson, and Yang Shi. "Distributed empirical risk minimization with differential privacy." Automatica 162 (April 2024): 111514. http://dx.doi.org/10.1016/j.automatica.2024.111514.

13

Mo, Xiaomei, and Jie Xu. "Convergence and consistency of ERM algorithm with uniformly ergodic Markov chain samples." International Journal of Wavelets, Multiresolution and Information Processing 14, no. 03 (May 2016): 1650013. http://dx.doi.org/10.1142/s0219691316500132.

Abstract:
This paper studies the convergence rate and consistency of the Empirical Risk Minimization algorithm when the samples need not be independent and identically distributed (i.i.d.) but may come from a uniformly ergodic Markov chain (u.e.M.c.). We first establish generalization bounds for the Empirical Risk Minimization algorithm with u.e.M.c. samples, and then deduce that the algorithm run on u.e.M.c. samples is consistent and achieves a fast convergence rate.
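The point of such results is that the ERM step itself is unchanged; only the i.i.d. sampling assumption is relaxed to mixing. A minimal sketch with an invented two-state chain and an affine least-squares model (none of it from the paper):

```python
# Sketch: ERM on dependent samples from a uniformly ergodic Markov chain.
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1],        # transition matrix of a strongly mixing chain
              [0.2, 0.8]])

def sample_chain(n, P):
    s = np.zeros(n, dtype=int)
    for i in range(1, n):
        s[i] = rng.choice(2, p=P[s[i - 1]])
    return s

states = sample_chain(5000, P)
x = states + rng.normal(scale=0.3, size=5000)    # Markov-dependent inputs
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=5000)

# ERM over affine predictors f(x) = a*x + b with squared loss has the usual
# least-squares closed form, regardless of the dependence in the sample.
A = np.column_stack([x, np.ones_like(x)])
a, b = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"ERM fit: y = {a:.2f} * x + {b:.2f} (true: 2.00, 1.00)")
```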
14

Lecué, Guillaume, and Shahar Mendelson. "Performance of empirical risk minimization in linear aggregation." Bernoulli 22, no. 3 (August 2016): 1520–34. http://dx.doi.org/10.3150/15-bej701.

15

Tsuchiya, Taira, Nontawat Charoenphakdee, Issei Sato, and Masashi Sugiyama. "Semisupervised Ordinal Regression Based on Empirical Risk Minimization." Neural Computation 33, no. 12 (November 12, 2021): 3361–412. http://dx.doi.org/10.1162/neco_a_01445.

Abstract:
Ordinal regression is aimed at predicting an ordinal class label. In this letter, we consider its semisupervised formulation, in which we have unlabeled data along with ordinal-labeled data to train an ordinal regressor. There are several metrics to evaluate the performance of ordinal regression, such as the mean absolute error, mean zero-one error, and mean squared error. However, the existing studies do not take the evaluation metric into account, restrict model choice, and have no theoretical guarantee. To overcome these problems, we propose a novel generic framework for semisupervised ordinal regression based on the empirical risk minimization principle that is applicable to optimizing all of the metrics mentioned above. In addition, our framework has flexible choices of models, surrogate losses, and optimization algorithms without the common geometric assumption on unlabeled data such as the cluster assumption or manifold assumption. We provide an estimation error bound to show that our risk estimator is consistent. Finally, we conduct experiments to show the usefulness of our framework.
16

Wang, Puyu, Zhenhuan Yang, Yunwen Lei, Yiming Ying, and Hai Zhang. "Differentially private empirical risk minimization for AUC maximization." Neurocomputing 461 (October 2021): 419–37. http://dx.doi.org/10.1016/j.neucom.2021.07.001.

17

McGoff, Kevin, and Andrew B. Nobel. "Empirical risk minimization and complexity of dynamical models." Annals of Statistics 48, no. 4 (August 2020): 2031–54. http://dx.doi.org/10.1214/19-aos1876.

18

Ho, Chin Pang, and Panos Parpas. "Empirical risk minimization: probabilistic complexity and stepsize strategy." Computational Optimization and Applications 73, no. 2 (March 2, 2019): 387–410. http://dx.doi.org/10.1007/s10589-019-00080-2.

19

Poulsen, Rolf, Klaus Reiner Schenk-Hoppé, and Christian-Oliver Ewald. "Risk minimization in stochastic volatility models: model risk and empirical performance." Quantitative Finance 9, no. 6 (September 2009): 693–704. http://dx.doi.org/10.1080/14697680902852738.

20

Kashima, H. "Risk-Sensitive Learning via Minimization of Empirical Conditional Value-at-Risk." IEICE Transactions on Information and Systems E90-D, no. 12 (December 1, 2007): 2043–52. http://dx.doi.org/10.1093/ietisy/e90-d.12.2043.

21

Cohen, Shay B., and Noah A. Smith. "Empirical Risk Minimization for Probabilistic Grammars: Sample Complexity and Hardness of Learning." Computational Linguistics 38, no. 3 (September 2012): 479–526. http://dx.doi.org/10.1162/coli_a_00092.

Abstract:
Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. They are used ubiquitously in computational linguistics. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of probabilistic grammars using the log-loss. We derive sample complexity bounds in this framework that apply both to the supervised setting and the unsupervised setting. By making assumptions about the underlying distribution that are appropriate for natural language scenarios, we are able to derive distribution-dependent sample complexity bounds for probabilistic grammars. We also give simple algorithms for carrying out empirical risk minimization using this framework in both the supervised and unsupervised settings. In the unsupervised case, we show that the problem of minimizing empirical risk is NP-hard. We therefore suggest an approximate algorithm, similar to expectation-maximization, to minimize the empirical risk.
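In the supervised setting, ERM under the log-loss coincides with maximum likelihood, so the minimizer is given by relative-frequency estimates of rule probabilities. A toy sketch with hypothetical derivations (not the paper's data or its unsupervised algorithm):

```python
# Supervised log-loss ERM for a toy probabilistic grammar: rule probabilities
# are relative frequencies, normalized per left-hand-side nonterminal.
from collections import Counter, defaultdict
import math

derivations = [                       # each derivation = list of rules used
    [("S", ("NP", "VP")), ("NP", ("she",)), ("VP", ("runs",))],
    [("S", ("NP", "VP")), ("NP", ("he",)), ("VP", ("runs",))],
    [("S", ("VP",)), ("VP", ("runs",))],
]

counts = Counter(r for d in derivations for r in d)
lhs_totals = defaultdict(int)
for (lhs, rhs), c in counts.items():
    lhs_totals[lhs] += c
prob = {r: c / lhs_totals[r[0]] for r, c in counts.items()}

# Empirical risk (mean log-loss per derivation) of the fitted grammar:
risk = -sum(math.log(prob[r]) for d in derivations for r in d) / len(derivations)
print(prob)
print(f"empirical log-loss per derivation: {risk:.3f}")
```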
22

Zhu, Beier, Yulei Niu, Xian-Sheng Hua, and Hanwang Zhang. "Cross-Domain Empirical Risk Minimization for Unbiased Long-Tailed Classification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3589–97. http://dx.doi.org/10.1609/aaai.v36i3.20271.

Abstract:
We address the overlooked unbiasedness in existing long-tailed classification methods: we find that their overall improvement is mostly attributed to the biased preference of "tail" over "head", as the test distribution is assumed to be balanced; however, when the test is as imbalanced as the long-tailed training data---let the test respect Zipf's law of nature---the "tail" bias is no longer beneficial overall because it hurts the "head" majorities. In this paper, we propose Cross-Domain Empirical Risk Minimization (xERM) for training an unbiased test-agnostic model to achieve strong performances on both test distributions, which empirically demonstrates that xERM fundamentally improves the classification by learning better feature representation rather than the "head vs. tail" game. Based on causality, we further theoretically explain why xERM achieves unbiasedness: the bias caused by the domain selection is removed by adjusting the empirical risks on the imbalanced domain and the balanced but unseen domain.
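A schematic sketch of the general idea only: combine the empirical risk under the imbalanced training distribution with a class-balanced re-weighted risk. The weighting scheme and the helper name xerm_risk are invented here; the paper's causal adjustment is not reproduced.

```python
# Schematic: convex combination of imbalanced-domain and balanced-domain risks.
import numpy as np

def xerm_risk(losses, labels, n_classes, w=0.5):
    """losses: per-example loss; labels: class ids. w in [0, 1] trades off
    the imbalanced-domain risk against the class-balanced risk."""
    imbalanced = losses.mean()
    balanced = np.mean([losses[labels == c].mean() for c in range(n_classes)])
    return w * imbalanced + (1.0 - w) * balanced

rng = np.random.default_rng(2)
labels = rng.choice(3, p=[0.8, 0.15, 0.05], size=1000)   # long-tailed labels
losses = rng.gamma(2.0, 0.5, size=1000) + 0.3 * labels   # tail classes lose more
print("imbalanced risk:", xerm_risk(losses, labels, 3, w=1.0))
print("balanced   risk:", xerm_risk(losses, labels, 3, w=0.0))
print("xERM-style risk:", xerm_risk(losses, labels, 3, w=0.5))
```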
23

Kusunoki, Yoshifumi, Jerzy Błaszczyński, Masahiro Inuiguchi, and Roman Słowiński. "Empirical risk minimization for dominance-based rough set approaches." Information Sciences 567 (August 2021): 395–417. http://dx.doi.org/10.1016/j.ins.2021.02.043.

24

Liu, Changxin, Karl H. Johansson, and Yang Shi. "Private Stochastic Dual Averaging for Decentralized Empirical Risk Minimization." IFAC-PapersOnLine 55, no. 13 (2022): 43–48. http://dx.doi.org/10.1016/j.ifacol.2022.07.233.

25

Saumard, Adrien. "On optimality of empirical risk minimization in linear aggregation." Bernoulli 24, no. 3 (August 2018): 2176–203. http://dx.doi.org/10.3150/17-bej925.

26

Owusu-Agyemang, Kwabena, Zhen Qin, Appiah Benjamin, Hu Xiong, and Zhiguang Qin. "Guaranteed distributed machine learning: Privacy-preserving empirical risk minimization." Mathematical Biosciences and Engineering 18, no. 4 (2021): 4772–96. http://dx.doi.org/10.3934/mbe.2021243.

27

Bian, Wei, and Dacheng Tao. "Constrained Empirical Risk Minimization Framework for Distance Metric Learning." IEEE Transactions on Neural Networks and Learning Systems 23, no. 8 (August 2012): 1194–205. http://dx.doi.org/10.1109/tnnls.2012.2198075.

28

Sergienko, I. V., A. M. Gupal, and A. A. Vagis. "Bayesian approach, theory of empirical risk minimization. Comparative analysis." Cybernetics and Systems Analysis 44, no. 6 (November 2008): 822–31. http://dx.doi.org/10.1007/s10559-008-9058-0.

29

Norkin, V. I., and M. A. Keyzer. "Efficiency of classification methods based on empirical risk minimization." Cybernetics and Systems Analysis 45, no. 5 (September 2009): 750–61. http://dx.doi.org/10.1007/s10559-009-9153-x.

30

Laptin, Y. P., Y. I. Zhuravlev, and A. P. Vinogradov. "Empirical risk minimization and problems of constructing linear classifiers." Cybernetics and Systems Analysis 47, no. 4 (July 2011): 640–48. http://dx.doi.org/10.1007/s10559-011-9344-0.

31

Zhao, Hanyu, Yangqi Huang, Kunqi Zhao, and Sizhuo Wang. "Applying self-attention model to learn both Empirical Risk Minimization and Invariant Risk Minimization for multimedia recommendation." Applied and Computational Engineering 44, no. 1 (March 5, 2024): 33–47. http://dx.doi.org/10.54254/2755-2721/44/20230093.

Abstract:
Multimedia recommendation systems have many applications in our daily life, yet accurately capturing a customer's preference remains difficult. Invariant Risk Minimization (IRM) and Empirical Risk Minimization (ERM) are two ways to learn a customer's preference, but both frameworks have limitations: ERM performs excellently in a single environment but fails to generalize well when faced with multiple or new domains, while IRM learns invariant features across heterogeneous environments but lacks theoretical guarantees and performs less effectively where the invariants are unclear. This paper proposes an ERM and IRM Optimized Rating Framework (EIOR) as our final recommender model with direct rating scores. EIOR enhances the accuracy and functionality of multimedia recommendation systems by using self-attention mechanisms to combine IRM and ERM with adjusted attention weights. Specifically, IRM learns the invariant parts across different environments, while ERM learns the variant parts. With self-attention, we can adaptively allocate attention weights to the two parts and seek the optimal pair of weights based on the loss function. We demonstrate EIOR on the cutting-edge recommender model UltraGCN, using the open TikTok multimedia dataset for all experiments. The results validate the effectiveness of EIOR by comparison with operating on invariant representations alone under the IRM framework.
32

Mey, Alexander, and Marco Loog. "Consistency and Finite Sample Behavior of Binary Class Probability Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8967–74. http://dx.doi.org/10.1609/aaai.v35i10.17084.

Abstract:
We investigate to what extent one can recover class probabilities within the empirical risk minimization (ERM) paradigm. We extend existing results and emphasize the tight relations between empirical risk minimization and class probability estimation. Following previous literature on excess risk bounds and proper scoring rules, we derive a class probability estimator based on empirical risk minimization. We then derive conditions under which this estimator will converge with high probability to the true class probabilities with respect to the L1-norm. One of our core contributions is a novel way to derive finite sample L1-convergence rates of this estimator for different surrogate loss functions. We also study in detail which commonly used loss functions are suitable for this estimation problem and briefly address the setting of model misspecification.
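A minimal sketch of the phenomenon analysed, assuming a well-specified one-dimensional model (data and step size invented): ERM with the logistic loss, a proper composite loss, recovers P(Y=1|x) through the sigmoid link.

```python
# Logistic-loss ERM as a class probability estimator on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x = rng.uniform(-3, 3, size=n)
p_true = 1 / (1 + np.exp(-(1.5 * x - 0.5)))      # true eta(x) = P(Y=1|x)
y = (rng.uniform(size=n) < p_true).astype(float)

w, b = 0.0, 0.0
for _ in range(2000):                            # gradient descent on log-loss
    p = 1 / (1 + np.exp(-(w * x + b)))
    g = p - y                                    # d(log-loss)/d(score)
    w -= 0.5 * np.mean(g * x)
    b -= 0.5 * np.mean(g)

eta_hat = 1 / (1 + np.exp(-(w * 0.7 + b)))       # estimate P(Y=1 | x = 0.7)
eta_true = 1 / (1 + np.exp(-(1.5 * 0.7 - 0.5)))
print(f"estimated eta(0.7) = {eta_hat:.3f}, true = {eta_true:.3f}")
```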
33

Song, Qing. "A Robust Information Clustering Algorithm." Neural Computation 17, no. 12 (December 1, 2005): 2672–98. http://dx.doi.org/10.1162/089976605774320548.

Abstract:
We focus on the scenario of robust information clustering (RIC) based on the minimax optimization of mutual information (MI). The minimization of MI leads to the standard mass-constrained deterministic annealing clustering, which is an empirical risk-minimization algorithm. The maximization of MI works out an upper bound of the empirical risk via the identification of outliers (noisy data points). Furthermore, we estimate the real risk VC-bound and determine an optimal cluster number of the RIC based on the structural risk-minimization principle. One of the main advantages of the minimax optimization of MI is that it is a nonparametric approach, which identifies the outliers through the robust density estimate and forms a simple data clustering algorithm based on the square error of the Euclidean distance.
34

Lecué, Guillaume. "Empirical risk minimization is optimal for the convex aggregation problem." Bernoulli 19, no. 5B (November 2013): 2153–66. http://dx.doi.org/10.3150/12-bej447.

35

Liu, Liu, and Dacheng Tao. "The Double-Accelerated Stochastic Method for Regularized Empirical Risk Minimization." IEEE Transactions on Emerging Topics in Computational Intelligence 3, no. 6 (December 2019): 440–51. http://dx.doi.org/10.1109/tetci.2019.2896090.

36

Meir, Ronny. "Empirical Risk Minimization versus Maximum-Likelihood Estimation: A Case Study." Neural Computation 7, no. 1 (January 1995): 144–57. http://dx.doi.org/10.1162/neco.1995.7.1.144.

Abstract:
We study the interaction between input distributions, learning algorithms, and finite sample sizes in the case of learning classification tasks. Focusing on the case of normal input distributions, we use statistical mechanics techniques to calculate the empirical and expected (or generalization) errors for several well-known algorithms learning the weights of a single-layer perceptron. In the case of spherically symmetric distributions within each class we find that the simple Hebb rule, corresponding to maximum-likelihood parameter estimation, outperforms the other more complex algorithms, based on error minimization. Moreover, we show that in the regime where the overlap between the classes is large, algorithms with low empirical error do worse in terms of generalization, a phenomenon known as overtraining.
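A toy reconstruction of the comparison under assumptions of our own, not the paper's exact statistical-mechanics setup: spherical Gaussian classes and a small sample in high dimension, with the Hebb rule (weight vector equal to the average of y_i * x_i, i.e. maximum-likelihood estimation here) against an error-minimizing perceptron.

```python
# Hebb rule vs. error minimization on overlapping Gaussian classes.
import numpy as np

rng = np.random.default_rng(4)
d, n = 50, 40                                   # high dimension, few samples
mu = np.ones(d) / np.sqrt(d)                    # unit-norm class-mean direction
y = rng.choice([-1.0, 1.0], size=n)             # true labels
X = rng.normal(size=(n, d)) + np.outer(y, mu)   # spherical Gaussians at +/- mu

w_hebb = (y[:, None] * X).mean(axis=0)          # Hebb rule: average of y_i * x_i

w_perc = np.zeros(d)                            # error minimization: perceptron
for _ in range(200):
    for xi, yi in zip(X, y):
        if yi * (w_perc @ xi) <= 0:
            w_perc += yi * xi

y_test = rng.choice([-1.0, 1.0], size=5000)
X_test = rng.normal(size=(5000, d)) + np.outer(y_test, mu)
for name, w in [("Hebb rule ", w_hebb), ("perceptron", w_perc)]:
    err = np.mean(np.sign(X_test @ w) != y_test)
    print(f"{name}: test error {err:.3f}")
```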
37

Chichignoud, Michaël, and Sébastien Loustau. "Bandwidth selection in kernel empirical risk minimization via the gradient." Annals of Statistics 43, no. 4 (August 2015): 1617–46. http://dx.doi.org/10.1214/15-aos1318.

38

Lee, Ching-pei, and Kai-Wei Chang. "Distributed block-diagonal approximation methods for regularized empirical risk minimization." Machine Learning 109, no. 4 (December 18, 2019): 813–52. http://dx.doi.org/10.1007/s10994-019-05859-2.

Abstract:
In recent years, there has been a growing need to train machine learning models on huge volumes of data, so designing efficient distributed optimization algorithms for empirical risk minimization (ERM) has become an active and challenging research topic. In this paper, we propose a flexible framework for distributed ERM training through solving the dual problem, which provides a unified description and comparison of existing methods. Our approach requires only approximate solutions of the sub-problems involved in the optimization process and is versatile enough to be applied to many large-scale machine learning problems, including classification, regression, and structured prediction. We show that our framework enjoys global linear convergence for a broad class of non-strongly-convex problems, and that some specific choices of the sub-problems can even achieve much faster convergence than existing approaches by a refined analysis. This improved convergence rate is also reflected in the superior empirical performance of our method.
39

Liu, Shutian, Tao Li, and Quanyan Zhu. "Game-Theoretic Distributed Empirical Risk Minimization With Strategic Network Design." IEEE Transactions on Signal and Information Processing over Networks 9 (2023): 542–56. http://dx.doi.org/10.1109/tsipn.2023.3306106.

40

Cui, Zhenghang, Nontawat Charoenphakdee, Issei Sato, and Masashi Sugiyama. "Classification from Triplet Comparison Data." Neural Computation 32, no. 3 (March 2020): 659–81. http://dx.doi.org/10.1162/neco_a_01262.

Abstract:
Learning from triplet comparison data has been extensively studied in the context of metric learning, where we want to learn a distance metric between two instances, and ordinal embedding, where we want to learn an embedding of the given instances in a Euclidean space that preserves the comparison order as much as possible. Unlike fully labeled data, triplet comparison data can be collected in a more accurate and human-friendly way. Although learning from triplet comparison data has been considered in many applications, the important fundamental question of whether we can learn a classifier only from triplet comparison data, without any labels, had remained unanswered. In this letter, we give a positive answer to this important question by proposing an unbiased estimator for the classification risk under the empirical risk minimization framework. Since the proposed method is based on the empirical risk minimization framework, it inherently has the advantage that any surrogate loss function and any model, including neural networks, can be easily applied. Furthermore, we theoretically establish an estimation error bound for the proposed empirical risk minimizer. Finally, we provide experimental results to show that our method empirically works well and outperforms various baseline methods.
41

Li, Hong, Chuanbao Ren, and Luoqing Li. "U-Processes and Preference Learning." Neural Computation 26, no. 12 (December 2014): 2896–924. http://dx.doi.org/10.1162/neco_a_00674.

Abstract:
Preference learning has attracted great attention in machine learning. In this letter we propose a learning framework for pairwise losses based on empirical risk minimization of U-processes via Rademacher complexity. We first establish a uniform version of the Bernstein inequality for U-processes of degree 2 via entropy methods. We then bound the excess risk by using the Bernstein inequality and peeling techniques. Finally, we apply the excess risk bound to pairwise preference learning and derive convergence rates of pairwise preference learning algorithms with the squared loss and the indicator loss, using empirical risk minimization with respect to U-processes.
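The central object here, the empirical pairwise risk, is a U-statistic of degree 2: an average of a pairwise loss over all unordered pairs of examples. A short sketch with an invented linear scoring model and squared pairwise loss:

```python
# Empirical pairwise risk as a degree-2 U-statistic.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n, d = 100, 4
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + rng.normal(scale=0.1, size=n)  # latent utility scores

def pairwise_risk(w):
    """U-statistic: (2 / (n(n-1))) * sum over i<j of the squared ranking loss."""
    s = X @ w
    total = sum(((s[i] - s[j]) - (y[i] - y[j])) ** 2
                for i, j in combinations(range(n), 2))
    return 2.0 * total / (n * (n - 1))

print("pairwise risk at w*:", pairwise_risk(w_star))
print("pairwise risk at 0 :", pairwise_risk(np.zeros(d)))
```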
42

Jiang, Wenxin, and Martin A. Tanner. "Risk Minimization for Time Series Binary Choice with Variable Selection." Econometric Theory 26, no. 5 (March 5, 2010): 1437–52. http://dx.doi.org/10.1017/s0266466609990636.

Abstract:
This paper considers the problem of predicting binary choices by selecting from a possibly large set of candidate explanatory variables, which can include both exogenous variables and lagged dependent variables. We consider risk minimization with the risk function being the predictive classification error. We study the convergence rates of empirical risk minimization in both the frequentist and Bayesian approaches. The Bayesian treatment uses a Gibbs posterior constructed directly from the empirical risk instead of using the usual likelihood-based posterior. Therefore these approaches do not require a correctly specified probability model. We show that the proposed methods have near optimal performance relative to a class of linear classification rules with selected variables. Such results in classification are obtained in a framework of dependent data with strong mixing.
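A minimal sketch of a Gibbs posterior constructed directly from the empirical 0-1 risk, with no likelihood specified. The threshold model class, the grid, and the temperature below are invented for illustration; the weights follow the generic form proportional to exp(-lambda * n * empirical risk).

```python
# Gibbs posterior over 1-D threshold classifiers, built from the empirical risk.
import numpy as np

rng = np.random.default_rng(6)
n = 300
x = rng.normal(size=n)
y = (x > 0.4).astype(int)
flip = rng.uniform(size=n) < 0.1
y = np.where(flip, 1 - y, y)                    # 10% label noise

thetas = np.linspace(-2, 2, 401)                # candidate threshold classifiers
emp_risk = np.array([np.mean((x > t).astype(int) != y) for t in thetas])

lam = 5.0                                       # inverse temperature
logw = -lam * n * emp_risk                      # Gibbs weights: exp(-lam*n*R_n)
logw -= logw.max()                              # stabilise the exponentials
post = np.exp(logw)
post /= post.sum()                              # Gibbs posterior on the grid

print("posterior mean threshold:", (post * thetas).sum())
```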
43

Luo, Zhijian, Siyu Chen, and Yuntao Qian. "Stochastic Momentum Method With Double Acceleration for Regularized Empirical Risk Minimization." IEEE Access 7 (2019): 166551–63. http://dx.doi.org/10.1109/access.2019.2953288.

44

Lee, Ji-Woong, and Pramod P. Khargonekar. "Distribution-free consistency of empirical risk minimization and support vector regression." Mathematics of Control, Signals, and Systems 21, no. 2 (September 16, 2009): 111–25. http://dx.doi.org/10.1007/s00498-009-0041-8.

45

Shimada, Takuya, Han Bao, Issei Sato, and Masashi Sugiyama. "Classification From Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization." Neural Computation 33, no. 5 (April 13, 2021): 1234–68. http://dx.doi.org/10.1162/neco_a_01373.

Abstract:
Pairwise similarities and dissimilarities between data points are often obtained more easily than full labels of data in real-world classification problems. To make use of such pairwise information, an empirical risk minimization approach has been proposed, where an unbiased estimator of the classification risk is computed from only pairwise similarities and unlabeled data. However, this approach has not yet been able to handle pairwise dissimilarities. Semisupervised clustering methods can incorporate both similarities and dissimilarities into their framework; however, they typically require strong geometrical assumptions on the data distribution such as the manifold assumption, which may cause severe performance deterioration. In this letter, we derive an unbiased estimator of the classification risk based on all of similarities and dissimilarities and unlabeled data. We theoretically establish an estimation error bound and experimentally demonstrate the practical usefulness of our empirical risk minimization method.
46

Li, Hong, Na Chen, and Yuan Y. Tang. "Local Learning Estimates by Integral Operators." International Journal of Wavelets, Multiresolution and Information Processing 8, no. 5 (September 2010): 695–712. http://dx.doi.org/10.1142/s0219691310003729.

Abstract:
In this paper, we consider the problem of local risk minimization on the basis of empirical data, which is a generalization of the problem of global risk minimization. A new local risk regularization scheme is proposed. The error estimate for the proposed algorithm is obtained by using probabilistic estimates for integral operators. Experiments are presented to illustrate the general theory. Simulation results on several artificial and real datasets show that the local risk regularization algorithm has better performance.
47

Gyurik, Casper, Dyon van Vreumingen, and Vedran Dunjko. "Structural Risk Minimization for Quantum Linear Classifiers." Quantum 7 (January 13, 2023): 893. http://dx.doi.org/10.22331/q-2023-01-13-893.

Abstract:
Quantum machine learning (QML) models based on parameterized quantum circuits are often highlighted as candidates for quantum computing's near-term "killer application". However, the understanding of the empirical and generalization performance of these models is still in its infancy. In this paper we study how to balance between training accuracy and generalization performance (also called structural risk minimization) for two prominent QML models introduced by Havlíček et al., and by Schuld and Killoran. Firstly, using relationships to well-understood classical models, we prove that two model parameters – i.e., the dimension of the sum of the images and the Frobenius norm of the observables used by the model – closely control the models' complexity and therefore their generalization performance. Secondly, using ideas inspired by process tomography, we prove that these model parameters also closely control the models' ability to capture correlations in sets of training examples. In summary, our results give rise to new options for structural risk minimization for QML models.
48

Rubinstein, Benjamin I. P., and Aleksandr Simma. "On the Stability of Empirical Risk Minimization in the Presence of Multiple Risk Minimizers." IEEE Transactions on Information Theory 58, no. 7 (July 2012): 4160–63. http://dx.doi.org/10.1109/tit.2012.2191681.

49

Liu, Guangxin, Liguo Wang, Danfeng Liu, Lei Fei, and Jinghui Yang. "Hyperspectral Image Classification Based on Non-Parallel Support Vector Machine." Remote Sensing 14, no. 10 (May 19, 2022): 2447. http://dx.doi.org/10.3390/rs14102447.

Abstract:
Support vector machines (SVM) perform well in the supervised classification of hyperspectral images. In view of the shortcomings of existing parallel-structure SVMs, this article proposes a non-parallel SVM model. Building on the traditional parallel-boundary support vector machine, the model adds an additional empirical risk minimization term to the original optimization problem through a least-squares term on the samples and obtains two non-parallel hyperplanes, forming a new non-parallel SVM algorithm: the Additional Empirical Risk Minimization Non-parallel Support Vector Machine (AERM-NPSVM). Adding a bias constraint to AERM-NPSVM further yields BC-AERM-NPSVM. The experimental results show that, compared with the traditional parallel SVM model and the classical non-parallel SVM model, the Twin Support Vector Machine (TWSVM), the new models achieve better hyperspectral image classification and better generalization performance.
50

Owusu-Agyemang, Kwabena, Zhen Qin, Appiah Benjamin, Hu Xiong, and Zhiguang Qin. "Insuring against the perils in distributed learning: privacy-preserving empirical risk minimization." Mathematical Biosciences and Engineering 18, no. 4 (2021): 3006–33. http://dx.doi.org/10.3934/mbe.2021151.

