Academic literature on the topic 'Empirical risk minimization'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Empirical risk minimization.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Empirical risk minimization"
Clémençon, Stephan, Patrice Bertail, and Emilie Chautru. "Sampling and empirical risk minimization." Statistics 51, no. 1 (December 14, 2016): 30–42. http://dx.doi.org/10.1080/02331888.2016.1259810.
Lecué, Guillaume, and Shahar Mendelson. "Aggregation via empirical risk minimization." Probability Theory and Related Fields 145, no. 3-4 (November 12, 2008): 591–613. http://dx.doi.org/10.1007/s00440-008-0180-8.
Lugosi, G., and K. Zeger. "Nonparametric estimation via empirical risk minimization." IEEE Transactions on Information Theory 41, no. 3 (May 1995): 677–87. http://dx.doi.org/10.1109/18.382014.
Koltchinskii, Vladimir. "Sparsity in penalized empirical risk minimization." Annales de l'Institut Henri Poincaré, Probabilités et Statistiques 45, no. 1 (February 2009): 7–57. http://dx.doi.org/10.1214/07-aihp146.
Klemelä, Jussi, and Enno Mammen. "Empirical risk minimization in inverse problems." Annals of Statistics 38, no. 1 (February 2010): 482–511. http://dx.doi.org/10.1214/09-aos726.
Liu, Liyuan, Biqin Song, Zhibin Pan, Chuanwu Yang, Chi Xiao, and Weifu Li. "Gradient Learning under Tilted Empirical Risk Minimization." Entropy 24, no. 7 (July 9, 2022): 956. http://dx.doi.org/10.3390/e24070956.
Perez-Cruz, F., A. Navia-Vazquez, A. R. Figueiras-Vidal, and A. Artes-Rodriguez. "Empirical risk minimization for support vector classifiers." IEEE Transactions on Neural Networks 14, no. 2 (March 2003): 296–303. http://dx.doi.org/10.1109/tnn.2003.809399.
Golubev, G. K. "On a Method of Empirical Risk Minimization." Problems of Information Transmission 40, no. 3 (July 2004): 202–11. http://dx.doi.org/10.1023/b:prit.0000044256.20595.e6.
Brownlees, Christian, Emilien Joly, and Gábor Lugosi. "Empirical risk minimization for heavy-tailed losses." Annals of Statistics 43, no. 6 (December 2015): 2507–36. http://dx.doi.org/10.1214/15-aos1350.
Loustau, Sébastien. "Penalized empirical risk minimization over Besov spaces." Electronic Journal of Statistics 3 (2009): 824–50. http://dx.doi.org/10.1214/08-ejs316.
Full textDissertations / Theses on the topic "Empirical risk minimization"
Csiba, Dominik. "Data sampling strategies in stochastic algorithms for empirical risk minimization." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/29635.
Mathieu, Timothée. "M-estimation and Median of Means applied to statistical learning Robust classification via MOM minimization MONK – outlier-robust mean embedding estimation by median-of-means Excess risk bounds in robust empirical risk minimization." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASM002.
Full textThe main objective of this thesis is to study methods for robust statistical learning. Traditionally, in statistics we use models or simplifying assumptions that allow us to represent the real world. However, some deviations from the hypotheses can strongly disrupt the statistical analysis of a database. By robust statistics, we mean methods that can handle on the one hand so-called abnormal data (sensor error, human error) but also data of a highly variable nature. We apply robust techniques to statistical learning, giving theoretical efficiency results of the proposed methods as well as illustrations on simulated and real data
Suzzi, Mattia. "Introduzione al Machine Learning e alle reti neurali [Introduction to Machine Learning and neural networks]." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Achab, Mastane. "Ranking and risk-aware reinforcement learning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT020.
This thesis is divided into two parts: the first on ranking and the second on risk-aware reinforcement learning. While binary classification is the flagship application of empirical risk minimization (ERM), the main paradigm of machine learning, more challenging problems such as bipartite ranking can also be expressed through that setup. In bipartite ranking, the goal is to order, by means of scoring methods, all the elements of some feature space based on a training dataset composed of feature vectors with their binary labels. This thesis extends this setting to the continuous ranking problem, a variant where the labels take continuous values instead of being simply binary. The analysis of ranking data, initiated in the 18th century in the context of elections, has led to another ranking problem using ERM, namely ranking aggregation and more precisely Kemeny's consensus approach. From a training dataset made of ranking data, such as permutations or pairwise comparisons, the goal is to find the single "median permutation" that best corresponds to a consensus order. We present a less drastic dimensionality reduction approach where a distribution on rankings is approximated by a simpler distribution, which is not necessarily reduced to a Dirac mass as in ranking aggregation. For that purpose, we rely on mathematical tools from the theory of optimal transport, such as Wasserstein metrics. The second part of this thesis focuses on risk-aware versions of the stochastic multi-armed bandit problem and of reinforcement learning (RL), where an agent interacts with a dynamic environment by taking actions and receiving rewards, the objective being to maximize the total payoff. In particular, a novel atomic distributional RL approach is provided: the distribution of the total payoff is approximated by particles that correspond to trimmed means.
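Kemeny's consensus, mentioned in the abstract, is itself an ERM problem: minimize the empirical average of the Kendall tau (pairwise-disagreement) loss over all permutations. A brute-force sketch, feasible only for a handful of items; the function names are ours, not the thesis's.

```python
from itertools import permutations

def kendall_tau(p, q):
    """Number of pairwise disagreements between two rankings p and q,
    each given as a tuple of items ordered from best to worst."""
    pos_q = {item: i for i, item in enumerate(q)}
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if pos_q[p[i]] > pos_q[p[j]])

def kemeny_consensus(rankings):
    """Brute-force Kemeny median: the permutation minimizing the total
    Kendall tau distance to the observed rankings (ERM with the
    pairwise-disagreement loss). Exponential in the number of items."""
    items = rankings[0]
    return min(permutations(items),
               key=lambda cand: sum(kendall_tau(cand, r) for r in rankings))
```

With rankings [('a','b','c'), ('a','b','c'), ('b','a','c')], the consensus is ('a','b','c'), which disagrees with the three voters on a total of one pair.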
Philips, Petra Camilla. "Data-Dependent Analysis of Learning Algorithms." The Australian National University. Research School of Information Sciences and Engineering, 2005. http://thesis.anu.edu.au./public/adt-ANU20050901.204523.
Full textNeirac, Lucie. "Learning with a linear loss function : excess risk and estimation bounds for ERM and minimax MOM estimators, with applications." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG012.
Community detection, phase recovery, signed clustering, angular group synchronization, Maxcut, sparse PCA, the single index model, and the list goes on: these are all classical topics within the field of machine learning and statistics. At first glance, they are quite different problems, with different types of data and different goals. However, the literature of recent years shows that they do have one thing in common: they are all amenable to semidefinite programming (SDP). And because they are amenable to SDP, we can go further and recast them in the classical machine learning framework of risk minimization, with the simplest possible loss function: the linear loss. This, in turn, opens up the opportunity to leverage the vast literature on risk minimization to derive excess risk and estimation bounds, as well as algorithms, for these problems. The aim of this work is to propose a unified methodology for obtaining statistical properties of classical machine learning procedures based on the linear loss function, which corresponds, for example, to the case of SDP procedures viewed as ERM procedures. Embracing a machine learning viewpoint allows us to go into greater depth and introduce other estimators which are effective in handling two key challenges within statistical learning: sparsity, and robustness to adversarial contamination and heavy-tailed data. We attack the structural learning problem by proposing a regularized version of the ERM estimator. We then turn to the robustness problem and introduce an estimator based on the median-of-means (MOM) principle, which we call the minimax MOM estimator. This latter estimator addresses the problem of robustness and can be constructed for any loss function, including the linear loss. We also present a regularized version of the minimax MOM estimator.
For each of these estimators we provide excess risk and estimation bounds, derived from two key tools: local complexity fixed points and curvature equations of the excess risk function. To illustrate the relevance of our approach, we apply our methodology to five classical problems within the framework of statistical learning, for which we improve on the state-of-the-art results.
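The MOM idea behind such estimators can be illustrated on ordinary least squares: at each iteration, compute the empirical loss on random blocks, keep the block realizing the median loss, and descend its gradient, so outlier-heavy blocks rank at the extremes and are never selected. This is a sketch inspired by MOM gradient strategies, not the thesis's procedure verbatim; all names and constants are ours.

```python
import random

def mom_gradient_descent(xs, ys, k=5, steps=300, lr=0.05, seed=1):
    """Fit y = w * x by gradient descent on the median-loss block:
    split indices into k random blocks, rank blocks by their squared
    loss, and take a gradient step only on the median block."""
    idx = list(range(len(xs)))
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        rng.shuffle(idx)
        blocks = [idx[i::k] for i in range(k)]
        pairs = [(sum((w * xs[i] - ys[i]) ** 2 for i in b) / len(b), b)
                 for b in blocks]
        pairs.sort(key=lambda t: t[0])
        _, med = pairs[k // 2]                    # block with the median loss
        grad = sum(2 * (w * xs[i] - ys[i]) * xs[i] for i in med) / len(med)
        w -= lr * grad
    return w
```

On data generated as y = 2x with two gross outliers, at most two of the five blocks are ever contaminated, so the median block is always clean and the iterates converge to a slope near 2.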
Gazagnadou, Nidham. "Expected smoothness for stochastic variance-reduced methods and sketch-and-project methods for structured linear systems." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT035.
The considerable increase in the number of data points and features complicates the learning phase, which requires the minimization of a loss function. Stochastic gradient descent (SGD) and variance-reduced variants (SAGA, SVRG, MISO) are widely used to solve this problem. In practice, these methods are accelerated by computing the stochastic gradients on a "mini-batch": a small group of randomly drawn samples. Indeed, recent technological improvements allowing the parallelization of these computations have generalized the use of mini-batches. In this thesis, we are interested in studying variants of variance-reduced stochastic gradient algorithms, trying to find the optimal hyperparameters: step size and mini-batch size. Our study gives convergence results interpolating between stochastic methods drawing a single sample per iteration and so-called "full-batch" gradient descent using all samples at each iteration. Our analysis is based on the expected smoothness constant, which captures the regularity of the random function whose gradient is computed. We also study another class of optimization algorithms: "sketch-and-project" methods. These methods can be applied whenever the learning problem boils down to solving a linear system, as is the case for ridge regression. We analyze variants of this method that use different momentum and acceleration strategies. These methods also depend on the sketching strategy used to compress the information of the system to be solved at each iteration. Finally, we show that these methods can be extended to numerical analysis problems: extending sketch-and-project methods to Alternating-Direction Implicit (ADI) methods allows them to be applied to large-scale problems, when so-called "direct" solvers are too slow.
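The variance-reduced family discussed in this abstract can be sketched with SVRG, its textbook member: each epoch recomputes the full gradient at a snapshot, then runs cheap stochastic steps corrected by that snapshot. The interface below, for a scalar parameter, is an illustrative simplification, not the thesis's notation.

```python
import random

def svrg(grad_i, w0, n, step=0.1, epochs=20, m=None, seed=0):
    """Minimal SVRG sketch for min_w (1/n) * sum_i f_i(w), scalar w.
    grad_i(w, i) returns the gradient of f_i at w. The correction
    grad_i(w, i) - grad_i(snapshot, i) + full_grad stays unbiased while
    its variance shrinks as w approaches the snapshot."""
    rng = random.Random(seed)
    m = m if m is not None else n      # inner steps per epoch
    w = w0
    for _ in range(epochs):
        snapshot = w
        full_grad = sum(grad_i(snapshot, i) for i in range(n)) / n
        for _ in range(m):
            i = rng.randrange(n)
            w -= step * (grad_i(w, i) - grad_i(snapshot, i) + full_grad)
    return w
```

For instance, with f_i(w) = (w - y_i)^2 / 2, the variance-reduced direction equals w minus the mean of the y_i exactly, so the iterates contract geometrically to the empirical mean.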
Caponnetto, Andrea, and Alexander Rakhlin. "Some Properties of Empirical Risk Minimization over Donsker Classes." 2005. http://hdl.handle.net/1721.1/30545.
Full textMukherjee, Sayan, Partha Niyogi, Tomaso Poggio, and Ryan Rifkin. "Statistical Learning: Stability is Sufficient for Generalization and Necessary and Sufficient for Consistency of Empirical Risk Minimization." 2002. http://hdl.handle.net/1721.3/5507.
Revised July 2003.
Philips, Petra. "Data-Dependent Analysis of Learning Algorithms." Phd thesis, 2005. http://hdl.handle.net/1885/47998.
Books on the topic "Empirical risk minimization"
Koltchinskii, Vladimir. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22147-7.
Koltchinskii, Vladimir. Oracle inequalities in empirical risk minimization and sparse recovery problems: École d'été de probabilités de Saint-Flour XXXVIII-2008. Berlin: Springer Verlag, 2011.
Ecole d'été de probabilités de Saint-Flour (38th : 2008), ed. Oracle inequalities in empirical risk minimization and sparse recovery problems: École d'été de probabilités de Saint-Flour XXXVIII-2008. Berlin: Springer Verlag, 2011.
Machine Learning from Weak Supervision: An Empirical Risk Minimization Approach. MIT Press, 2022.
Koltchinskii, Vladimir. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems: École d'Été de Probabilités de Saint-Flour XXXVIII-2008. Springer London, Limited, 2011.
Koltchinskii, Vladimir. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems: École d'Été de Probabilités de Saint-Flour XXXVIII-2008. Springer, 2011.
Baillo, Amparo, Antonio Cuevas, and Ricardo Fraiman. Classification methods for functional data. Edited by Frédéric Ferraty and Yves Romain. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199568444.013.10.
Kuypers, Dirk R. J., and Maarten Naesens. Immunosuppression. Edited by Jeremy R. Chapman. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780199592548.003.0281_update_001.
Book chapters on the topic "Empirical risk minimization"
Langford, John, Xinhua Zhang, Gavin Brown, Indrajit Bhattacharya, Lise Getoor, Thomas Zeugmann, et al. "Empirical Risk Minimization." In Encyclopedia of Machine Learning, 312. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_251.
Jung, Alexander. "Empirical Risk Minimization." In Machine Learning: Foundations, Methodologies, and Applications, 81–98. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8193-6_4.
Zhang, Xinhua. "Empirical Risk Minimization." In Encyclopedia of Machine Learning and Data Mining, 1. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7502-7_79-1.
Zhang, Xinhua. "Empirical Risk Minimization." In Encyclopedia of Machine Learning and Data Mining, 392–93. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_79.
Franco, Danilo, Luca Oneto, and Davide Anguita. "Fair Empirical Risk Minimization Revised." In Advances in Computational Intelligence, 29–42. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43085-5_3.
Vapnik, Vladimir. "Methods of Expected-Risk Minimization." In Estimation of Dependences Based on Empirical Data, 27–44. New York, NY: Springer New York, 2006. http://dx.doi.org/10.1007/0-387-34239-7_2.
Bartlett, Peter L., Shahar Mendelson, and Petra Philips. "Local Complexities for Empirical Risk Minimization." In Learning Theory, 270–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27819-1_19.
Vapnik, Vladimir. "The Method of Structural Minimization of Risk." In Estimation of Dependences Based on Empirical Data, 232–66. New York, NY: Springer New York, 2006. http://dx.doi.org/10.1007/0-387-34239-7_8.
Clémençon, Stéphan, Gábor Lugosi, and Nicolas Vayatis. "Ranking and Scoring Using Empirical Risk Minimization." In Learning Theory, 1–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11503415_1.
Fukuchi, Kazuto, and Jun Sakuma. "Neutralized Empirical Risk Minimization with Generalization Neutrality Bound." In Machine Learning and Knowledge Discovery in Databases, 418–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-44848-9_27.
Conference papers on the topic "Empirical risk minimization"
Oneto, Luca, Michele Donini, and Massimiliano Pontil. "General Fair Empirical Risk Minimization." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9206819.
Xin, Ran, Anit Kumar Sahu, Soummya Kar, and Usman A. Khan. "Distributed empirical risk minimization over directed graphs." In 2019 53rd Asilomar Conference on Signals, Systems, and Computers. IEEE, 2019. http://dx.doi.org/10.1109/ieeeconf44664.2019.9049065.
Wang, Shaoshen, Yanbin Liu, Ling Chen, and Chengqi Zhang. "Diminishing Empirical Risk Minimization for Unsupervised Anomaly Detection." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892076.
Taheri, Omid, and Sergiy A. Vorobyov. "Empirical risk minimization-based analysis of segmented compressed sampling." In 2010 44th Asilomar Conference on Signals, Systems and Computers. IEEE, 2010. http://dx.doi.org/10.1109/acssc.2010.5757506.
Wang, Xinqiang, and Lei Guo. "A new convergent algorithm for online empirical risk minimization." In 2017 36th Chinese Control Conference (CCC). IEEE, 2017. http://dx.doi.org/10.23919/chicc.2017.8029140.
Deoras, Anoop, Denis Filimonov, Mary Harper, and Fred Jelinek. "Model combination for Speech Recognition using Empirical Bayes Risk minimization." In 2010 IEEE Spoken Language Technology Workshop (SLT 2010). IEEE, 2010. http://dx.doi.org/10.1109/slt.2010.5700857.
Bassily, Raef, Adam Smith, and Abhradeep Thakurta. "Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds." In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2014. http://dx.doi.org/10.1109/focs.2014.56.
Badiei Khuzani, Masoud. "Distributed Primal-Dual Proximal Method for Regularized Empirical Risk Minimization." In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2018. http://dx.doi.org/10.1109/icmla.2018.00152.
Huang, Jiaxuan, and Hongsen Huang. "Using empirical risk minimization to detect community structure in the blogosphere." In 2010 IEEE International Conference on Intelligent Systems and Knowledge Engineering (ISKE). IEEE, 2010. http://dx.doi.org/10.1109/iske.2010.5680843.
Shen, Zebang, Hui Qian, Tongzhou Mu, and Chao Zhang. "Accelerated Doubly Stochastic Gradient Algorithm for Large-scale Empirical Risk Minimization." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/378.
Reports on the topic "Empirical risk minimization"
Caponnetto, Andrea, and Alexander Rakhlin. Some Properties of Empirical Risk Minimization Over Donsker Classes. Fort Belvoir, VA: Defense Technical Information Center, May 2005. http://dx.doi.org/10.21236/ada454986.
Mukherjee, Sayan, Partha Niyogi, Tomaso Poggio, and Ryan Rifkin. Statistical Learning: Stability is Sufficient for Generalization and Necessary and Sufficient for Consistency of Empirical Risk Minimization. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada459857.