Academic literature on the topic 'Rademacher averages'


Below are lists of relevant journal articles, dissertations, book chapters, and conference papers on the topic 'Rademacher averages.'


Journal articles on the topic "Rademacher averages"

1. Le Merdy, Christian, and Fedor Sukochev. "Rademacher averages on noncommutative symmetric spaces." Journal of Functional Analysis 255, no. 12 (December 2008): 3329–55. http://dx.doi.org/10.1016/j.jfa.2008.05.002.

2. Hinrichs, Aicke. "Rademacher and Gaussian averages and Rademacher cotype of operators between Banach spaces." Proceedings of the American Mathematical Society 128, no. 1 (June 21, 1999): 203–13. http://dx.doi.org/10.1090/s0002-9939-99-05012-1.

3. El-Yaniv, R., and D. Pechyony. "Transductive Rademacher Complexity and its Applications." Journal of Artificial Intelligence Research 35 (June 22, 2009): 193–234. http://dx.doi.org/10.1613/jair.2587.

Abstract:
We develop a technique for deriving data-dependent error bounds for transductive learning algorithms based on transductive Rademacher complexity. Our technique rests on a novel general error bound for transduction in terms of transductive Rademacher complexity, together with a novel bounding technique for the Rademacher averages of particular algorithms in terms of their "unlabeled-labeled" representation. This technique applies to many advanced graph-based transductive algorithms, and we demonstrate its effectiveness by deriving error bounds for three well-known algorithms. Finally, we present a new PAC-Bayesian bound for mixtures of transductive algorithms based on our Rademacher bounds.

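Since most of the entries in this list rely on it, the standard definition of the empirical Rademacher average is worth recording (conventions differ by constants or absolute values; this is the form common in learning theory). For a class F of real-valued functions and a sample x_1, ..., x_n,

\[
\widehat{\mathcal{R}}_n(\mathcal{F}) \;=\; \mathbb{E}_{\sigma}\!\left[\,\sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i\, f(x_i)\right],
\]

where \(\sigma_1, \dots, \sigma_n\) are independent Rademacher variables, each taking the values +1 and −1 with probability 1/2. The transductive variant studied by El-Yaniv and Pechyony is defined analogously over the full set of labeled and unlabeled points.
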
4. Mendelson, S. "Rademacher averages and phase transitions in Glivenko-Cantelli classes." IEEE Transactions on Information Theory 48, no. 1 (2002): 251–63. http://dx.doi.org/10.1109/18.971753.

5. Yathish Aradhya, B. C., and Y. P. Gowramma. "Progressive Sampling Algorithm with Rademacher Averages for Optimized Learning of Big Data: A Novel Approach." International Journal of Computer Applications 175, no. 15 (August 17, 2020): 37–40. http://dx.doi.org/10.5120/ijca2020920652.

6. Bisbas, Antonis. "On the Hausdorff Dimension of Average Type Sums of Rademacher Functions." Real Analysis Exchange 29, no. 1 (2004): 139. http://dx.doi.org/10.14321/realanalexch.29.1.0139.

7

Федосеев, В. Б., and А. В. Шишулин. "О распределении по размерам дисперсных частиц фрактальной формы." Журнал технической физики 91, no. 1 (2021): 39. http://dx.doi.org/10.21883/jtf.2021.01.50270.159-20.

Full text
Abstract:
In this paper, a dispersed system formed by an ensemble of particles of different volume has been modeled in the framework of a thermodynamical approach. Particle shape has been determined by its fractal dimension which correlates its volume and surface area. Using the methods of number theory and Hardy-Ramanujan-Rademacher formula, we have calculated the equilibrium size distributions for nanoparticles of different shape in an ensemble. Estimates of the average volume and fractal dimension of dispersed particles have been obtained based on distribution functions. The correlation between average geometrical characteristics of particles in the ensemble, thermodynamical conditions of the dispersed system and properties of its substance have also been revealed.
APA, Harvard, Vancouver, ISO, and other styles
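The Hardy-Ramanujan-Rademacher formula mentioned in this abstract is Rademacher's exact convergent series for the partition function p(n), the number of ways of writing n as a sum of positive integers; its leading behaviour is the classical Hardy-Ramanujan asymptotic, reproduced here for reference:

\[
p(n) \;\sim\; \frac{1}{4n\sqrt{3}}\, \exp\!\left(\pi\sqrt{\frac{2n}{3}}\right) \qquad (n \to \infty).
\]
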
8

Chen, Hong, Zhibin Pan, Luoqing Li, and Yuanyan Tang. "Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection." Neural Computation 25, no. 4 (April 2013): 1107–21. http://dx.doi.org/10.1162/neco_a_00421.

Full text
Abstract:
In this letter, we consider a density-level detection (DLD) problem by a coefficient-based classification framework with [Formula: see text]-regularizer and data-dependent hypothesis spaces. Although the data-dependent characteristic of the algorithm provides flexibility and adaptivity for DLD, it leads to difficulty in generalization error analysis. To overcome this difficulty, an error decomposition is introduced from an established classification framework. On the basis of this decomposition, the estimate of the learning rate is obtained by using Rademacher average and stepping-stone techniques. In particular, the estimate is independent of the capacity assumption used in the previous literature.
APA, Harvard, Vancouver, ISO, and other styles
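The role Rademacher averages play in learning-rate estimates of this kind comes from the standard symmetrization bound: for a class F of functions with values in [0, 1] and an i.i.d. sample of size n, with probability at least 1 − δ,

\[
\sup_{f \in \mathcal{F}} \left( \mathbb{E}[f] \;-\; \frac{1}{n}\sum_{i=1}^{n} f(x_i) \right) \;\le\; 2\,\mathcal{R}_n(\mathcal{F}) \;+\; \sqrt{\frac{\ln(1/\delta)}{2n}},
\]

where \(\mathcal{R}_n(\mathcal{F})\) is the expected Rademacher average defined earlier. Data-dependent analyses such as the one in this paper refine this template to the hypothesis space at hand.
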
9

PENG, ZEWU, YAN PAN, YONG TANG, and GUOHUA CHEN. "A RELATIONAL RANKING METHOD WITH GENERALIZATION ANALYSIS." International Journal on Artificial Intelligence Tools 21, no. 03 (June 2012): 1250021. http://dx.doi.org/10.1142/s0218213012500212.

Full text
Abstract:
Recently, learning to rank, which aims at constructing a model for ranking objects, is one of the hot research topics in information retrieval and machine learning communities. Most of existing learning to rank approaches are based on the assumption that each object is independently and identically distributed. Although this assumption simplifies ranking problems, the implicit interconnections between objects are ignored. In this paper, a graph based ranking framework is proposed, which takes advantage of implicit correlations between objects. Furthermore, the derived relational ranking algorithm from this framework, called GRSVM, is developed based on the conventional algorithm RankSVM-primal. In addition, generalization properties of different relational ranking algorithms are analyzed using Rademacher Average. Based on the analysis, we find that GRSVM can achieve tighter generalization bound than existing relational ranking algorithms in most cases. Finally, a comparison of experimental results produced by improved and conventional algorithms shows the superior performance of the former.
APA, Harvard, Vancouver, ISO, and other styles
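For orientation, the conventional RankSVM that GRSVM builds on minimizes, in its primal form, a pairwise hinge-loss objective over preference pairs (i, j), meaning x_i should be ranked above x_j:

\[
\min_{w}\; \frac{1}{2}\lVert w \rVert^2 \;+\; C \sum_{(i,j) \in P} \max\bigl(0,\; 1 - \langle w,\, x_i - x_j\rangle\bigr),
\]

where P is the set of preference pairs and C a regularization trade-off parameter. GRSVM, as described in the abstract, augments this with graph-based relational information between objects.
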
10. Hegland, Markus, and Frank De Hoog. "Low rank approximation of positive semi-definite symmetric matrices using Gaussian elimination and volume sampling." ANZIAM Journal 62 (November 14, 2021): C58–C71. http://dx.doi.org/10.21914/anziamj.v62.16036.

Abstract:
Positive semi-definite matrices commonly occur as normal matrices of least squares problems in statistics or as kernel matrices in machine learning and approximation theory. They are typically large and dense. Thus algorithms to solve systems with such a matrix can be very costly. A core idea to reduce computational complexity is to approximate the matrix by one with a low rank. The optimal and well understood choice is based on the eigenvalue decomposition of the matrix. Unfortunately, this is computationally very expensive. Cheaper methods are based on Gaussian elimination but they require pivoting. We show how invariant matrix theory provides explicit error formulas for an averaged error based on volume sampling. The formula leads to ratios of elementary symmetric polynomials on the eigenvalues. We discuss several bounds for the expected norm of the approximation error and include examples where this expected error norm can be computed exactly.

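A representative guarantee for volume sampling, due to Deshpande, Rademacher, Vempala, and Wang (SODA 2006): if a k-subset S of the rows of A is drawn with probability proportional to the squared volume of the simplex spanned by those rows, then

\[
\mathbb{E}_S \,\bigl\lVert A - \pi_S(A) \bigr\rVert_F^2 \;\le\; (k+1)\,\bigl\lVert A - A_k \bigr\rVert_F^2,
\]

where \(\pi_S(A)\) projects the rows of A onto the span of the rows indexed by S and \(A_k\) is the best rank-k approximation.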

Dissertations / Theses on the topic "Rademacher averages"

1. Bousquet, Olivier Jean André. "Inégalités de concentration et théorie des processus empiriques appliqués à l'analyse d'algorithmes d'apprentissage" [Concentration inequalities and empirical process theory applied to the analysis of learning algorithms]. PhD thesis, École polytechnique, Palaiseau, 2002. http://www.theses.fr/2002EPXX0031.

2. Philips, Petra Camilla. "Data-Dependent Analysis of Learning Algorithms." PhD thesis, The Australian National University, Research School of Information Sciences and Engineering, 2005. http://thesis.anu.edu.au./public/adt-ANU20050901.204523.

Abstract:
This thesis studies the generalization ability of machine learning algorithms in a statistical setting. It focuses on the data-dependent analysis of the generalization performance of learning algorithms in order to make full use of the potential of the actual training sample from which these algorithms learn.

First, we propose an extension of the standard framework for the derivation of generalization bounds for algorithms taking their hypotheses from random classes of functions. This approach is motivated by the fact that the function produced by a learning algorithm based on a random sample of data depends on this sample and is therefore a random function. Such an approach avoids the detour of the worst-case uniform bounds as done in the standard approach. We show that the mechanism which allows one to obtain generalization bounds for random classes in our framework is based on a "small complexity" of certain random coordinate projections. We demonstrate how this notion of complexity relates to learnability and how one can explore geometric properties of these projections in order to derive estimates of rates of convergence and good confidence interval estimates for the expected risk. We then demonstrate the generality of our new approach by presenting a range of examples, among them the algorithm-dependent compression schemes and the data-dependent luckiness frameworks, which fall into our random subclass framework.

Second, we study in more detail generalization bounds for a specific algorithm which is of central importance in learning theory, namely the Empirical Risk Minimization algorithm (ERM). Recent results show that one can significantly improve the high-probability estimates for the convergence rates for empirical minimizers by a direct analysis of the ERM algorithm. These results are based on a new localized notion of complexity of subsets of hypothesis functions with identical expected errors and are therefore dependent on the underlying unknown distribution. We investigate the extent to which one can estimate these high-probability convergence rates in a data-dependent manner. We provide an algorithm which computes a data-dependent upper bound for the expected error of empirical minimizers in terms of the "complexity" of data-dependent local subsets. These subsets are sets of functions of empirical errors of a given range and can be determined based solely on empirical data. We then show that recent direct estimates, which are essentially sharp estimates on the high-probability convergence rate for the ERM algorithm, cannot be recovered universally from empirical data.

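Central to this thesis is the Empirical Risk Minimization algorithm; for reference, given a sample (x_1, y_1), ..., (x_n, y_n), a loss function \(\ell\), and a hypothesis class F, ERM returns

\[
\hat{f} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{F}} \;\frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i),\, y_i\bigr),
\]

and the localized analyses discussed in the abstract bound the expected error of \(\hat{f}\) through the complexity of small subsets of F with near-minimal empirical error.
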
3. Philips, Petra. "Data-Dependent Analysis of Learning Algorithms." PhD thesis, 2005. http://hdl.handle.net/1885/47998.

Abstract:
This thesis studies the generalization ability of machine learning algorithms in a statistical setting. It focuses on the data-dependent analysis of the generalization performance of learning algorithms in order to make full use of the potential of the actual training sample from which these algorithms learn.

First, we propose an extension of the standard framework for the derivation of generalization bounds for algorithms taking their hypotheses from random classes of functions. ...

Second, we study in more detail generalization bounds for a specific algorithm which is of central importance in learning theory, namely the Empirical Risk Minimization algorithm (ERM). ...

Book chapters on the topic "Rademacher averages"

1. Ledoux, Michel, and Michel Talagrand. "Rademacher Averages." In Probability in Banach Spaces, 89–121. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-642-20212-4_6.

2. V’yugin, Vladimir V. "VC Dimension, Fat-Shattering Dimension, Rademacher Averages, and Their Applications." In Measures of Complexity, 57–74. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21852-6_6.

3. Buhmann, M. D., Prem Melville, Vikas Sindhwani, Novi Quadrianto, Wray L. Buntine, Luís Torgo, Xinhua Zhang, et al. "Rademacher Average." In Encyclopedia of Machine Learning, 823. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_689.

Conference papers on the topic "Rademacher averages"

1. Riondato, Matteo, and Eli Upfal. "VC-Dimension and Rademacher Averages." In KDD '15: The 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2783258.2789984.

2. Riondato, Matteo, and Eli Upfal. "Mining Frequent Itemsets through Progressive Sampling with Rademacher Averages." In KDD '15: The 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2783258.2783265.

3. Pellegrina, Leonardo, Cyrus Cousins, Fabio Vandin, and Matteo Riondato. "MCRapper: Monte-Carlo Rademacher Averages for Poset Families and Approximate Pattern Mining." In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394486.3403267.

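The Monte-Carlo Rademacher averages of the last entry replace the exact expectation over all sign assignments with an average over a few sampled sign vectors. A minimal illustrative sketch of that idea in Python (not the MCRapper algorithm itself, which additionally exploits the poset structure of pattern families; the function and variable names here are ours):

import random

def mc_empirical_rademacher(function_values, num_trials=100, rng=None):
    """Monte-Carlo estimate of the empirical Rademacher average.

    function_values[j][i] holds f_j(x_i): one inner list per function
    in the class, one entry per sample point.
    """
    rng = rng or random.Random(0)
    n = len(function_values[0])  # sample size
    total = 0.0
    for _ in range(num_trials):
        # Draw one Rademacher sign vector sigma in {-1, +1}^n.
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        # Supremum over the class of the sign-weighted empirical average.
        total += max(
            sum(s * v for s, v in zip(sigma, values)) / n
            for values in function_values
        )
    return total / num_trials

# Usage: two functions evaluated on four sample points.
values = [[0.0, 1.0, 1.0, 0.0],  # f_1 on x_1..x_4
          [1.0, 1.0, 0.0, 0.0]]  # f_2 on x_1..x_4
print(mc_empirical_rademacher(values, num_trials=1000))

Averaging over more trials tightens the estimate; MCRapper's contribution is to compute such suprema efficiently over exponentially large families of pattern indicator functions.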