Academic literature on the topic 'Generalization bounds'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Generalization bounds.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Generalization bounds"

1

Cohn, David, and Gerald Tesauro. "How Tight Are the Vapnik-Chervonenkis Bounds?" Neural Computation 4, no. 2 (March 1992): 249–69. http://dx.doi.org/10.1162/neco.1992.4.2.249.

Full text
Abstract:
We describe a series of numerical experiments that measure the average generalization capability of neural networks trained on a variety of simple functions. These experiments are designed to test the relationship between average generalization performance and the worst-case bounds obtained from formal learning theory using the Vapnik-Chervonenkis (VC) dimension (Blumer et al. 1989; Haussler et al. 1990). Recent statistical learning theories (Tishby et al. 1989; Schwartz et al. 1990) suggest that surpassing these bounds might be possible if the spectrum of possible generalizations has a “gap” near perfect performance. We indeed find that, in some cases, the average generalization is significantly better than the VC bound: the approach to perfect performance is exponential in the number of examples m, rather than the 1/m result of the bound. However, in these cases, we have not found evidence of the gap predicted by the above statistical theories. In other cases, we do find the 1/m behavior of the VC bound, and in these cases, the numerical prefactor is closely related to the prefactor contained in the bound.
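
For context, the worst-case result being tested is of the classical VC type; schematically (up to constants, and not the exact statement used in the paper), a hypothesis class H of VC dimension d_VC admits, in the realizable case, a sample-complexity guarantee of the form

\[
m \;\gtrsim\; \frac{d_{\mathrm{VC}}(\mathcal{H})\,\ln(1/\varepsilon) + \ln(1/\delta)}{\varepsilon}
\quad\Longrightarrow\quad
\Pr\Bigl[\,\sup_{h \in \mathcal{H}_{\text{consistent}}} R(h) > \varepsilon\Bigr] \le \delta,
\]

so the generalization error of a consistent learner is guaranteed to decay only like d_VC/m up to logarithmic factors; this is the 1/m behaviour that the experiments compare against the empirically observed, sometimes exponential, convergence.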
APA, Harvard, Vancouver, ISO, and other styles
2

Nedovic, M., and Lj Cvetkovic. "Norm bounds for the inverse and error bounds for linear complementarity problems for {P1,P2}-Nekrasov matrices." Filomat 35, no. 1 (2021): 239–50. http://dx.doi.org/10.2298/fil2101239n.

Full text
Abstract:
{P1,P2}-Nekrasov matrices represent a generalization of Nekrasov matrices via permutations. In this paper, we obtain an error bound for linear complementarity problems for {P1,P2}-Nekrasov matrices. Numerical examples are given to illustrate that the new error bound can give tighter results than already known bounds when applied to Nekrasov matrices. We also present new max-norm bounds for the inverse of {P1,P2}-Nekrasov matrices in the block case, considering two different types of block generalizations. Numerical examples show that the new norm bounds for the block case can give tighter results than already known bounds for the point-wise case.
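
As general background (recalled here from the broader literature, not quoted from this paper), error bounds of this kind typically start from a bound for the linear complementarity problem LCP(M, q) with a P-matrix M of roughly the form

\[
\lVert x - x^{*}\rVert_{\infty}
\;\le\;
\max_{d \in [0,1]^{n}} \bigl\lVert (I - D + D M)^{-1} \bigr\rVert_{\infty}\,
\lVert \min(x,\, M x + q) \rVert_{\infty},
\qquad D = \operatorname{diag}(d),
\]

where x* is the solution and the minimum is taken componentwise; results for structured classes such as Nekrasov or {P1,P2}-Nekrasov matrices then provide computable upper bounds on the max-norm factor.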
APA, Harvard, Vancouver, ISO, and other styles
3

Nedovic, M. "Norm bounds for the inverse for generalized Nekrasov matrices in point-wise and block case." Filomat 35, no. 8 (2021): 2705–14. http://dx.doi.org/10.2298/fil2108705n.

Full text
Abstract:
Lower-semi-Nekrasov matrices represent a generalization of Nekrasov matrices. For the inverse of lower-semi-Nekrasov matrices, a max-norm bound is proposed. Numerical examples are given to illustrate that the new norm bound can give tighter results than already known bounds when applied to Nekrasov matrices. We also present new max-norm bounds for the inverse of lower-semi-Nekrasov matrices in the block case, considering two types of block generalizations and illustrating the results with numerical examples.
APA, Harvard, Vancouver, ISO, and other styles
4

Rubab, Faiza, Hira Nabi, and Asif R. Khan. "GENERALIZATION AND REFINEMENTS OF JENSEN INEQUALITY." Journal of Mathematical Analysis 12, no. 5 (October 31, 2021): 1–27. http://dx.doi.org/10.54379/jma-2021-5-1.

Full text
Abstract:
We give generalizations and refinements of the Jensen and Jensen–Mercer inequalities by using weights which satisfy the conditions of the Jensen and Jensen–Steffensen inequalities. We also give refinements of the discrete and integral versions of the generalized Jensen–Mercer inequality, which are shown to improve the upper bound for the Jensen difference given in [32]. Applications of our work include new bounds for some important inequalities used in information theory and generalizations of the relations among means.
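
For reference, the two inequalities being generalized read as follows (standard textbook forms, included only as background): for a convex function f on [a, b], weights p_i ≥ 0 with sum 1, and points x_i ∈ [a, b],

\[
f\Bigl(\sum_{i=1}^{n} p_i x_i\Bigr) \;\le\; \sum_{i=1}^{n} p_i f(x_i)
\qquad \text{(Jensen)},
\]
\[
f\Bigl(a + b - \sum_{i=1}^{n} p_i x_i\Bigr) \;\le\; f(a) + f(b) - \sum_{i=1}^{n} p_i f(x_i)
\qquad \text{(Jensen--Mercer)},
\]

and the Jensen–Steffensen variant relaxes the nonnegativity of the weights to the condition 0 ≤ P_k ≤ P_n on the partial sums P_k = p_1 + … + p_k.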
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Tongliang, Dacheng Tao, and Dong Xu. "Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes." Neural Computation 28, no. 10 (October 2016): 2213–49. http://dx.doi.org/10.1162/neco_a_00872.

Full text
Abstract:
The k-dimensional coding schemes refer to a collection of methods that attempt to represent data using a set of representative k-dimensional vectors and include nonnegative matrix factorization, dictionary learning, sparse coding, k-means clustering, and vector quantization as special cases. Previous generalization bounds for the reconstruction error of k-dimensional coding schemes are mainly dimensionality-independent. A major advantage of these bounds is that they can be used to analyze the generalization error when data are mapped into an infinite- or high-dimensional feature space. However, many applications use finite-dimensional data features. Can we obtain dimensionality-dependent generalization bounds for k-dimensional coding schemes that are tighter than dimensionality-independent bounds when data are in a finite-dimensional feature space? Yes. In this letter, we address this problem and derive a dimensionality-dependent generalization bound for k-dimensional coding schemes by bounding the covering number of the loss function class induced by the reconstruction error. The bound is of order [Formula: see text], where m is the dimension of the features, k is the number of columns in the linear implementation of coding schemes, and n is the sample size; it is [Formula: see text] when n is finite and [Formula: see text] when n is infinite. We show that our bound can be tighter than previous results because it avoids inducing the worst-case upper bound on k of the loss function. The proposed generalization bound is also applied to some specific coding schemes to demonstrate that the dimensionality-dependent bound is an indispensable complement to the dimensionality-independent generalization bounds.
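
As background on how covering numbers yield such results (a schematic statement, not the bound derived in the letter): if N(F, ε) denotes the covering number of the induced loss class F at scale ε, then a typical uniform deviation bound has the shape

\[
\sup_{f \in \mathcal{F}} \bigl( \mathbb{E}[f] - \widehat{\mathbb{E}}_n[f] \bigr)
\;\lesssim\;
\inf_{\varepsilon > 0} \left( \varepsilon + \sqrt{\frac{\ln N(\mathcal{F}, \varepsilon) + \ln(1/\delta)}{n}} \right)
\quad \text{with probability at least } 1 - \delta,
\]

so any estimate of ln N(F, ε) that grows with the feature dimension m and the number of columns k translates directly into a dimensionality-dependent generalization bound.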
APA, Harvard, Vancouver, ISO, and other styles
6

Pereira, Rajesh, and Mohammad Ali Vali. "Generalizations of the Cauchy and Fujiwara Bounds for Products of Zeros of a Polynomial." Electronic Journal of Linear Algebra 31 (February 5, 2016): 565–71. http://dx.doi.org/10.13001/1081-3810.3333.

Full text
Abstract:
The Cauchy bound is one of the best known upper bounds for the modulus of the zeros of a polynomial. The Fujiwara bound is another useful upper bound for the modulus of the zeros of a polynomial. In this paper, compound matrices are used to derive a generalization of both the Cauchy bound and the Fujiwara bound. This generalization yields upper bounds for the modulus of the product of m zeros of the polynomial.
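
For reference, the two classical bounds in question are (standard statements, included as background): for p(z) = a_n z^n + a_{n-1} z^{n-1} + … + a_0 with a_n ≠ 0, every zero z of p satisfies

\[
|z| \;\le\; 1 + \max_{0 \le k \le n-1} \left| \frac{a_k}{a_n} \right|
\quad \text{(Cauchy)},
\qquad\qquad
|z| \;\le\; 2 \max_{1 \le k \le n} \left| \frac{a_{n-k}}{a_n} \right|^{1/k}
\quad \text{(Fujiwara)}.
\]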
APA, Harvard, Vancouver, ISO, and other styles
7

Parrondo, J. M. R., and C. Van den Broeck. "Vapnik-Chervonenkis bounds for generalization." Journal of Physics A: Mathematical and General 26, no. 9 (May 7, 1993): 2211–23. http://dx.doi.org/10.1088/0305-4470/26/9/016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Freund, Yoav, Yishay Mansour, and Robert E. Schapire. "Generalization bounds for averaged classifiers." Annals of Statistics 32, no. 4 (August 2004): 1698–722. http://dx.doi.org/10.1214/009053604000000058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Papadatos, N., and V. Papathanasiou. "A generalization of variance bounds." Statistics & Probability Letters 28, no. 2 (June 1996): 191–94. http://dx.doi.org/10.1016/0167-7152(95)00117-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kochedykov, D. A. "Combinatorial shell bounds for generalization ability." Pattern Recognition and Image Analysis 20, no. 4 (December 2010): 459–73. http://dx.doi.org/10.1134/s1054661810040061.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Generalization bounds"

1

McDonald, Daniel J. "Generalization Error Bounds for Time Series." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/184.

Full text
Abstract:
In this thesis, I derive generalization error bounds — bounds on the expected inaccuracy of the predictions — for time series forecasting models. These bounds allow forecasters to select among competing models, and to declare that, with high probability, their chosen model will perform well — without making strong assumptions about the data generating process or appealing to asymptotic theory. Expanding upon results from statistical learning theory, I demonstrate how these techniques can help time series forecasters to choose models which behave well under uncertainty. I also show how to estimate the β-mixing coefficients for dependent data so that my results can be used empirically. I use the bound explicitly to evaluate different predictive models for the volatility of IBM stock and for a standard set of macroeconomic variables. Taken together my results show how to control the generalization error of time series models with fixed or growing memory.
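
As a rough illustration of the shape such results take (a schematic form recalled from the β-mixing literature, not the thesis's own bound): if a stationary β-mixing sample of size m is grouped into μ blocks of length a, one typically obtains, with probability at least 1 − δ,

\[
R(f) \;\le\; \widehat{R}_m(f) + \operatorname{comp}(\mathcal{F}, \mu) + \sqrt{\frac{\ln(1/\delta')}{2\mu}},
\qquad
\delta' = \delta - 2(\mu - 1)\,\beta(a),
\]

so the estimated mixing coefficients β(a) discount the usable confidence level, and the effective sample size is the number μ of nearly independent blocks rather than m.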
APA, Harvard, Vancouver, ISO, and other styles
2

Kroon, Rodney Stephen. "Support vector machines, generalization bounds, and transduction." Thesis, Stellenbosch : University of Stellenbosch, 2003. http://hdl.handle.net/10019.1/16375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kelby, Robin J. "Formalized Generalization Bounds for Perceptron-Like Algorithms." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1594805966855804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Giulini, Ilaria. "Generalization bounds for random samples in Hilbert spaces." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0026/document.

Full text
Abstract:
This thesis focuses on obtaining generalization bounds for random samples in reproducing kernel Hilbert spaces. The approach consists in first obtaining non-asymptotic dimension-free bounds in finite-dimensional spaces using some PAC-Bayesian inequalities related to Gaussian perturbations and then in generalizing the results in a separable Hilbert space. We first investigate the question of estimating the Gram operator by a robust estimator from an i. i. d. sample and we present uniform bounds that hold under weak moment assumptions. These results allow us to qualify principal component analysis independently of the dimension of the ambient space and to propose stable versions of it. In the last part of the thesis we present a new algorithm for spectral clustering. It consists in replacing the projection on the eigenvectors associated with the largest eigenvalues of the Laplacian matrix by a power of the normalized Laplacian. This iteration, justified by the analysis of clustering in terms of Markov chains, performs a smooth truncation. We prove nonasymptotic bounds for the convergence of our spectral clustering algorithm applied to a random sample of points in a Hilbert space that are deduced from the bounds for the Gram operator in a Hilbert space. Experiments are done in the context of image analysis
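
A minimal, illustrative sketch of the iterated normalized-operator idea described above (a toy version under simplifying assumptions: fixed number of clusters, hypothetical function name and parameters t and sigma, Gaussian affinities; not the thesis's algorithm, which determines the number of clusters automatically):

import numpy as np
from sklearn.cluster import KMeans

def iterated_spectral_clustering(X, n_clusters, t=10, sigma=1.0):
    """Cluster rows of X by powering the normalized affinity operator
    instead of projecting onto its leading eigenvectors (toy sketch)."""
    # Gaussian (RBF) affinity matrix
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    # Symmetrically normalized affinity M = D^(-1/2) W D^(-1/2)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    M = D_inv_sqrt @ W @ D_inv_sqrt
    # Powering M damps the small eigenvalues geometrically and acts as a
    # smooth surrogate for projection onto the leading eigenvectors
    T = np.linalg.matrix_power(M, t)
    # Normalize rows and cluster them
    rows = T / (np.linalg.norm(T, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(rows)

# Example usage on two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
labels = iterated_spectral_clustering(X, n_clusters=2)

For large t the row space of T approximates the span of the leading eigenvectors, which is the connection to standard spectral clustering; the thesis controls this step non-asymptotically through its bounds on the estimated Gram operator.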
APA, Harvard, Vancouver, ISO, and other styles
5

Rakhlin, Alexander. "Applications of empirical processes in learning theory : algorithmic stability and generalization bounds." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34564.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2006.
Includes bibliographical references (p. 141-148).
This thesis studies two key properties of learning algorithms: their generalization ability and their stability with respect to perturbations. To analyze these properties, we focus on concentration inequalities and tools from empirical process theory. We obtain theoretical results and demonstrate their applications to machine learning. First, we show how various notions of stability upper- and lower-bound the bias and variance of several estimators of the expected performance for general learning algorithms. A weak stability condition is shown to be equivalent to consistency of empirical risk minimization. The second part of the thesis derives tight performance guarantees for greedy error minimization methods - a family of computationally tractable algorithms. In particular, we derive risk bounds for a greedy mixture density estimation procedure. We prove that, unlike what is suggested in the literature, the number of terms in the mixture is not a bias-variance trade-off for the performance. The third part of this thesis provides a solution to an open problem regarding the stability of Empirical Risk Minimization (ERM). This algorithm is of central importance in Learning Theory.
(cont.) By studying the suprema of the empirical process, we prove that ERM over Donsker classes of functions is stable in the L1 norm. Hence, as the number of samples grows, it becomes less and less likely that a perturbation of o(v/n) samples will result in a very different empirical minimizer. Asymptotic rates of this stability are proved under metric entropy assumptions on the function class. Through the use of a ratio limit inequality, we also prove stability of expected errors of empirical minimizers. Next, we investigate applications of the stability result. In particular, we focus on procedures that optimize an objective function, such as k-means and other clustering methods. We demonstrate that stability of clustering, just like stability of ERM, is closely related to the geometry of the class and the underlying measure. Furthermore, our result on stability of ERM delineates a phase transition between stability and instability of clustering methods. In the last chapter, we prove a generalization of the bounded-difference concentration inequality for almost-everywhere smooth functions. This result can be utilized to analyze algorithms which are almost always stable. Next, we prove a phase transition in the concentration of almost-everywhere smooth functions. Finally, a tight concentration of empirical errors of empirical minimizers is shown under an assumption on the underlying space.
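
For context, the classical uniform-stability guarantee that this line of work refines can be stated as follows (a standard result recalled as background, not one of the thesis's theorems): if a learning algorithm A is β-uniformly stable and its loss is bounded by M, then with probability at least 1 − δ over an i.i.d. sample S of size n,

\[
R(A_S) \;\le\; \widehat{R}_n(A_S) + 2\beta + \bigl(4 n \beta + M\bigr) \sqrt{\frac{\ln(1/\delta)}{2 n}},
\]

which is informative whenever β = o(1/√n); the L1-stability of ERM over Donsker classes studied in the thesis is a weaker, distribution-dependent notion than this uniform one.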
APA, Harvard, Vancouver, ISO, and other styles
6

Nordenfors, Oskar. "A Literature Study Concerning Generalization Error Bounds for Neural Networks via Rademacher Complexity." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184487.

Full text
Abstract:
In this essay some fundamental results from the theory of machine learning and neural networks are presented, with the goal of finally discussing bounds on the generalization error of neural networks, via Rademacher complexity.
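
The central tool of the essay is the standard Rademacher-complexity bound, which can be recalled as follows (a textbook statement included for context): for a class F of functions with values in [0, 1] and an i.i.d. sample z_1, …, z_m, with probability at least 1 − δ,

\[
\mathbb{E}[f] \;\le\; \widehat{\mathbb{E}}_m[f] + 2\,\mathfrak{R}_m(\mathcal{F}) + \sqrt{\frac{\ln(1/\delta)}{2m}}
\quad \text{for all } f \in \mathcal{F},
\qquad
\mathfrak{R}_m(\mathcal{F}) = \mathbb{E}\Bigl[\sup_{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^{m} \sigma_i f(z_i)\Bigr],
\]

where the σ_i are independent uniform ±1 (Rademacher) variables; bounding the generalization error of a neural network class then reduces to bounding its Rademacher complexity, for instance through layer-wise norm bounds.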
APA, Harvard, Vancouver, ISO, and other styles
7

Bellet, Aurélien. "Supervised metric learning with generalization guarantees." Phd thesis, Université Jean Monnet - Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00770627.

Full text
Abstract:
In recent years, the crucial importance of metrics in machine learning algorithms has led to an increasing interest in optimizing distance and similarity functions using knowledge from training data to make them suitable for the problem at hand. This area of research is known as metric learning. Existing methods typically aim at optimizing the parameters of a given metric with respect to some local constraints over the training sample. The learned metrics are generally used in nearest-neighbor and clustering algorithms. When data consist of feature vectors, a large body of work has focused on learning a Mahalanobis distance, which is parameterized by a positive semi-definite matrix. Recent methods offer good scalability to large datasets. Less work has been devoted to metric learning from structured objects (such as strings or trees), because it often involves complex procedures. Most of the work has focused on optimizing a notion of edit distance, which measures (in terms of number of operations) the cost of turning an object into another. We identify two important limitations of current supervised metric learning approaches. First, they allow one to improve the performance of local algorithms such as k-nearest neighbors, but metric learning for global algorithms (such as linear classifiers) has not really been studied so far. Second, and perhaps more importantly, the question of the generalization ability of metric learning methods has been largely ignored. In this thesis, we propose theoretical and algorithmic contributions that address these limitations. Our first contribution is the derivation of a new kernel function built from learned edit probabilities. Unlike other string kernels, it is guaranteed to be valid and parameter-free. Our second contribution is a novel framework for learning string and tree edit similarities inspired by the recent theory of (epsilon,gamma,tau)-good similarity functions and formulated as a convex optimization problem. Using uniform stability arguments, we establish theoretical guarantees for the learned similarity that give a bound on the generalization error of a linear classifier built from that similarity. In our third contribution, we extend the same ideas to metric learning from feature vectors by proposing a bilinear similarity learning method that efficiently optimizes the (epsilon,gamma,tau)-goodness. The similarity is learned based on global constraints that are more appropriate to linear classification. Generalization guarantees are derived for our approach, highlighting that our method minimizes a tighter bound on the generalization error of the classifier. Our last contribution is a framework for establishing generalization bounds for a large class of existing metric learning algorithms. It is based on a simple adaptation of the notion of algorithmic robustness and allows the derivation of bounds for various loss functions and regularizers.
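
For reference, the Mahalanobis distance mentioned above is parameterized as (a standard definition, not specific to this thesis)

\[
d_M(x, x') \;=\; \sqrt{(x - x')^{\top} M \,(x - x')}, \qquad M \succeq 0,
\]

so metric learning amounts to learning the positive semi-definite matrix M under pairwise constraints; writing M = LᵀL shows this is equivalent to learning a linear transformation L of the feature space.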
APA, Harvard, Vancouver, ISO, and other styles
8

Musayeva, Khadija. "Generalization Performance of Margin Multi-category Classifiers." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0096/document.

Full text
Abstract:
This thesis deals with the theory of margin multi-category classification, and is based on the statistical learning theory founded by Vapnik and Chervonenkis. We are interested in deriving generalization bounds with explicit dependencies on the number C of categories, the sample size m and the margin parameter gamma, when the loss function considered is a Lipschitz continuous margin loss function. Generalization bounds rely on the empirical performance of the classifier as well as its "capacity". In this work, the following scale-sensitive capacity measures are considered: the Rademacher complexity, the covering numbers and the fat-shattering dimension. Our main contributions are obtained under the assumption that the classes of component functions implemented by a classifier have polynomially growing fat-shattering dimensions and that the component functions are independent. In the context of the pathway of Mendelson, which relates the Rademacher complexity to the covering numbers and the latter to the fat-shattering dimension, we study the impact that decomposing at the level of one of these capacity measures has on the dependencies on C, m and gamma. In particular, we demonstrate that the dependency on C can be substantially improved over the state of the art if the decomposition is postponed to the level of the metric entropy or the fat-shattering dimension. On the other hand, this impacts negatively the rate of convergence (dependency on m), an indication of the fact that optimizing the dependencies on the three basic parameters amounts to looking for a trade-off
APA, Harvard, Vancouver, ISO, and other styles
9

Philips, Petra Camilla. "Data-Dependent Analysis of Learning Algorithms." The Australian National University. Research School of Information Sciences and Engineering, 2005. http://thesis.anu.edu.au./public/adt-ANU20050901.204523.

Full text
Abstract:
This thesis studies the generalization ability of machine learning algorithms in a statistical setting. It focuses on the data-dependent analysis of the generalization performance of learning algorithms in order to make full use of the potential of the actual training sample from which these algorithms learn.
First, we propose an extension of the standard framework for the derivation of generalization bounds for algorithms taking their hypotheses from random classes of functions. This approach is motivated by the fact that the function produced by a learning algorithm based on a random sample of data depends on this sample and is therefore a random function. Such an approach avoids the detour of the worst-case uniform bounds as done in the standard approach. We show that the mechanism which allows one to obtain generalization bounds for random classes in our framework is based on a “small complexity” of certain random coordinate projections. We demonstrate how this notion of complexity relates to learnability and how one can explore geometric properties of these projections in order to derive estimates of rates of convergence and good confidence interval estimates for the expected risk. We then demonstrate the generality of our new approach by presenting a range of examples, among them the algorithm-dependent compression schemes and the data-dependent luckiness frameworks, which fall into our random subclass framework.
Second, we study in more detail generalization bounds for a specific algorithm which is of central importance in learning theory, namely the Empirical Risk Minimization algorithm (ERM). Recent results show that one can significantly improve the high-probability estimates for the convergence rates for empirical minimizers by a direct analysis of the ERM algorithm. These results are based on a new localized notion of complexity of subsets of hypothesis functions with identical expected errors and are therefore dependent on the underlying unknown distribution. We investigate the extent to which one can estimate these high-probability convergence rates in a data-dependent manner. We provide an algorithm which computes a data-dependent upper bound for the expected error of empirical minimizers in terms of the “complexity” of data-dependent local subsets. These subsets are sets of functions of empirical errors of a given range and can be determined based solely on empirical data. We then show that recent direct estimates, which are essentially sharp estimates on the high-probability convergence rate for the ERM algorithm, can not be recovered universally from empirical data.
APA, Harvard, Vancouver, ISO, and other styles
10

Katsikarelis, Ioannis. "Structurally Parameterized Tight Bounds and Approximation for Generalizations of Independence and Domination." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED048.

Full text
Abstract:
In this thesis we focus on the NP-hard problems (k, r)-CENTER and d-SCATTERED SET that generalize the well-studied concepts of domination and independence over larger distances. In the first part we maintain a parameterized viewpoint and examine the standard parameterization as well as the most widely-used graph parameters measuring the input’s structure. We offer hardness results that show there is no algorithm of running-time below certain bounds, subject to the (Strong) Exponential Time Hypothesis, produce essentially optimal algorithms of complexity that matches these lower bounds and further attempt to offer an alternative to exact computation in significantly reduced running-time by way of approximation algorithms. In the second part we consider the (super-)polynomial (in-)approximability of the d-SCATTERED SET problem, i.e. we determine the exact relationship between an achievable approximation ratio ρ, the distance parameter d, and the runningtime of any ρ-approximation algorithm expressed as a function of the above and the size of the input n. We then consider strictly polynomial running-times and improve our understanding on the approximability characteristics of the problem on graphs of bounded maximum degree as well as bipartite graphs
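
For readers unfamiliar with the two problems, the standard definitions are (included as background): given a graph G = (V, E) and integers k, r, d,

\[
(k,r)\text{-\textsc{Center}: find } S \subseteq V,\ |S| \le k, \text{ such that } \operatorname{dist}(v, S) \le r \text{ for every } v \in V;
\]
\[
d\text{-\textsc{Scattered Set}: find a largest } S \subseteq V \text{ with } \operatorname{dist}(u, v) \ge d \text{ for all distinct } u, v \in S.
\]

Setting r = 1 recovers the classical dominating set problem and d = 2 recovers the classical independent set problem, which is the sense in which the two problems generalize domination and independence.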
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Generalization bounds"

1

Arnolʹd, V. I. Experimental Mathematics. Berkeley, California: MSRI Mathematical Sciences Research Institute, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Edmunds, D. E., and W. D. Evans. Sesquilinear Forms in Hilbert Spaces. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198812050.003.0004.

Full text
Abstract:
The centre-pieces of this chapter are the Lax–Milgram Theorem and the existence of weak or variational solutions to problems involving sesquilinear forms. An important application is to Kato’s First Representation Theorem, which associates a unique m-sectorial operator with a closed, densely defined sesquilinear form, thus extending the Friedrichs extension for a lower bounded symmetric operator. Stampacchia’s generalization of the Lax–Milgram Theorem to variational inequalities is also discussed.
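
For context, the Lax–Milgram Theorem can be stated as follows (a standard formulation, not the book's exact wording): if a(·,·) is a sesquilinear form on a Hilbert space H that is bounded, |a(u, v)| ≤ C‖u‖‖v‖, and coercive, Re a(u, u) ≥ c‖u‖² with c > 0, then

\[
\text{for every bounded conjugate-linear functional } F \text{ on } H \text{ there exists a unique } u \in H
\text{ such that } a(u, v) = F(v) \text{ for all } v \in H,
\]

with ‖u‖ ≤ ‖F‖/c. Kato's First Representation Theorem extends this picture by associating a unique m-sectorial operator T with a closed, densely defined sectorial form, satisfying a(u, v) = ⟨Tu, v⟩ for all u in the domain of T and v in the domain of the form.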
APA, Harvard, Vancouver, ISO, and other styles
3

Espiritu, Yen Le. Race and U.S. Panethnic Formation. Edited by Ronald H. Bayor. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199766031.013.013.

Full text
Abstract:
Panethnicity refers to the development of bridging organizations and the generalization of solidarity among subgroups that are racialized to be homogeneous by outsiders. This chapter argues that while the formation of a consolidated white identity in the United States is self-motivated and linked to white privilege, panethnicity for people of color is a product of racial categorization and bound up with power relations. As the influx of new immigrants transforms the demographic composition of existing groups such as Asian Americans and Latinos, group members face the challenge of bridging the class, ethnic, and generational chasms dividing the immigrants and the U.S.-born. In all, existing data confirm the plural and ambivalent nature of panethnicity: it is a highly contested terrain on which different groups merge and clash over terms of inclusion but also an effective site from which to forge alliances with other groups both within and across the U.S. borders.
APA, Harvard, Vancouver, ISO, and other styles
4

Horing, Norman J. Morgenstern. Superfluidity and Superconductivity. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198791942.003.0013.

Full text
Abstract:
Chapter 13 addresses Bose condensation in superfluids (and superconductors), which involves the field operator ψ‎ having a c-number component (<ψ(x,t)>≠0), challenging number conservation. The nonlinear Gross-Pitaevskii equation is derived for this condensate wave function<ψ>=ψ−ψ˜, facilitating identification of the coherence length and the core region of vortex motion. The noncondensate Green’s function G˜1(1,1′)=−i<(ψ˜(1)ψ˜+(1′))+> and the nonvanishing anomalous correlation function F˜∗(2,1′)=−i<(ψ˜+(2)ψ˜+(1′))+> describe the dynamics and elementary excitations of the non-condensate states and are discussed in conjunction with Landau’s criterion for viscosity. Associated concepts of off-diagonal long-range order and the interpretation of <ψ> as a superfluid order parameter are also introduced. Anderson’s Bose-condensed state, as a phase-coherent wave packet superposition of number states, resolves issues of number conservation. Superconductivity involves bound Cooper pairs of electrons capable of Bose condensation and superfluid behavior. Correspondingly, the two-particle Green’s function has a term involving a product of anomalous bound-Cooper-pair condensate wave functions of the type F(1,2)=−i<(ψ(1)ψ(2))+>≠0, such that G2(1,2;1′,2′)=F(1,2)F+(1′,2′)+G˜2(1,2;1′,2′). Here, G˜2 describes the dynamics/excitations of the non-superfluid-condensate states, while nonvanishing F,F+ represent a phase-coherent wave packet superposition of Cooper-pair number states and off-diagonal long range order. Employing this form of G2 in the G1-equation couples the condensed state with the non-condensate excitations. Taken jointly with the dynamical equation for F(1,2), this leads to the Gorkov equations, encompassing the Bardeen–Cooper–Schrieffer (BCS) energy gap, critical temperature, and Bogoliubov-de Gennes eigenfunction Bogoliubons. Superconductor thermodynamics and critical magnetic field are discussed. For a weak magnetic field, the Gorkov-equations lead to Ginzburg–Landau theory and a nonlinear Schrödinger-like equation for the pair wave function and the associated supercurrent, along with identification of the Cooper pair density. Furthermore, Chapter 13 addresses the apparent lack of gauge invariance of London theory with an elegant variational analysis involving re-gauging the potentials, yielding a manifestly gauge invariant generalization of the London equation. Consistency with the equation of continuity implies the existence of Anderson’s acoustic normal mode, which is supplanted by the plasmon for Coulomb interaction. Type II superconductors and the penetration (and interaction) of quantized magnetic flux lines are also discussed. Finally, Chapter 13 addresses Josephson tunneling between superconductors.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Generalization bounds"

1

Zhang, Xinhua, Novi Quadrianto, Kristian Kersting, Zhao Xu, Yaakov Engel, Claude Sammut, Mark Reid, et al. "Generalization Bounds." In Encyclopedia of Machine Learning, 447–54. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reid, Mark. "Generalization Bounds." In Encyclopedia of Machine Learning and Data Mining, 556. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rejchel, W. "Generalization Bounds for Ranking Algorithms." In Ensemble Classification Methods with Applicationsin R, 135–39. Chichester, UK: John Wiley & Sons, Ltd, 2018. http://dx.doi.org/10.1002/9781119421566.ch7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shawe-Taylor, John, and Nello Cristianini. "Margin Distribution Bounds on Generalization." In Lecture Notes in Computer Science, 263–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-49097-3_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cadoli, Marco, Luigi Palopoli, and Francesco Scarcello. "Propositional Lower Bounds: Generalization and Algorithms." In Logics in Artificial Intelligence, 355–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/3-540-49545-2_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kääriäinen, Matti. "Generalization Error Bounds Using Unlabeled Data." In Learning Theory, 127–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11503415_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mannor, Shie, and Ron Meir. "Geometric Bounds for Generalization in Boosting." In Lecture Notes in Computer Science, 461–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44581-1_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Blanchard, Gilles. "Generalization Error Bounds for Aggregate Classifiers." In Nonlinear Estimation and Classification, 357–67. New York, NY: Springer New York, 2003. http://dx.doi.org/10.1007/978-0-387-21579-2_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Agarwal, Shivani. "Generalization Bounds for Some Ordinal Regression Algorithms." In Lecture Notes in Computer Science, 7–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-87987-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Vorontsov, Konstantin, and Andrey Ivahnenko. "Tight Combinatorial Generalization Bounds for Threshold Conjunction Rules." In Lecture Notes in Computer Science, 66–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21786-9_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Generalization bounds"

1

Banerjee, Pradeep Kr, and Guido Montufar. "Information Complexity and Generalization Bounds." In 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021. http://dx.doi.org/10.1109/isit45174.2021.9517960.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lopez, Adrian Tovar, and Varun Jog. "Generalization error bounds using Wasserstein distances." In 2018 IEEE Information Theory Workshop (ITW). IEEE, 2018. http://dx.doi.org/10.1109/itw.2018.8613445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lei, Yunwen, Shao-Bo Lin, and Ke Tang. "Generalization Bounds for Regularized Pairwise Learning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/329.

Full text
Abstract:
Pairwise learning refers to learning tasks with the associated loss functions depending on pairs of examples. Recently, pairwise learning has received increasing attention since it covers many machine learning schemes, e.g., metric learning, ranking and AUC maximization, in a unified framework. In this paper, we establish a unified generalization error bound for regularized pairwise learning without either Bernstein conditions or capacity assumptions. We apply this general result to typical learning tasks including distance metric learning and ranking, for each of which our discussion is able to improve the state-of-the-art results.
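
For context, the defining feature of pairwise learning is that the empirical risk is an average over pairs of examples (a U-statistic) rather than over single examples; schematically (not the paper's notation),

\[
R(f) = \mathbb{E}_{z, z'}\bigl[\ell(f; z, z')\bigr],
\qquad
\widehat{R}_n(f) = \frac{1}{n(n-1)} \sum_{i \ne j} \ell\bigl(f; z_i, z_j\bigr),
\]

with, for example, the pairwise hinge loss ℓ(f; (x, +1), (x', −1)) = max(0, 1 − f(x) + f(x')) in AUC maximization; the statistical coupling between pairs sharing an example is what makes the usual i.i.d. generalization arguments inapplicable without modification.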
APA, Harvard, Vancouver, ISO, and other styles
4

Dayal, Abhinav. "Adaptive bounds for quadric based generalization." In 2009 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2009. http://dx.doi.org/10.1109/igarss.2009.5417731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kääriäinen, Matti, and John Langford. "A comparison of tight generalization error bounds." In the 22nd international conference. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1102351.1102403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pensia, Ankit, Varun Jog, and Po-Ling Loh. "Generalization Error Bounds for Noisy, Iterative Algorithms." In 2018 IEEE International Symposium on Information Theory (ISIT). IEEE, 2018. http://dx.doi.org/10.1109/isit.2018.8437571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chan, Wendy. "Improving the Precision of Bounds for Generalization." In 2019 AERA Annual Meeting. Washington DC: AERA, 2019. http://dx.doi.org/10.3102/1426278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Modak, Eeshan, Himanshu Asnani, and Vinod M. Prabhakaran. "Rényi Divergence Based Bounds on Generalization Error." In 2021 IEEE Information Theory Workshop (ITW). IEEE, 2021. http://dx.doi.org/10.1109/itw48936.2021.9611387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bu, Yuheng, Shaofeng Zou, and Venugopal V. Veeravalli. "Tightening Mutual Information Based Bounds on Generalization Error." In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Issa, Ibrahim, Amedeo Roberto Esposito, and Michael Gastpar. "Strengthened Information-theoretic Bounds on the Generalization Error." In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849834.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Generalization bounds"

1

Dhankhar, Ritu, and Prasanna Kumar. A Remark on a Generalization of the Cauchy’s Bound. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, October 2020. http://dx.doi.org/10.7546/crabs.2020.10.01.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zarrieß, Benjamin, and Anni-Yasmin Turhan. Most Specific Generalizations w.r.t. General EL-TBoxes. Technische Universität Dresden, 2013. http://dx.doi.org/10.25368/2022.196.

Full text
Abstract:
In the area of Description Logics, the least common subsumer (lcs) and the most specific concept (msc) are inferences that generalize a set of concepts or an individual, respectively, into a single concept. If computed w.r.t. a general EL-TBox, neither the lcs nor the msc need exist. So far, no exact conditions for the existence of lcs- or msc-concepts are known in this setting. This report provides necessary and sufficient conditions for the existence of these two kinds of concepts. For the lcs of a fixed number of concepts and for the msc, we show that existence is decidable in PTime and give polynomial bounds on the maximal role depth of the lcs- and msc-concepts. The latter allows the lcs and the msc, respectively, to be computed.
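
A tiny illustrative example of the two inferences (a hypothetical TBox and ABox, included only to fix intuitions):

\[
\mathcal{T} = \{\, \textit{Father} \sqsubseteq \textit{Parent},\ \textit{Mother} \sqsubseteq \textit{Parent} \,\},
\qquad
\mathcal{A} = \{\, \textit{Father}(\textit{john}) \,\},
\]
\[
\operatorname{lcs}_{\mathcal{T}}(\textit{Father}, \textit{Mother}) = \textit{Parent},
\qquad
\operatorname{msc}_{\mathcal{T},\mathcal{A}}(\textit{john}) = \textit{Father},
\]

assuming the knowledge base contains no more specific common subsumer and no more specific concept for john.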
APA, Harvard, Vancouver, ISO, and other styles
