Selected scientific literature on the topic "Generalization bound"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings and other scientific sources relevant to the topic "Generalization bound".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is present in the metadata.

Journal articles on the topic "Generalization bound"

1

Cohn, David, and Gerald Tesauro. "How Tight Are the Vapnik-Chervonenkis Bounds?" Neural Computation 4, no. 2 (March 1992): 249–69. http://dx.doi.org/10.1162/neco.1992.4.2.249.

Full text of the source
Abstract:
We describe a series of numerical experiments that measure the average generalization capability of neural networks trained on a variety of simple functions. These experiments are designed to test the relationship between average generalization performance and the worst-case bounds obtained from formal learning theory using the Vapnik-Chervonenkis (VC) dimension (Blumer et al. 1989; Haussler et al. 1990). Recent statistical learning theories (Tishby et al. 1989; Schwartz et al. 1990) suggest that surpassing these bounds might be possible if the spectrum of possible generalizations has a “gap” near perfect performance. We indeed find that, in some cases, the average generalization is significantly better than the VC bound: the approach to perfect performance is exponential in the number of examples m, rather than the 1/m result of the bound. However, in these cases, we have not found evidence of the gap predicted by the above statistical theories. In other cases, we do find the 1/m behavior of the VC bound, and in these cases, the numerical prefactor is closely related to the prefactor contained in the bound.
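For orientation, the worst-case rate the abstract refers to can be illustrated by a standard textbook form of the VC bound (an illustrative statement, not the exact inequality used in the paper): for a consistent learner over a hypothesis class of VC dimension d, with probability at least 1 − δ over m i.i.d. examples,

\[
\operatorname{err}(h) \;\le\; O\!\left(\frac{d \ln m + \ln(1/\delta)}{m}\right),
\]

which decays roughly like 1/m (up to logarithmic factors), in contrast with the exponential decay in m observed empirically in some of the experiments.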
Styles: ABNT, Harvard, Vancouver, APA, etc.
2

Pereira, Rajesh, and Mohammad Ali Vali. "Generalizations of the Cauchy and Fujiwara Bounds for Products of Zeros of a Polynomial". Electronic Journal of Linear Algebra 31 (February 5, 2016): 565–71. http://dx.doi.org/10.13001/1081-3810.3333.

Full text of the source
Abstract:
The Cauchy bound is one of the best known upper bounds for the modulus of the zeros of a polynomial. The Fujiwara bound is another useful upper bound for the modulus of the zeros of a polynomial. In this paper, compound matrices are used to derive a generalization of both the Cauchy bound and the Fujiwara bound. This generalization yields upper bounds for the modulus of the product of $m$ zeros of the polynomial.
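As a point of reference, the classical Cauchy bound mentioned here states that every zero z of p(z) = a_n z^n + a_{n-1} z^{n-1} + \dots + a_0 (with a_n \ne 0) satisfies

\[
|z| \;\le\; 1 + \max_{0 \le k \le n-1} \left| \frac{a_k}{a_n} \right|.
\]

The paper generalizes this single-zero statement to bounds on products of m zeros via compound matrices.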
Styles: ABNT, Harvard, Vancouver, APA, etc.
3

Nedovic, M. "Norm bounds for the inverse for generalized Nekrasov matrices in point-wise and block case". Filomat 35, no. 8 (2021): 2705–14. http://dx.doi.org/10.2298/fil2108705n.

Full text of the source
Abstract:
Lower-semi-Nekrasov matrices represent a generalization of Nekrasov matrices. For the inverse of lower-semi-Nekrasov matrices, a max-norm bound is proposed. Numerical examples are given to illustrate that new norm bound can give tighter results compared to already known bounds when applied to Nekrasov matrices. Also, we presented new max-norm bounds for the inverse of lower-semi-Nekrasov matrices in the block case. We considered two types of block generalizations and illustrated the results with numerical examples.
Styles: ABNT, Harvard, Vancouver, APA, etc.
4

Liu, Tongliang, Dacheng Tao, and Dong Xu. "Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes". Neural Computation 28, no. 10 (October 2016): 2213–49. http://dx.doi.org/10.1162/neco_a_00872.

Full text of the source
Abstract:
The k-dimensional coding schemes refer to a collection of methods that attempt to represent data using a set of representative k-dimensional vectors and include nonnegative matrix factorization, dictionary learning, sparse coding, k-means clustering, and vector quantization as special cases. Previous generalization bounds for the reconstruction error of the k-dimensional coding schemes are mainly dimensionality-independent. A major advantage of these bounds is that they can be used to analyze the generalization error when data are mapped into an infinite- or high-dimensional feature space. However, many applications use finite-dimensional data features. Can we obtain dimensionality-dependent generalization bounds for k-dimensional coding schemes that are tighter than dimensionality-independent bounds when data are in a finite-dimensional feature space? Yes. In this letter, we address this problem and derive a dimensionality-dependent generalization bound for k-dimensional coding schemes by bounding the covering number of the loss function class induced by the reconstruction error. The bound is of order [Formula: see text], where m is the dimension of features, k is the number of the columns in the linear implementation of coding schemes, and n is the size of sample, [Formula: see text] when n is finite and [Formula: see text] when n is infinite. We show that our bound can be tighter than previous results because it avoids inducing the worst-case upper bound on k of the loss function. The proposed generalization bound is also applied to some specific coding schemes to demonstrate that the dimensionality-dependent bound is an indispensable complement to the dimensionality-independent generalization bounds.
Styles: ABNT, Harvard, Vancouver, APA, etc.
5

Rubab, Faiza, Hira Nabi, and Asif R. Khan. "GENERALIZATION AND REFINEMENTS OF JENSEN INEQUALITY". Journal of Mathematical Analysis 12, no. 5 (October 31, 2021): 1–27. http://dx.doi.org/10.54379/jma-2021-5-1.

Full text of the source
Abstract:
We give generalizations and refinements of the Jensen and Jensen–Mercer inequalities by using weights which satisfy the conditions of the Jensen and Jensen–Steffensen inequalities. We also give some refinements of the discrete and integral versions of the generalized Jensen–Mercer inequality, which are shown to be an improvement of the upper bound for Jensen's difference given in [32]. Applications of our work include new bounds for some important inequalities used in information theory, and generalizing the relations among means.
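For context, the classical Jensen inequality that these results refine states that for a convex function φ, points x_1, \dots, x_n, and weights w_i \ge 0 with \sum_i w_i = 1,

\[
\varphi\!\left(\sum_{i=1}^{n} w_i x_i\right) \;\le\; \sum_{i=1}^{n} w_i\, \varphi(x_i).
\]

The Jensen–Steffensen variant relaxes the nonnegativity of the weights under additional ordering conditions; the precise generalized statements are those of the paper itself.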
Styles: ABNT, Harvard, Vancouver, APA, etc.
6

Nedovic, M., and Lj. Cvetkovic. "Norm bounds for the inverse and error bounds for linear complementarity problems for {P1,P2}-Nekrasov matrices". Filomat 35, no. 1 (2021): 239–50. http://dx.doi.org/10.2298/fil2101239n.

Full text of the source
Abstract:
{P1,P2}-Nekrasov matrices represent a generalization of Nekrasov matrices via permutations. In this paper, we obtained an error bound for linear complementarity problems for {P1,P2}-Nekrasov matrices. Numerical examples are given to illustrate that the new error bound can give tighter results compared to already known bounds when applied to Nekrasov matrices. Also, we presented new max-norm bounds for the inverse of {P1,P2}-Nekrasov matrices in the block case, considering two different types of block generalizations. Numerical examples show that new norm bounds for the block case can give tighter results compared to already known bounds for the point-wise case.
Styles: ABNT, Harvard, Vancouver, APA, etc.
7

Han, Xinyu, Yi Zhao, and Michael Small. "A tighter generalization bound for reservoir computing". Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 4 (April 2022): 043115. http://dx.doi.org/10.1063/5.0082258.

Full text of the source
Abstract:
While reservoir computing (RC) has demonstrated astonishing performance in many practical scenarios, the understanding of its capability for generalization on previously unseen data is limited. To address this issue, we propose a novel generalization bound for RC based on the empirical Rademacher complexity under the probably approximately correct learning framework. Note that the generalization bound for the RC is derived in terms of the model hyperparameters. For this reason, it can explore the dependencies of the generalization bound for RC on its hyperparameters. Compared with the existing generalization bound, our generalization bound for RC is tighter, which is verified by numerical experiments. Furthermore, we study the generalization bound for the RC corresponding to different reservoir graphs, including directed acyclic graph (DAG) and Erdős–Rényi undirected random graph (ER graph). Specifically, the generalization bound for the RC whose reservoir graph is designated as a DAG can be refined by leveraging the structural property (i.e., the longest path length) of the DAG. Finally, both theoretical and experimental findings confirm that the generalization bound for the RC of a DAG is lower and less sensitive to the model hyperparameters than that for the RC of an ER graph.
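For readers new to the notion, Rademacher-complexity bounds of the kind invoked here typically take the following standard form (a generic textbook statement, not the paper's specific result): for a function class F with losses in [0, 1], with probability at least 1 − δ over an i.i.d. sample of size n, every f in F satisfies

\[
\mathbb{E}[\ell_f] \;\le\; \frac{1}{n}\sum_{i=1}^{n} \ell_f(x_i, y_i) \;+\; 2\,\mathfrak{R}_n(F) \;+\; \sqrt{\frac{\ln(1/\delta)}{2n}},
\]

where \mathfrak{R}_n(F) is the Rademacher complexity of F. The paper's contribution is to bound this complexity for reservoir computing in terms of the model hyperparameters and the reservoir graph structure.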
Styles: ABNT, Harvard, Vancouver, APA, etc.
8

Gassner, Niklas, Marcus Greferath, Joachim Rosenthal, and Violetta Weger. "Bounds for Coding Theory over Rings". Entropy 24, no. 10 (October 16, 2022): 1473. http://dx.doi.org/10.3390/e24101473.

Full text of the source
Abstract:
Coding theory where the alphabet is identified with the elements of a ring or a module has become an important research topic over the last 30 years. It has been well established that, with the generalization of the algebraic structure to rings, there is a need to also generalize the underlying metric beyond the usual Hamming weight used in traditional coding theory over finite fields. This paper introduces a generalization of the weight introduced by Shi, Wu and Krotov, called overweight. Additionally, this weight can be seen as a generalization of the Lee weight on the integers modulo 4 and as a generalization of Krotov's weight over the integers modulo 2^s for any positive integer s. For this weight, we provide a number of well-known bounds, including a Singleton bound, a Plotkin bound, a sphere-packing bound and a Gilbert–Varshamov bound. In addition to the overweight, we also study a well-known metric on finite rings, namely the homogeneous metric, which also extends the Lee metric over the integers modulo 4 and is thus heavily connected to the overweight. We provide a new bound that has been missing in the literature for the homogeneous metric, namely the Johnson bound. To prove this bound, we use an upper estimate on the sum of the distances of all distinct codewords that depends only on the length, the average weight and the maximum weight of a codeword. An effective such bound is not known for the overweight.
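For orientation, the classical (Hamming-metric, finite-field) version of the Singleton bound that the ring-theoretic results generalize reads: an (n, M, d) code over an alphabet of size q satisfies

\[
M \;\le\; q^{\,n-d+1}.
\]

The versions proved in the paper replace the Hamming weight with the overweight or the homogeneous metric, so the exact statements differ from this classical form.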
Styles: ABNT, Harvard, Vancouver, APA, etc.
9

Abou–Moustafa, Karim, and Csaba Szepesvári. "An Exponential Tail Bound for the Deleted Estimate". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3143–50. http://dx.doi.org/10.1609/aaai.v33i01.33013143.

Full text of the source
Abstract:
There is an accumulating evidence in the literature that stability of learning algorithms is a key characteristic that permits a learning algorithm to generalize. Despite various insightful results in this direction, there seems to be an overlooked dichotomy in the type of stability-based generalization bounds we have in the literature. On one hand, the literature seems to suggest that exponential generalization bounds for the estimated risk, which are optimal, can be only obtained through stringent, distribution independent and computationally intractable notions of stability such as uniform stability. On the other hand, it seems that weaker notions of stability such as hypothesis stability, although it is distribution dependent and more amenable to computation, can only yield polynomial generalization bounds for the estimated risk, which are suboptimal. In this paper, we address the gap between these two regimes of results. In particular, the main question we address here is whether it is possible to derive exponential generalization bounds for the estimated risk using a notion of stability that is computationally tractable and distribution dependent, but weaker than uniform stability. Using recent advances in concentration inequalities, and using a notion of stability that is weaker than uniform stability but distribution dependent and amenable to computation, we derive an exponential tail bound for the concentration of the estimated risk of a hypothesis returned by a general learning rule, where the estimated risk is expressed in terms of the deleted estimate. Interestingly, we note that our final bound has similarities to previous exponential generalization bounds for the deleted estimate, in particular, the result of Bousquet and Elisseeff (2002) for the regression case.
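As background on the exponential bounds discussed here, the uniform-stability result of Bousquet and Elisseeff (2002) is roughly of the following form (an illustrative paraphrase, not the paper's own bound): if a learning algorithm is uniformly β-stable and the loss is bounded by M, then with probability at least 1 − δ over n i.i.d. samples,

\[
R(A_S) \;\le\; \widehat{R}(A_S) + 2\beta + \bigl(4n\beta + M\bigr)\sqrt{\frac{\ln(1/\delta)}{2n}}.
\]

The paper asks whether comparable exponential tail bounds can be obtained from weaker, distribution-dependent and computable notions of stability, with the empirical risk replaced by the deleted (leave-one-out) estimate.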
Styles: ABNT, Harvard, Vancouver, APA, etc.
10

Harada, Masayasu, Francesco Sannino, Joseph Schechter, and Herbert Weigel. "Generalization of the bound state model". Physical Review D 56, no. 7 (October 1, 1997): 4098–114. http://dx.doi.org/10.1103/physrevd.56.4098.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.

Theses / dissertations on the topic "Generalization bound"

1

McDonald, Daniel J. "Generalization Error Bounds for Time Series". Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/184.

Full text of the source
Abstract:
In this thesis, I derive generalization error bounds — bounds on the expected inaccuracy of the predictions — for time series forecasting models. These bounds allow forecasters to select among competing models, and to declare that, with high probability, their chosen model will perform well — without making strong assumptions about the data generating process or appealing to asymptotic theory. Expanding upon results from statistical learning theory, I demonstrate how these techniques can help time series forecasters to choose models which behave well under uncertainty. I also show how to estimate the β-mixing coefficients for dependent data so that my results can be used empirically. I use the bound explicitly to evaluate different predictive models for the volatility of IBM stock and for a standard set of macroeconomic variables. Taken together my results show how to control the generalization error of time series models with fixed or growing memory.
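As a rough sketch of the blocking idea behind such results for dependent data (an illustration of the general technique, not the thesis's own bound): split a β-mixing sequence of length n into μ blocks separated by gaps of length a; by a classical lemma of Yu (1994), replacing the dependent blocks by independent copies changes the probability of any event by at most roughly

\[
(\mu - 1)\,\beta(a),
\]

so an i.i.d.-style bound applied to the μ effective samples carries over at the cost of this extra term in the confidence level. This is why empirical estimates of the β-mixing coefficients are needed to use the bounds in practice.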
Styles: ABNT, Harvard, Vancouver, APA, etc.
2

Kroon, Rodney Stephen. "Support vector machines, generalization bounds, and transduction". Thesis, Stellenbosch: University of Stellenbosch, 2003. http://hdl.handle.net/10019.1/16375.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
3

Kelby, Robin J. "Formalized Generalization Bounds for Perceptron-Like Algorithms". Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1594805966855804.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
4

Giulini, Ilaria. "Generalization bounds for random samples in Hilbert spaces". Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0026/document.

Full text of the source
Abstract:
This thesis focuses on obtaining generalization bounds for random samples in reproducing kernel Hilbert spaces. The approach consists in first obtaining non-asymptotic dimension-free bounds in finite-dimensional spaces, using PAC-Bayesian inequalities related to a Gaussian perturbation of the parameter, and then in extending the results to separable Hilbert spaces. We first investigate the question of estimating the Gram operator from an i.i.d. sample by a robust estimator, and we present uniform bounds that hold under weak moment assumptions. These results allow us to characterize principal component analysis independently of the dimension of the ambient space and to propose robust variants of it. We then propose a new algorithm for spectral clustering. Instead of keeping only the projection on the leading eigenvectors, it computes an iterate of the normalized Laplacian. This iteration, justified by the analysis of clustering in terms of Markov chains, acts as a regularized version of the projection on the leading eigenvectors and yields an algorithm in which the number of clusters is determined automatically. We present non-asymptotic bounds for the convergence of this spectral clustering algorithm when the points to be classified form an i.i.d. sample of a distribution with compact support in a Hilbert space; these bounds are deduced from the bounds obtained for the estimation of a Gram operator in a Hilbert space. We conclude with an overview of the interest of spectral clustering in the context of image analysis.
Styles: ABNT, Harvard, Vancouver, APA, etc.
5

Wade, Modou. "Apprentissage profond pour les processus faiblement dépendants". Electronic Thesis or Diss., CY Cergy Paris Université, 2024. http://www.theses.fr/2024CYUN1299.

Full text of the source
Abstract:
This thesis focuses on deep learning for weakly dependent processes. We consider a class of deep neural network estimators with sparsity regularization and/or penalty regularization. Chapter 1 is a summary of the work. It presents the deep learning framework and reviews the main results obtained in Chapters 2, 3, 4, 5 and 6. Chapter 2 considers deep learning for psi-weakly dependent processes. We establish the convergence rate of the empirical risk minimization (ERM) algorithm on the class of deep neural network (DNN) estimators. For these estimators, we provide a generalization bound and obtain an asymptotic learning rate of order O(n^{-1/alpha}) for all alpha > 2. A bound on the excess risk for a large class of target predictors is also established. Chapter 3 presents the sparse-penalized deep neural network estimator under weak dependence. We consider nonparametric regression and classification problems for weakly dependent processes and use a method of regularization by penalization. For nonparametric regression and binary classification, we establish an oracle inequality for the excess risk of the sparse-penalized deep neural network (SPDNN) estimator, and we also provide a convergence rate for these estimators. Chapter 4 focuses on the penalized deep neural network estimator with a general loss function under weak dependence. We consider the psi-weak dependence structure and, in the specific case where the observations are bounded, the theta_{infty}-weak dependence. For learning psi- and theta_{infty}-weakly dependent processes, we establish an oracle inequality for the excess risk of the sparse-penalized deep neural network estimator. We show that when the target function is sufficiently smooth, the convergence rate of this excess risk is close to O(n^{-1/3}). Chapter 5 presents robust deep learning from weakly dependent data. We assume that the output variable has finite moments of order r, with r >= 1. For learning strong mixing and psi-weakly dependent processes, a non-asymptotic bound for the expected excess risk of the deep neural network estimator is established. We show that when the target function belongs to the class of Hölder smooth functions, the convergence rate of the expected excess risk for exponentially strongly mixing data is close to or equal to that obtained with an independent and identically distributed (i.i.d.) sample. Chapter 6 focuses on deep learning for strongly mixing observations with sparse-penalized regularization and minimax optimality. We provide an oracle inequality and a bound on the class of Hölder smooth functions for the expected excess risk of the deep neural network estimator. We also consider the problem of nonparametric regression from strongly mixing data with sub-exponential noise. When the target function belongs to the class of Hölder composition functions, we establish an upper bound for the oracle inequality of the L_2 error. In the specific case of autoregressive regression with standard Laplace or normal error, we provide a lower bound for the L_2 error in this class that matches the upper bound up to a logarithmic factor; thus the deep neural network estimator achieves the optimal convergence rate.
Styles: ABNT, Harvard, Vancouver, APA, etc.
6

Rakhlin, Alexander. "Applications of empirical processes in learning theory : algorithmic stability and generalization bounds". Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34564.

Full text of the source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2006.
Includes bibliographical references (p. 141-148).
This thesis studies two key properties of learning algorithms: their generalization ability and their stability with respect to perturbations. To analyze these properties, we focus on concentration inequalities and tools from empirical process theory. We obtain theoretical results and demonstrate their applications to machine learning. First, we show how various notions of stability upper- and lower-bound the bias and variance of several estimators of the expected performance for general learning algorithms. A weak stability condition is shown to be equivalent to consistency of empirical risk minimization. The second part of the thesis derives tight performance guarantees for greedy error minimization methods - a family of computationally tractable algorithms. In particular, we derive risk bounds for a greedy mixture density estimation procedure. We prove that, unlike what is suggested in the literature, the number of terms in the mixture is not a bias-variance trade-off for the performance. The third part of this thesis provides a solution to an open problem regarding the stability of Empirical Risk Minimization (ERM). This algorithm is of central importance in Learning Theory.
By studying the suprema of the empirical process, we prove that ERM over Donsker classes of functions is stable in the L1 norm. Hence, as the number of samples grows, it becomes less and less likely that a perturbation of o(√n) samples will result in a very different empirical minimizer. Asymptotic rates of this stability are proved under metric entropy assumptions on the function class. Through the use of a ratio limit inequality, we also prove stability of expected errors of empirical minimizers. Next, we investigate applications of the stability result. In particular, we focus on procedures that optimize an objective function, such as k-means and other clustering methods. We demonstrate that stability of clustering, just like stability of ERM, is closely related to the geometry of the class and the underlying measure. Furthermore, our result on stability of ERM delineates a phase transition between stability and instability of clustering methods. In the last chapter, we prove a generalization of the bounded-difference concentration inequality for almost-everywhere smooth functions. This result can be utilized to analyze algorithms which are almost always stable. Next, we prove a phase transition in the concentration of almost-everywhere smooth functions. Finally, a tight concentration of empirical errors of empirical minimizers is shown under an assumption on the underlying space.
by Alexander Rakhlin.
Ph.D.
Styles: ABNT, Harvard, Vancouver, APA, etc.
7

Bellet, Aurélien. "Supervised metric learning with generalization guarantees". PhD thesis, Université Jean Monnet - Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00770627.

Full text of the source
Abstract:
In recent years, the crucial importance of metrics in machine learning algorithms has led to an increasing interest in optimizing distance and similarity functions using knowledge from training data to make them suitable for the problem at hand. This area of research is known as metric learning. Existing methods typically aim at optimizing the parameters of a given metric with respect to some local constraints over the training sample. The learned metrics are generally used in nearest-neighbor and clustering algorithms. When data consist of feature vectors, a large body of work has focused on learning a Mahalanobis distance, which is parameterized by a positive semi-definite matrix. Recent methods offer good scalability to large datasets. Less work has been devoted to metric learning from structured objects (such as strings or trees), because it often involves complex procedures. Most of the work has focused on optimizing a notion of edit distance, which measures (in terms of number of operations) the cost of turning an object into another. We identify two important limitations of current supervised metric learning approaches. First, they allow to improve the performance of local algorithms such as k-nearest neighbors, but metric learning for global algorithms (such as linear classifiers) has not really been studied so far. Second, and perhaps more importantly, the question of the generalization ability of metric learning methods has been largely ignored. In this thesis, we propose theoretical and algorithmic contributions that address these limitations. Our first contribution is the derivation of a new kernel function built from learned edit probabilities. Unlike other string kernels, it is guaranteed to be valid and parameter-free. Our second contribution is a novel framework for learning string and tree edit similarities inspired by the recent theory of (epsilon,gamma,tau)-good similarity functions and formulated as a convex optimization problem. Using uniform stability arguments, we establish theoretical guarantees for the learned similarity that give a bound on the generalization error of a linear classifier built from that similarity. In our third contribution, we extend the same ideas to metric learning from feature vectors by proposing a bilinear similarity learning method that efficiently optimizes the (epsilon,gamma,tau)-goodness. The similarity is learned based on global constraints that are more appropriate to linear classification. Generalization guarantees are derived for our approach, highlighting that our method minimizes a tighter bound on the generalization error of the classifier. Our last contribution is a framework for establishing generalization bounds for a large class of existing metric learning algorithms. It is based on a simple adaptation of the notion of algorithmic robustness and allows the derivation of bounds for various loss functions and regularizers.
Styles: ABNT, Harvard, Vancouver, APA, etc.
8

Nordenfors, Oskar. "A Literature Study Concerning Generalization Error Bounds for Neural Networks via Rademacher Complexity". Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184487.

Full text of the source
Abstract:
In this essay some fundamental results from the theory of machine learning and neural networks are presented, with the goal of finally discussing bounds on the generalization error of neural networks, via Rademacher complexity.
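For reference, the central quantity in such analyses, the empirical Rademacher complexity of a function class F on a sample S = (x_1, \dots, x_n), is defined as

\[
\widehat{\mathfrak{R}}_S(F) \;=\; \mathbb{E}_{\sigma}\!\left[\,\sup_{f \in F} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i)\right],
\]

where the σ_i are independent Rademacher variables taking the values ±1 with equal probability; generalization error bounds for neural networks are then obtained by upper-bounding this quantity for the network class.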
Styles: ABNT, Harvard, Vancouver, APA, etc.
9

Katsikarelis, Ioannis. "Structurally Parameterized Tight Bounds and Approximation for Generalizations of Independence and Domination". Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED048.

Full text of the source
Abstract:
In this thesis we focus on the NP-hard problems (k, r)-CENTER and d-SCATTERED SET that generalize the well-studied concepts of domination and independence over larger distances. In the first part we maintain a parameterized viewpoint and examine the standard parameterization as well as the most widely-used graph parameters measuring the input's structure. We offer hardness results that show there is no algorithm of running-time below certain bounds, subject to the (Strong) Exponential Time Hypothesis, produce essentially optimal algorithms of complexity that matches these lower bounds, and further attempt to offer an alternative to exact computation in significantly reduced running-time by way of approximation algorithms. In the second part we consider the (super-)polynomial (in-)approximability of the d-SCATTERED SET problem, i.e. we determine the exact relationship between an achievable approximation ratio ρ, the distance parameter d, and the running-time of any ρ-approximation algorithm expressed as a function of the above and the size of the input n. We then consider strictly polynomial running-times and improve our understanding of the approximability characteristics of the problem on graphs of bounded maximum degree as well as bipartite graphs.
Styles: ABNT, Harvard, Vancouver, APA, etc.
10

Musayeva, Khadija. "Generalization Performance of Margin Multi-category Classifiers". Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0096/document.

Full text of the source
Abstract:
This thesis deals with the theory of margin multi-category classification and is based on the statistical learning theory founded by Vapnik and Chervonenkis. We are interested in deriving generalization bounds with explicit dependencies on the number C of categories, the sample size m and the margin parameter gamma, when the loss function considered is a Lipschitz continuous margin loss function. Generalization bounds rely on the empirical performance of the classifier as well as its "capacity". In this work, the following scale-sensitive capacity measures are considered: the Rademacher complexity, the covering numbers and the fat-shattering dimension. Our main contributions are obtained under the assumption that the classes of component functions implemented by a classifier have polynomially growing fat-shattering dimensions and that the component functions are independent. In the context of the pathway of Mendelson, which relates the Rademacher complexity to the covering numbers and the latter to the fat-shattering dimension, we study the impact that decomposing at the level of one of these capacity measures has on the dependencies on C, m and gamma. In particular, we demonstrate that the dependency on C can be substantially improved over the state of the art if the decomposition is postponed to the level of the metric entropy or the fat-shattering dimension. On the other hand, this negatively impacts the rate of convergence (the dependency on m), an indication of the fact that optimizing the dependencies on the three basic parameters amounts to looking for a trade-off.
Styles: ABNT, Harvard, Vancouver, APA, etc.

Books on the topic "Generalization bound"

1

Arnolʹd, V. I. Experimental mathematics. Berkeley, California: MSRI Mathematical Sciences Research Institute, 2015.

Find the full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
2

Espiritu, Yen Le. Race and U.S. Panethnic Formation. Edited by Ronald H. Bayor. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199766031.013.013.

Full text of the source
Abstract:
Panethnicity refers to the development of bridging organizations and the generalization of solidarity among subgroups that are racialized to be homogeneous by outsiders. This chapter argues that while the formation of a consolidated white identity in the United States is self-motivated and linked to white privilege, panethnicity for people of color is a product of racial categorization and bound up with power relations. As the influx of new immigrants transforms the demographic composition of existing groups such as Asian Americans and Latinos, group members face the challenge of bridging the class, ethnic, and generational chasms dividing the immigrants and the U.S.-born. In all, existing data confirm the plural and ambivalent nature of panethnicity: it is a highly contested terrain on which different groups merge and clash over terms of inclusion but also an effective site from which to forge alliances with other groups both within and across the U.S. borders.
Styles: ABNT, Harvard, Vancouver, APA, etc.
3

Horing, Norman J. Morgenstern. Superfluidity and Superconductivity. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198791942.003.0013.

Full text of the source
Abstract:
Chapter 13 addresses Bose condensation in superfluids (and superconductors), which involves the field operator ψ‎ having a c-number component (<ψ(x,t)>≠0), challenging number conservation. The nonlinear Gross-Pitaevskii equation is derived for this condensate wave function<ψ>=ψ−ψ˜, facilitating identification of the coherence length and the core region of vortex motion. The noncondensate Green’s function G˜1(1,1′)=−i<(ψ˜(1)ψ˜+(1′))+> and the nonvanishing anomalous correlation function F˜∗(2,1′)=−i<(ψ˜+(2)ψ˜+(1′))+> describe the dynamics and elementary excitations of the non-condensate states and are discussed in conjunction with Landau’s criterion for viscosity. Associated concepts of off-diagonal long-range order and the interpretation of <ψ> as a superfluid order parameter are also introduced. Anderson’s Bose-condensed state, as a phase-coherent wave packet superposition of number states, resolves issues of number conservation. Superconductivity involves bound Cooper pairs of electrons capable of Bose condensation and superfluid behavior. Correspondingly, the two-particle Green’s function has a term involving a product of anomalous bound-Cooper-pair condensate wave functions of the type F(1,2)=−i<(ψ(1)ψ(2))+>≠0, such that G2(1,2;1′,2′)=F(1,2)F+(1′,2′)+G˜2(1,2;1′,2′). Here, G˜2 describes the dynamics/excitations of the non-superfluid-condensate states, while nonvanishing F,F+ represent a phase-coherent wave packet superposition of Cooper-pair number states and off-diagonal long range order. Employing this form of G2 in the G1-equation couples the condensed state with the non-condensate excitations. Taken jointly with the dynamical equation for F(1,2), this leads to the Gorkov equations, encompassing the Bardeen–Cooper–Schrieffer (BCS) energy gap, critical temperature, and Bogoliubov-de Gennes eigenfunction Bogoliubons. Superconductor thermodynamics and critical magnetic field are discussed. For a weak magnetic field, the Gorkov-equations lead to Ginzburg–Landau theory and a nonlinear Schrödinger-like equation for the pair wave function and the associated supercurrent, along with identification of the Cooper pair density. Furthermore, Chapter 13 addresses the apparent lack of gauge invariance of London theory with an elegant variational analysis involving re-gauging the potentials, yielding a manifestly gauge invariant generalization of the London equation. Consistency with the equation of continuity implies the existence of Anderson’s acoustic normal mode, which is supplanted by the plasmon for Coulomb interaction. Type II superconductors and the penetration (and interaction) of quantized magnetic flux lines are also discussed. Finally, Chapter 13 addresses Josephson tunneling between superconductors.
Styles: ABNT, Harvard, Vancouver, APA, etc.
4

Hecht, Richard D., and Vincent F. Biondo, eds. Religion and Everyday Life and Culture. ABC-CLIO, LLC, 2010. http://dx.doi.org/10.5040/9798216006909.

Full text of the source
Abstract:
This intriguing three-volume set explores the ways in which religion is bound to the practice of daily life and how daily life is bound to religion. In Religion and Everyday Life and Culture, 36 international scholars describe the impact of religious practices around the world, using rich examples drawn from personal observation. Instead of repeating generalizations about what religion should mean, these volumes examine how religions actually influence our public and private lives "on the ground," on a day-to-day basis. Volume one introduces regional histories of the world's religions and discusses major ritual practices, such as the Catholic Mass and the Islamic pilgrimage to Mecca. Volume two examines themes that will help readers understand how religions interact with the practices of public life, describing the ways religions influence government, education, criminal justice, economy, technology, and the environment. Volume three takes up themes that are central to how religions are realized in the practices of individuals. In these essays, readers meet a shaman healer in South Africa, laugh with Buddhist monks, sing with Bob Dylan, cheer for Australian rugby, and explore Chicana and Iranian art.
Styles: ABNT, Harvard, Vancouver, APA, etc.
5

Mandelkern, Matthew. Bounded Meaning. Oxford: Oxford University Press, 2024. http://dx.doi.org/10.1093/oso/9780192870049.001.0001.

Full text of the source
Abstract:
Abstract ‘Bounded Meaning’ is a monograph on the dynamics of interpretation: how and why the interpretation of the building blocks of human language is sensitive, not just to the context in which the expression is used, but also to the expression’s linguistic environment---in other words, how and why interpretation depends not just on global information, but also on local information. The book motivates a range of generalizations about the dynamics of interpretation, some known and some novel, involving modals, conditionals, and anaphora. It then provides an overview of the best extant theory of those patterns, dynamic semantics. After bringing out the striking motivations and successes of that framework, the book goes on to criticize dynamic semantics, focusing on its puzzling predictions about the logic of natural language. In response to these problems, the book develops a novel framework for explaining dynamic phenomena without dynamic semantics: the bounded theory of meaning. On the bounded theory, dynamic phenomena arise from the interaction of two dimensions of meaning. One dimension is a standard truth-conditional layer, which, relative to a context of use, associates each sentence with a proposition. The second dimension, the dimension of bounds, limits the admissible interpretations of an expression, relative to the expression’s context of use and its local information. Bounds thus play an essential role in coordinating on the resolution of context-sensitive language, explaining dynamic effects in natural language while avoiding a variety of problematic predictions of dynamic semantics.
Styles: ABNT, Harvard, Vancouver, APA, etc.

Book chapters on the topic "Generalization bound"

1

Burnaev, Evgeny. "Generalization Bound for Imbalanced Classification". In Recent Developments in Stochastic Methods and Applications, 107–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83266-7_8.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
2

Baader, Franz. "Unification, weak unification, upper bound, lower bound, and generalization problems". In Rewriting Techniques and Applications, 86–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/3-540-53904-2_88.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
3

Fukuchi, Kazuto, and Jun Sakuma. "Neutralized Empirical Risk Minimization with Generalization Neutrality Bound". In Machine Learning and Knowledge Discovery in Databases, 418–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-44848-9_27.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
4

Tang, Li, Zheng Zhao, Xiujun Gong, and Huapeng Zeng. "On the Generalization of PAC-Bayes Bound for SVM Linear Classifier". In Communications in Computer and Information Science, 176–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34447-3_16.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
5

Wang, Mingda, Canqian Yang, and Yi Xu. "Posterior Refinement on Metric Matrix Improves Generalization Bound in Metric Learning". In Lecture Notes in Computer Science, 203–18. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19809-0_12.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
6

Levinson, Norman. "Generalization of Recent Method Giving Lower Bound for N₀(T) of Riemann’s Zeta-Function". In Selected Papers of Norman Levinson, 392–95. Boston, MA: Birkhäuser Boston, 1998. http://dx.doi.org/10.1007/978-1-4612-5335-8_32.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
7

Zhang, Xinhua, Novi Quadrianto, Kristian Kersting, Zhao Xu, Yaakov Engel, Claude Sammut, Mark Reid et al. "Generalization Bounds". In Encyclopedia of Machine Learning, 447–54. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_328.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
8

Reid, Mark. "Generalization Bounds". In Encyclopedia of Machine Learning and Data Mining, 556. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_328.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
9

Helleseth, Tor, Torleiv Kløve, and Øyvind Ytrehus. "Generalizations of the Griesmer bound". In Lecture Notes in Computer Science, 41–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/3-540-58265-7_6.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
10

Rejchel, W. "Generalization Bounds for Ranking Algorithms". In Ensemble Classification Methods with Applications in R, 135–39. Chichester, UK: John Wiley & Sons, Ltd, 2018. http://dx.doi.org/10.1002/9781119421566.ch7.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.

Conference papers on the topic "Generalization bound"

1

He, Haiyun, Christina Lee Yu, and Ziv Goldfeld. "Hierarchical Generalization Bounds for Deep Neural Networks". In 2024 IEEE International Symposium on Information Theory (ISIT), 2688–93. IEEE, 2024. http://dx.doi.org/10.1109/isit57864.2024.10619279.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
2

Sefidgaran, Milad, and Abdellatif Zaidi. "Data-Dependent Generalization Bounds via Variable-Size Compressibility". In 2024 IEEE International Symposium on Information Theory (ISIT), 2682–87. IEEE, 2024. http://dx.doi.org/10.1109/isit57864.2024.10619654.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
3

Rodríguez-Gálvez, Borja, Omar Rivasplata, Ragnar Thobaben, and Mikael Skoglund. "A Note on Generalization Bounds for Losses with Finite Moments". In 2024 IEEE International Symposium on Information Theory (ISIT), 2676–81. IEEE, 2024. http://dx.doi.org/10.1109/isit57864.2024.10619194.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
4

Luo, Jin, Yongguang Chen, and Xuejun Zhou. "Generalization Bound for Multi-Classification with Push". In 2010 International Conference on Artificial Intelligence and Computational Intelligence (AICI). IEEE, 2010. http://dx.doi.org/10.1109/aici.2010.201.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
5

Wen, Wen, Han Li, Tieliang Gong, and Hong Chen. "Towards Sharper Generalization Bounds for Adversarial Contrastive Learning". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/574.

Full text of the source
Abstract:
Recently, the enhancement on the adversarial robustness of machine learning algorithms has gained significant attention across various application domains. Given the widespread label scarcity issue in real-world data, adversarial contrastive learning (ACL) has been proposed to adversarially train robust models using unlabeled data. Despite the empirical success, its generalization behavior remains poorly understood and far from being well-characterized. This paper aims to address this issue from a learning theory perspective. We establish novel high-probability generalization bounds for the general Lipschitz loss functions. The derived bounds scale O(log(k)) with respect to the number of negative samples k, which improves the existing linear dependency bounds. Our results are generally applicable to many prediction models, including linear models and deep neural networks. In particular, we obtain an optimistic generalization bound O(1/n) under the smoothness assumption of the loss function on the sample size n. To the best of our knowledge, this is the first fast-rate bound valid for ACL. Empirical evaluations on real-world datasets verify our theoretical findings.
Styles: ABNT, Harvard, Vancouver, APA, etc.
6

Zhu, Bowei, Shaojie Li, and Yong Liu. "Towards Sharper Risk Bounds for Minimax Problems". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/630.

Full text of the source
Abstract:
Minimax problems have achieved success in machine learning such as adversarial training, robust optimization, reinforcement learning. For theoretical analysis, current optimal excess risk bounds, which are composed of generalization error and optimization error, present 1/n-rates in strongly-convex-strongly-concave (SC-SC) settings. Existing studies mainly focus on minimax problems with specific algorithms for optimization error, with only a few studies on generalization performance, which limit better excess risk bounds. In this paper, we study the generalization bounds measured by the gradients of primal functions using uniform localized convergence. We obtain a sharper high probability generalization error bound for nonconvex-strongly-concave (NC-SC) stochastic minimax problems. Furthermore, we provide dimension-independent results under the Polyak-Lojasiewicz condition for the outer layer. Based on our generalization error bound, we analyze some popular algorithms such as empirical saddle point (ESP), gradient descent ascent (GDA) and stochastic gradient descent ascent (SGDA). We derive better excess primal risk bounds with further reasonable assumptions, which, to the best of our knowledge, are n times faster than existing results in minimax problems.
Styles: ABNT, Harvard, Vancouver, APA, etc.
7

Chen, Hao, Zhanfeng Mo, Zhouwang Yang, and Xiao Wang. "Theoretical Investigation of Generalization Bound for Residual Networks". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/288.

Full text of the source
Abstract:
This paper presents a framework for norm-based capacity control with respect to an lp,q-norm in weight-normalized Residual Neural Networks (ResNets). We first formulate the representation of each residual block. For the regression problem, we analyze the Rademacher Complexity of the ResNets family. We also establish a tighter generalization upper bound for weight-normalized ResNets in a more general setting. Using the lp,q-norm weight normalization in which 1/p+1/q >=1, we discuss the properties of a width-independent capacity control, which only relies on the depth according to a square root term. Several comparisons suggest that our result is tighter than previous work. Parallel results for Deep Neural Networks (DNN) and Convolutional Neural Networks (CNN) are included by introducing the lp,q-norm weight normalization for DNN and the lp,q-norm kernel normalization for CNN. Numerical experiments also verify that ResNet structures contribute to better generalization properties.
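As a simple reference point for norm-based capacity control of the kind analyzed here (the linear-class special case, not the paper's ResNet bound): for linear predictors x ↦ ⟨w, x⟩ with ‖w‖₂ ≤ B over inputs with ‖x‖₂ ≤ R, the Rademacher complexity over n samples satisfies

\[
\mathfrak{R}_n \;\le\; \frac{BR}{\sqrt{n}},
\]

and norm-based bounds for deep or residual networks are obtained by propagating analogous per-layer norm constraints through the architecture, which is where the dependence on depth and width enters.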
Styles: ABNT, Harvard, Vancouver, APA, etc.
8

Zhou, Ruida, Chao Tian, and Tie Liu. "Individually Conditional Individual Mutual Information Bound on Generalization Error". In 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021. http://dx.doi.org/10.1109/isit45174.2021.9518016.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
9

Rezazadeh, Arezou, Sharu Theresa Jose, Giuseppe Durisi, and Osvaldo Simeone. "Conditional Mutual Information-Based Generalization Bound for Meta Learning". In 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021. http://dx.doi.org/10.1109/isit45174.2021.9518020.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
10

Uchida, Masato. "Tight lower bound of generalization error in ensemble learning". In 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems (SCIS) and 15th International Symposium on Advanced Intelligent Systems (ISIS). IEEE, 2014. http://dx.doi.org/10.1109/scis-isis.2014.7044723.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.

Organization reports on the topic "Generalization bound"

1

Dhankhar, Ritu, and Prasanna Kumar. A Remark on a Generalization of the Cauchy's Bound. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, October 2020. http://dx.doi.org/10.7546/crabs.2020.10.01.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
2

Zarrieß, Benjamin, and Anni-Yasmin Turhan. Most Specific Generalizations w.r.t. General EL-TBoxes. Technische Universität Dresden, 2013. http://dx.doi.org/10.25368/2022.196.

Full text of the source
Abstract:
In the area of Description Logics the least common subsumer (lcs) and the most specific concept (msc) are inferences that generalize a set of concepts or an individual, respectively, into a single concept. If computed w.r.t. a general EL-TBox, neither the lcs nor the msc need to exist. So far in this setting no exact conditions for the existence of lcs- or msc-concepts are known. This report provides necessary and sufficient conditions for the existence of these two kinds of concepts. For the lcs of a fixed number of concepts and the msc we show decidability of the existence in PTime and polynomial bounds on the maximal role depth of the lcs- and msc-concepts. The latter allows to compute the lcs and the msc, respectively.
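A small, purely illustrative example of the lcs inference discussed here (with respect to an empty TBox and hypothetical concept names; the report's contribution concerns the much harder case of general EL-TBoxes): for the EL concepts

\[
C_1 = \mathsf{Person} \sqcap \exists \mathsf{hasChild}.\mathsf{Doctor},
\qquad
C_2 = \mathsf{Person} \sqcap \exists \mathsf{hasChild}.\mathsf{Professor},
\]

the least common subsumer is \mathsf{Person} \sqcap \exists \mathsf{hasChild}.\top, the most specific concept subsuming both, assuming Doctor and Professor share no common subsumer other than \top.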
Styles: ABNT, Harvard, Vancouver, APA, etc.