Selected scholarly literature on the topic "Generalization bound"
Consult the list of current articles, books, theses, conference papers, and other scholarly sources on the topic "Generalization bound".
Journal articles on the topic "Generalization bound"
Cohn, David, and Gerald Tesauro. "How Tight Are the Vapnik-Chervonenkis Bounds?" Neural Computation 4, no. 2 (March 1992): 249–69. http://dx.doi.org/10.1162/neco.1992.4.2.249.
Pereira, Rajesh, and Mohammad Ali Vali. "Generalizations of the Cauchy and Fujiwara Bounds for Products of Zeros of a Polynomial." Electronic Journal of Linear Algebra 31 (February 5, 2016): 565–71. http://dx.doi.org/10.13001/1081-3810.3333.
Nedovic, M. "Norm Bounds for the Inverse for Generalized Nekrasov Matrices in Point-Wise and Block Case." Filomat 35, no. 8 (2021): 2705–14. http://dx.doi.org/10.2298/fil2108705n.
Liu, Tongliang, Dacheng Tao, and Dong Xu. "Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes." Neural Computation 28, no. 10 (October 2016): 2213–49. http://dx.doi.org/10.1162/neco_a_00872.
Rubab, Faiza, Hira Nabi, and Asif R. Khan. "Generalization and Refinements of Jensen Inequality." Journal of Mathematical Analysis 12, no. 5 (October 31, 2021): 1–27. http://dx.doi.org/10.54379/jma-2021-5-1.
Nedovic, M., and Lj. Cvetkovic. "Norm Bounds for the Inverse and Error Bounds for Linear Complementarity Problems for {P1,P2}-Nekrasov Matrices." Filomat 35, no. 1 (2021): 239–50. http://dx.doi.org/10.2298/fil2101239n.
Han, Xinyu, Yi Zhao, and Michael Small. "A Tighter Generalization Bound for Reservoir Computing." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 4 (April 2022): 043115. http://dx.doi.org/10.1063/5.0082258.
Gassner, Niklas, Marcus Greferath, Joachim Rosenthal, and Violetta Weger. "Bounds for Coding Theory over Rings." Entropy 24, no. 10 (October 16, 2022): 1473. http://dx.doi.org/10.3390/e24101473.
Abou-Moustafa, Karim, and Csaba Szepesvári. "An Exponential Tail Bound for the Deleted Estimate." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3143–50. http://dx.doi.org/10.1609/aaai.v33i01.33013143.
Harada, Masayasu, Francesco Sannino, Joseph Schechter, and Herbert Weigel. "Generalization of the Bound State Model." Physical Review D 56, no. 7 (October 1, 1997): 4098–114. http://dx.doi.org/10.1103/physrevd.56.4098.
Testo completoTesi sul tema "Generalization bound"
McDonald, Daniel J. "Generalization Error Bounds for Time Series." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/184.
Kroon, Rodney Stephen. "Support Vector Machines, Generalization Bounds, and Transduction." Thesis, Stellenbosch: University of Stellenbosch, 2003. http://hdl.handle.net/10019.1/16375.
Kelby, Robin J. "Formalized Generalization Bounds for Perceptron-Like Algorithms." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1594805966855804.
Giulini, Ilaria. "Generalization Bounds for Random Samples in Hilbert Spaces." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0026/document.
Testo completoThis thesis focuses on obtaining generalization bounds for random samples in reproducing kernel Hilbert spaces. The approach consists in first obtaining non-asymptotic dimension-free bounds in finite-dimensional spaces using some PAC-Bayesian inequalities related to Gaussian perturbations and then in generalizing the results in a separable Hilbert space. We first investigate the question of estimating the Gram operator by a robust estimator from an i. i. d. sample and we present uniform bounds that hold under weak moment assumptions. These results allow us to qualify principal component analysis independently of the dimension of the ambient space and to propose stable versions of it. In the last part of the thesis we present a new algorithm for spectral clustering. It consists in replacing the projection on the eigenvectors associated with the largest eigenvalues of the Laplacian matrix by a power of the normalized Laplacian. This iteration, justified by the analysis of clustering in terms of Markov chains, performs a smooth truncation. We prove nonasymptotic bounds for the convergence of our spectral clustering algorithm applied to a random sample of points in a Hilbert space that are deduced from the bounds for the Gram operator in a Hilbert space. Experiments are done in the context of image analysis
Wade, Modou. "Apprentissage profond pour les processus faiblement dépendants." Electronic thesis or dissertation, CY Cergy Paris Université, 2024. http://www.theses.fr/2024CYUN1299.
This thesis focuses on deep learning for weakly dependent processes. We consider a class of deep neural network estimators with sparsity regularization and/or penalty regularization. Chapter 1 summarizes the work: it presents the deep learning framework and reviews the main results obtained in Chapters 2 to 6. Chapter 2 considers deep learning for ψ-weakly dependent processes. We establish the convergence rate of the empirical risk minimization (ERM) algorithm over the class of deep neural network (DNN) estimators; for these estimators we provide a generalization bound and obtain an asymptotic learning rate of order O(n^{-1/α}) for all α > 2. A bound on the excess risk for a large class of target predictors is also established. Chapter 3 presents the sparse-penalized deep neural network estimator under weak dependence. We consider nonparametric regression and classification problems for weakly dependent processes and use a regularization-by-penalization method. For nonparametric regression and binary classification, we establish an oracle inequality for the excess risk of the sparse-penalized deep neural network (SPDNN) estimator, and we also provide a convergence rate for these estimators. Chapter 4 focuses on the penalized deep neural network estimator with a general loss function under weak dependence. We consider the ψ-weak dependence structure and, in the specific case where the observations are bounded, the θ_∞-weak dependence. For learning ψ- and θ_∞-weakly dependent processes, we establish an oracle inequality for the excess risk of the sparse-penalized deep neural network estimator and show that, when the target function is sufficiently smooth, the convergence rate of these excess risks is close to O(n^{-1/3}). Chapter 5 presents robust deep learning from weakly dependent data. We assume that the output variable has finite r-th moments, with r >= 1. For learning strongly mixing and ψ-weakly dependent processes, a non-asymptotic bound for the expected excess risk of the deep neural network estimator is established. We show that when the target function belongs to the class of Hölder smooth functions, the convergence rate of the expected excess risk for exponentially strongly mixing data is close or equal to the rate obtained with an independent and identically distributed sample. Chapter 6 focuses on deep learning for strongly mixing observations with sparse-penalized regularization and minimax optimality. We provide an oracle inequality and a bound over the class of Hölder smooth functions for the expected excess risk of the deep neural network estimator. We also consider the problem of nonparametric regression from strongly mixing data with sub-exponential noise. When the target function belongs to the class of Hölder composition functions, we establish an upper bound for the oracle inequality of the L2 error. In the specific case of autoregressive regression with standard Laplace or normal errors, we provide a lower bound for the L2 error in this class that matches the upper bound up to a logarithmic factor, so the deep neural network estimator achieves the optimal convergence rate.
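To make the objective of a sparse-penalized estimator concrete, here is a deliberately simplified Python (PyTorch) sketch: a small network fit by minimizing an l1-penalized empirical risk on toy autoregressive data. The architecture, penalty level, and data-generating process are illustrative assumptions made here; the thesis treats far more general penalized classes under ψ-weak dependence and strong mixing.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy autoregressive (hence dependent) data: y_t = g(y_{t-1}) + noise.
T = 500
y = torch.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * torch.sin(y[t - 1]) + 0.1 * torch.randn(1).item()
X, Y = y[:-1].unsqueeze(1), y[1:].unsqueeze(1)

# Sparse-penalized empirical risk: MSE plus an l1 penalty on all weights.
model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 1e-4  # illustrative penalty level
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), Y)
    loss = loss + lam * sum(p.abs().sum() for p in model.parameters())
    loss.backward()
    opt.step()
```

The l1 term drives many weights toward zero, which is the mechanism behind the sparsity assumptions in the oracle inequalities above.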
Rakhlin, Alexander. "Applications of Empirical Processes in Learning Theory: Algorithmic Stability and Generalization Bounds." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34564.
Testo completoIncludes bibliographical references (p. 141-148).
This thesis studies two key properties of learning algorithms: their generalization ability and their stability with respect to perturbations. To analyze these properties, we focus on concentration inequalities and tools from empirical process theory. We obtain theoretical results and demonstrate their applications to machine learning. First, we show how various notions of stability upper- and lower-bound the bias and variance of several estimators of the expected performance for general learning algorithms. A weak stability condition is shown to be equivalent to consistency of empirical risk minimization. The second part of the thesis derives tight performance guarantees for greedy error minimization methods, a family of computationally tractable algorithms. In particular, we derive risk bounds for a greedy mixture density estimation procedure. We prove that, unlike what is suggested in the literature, the number of terms in the mixture is not a bias-variance trade-off for the performance. The third part of this thesis provides a solution to an open problem regarding the stability of Empirical Risk Minimization (ERM), an algorithm of central importance in learning theory. By studying the suprema of the empirical process, we prove that ERM over Donsker classes of functions is stable in the L1 norm. Hence, as the number of samples grows, it becomes less and less likely that a perturbation of o(√n) samples will result in a very different empirical minimizer. Asymptotic rates of this stability are proved under metric entropy assumptions on the function class. Through the use of a ratio limit inequality, we also prove stability of expected errors of empirical minimizers. Next, we investigate applications of the stability result. In particular, we focus on procedures that optimize an objective function, such as k-means and other clustering methods. We demonstrate that stability of clustering, just like stability of ERM, is closely related to the geometry of the class and the underlying measure. Furthermore, our result on stability of ERM delineates a phase transition between stability and instability of clustering methods. In the last chapter, we prove a generalization of the bounded-difference concentration inequality for almost-everywhere smooth functions. This result can be utilized to analyze algorithms which are almost always stable. Next, we prove a phase transition in the concentration of almost-everywhere smooth functions. Finally, a tight concentration of empirical errors of empirical minimizers is shown under an assumption on the underlying space.
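The stability phenomenon described above can be seen in a toy numerical experiment (illustrative only, not taken from the thesis): for squared loss on the real line, the empirical risk minimizer is the sample mean, and replacing k = o(√n) points moves it by o(1/√n).

```python
import numpy as np

rng = np.random.default_rng(0)

def erm(sample):
    # For the squared loss l(h, x) = (h - x)^2 over h in R, the
    # empirical risk minimizer is exactly the sample mean.
    return sample.mean()

n = 10_000
x = rng.normal(size=n)
h_full = erm(x)

# Replace k points and see how far the empirical minimizer moves.
for k in (1, 10, 100):
    x_pert = x.copy()
    idx = rng.choice(n, size=k, replace=False)
    x_pert[idx] = rng.normal(size=k)
    print(k, abs(erm(x_pert) - h_full))  # shift is on the order of k/n
```

The observed shift scales like k/n, so perturbing o(√n) of the n samples leaves the minimizer within o(1/√n) of the original, consistent with the L1-stability statement.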
Bellet, Aurélien. "Supervised Metric Learning with Generalization Guarantees." PhD thesis, Université Jean Monnet - Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00770627.
Nordenfors, Oskar. "A Literature Study Concerning Generalization Error Bounds for Neural Networks via Rademacher Complexity." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184487.
This essay presents some basic results from the theory of machine learning and neural networks, with the aim of ultimately discussing upper bounds on the generalization error of neural networks via Rademacher complexity.
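The central quantity of that essay can be made concrete: for a bounded-norm linear class, the supremum inside the empirical Rademacher complexity has a closed form, so the complexity can be estimated by Monte Carlo. A minimal Python sketch follows; the choice of function class, the norm bound B, and the sample are illustrative assumptions.

```python
import numpy as np

def empirical_rademacher_linear(X, B=1.0, n_draws=1000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of
    {x -> <w, x> : ||w||_2 <= B} on the sample X (shape m x d).

    For this class the supremum has a closed form:
        sup_{||w|| <= B} (1/m) * sum_i sigma_i <w, x_i>
            = (B/m) * || sum_i sigma_i x_i ||_2.
    """
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, m))
    # Each row of sigma @ X equals sum_i sigma_i x_i for one draw.
    sup_vals = B / m * np.linalg.norm(sigma @ X, axis=1)
    return sup_vals.mean()

X = np.random.default_rng(1).normal(size=(200, 5))
# The estimate should sit below the classical bound B * max_i ||x_i|| / sqrt(m).
print(empirical_rademacher_linear(X))
```

Plugging such an estimate into a standard Rademacher-complexity generalization bound gives a data-dependent upper bound on the generalization error of the class.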
Katsikarelis, Ioannis. "Structurally Parameterized Tight Bounds and Approximation for Generalizations of Independence and Domination." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED048.
In this thesis we focus on the NP-hard problems (k, r)-CENTER and d-SCATTERED SET, which generalize the well-studied concepts of domination and independence over larger distances. In the first part we maintain a parameterized viewpoint and examine the standard parameterization as well as the most widely used graph parameters measuring the input's structure. We offer hardness results showing that, subject to the (Strong) Exponential Time Hypothesis, there is no algorithm with running time below certain bounds; we produce essentially optimal algorithms whose complexity matches these lower bounds; and we further attempt to offer an alternative to exact computation in significantly reduced running time by way of approximation algorithms. In the second part we consider the (super-)polynomial (in-)approximability of the d-SCATTERED SET problem, i.e. we determine the exact relationship between an achievable approximation ratio ρ, the distance parameter d, and the running time of any ρ-approximation algorithm, expressed as a function of these and the input size n. We then consider strictly polynomial running times and improve our understanding of the approximability characteristics of the problem on graphs of bounded maximum degree as well as bipartite graphs.
Musayeva, Khadija. "Generalization Performance of Margin Multi-Category Classifiers." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0096/document.
This thesis deals with the theory of margin multi-category classification and is based on the statistical learning theory founded by Vapnik and Chervonenkis. We are interested in deriving generalization bounds with explicit dependencies on the number C of categories, the sample size m, and the margin parameter γ, when the loss function considered is a Lipschitz continuous margin loss function. Generalization bounds rely on the empirical performance of the classifier as well as on its "capacity". In this work, the following scale-sensitive capacity measures are considered: the Rademacher complexity, the covering numbers, and the fat-shattering dimension. Our main contributions are obtained under the assumption that the classes of component functions implemented by the classifier have polynomially growing fat-shattering dimensions and that the component functions are independent. In the context of Mendelson's pathway, which relates the Rademacher complexity to the covering numbers and the latter to the fat-shattering dimension, we study the impact that decomposing at the level of one of these capacity measures has on the dependencies on C, m, and γ. In particular, we demonstrate that the dependency on C can be substantially improved over the state of the art if the decomposition is postponed to the level of the metric entropy or the fat-shattering dimension. On the other hand, this negatively impacts the rate of convergence (the dependency on m), an indication that optimizing the dependencies on the three basic parameters amounts to looking for a trade-off.
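A Lipschitz continuous margin loss of the kind considered in this thesis is easy to write down explicitly. The sketch below uses the standard definitions (multi-class margin of the true class over its best competitor, and a ramp loss at scale γ with Lipschitz constant 1/γ); these are common conventions assumed here, not formulas quoted from the thesis.

```python
import numpy as np

def multiclass_margins(scores, y):
    """Margin f_y(x) - max_{c != y} f_c(x) for each row of scores."""
    s = scores.astype(float).copy()
    idx = np.arange(len(y))
    correct = s[idx, y]
    s[idx, y] = -np.inf          # mask the true class
    return correct - s.max(axis=1)

def ramp_loss(margins, gamma=1.0):
    """Margin loss: 1 if margin <= 0, 0 if margin >= gamma,
    linear in between; Lipschitz with constant 1/gamma."""
    return np.clip(1.0 - margins / gamma, 0.0, 1.0)

scores = np.array([[2.0, 0.5, -1.0], [0.2, 0.9, 0.8]])
y = np.array([0, 2])
print(ramp_loss(multiclass_margins(scores, y), gamma=1.0))  # [0., 1.]
```

The parameter γ is exactly the margin parameter whose dependency the bounds above make explicit: a larger γ gives a smaller Lipschitz constant but penalizes more correct-but-small-margin predictions.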
Books on the topic "Generalization bound"
Arnolʹd, V. I. Experimental Mathematics. Berkeley, California: MSRI Mathematical Sciences Research Institute, 2015.
Espiritu, Yen Le. Race and U.S. Panethnic Formation. Edited by Ronald H. Bayor. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199766031.013.013.
Horing, Norman J. Morgenstern. Superfluidity and Superconductivity. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198791942.003.0013.
Hecht, Richard D., and Vincent F. Biondo, eds. Religion and Everyday Life and Culture. ABC-CLIO, LLC, 2010. http://dx.doi.org/10.5040/9798216006909.
Mandelkern, Matthew. Bounded Meaning. Oxford: Oxford University Press, 2024. http://dx.doi.org/10.1093/oso/9780192870049.001.0001.
Testo completoCapitoli di libri sul tema "Generalization bound"
Burnaev, Evgeny. "Generalization Bound for Imbalanced Classification." In Recent Developments in Stochastic Methods and Applications, 107–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83266-7_8.
Baader, Franz. "Unification, Weak Unification, Upper Bound, Lower Bound, and Generalization Problems." In Rewriting Techniques and Applications, 86–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/3-540-53904-2_88.
Fukuchi, Kazuto, and Jun Sakuma. "Neutralized Empirical Risk Minimization with Generalization Neutrality Bound." In Machine Learning and Knowledge Discovery in Databases, 418–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-44848-9_27.
Tang, Li, Zheng Zhao, Xiujun Gong, and Huapeng Zeng. "On the Generalization of PAC-Bayes Bound for SVM Linear Classifier." In Communications in Computer and Information Science, 176–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34447-3_16.
Wang, Mingda, Canqian Yang, and Yi Xu. "Posterior Refinement on Metric Matrix Improves Generalization Bound in Metric Learning." In Lecture Notes in Computer Science, 203–18. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19809-0_12.
Levinson, Norman. "Generalization of Recent Method Giving Lower Bound for N₀(T) of Riemann's Zeta-Function." In Selected Papers of Norman Levinson, 392–95. Boston, MA: Birkhäuser Boston, 1998. http://dx.doi.org/10.1007/978-1-4612-5335-8_32.
Zhang, Xinhua, Novi Quadrianto, Kristian Kersting, Zhao Xu, Yaakov Engel, Claude Sammut, Mark Reid, et al. "Generalization Bounds." In Encyclopedia of Machine Learning, 447–54. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_328.
Reid, Mark. "Generalization Bounds." In Encyclopedia of Machine Learning and Data Mining, 556. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_328.
Helleseth, Tor, Torleiv Kløve, and Øyvind Ytrehus. "Generalizations of the Griesmer Bound." In Lecture Notes in Computer Science, 41–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/3-540-58265-7_6.
Rejchel, W. "Generalization Bounds for Ranking Algorithms." In Ensemble Classification Methods with Applications in R, 135–39. Chichester, UK: John Wiley & Sons, Ltd, 2018. http://dx.doi.org/10.1002/9781119421566.ch7.
Testo completoAtti di convegni sul tema "Generalization bound"
He, Haiyun, Christina Lee Yu, and Ziv Goldfeld. "Hierarchical Generalization Bounds for Deep Neural Networks." In 2024 IEEE International Symposium on Information Theory (ISIT), 2688–93. IEEE, 2024. http://dx.doi.org/10.1109/isit57864.2024.10619279.
Sefidgaran, Milad, and Abdellatif Zaidi. "Data-Dependent Generalization Bounds via Variable-Size Compressibility." In 2024 IEEE International Symposium on Information Theory (ISIT), 2682–87. IEEE, 2024. http://dx.doi.org/10.1109/isit57864.2024.10619654.
Rodríguez-Gálvez, Borja, Omar Rivasplata, Ragnar Thobaben, and Mikael Skoglund. "A Note on Generalization Bounds for Losses with Finite Moments." In 2024 IEEE International Symposium on Information Theory (ISIT), 2676–81. IEEE, 2024. http://dx.doi.org/10.1109/isit57864.2024.10619194.
Luo, Jin, Yongguang Chen, and Xuejun Zhou. "Generalization Bound for Multi-Classification with Push." In 2010 International Conference on Artificial Intelligence and Computational Intelligence (AICI). IEEE, 2010. http://dx.doi.org/10.1109/aici.2010.201.
Wen, Wen, Han Li, Tieliang Gong, and Hong Chen. "Towards Sharper Generalization Bounds for Adversarial Contrastive Learning." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/574.
Zhu, Bowei, Shaojie Li, and Yong Liu. "Towards Sharper Risk Bounds for Minimax Problems." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/630.
Chen, Hao, Zhanfeng Mo, Zhouwang Yang, and Xiao Wang. "Theoretical Investigation of Generalization Bound for Residual Networks." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/288.
Zhou, Ruida, Chao Tian, and Tie Liu. "Individually Conditional Individual Mutual Information Bound on Generalization Error." In 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021. http://dx.doi.org/10.1109/isit45174.2021.9518016.
Rezazadeh, Arezou, Sharu Theresa Jose, Giuseppe Durisi, and Osvaldo Simeone. "Conditional Mutual Information-Based Generalization Bound for Meta Learning." In 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021. http://dx.doi.org/10.1109/isit45174.2021.9518020.
Uchida, Masato. "Tight Lower Bound of Generalization Error in Ensemble Learning." In 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems (SCIS) and 15th International Symposium on Advanced Intelligent Systems (ISIS). IEEE, 2014. http://dx.doi.org/10.1109/scis-isis.2014.7044723.
Testo completoRapporti di organizzazioni sul tema "Generalization bound"
Dhankhar, Ritu, and Prasanna Kumar. A Remark on a Generalization of the Cauchy's Bound. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, October 2020. http://dx.doi.org/10.7546/crabs.2020.10.01.
Zarrieß, Benjamin, and Anni-Yasmin Turhan. Most Specific Generalizations w.r.t. General EL-TBoxes. Technische Universität Dresden, 2013. http://dx.doi.org/10.25368/2022.196.