Academic literature on the topic 'Kernel Hilbert Spaces'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Kernel Hilbert Spaces.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Kernel Hilbert Spaces"

1

Ferrer, Osmin, Diego Carrillo, and Arnaldo De La Barrera. "Reproducing Kernel in Krein Spaces." WSEAS TRANSACTIONS ON MATHEMATICS 21 (January 11, 2022): 23–30. http://dx.doi.org/10.37394/23206.2022.21.4.

Abstract:
This article describes a new way to introduce a reproducing kernel for a Krein space based on orthogonal projectors, enabling the kernel of a Krein space to be described as the difference between the kernel of the positive definite subspace and the kernel of the negative definite subspace, which together correspond to the kernel of the associated Hilbert space. As an application, the authors obtain some basic properties of both kernels for Krein spaces and show that each kernel is uniquely determined by the given Krein space. The methods and results employed generalize the notion of reproducing kernel given in Hilbert spaces to the context of spaces endowed with an indefinite metric.
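For orientation, the decomposition described above can be written schematically (in generic notation, not necessarily the authors'): for a Krein space with fundamental decomposition $\mathcal{K} = \mathcal{K}_+ \oplus \mathcal{K}_-$,
\[
K(x,y) = K_+(x,y) - K_-(x,y), \qquad K_{|\mathcal{K}|}(x,y) = K_+(x,y) + K_-(x,y),
\]
where $K_\pm$ are the reproducing kernels of the positive and negative definite subspaces and $K_{|\mathcal{K}|}$ is the reproducing kernel of the associated Hilbert space.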
2

Thirulogasanthar, K., and S. Twareque Ali. "General construction of reproducing kernels on a quaternionic Hilbert space." Reviews in Mathematical Physics 29, no. 05 (May 2, 2017): 1750017. http://dx.doi.org/10.1142/s0129055x17500179.

Abstract:
A general theory of reproducing kernels and reproducing kernel Hilbert spaces on a right quaternionic Hilbert space is presented. Positive operator-valued measures and their connection to a class of generalized quaternionic coherent states are examined. A Naimark type extension theorem associated with the positive operator-valued measures is proved in a right quaternionic Hilbert space. As illustrative examples, real, complex and quaternionic reproducing kernels and reproducing kernel Hilbert spaces arising from Hermite and Laguerre polynomials are presented. In particular, in the Laguerre case, the Naimark type extension theorem on the associated quaternionic Hilbert space is indicated.
3

Ferreira, J. C., and V. A. Menegatto. "Reproducing kernel Hilbert spaces associated with kernels on topological spaces." Functional Analysis and Its Applications 46, no. 2 (April 2012): 152–54. http://dx.doi.org/10.1007/s10688-012-0021-5.

4

Scovel, Clint, Don Hush, Ingo Steinwart, and James Theiler. "Radial kernels and their reproducing kernel Hilbert spaces." Journal of Complexity 26, no. 6 (December 2010): 641–60. http://dx.doi.org/10.1016/j.jco.2010.03.002.

5

Kumari, Rani, Jaydeb Sarkar, Srijan Sarkar, and Dan Timotin. "Factorizations of Kernels and Reproducing Kernel Hilbert Spaces." Integral Equations and Operator Theory 87, no. 2 (February 2017): 225–44. http://dx.doi.org/10.1007/s00020-017-2348-z.

6

Carmeli, C., E. De Vito, A. Toigo, and V. Umanità. "Vector Valued Reproducing Kernel Hilbert Spaces and Universality." Analysis and Applications 08, no. 01 (January 2010): 19–61. http://dx.doi.org/10.1142/s0219530510001503.

Abstract:
This paper is devoted to the study of vector valued reproducing kernel Hilbert spaces. We focus on two aspects: vector valued feature maps and universal kernels. In particular, we characterize the structure of translation invariant kernels on abelian groups and we relate it to the universality problem.
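For context, the translation-invariant kernels referred to here admit a Bochner-type representation, sketched schematically below (a standard formulation, not a quotation from the paper): for an abelian group $G$ with dual group $\widehat{G}$,
\[
K(x,y) = \int_{\widehat{G}} \chi(x)\overline{\chi(y)}\, dQ(\chi), \qquad x, y \in G,
\]
where $Q$ is a positive operator-valued measure on $\widehat{G}$; universality of the kernel is then tied to properties of $Q$, such as its support.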
7

Ball, Joseph A., Gregory Marx, and Victor Vinnikov. "Noncommutative reproducing kernel Hilbert spaces." Journal of Functional Analysis 271, no. 7 (October 2016): 1844–920. http://dx.doi.org/10.1016/j.jfa.2016.06.010.

8

Alpay, Daniel, Palle Jorgensen, and Dan Volok. "Relative reproducing kernel Hilbert spaces." Proceedings of the American Mathematical Society 142, no. 11 (July 17, 2014): 3889–95. http://dx.doi.org/10.1090/s0002-9939-2014-12121-6.

9

Zhang, Haizhang, and Liang Zhao. "On the Inclusion Relation of Reproducing Kernel Hilbert Spaces." Analysis and Applications 11, no. 02 (March 2013): 1350014. http://dx.doi.org/10.1142/s0219530513500140.

Abstract:
To help understand various reproducing kernels used in applied sciences, we investigate the inclusion relation of two reproducing kernel Hilbert spaces. Characterizations in terms of feature maps of the corresponding reproducing kernels are established. A full table of inclusion relations among widely-used translation invariant kernels is given. Concrete examples for Hilbert–Schmidt kernels are presented as well. We also discuss the preservation of such a relation under various operations of reproducing kernels. Finally, we briefly discuss the special inclusion with a norm equivalence.
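A classical fact underlying such inclusion results, due to Aronszajn and stated here for orientation, is that for positive definite kernels $K_1$ and $K_2$,
\[
\mathcal{H}_{K_1} \subseteq \mathcal{H}_{K_2} \iff \lambda^2 K_2 - K_1 \ \text{is a positive definite kernel for some}\ \lambda > 0,
\]
in which case the inclusion map is bounded with norm at most $\lambda$.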
10

Agud, L., J. M. Calabuig, and E. A. Sánchez Pérez. "Weighted p-regular kernels for reproducing kernel Hilbert spaces and Mercer Theorem." Analysis and Applications 18, no. 03 (October 31, 2019): 359–83. http://dx.doi.org/10.1142/s0219530519500179.

Abstract:
Let [Formula: see text] be a finite measure space and consider a Banach function space [Formula: see text]. Motivated by some previous papers and current applications, we provide a general framework for representing reproducing kernel Hilbert spaces as subsets of Köthe–Bochner (vector-valued) function spaces. We analyze operator-valued kernels [Formula: see text] that define integration maps [Formula: see text] between Köthe–Bochner spaces of Hilbert-valued functions [Formula: see text]. We show a reduction procedure which allows one to find a factorization of the corresponding kernel operator through weighted Bochner spaces [Formula: see text] and [Formula: see text] — where [Formula: see text] — under the assumption of [Formula: see text]-concavity of [Formula: see text]. Equivalently, a new kernel obtained by multiplying [Formula: see text] by scalar functions can be given in such a way that the kernel operator is defined from [Formula: see text] to [Formula: see text] in a natural way. As an application, we prove a new version of the Mercer Theorem for matrix-valued weighted kernels.
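For reference, the classical Mercer theorem that this result extends states that a continuous positive definite kernel $K$ on a compact space, with a suitable finite measure $\mu$, admits the uniformly convergent expansion
\[
K(x,y) = \sum_{n=1}^{\infty} \lambda_n\, \varphi_n(x)\, \varphi_n(y),
\]
where $(\lambda_n, \varphi_n)$ are the eigenvalues and $L^2(\mu)$-orthonormal eigenfunctions of the integral operator $f \mapsto \int K(\cdot, t) f(t)\, d\mu(t)$; the paper's contribution is a weighted, matrix-valued analogue of this statement.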

Dissertations / Theses on the topic "Kernel Hilbert Spaces"

1

Tipton, James Edward. "Reproducing Kernel Hilbert spaces and complex dynamics." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2284.

Abstract:
Both complex dynamics and the theory of reproducing kernel Hilbert spaces have found widespread application over the last few decades. Although complex dynamics started over a century ago, the gravity of its importance was only recently realized due to B.B. Mandelbrot's work in the 1980s. B.B. Mandelbrot demonstrated to the world that fractals, which are chaotic patterns containing a high degree of self-similarity, often serve as better models of nature than conventional smooth models. The theory of reproducing kernel Hilbert spaces, which also started over a century ago, did not take off until N. Aronszajn's classic paper was written in 1950. Since then, the theory has found widespread application to fields including machine learning, quantum mechanics, and harmonic analysis. In the paper Infinite Product Representations of Kernel Functions and Iterated Function Systems, the authors, D. Alpay, P. Jorgensen, I. Lewkowicz, and I. Martiziano, show how a kernel function can be constructed on an attracting set of an iterated function system. Furthermore, they show that when certain conditions are met, one can construct an orthonormal basis of the associated Hilbert space via certain pull-back and multiplier operators. In this thesis we take for our iterated function system the family of iterates of a given rational map. Thus we investigate for which rational maps their kernel construction holds, as well as their orthonormal basis construction. We are able to show that the kernel construction applies to any rational map conjugate to a polynomial with an attracting fixed point at 0. Within such rational maps, we are able to find a family of polynomials for which the orthonormal basis construction holds. It is then natural to ask how the orthonormal basis changes as the polynomial within a given family varies. We are able to determine, for certain families of polynomials, that the dynamics of the corresponding orthonormal basis is well behaved. Finally, we conclude with some possible avenues of future investigation.
2

Bhujwalla, Yusuf. "Nonlinear System Identification with Kernels : Applications of Derivatives in Reproducing Kernel Hilbert Spaces." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0315/document.

Abstract:
This thesis will focus exclusively on the application of kernel-based nonparametric methods to nonlinear identification problems. As for other nonlinear methods, two key questions in kernel-based identification are the questions of how to define a nonlinear model (kernel selection) and how to tune the complexity of the model (regularisation). The following chapter will discuss how these questions are usually dealt with in the literature. The principal contribution of this thesis is the presentation and investigation of two optimisation criteria (one existing in the literature and one novel proposition) for structural approximation and complexity tuning in kernel-based nonlinear system identification. Both methods are based on the idea of incorporating feature-based complexity constraints into the optimisation criterion, by penalising derivatives of functions. Essentially, such methods offer the user flexibility in the definition of a kernel function and the choice of regularisation term, which opens new possibilities with respect to how nonlinear models can be estimated in practice. Both methods bear strong links with other methods from the literature, which will be examined in detail in Chapters 2 and 3 and will form the basis of the subsequent developments of the thesis. Whilst analogy will be made with parallel frameworks, the discussion will be rooted in the framework of Reproducing Kernel Hilbert Spaces (RKHS). Using RKHS methods will allow analysis of the methods presented from both a theoretical and a practical point-of-view. Furthermore, the methods developed will be applied to several identification ‘case studies’, comprising of both simulation and real-data examples, notably: • Structural detection in static nonlinear systems. • Controlling smoothness in LPV models. • Complexity tuning using structural penalties in NARX systems. • Internet traffic modelling using kernel methods
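As a rough illustration of the central idea above — penalising derivatives of the estimate in an RKHS — the following minimal numpy sketch fits a Gaussian-kernel regressor with an added penalty on the sampled first derivative. The kernel, the penalty weights and all variable names are illustrative assumptions; the specific criteria studied in the thesis are not reproduced here.

import numpy as np

def gaussian_kernel(x, z, sigma=0.5):
    # k(x, z) = exp(-(x - z)^2 / (2 sigma^2)) for one-dimensional inputs
    d = x[:, None] - z[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

def kernel_derivative(x, z, sigma=0.5):
    # d/dx k(x, z) = -(x - z) / sigma^2 * k(x, z)
    d = x[:, None] - z[None, :]
    return -(d / sigma**2) * np.exp(-d**2 / (2 * sigma**2))

def fit(x, y, lam=1e-2, mu=1e-1, sigma=0.5):
    # minimise ||y - K a||^2 + lam * a' K a + mu * ||G a||^2,
    # where (K a)_i = f(x_i) and (G a)_i = f'(x_i) for f = sum_j a_j k(., x_j)
    K = gaussian_kernel(x, x, sigma)
    G = kernel_derivative(x, x, sigma)
    A = K.T @ K + lam * K + mu * G.T @ G + 1e-10 * np.eye(len(x))
    return np.linalg.solve(A, K.T @ y)

x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.randn(50)
alpha = fit(x, y)
f_hat = gaussian_kernel(x, x) @ alpha   # regularised estimate at the training points

Increasing mu shrinks the estimated derivative, which is the same general mechanism the thesis exploits for structural detection and smoothness control.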
3

Struble, Dale William. "Wavelets on manifolds and multiscale reproducing kernel Hilbert spaces." Related electronic resource:, 2007. http://proquest.umi.com/pqdweb?did=1407687581&sid=1&Fmt=2&clientId=3739&RQT=309&VName=PQD.

4

Quiggin, Peter Philip. "Generalisations of Pick's theorem to reproducing Kernel Hilbert spaces." Thesis, Lancaster University, 1994. http://eprints.lancs.ac.uk/61962/.

Abstract:
Pick's theorem states that there exists a function in H∞, which is bounded by 1 and takes given values at given points, if and only if a certain matrix is positive. H∞ is the space of multipliers of H2, and this theorem has a natural generalisation when H∞ is replaced by the space of multipliers of a general reproducing kernel Hilbert space H(K) (where K is the reproducing kernel). J. Agler showed that this generalised theorem is true when H(K) is a certain Sobolev space or the Dirichlet space. This thesis widens Agler's approach to cover reproducing kernel Hilbert spaces in general and derives sufficient (and usable) conditions on the kernel K for the generalised Pick's theorem to be true for H(K). These conditions are then used to prove Pick's theorem for certain weighted Hardy and Sobolev spaces and for a functional Hilbert space introduced by Saitoh. The reproducing kernel approach is then used to derive results for several related problems. These include the uniqueness of the optimal interpolating multiplier, the case of operator-valued functions, and a proof of the Adamyan-Arov-Krein theorem.
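For orientation, the classical Nevanlinna–Pick condition referred to above can be written out: there exists $f \in H^\infty$ with $\|f\|_\infty \le 1$ and $f(z_i) = w_i$ for $i = 1, \dots, n$ if and only if
\[
\left[ \frac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}} \right]_{i,j=1}^{n} \succeq 0 .
\]
The generalisation studied in the thesis replaces the Szegő kernel $1/(1 - z\overline{w})$ of $H^2$ by a general reproducing kernel $K$ and asks when positivity of the matrix $\big[(1 - w_i \overline{w_j})\, K(z_i, z_j)\big]_{i,j}$ guarantees the existence of such a multiplier.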
5

Marx, Gregory. "The Complete Pick Property and Reproducing Kernel Hilbert Spaces." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/24783.

Abstract:
We present two approaches towards a characterization of the complete Pick property. We first discuss the lurking isometry method used in a paper by J.A. Ball, T.T. Trent, and V. Vinnikov. They show that a nondegenerate, positive kernel has the complete Pick property if $1/k$ has one positive square. We also look at the one-point extension approach developed by P. Quiggin which leads to a sufficient and necessary condition for a positive kernel to have the complete Pick property. We conclude by connecting the two characterizations of the complete Pick property.
Master of Science
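The 'one positive square' condition mentioned above can be illustrated with the Szegő kernel of the Hardy space (a standard example, included here for orientation):
\[
k(z,w) = \frac{1}{1 - z\overline{w}} \quad \Longrightarrow \quad \frac{1}{k(z,w)} = 1 - z\overline{w},
\]
so $1/k$ is the difference of a single positive square (the constant $1$) and another positive kernel ($z\overline{w}$), and $k$ has the complete Pick property.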
6

Giménez, Febrer Pere Joan. "Matrix completion with prior information in reproducing kernel Hilbert spaces." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671718.

Abstract:
In matrix completion, the objective is to recover an unknown matrix from a small subset of observed entries. Most successful methods for recovering the unknown entries are based on the assumption that the unknown full matrix has low rank. By having low rank, each of its entries are obtained as a function of a small number of coefficients which can be accurately estimated provided that there are enough available observations. Hence, in low-rank matrix completion the estimate is given by the matrix of minimum rank that fits the observed entries. Besides low rankness, the unknown matrix might exhibit other structural properties which can be leveraged in the recovery process. In a smooth matrix, it can be expected that entries that are close in index distance will have similar values. Similarly, groups of rows or columns can be known to contain similarly valued entries according to certain relational structures. This relational information is conveyed through different means such as covariance matrices or graphs, with the inconvenient that these cannot be derived from the data matrix itself since it is incomplete. Hence, any knowledge on how the matrix entries are related among them must be derived from prior information. This thesis deals with matrix completion with prior information, and presents an outlook that generalizes to many situations. In the first part, the columns of the unknown matrix are cast as graph signals with a graph known beforehand. In this, the adjacency matrix of the graph is used to calculate an initial point for a proximal gradient algorithm in order to reduce the iterations needed to converge to a solution. Then, under the assumption that the graph signals are smooth, the graph Laplacian is incorporated into the problem formulation with the aim to enforce smoothness on the solution. This results in an effective denoising of the observed matrix and reduced error, which is shown through theoretical analysis of the proximal gradient coupled with Laplacian regularization, and numerical tests. The second part of the thesis introduces a framework to exploit prior information through reproducing kernel Hilbert spaces. Since a kernel measures similarity between two points in an input set, it enables the encoding of any prior information such as feature vectors, dictionaries or connectivity on a graph. By associating each column and row of the unknown matrix with an item in a set, and defining a pair of kernels measuring similarity between columns or rows, the missing entries can be extrapolated by means of the kernel functions. A method based on kernel regression is presented, with two additional variants aimed at reducing computational cost, and online implementation. These methods prove to be competitive with existing techniques, especially when the number of observations is very small. Furthermore, mean-square error and generalization error analyses are carried out, shedding light on the factors impacting algorithm performance. For the generalization error analysis, the focus is on the transductive case, which measures the ability of an algorithm to transfer knowledge from a set of labelled inputs to an unlabelled set. Here, bounds are derived for the proposed and existing algorithms by means of the transductive Rademacher complexity, and numerical tests confirming the theoretical findings are presented. Finally, the thesis explores the question of how to choose the observed entries of a matrix in order to minimize the recovery error of the full matrix. 
A passive sampling approach is presented, which entails that no labelled inputs are needed to design the sampling distribution; only the input set and kernel functions are required. The approach is based on building the best Nyström approximation to the kernel matrix by sampling the columns according to their leverage scores, a metric that arises naturally in the theoretical analysis to find an optimal sampling distribution.
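The passive sampling step described at the end of the abstract can be sketched as follows: a minimal numpy illustration of leverage-score column sampling for a Nyström approximation of a kernel matrix. The ridge-leverage-score definition, the RBF kernel and all names are generic assumptions, not the thesis's exact procedure.

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # k(x, z) = exp(-gamma * ||x - z||^2)
    d2 = ((X[:, None, :] - Z[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

def ridge_leverage_scores(K, lam=1e-2):
    # l_i = (K (K + lam I)^{-1})_{ii}
    n = K.shape[0]
    return np.diag(K @ np.linalg.solve(K + lam * np.eye(n), np.eye(n)))

def nystrom(K, idx):
    # low-rank approximation K ~ K[:, S] pinv(K[S, S]) K[S, :]
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
K = rbf_kernel(X, X)
scores = ridge_leverage_scores(K)
idx = rng.choice(len(X), size=20, replace=False, p=scores / scores.sum())
rel_err = np.linalg.norm(K - nystrom(K, idx)) / np.linalg.norm(K)

Columns with high leverage carry more of the spectrum of K, so sampling them preferentially tends to give a better Nyström approximation than uniform sampling.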
7

Dieuleveut, Aymeric. "Stochastic approximation in Hilbert spaces." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE059/document.

Abstract:
The goal of supervised machine learning is to infer relationships between a phenomenon one seeks to predict and “explanatory” variables. To that end, multiple occurrences of the phenomenon are observed, from which a prediction rule is constructed. The last two decades have witnessed the emergence of very large data-sets, both in terms of the number of observations (e.g., in image analysis) and in terms of the number of explanatory variables (e.g., in genetics). This has raised two challenges: first, avoiding the pitfall of over-fitting, especially when the number of explanatory variables is much higher than the number of observations; and second, dealing with the computational constraints, such as when the mere resolution of a linear system becomes a difficulty of its own. Algorithms that take their roots in stochastic approximation methods tackle both of these difficulties simultaneously: these stochastic methods dramatically reduce the computational cost, without degrading the quality of the proposed prediction rule, and they can naturally avoid over-fitting. As a consequence, the core of this thesis will be the study of stochastic gradient methods. The popular parametric methods give predictors which are linear functions of a set of explanatory variables. However, they often result in an imprecise approximation of the underlying statistical structure. In the non-parametric setting, which is paramount in this thesis, this restriction is lifted. The class of functions from which the predictor is proposed depends on the observations. In practice, these methods have multiple purposes, and are essential for learning with non-vectorial data, which can be mapped onto a vector in a functional space using a positive definite kernel. This allows the use of algorithms designed for vectorial data, but requires the analysis to be made in the associated non-parametric space: the reproducing kernel Hilbert space. Moreover, the analysis of non-parametric regression also sheds some light on the parametric setting when the number of predictors is much larger than the number of observations. The first contribution of this thesis is to provide a detailed analysis of stochastic approximation in the non-parametric setting, precisely in reproducing kernel Hilbert spaces. This analysis proves optimal convergence rates for the averaged stochastic gradient descent algorithm. As we take special care in using minimal assumptions, it applies to numerous situations, and covers both the settings in which the number of observations is known a priori, and situations in which the learning algorithm works in an online fashion. The second contribution is an algorithm based on acceleration, which converges at optimal speed, both from the optimization point of view and from the statistical one. In the non-parametric setting, this can improve the convergence rate up to optimality, even in particular regimes for which the first algorithm remains sub-optimal. Finally, the third contribution of the thesis consists in an extension of the framework beyond the least-squares loss. The stochastic gradient descent algorithm is analyzed as a Markov chain. This point of view leads to an intuitive and insightful interpretation that outlines the differences between the quadratic setting and the more general setting. A simple method resulting in provable improvements in the convergence is then proposed.
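A bare-bones numpy sketch of the averaged stochastic gradient method in an RKHS for least squares is given below. The Gaussian kernel, the constant step size and the data-generating model are arbitrary illustrative choices, not the tuned variants analysed in the thesis.

import numpy as np

def k(x, z, sigma=0.5):
    return np.exp(-(x - z)**2 / (2 * sigma**2))   # Gaussian kernel on the real line

rng = np.random.default_rng(1)
n = 500
X = rng.uniform(-1, 1, n)
Y = np.sin(np.pi * X) + 0.1 * rng.standard_normal(n)

gamma = 0.5                    # constant step size
alpha = np.zeros(n)            # coefficients of the current iterate f_t = sum_i alpha_i k(., X_i)
alpha_bar = np.zeros(n)        # coefficients of the averaged iterate

for t in range(n):
    # prediction of the current iterate at the new point X_t
    pred = alpha[:t] @ k(X[:t], X[t]) if t > 0 else 0.0
    # stochastic gradient step on the instantaneous squared loss adds one kernel section
    alpha[t] = -gamma * (pred - Y[t])
    # Polyak-Ruppert averaging of the iterates
    alpha_bar = (t * alpha_bar + alpha) / (t + 1)

f_bar = lambda x: alpha_bar @ k(X, x)   # averaged estimator, evaluated at any point x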
8

Giulini, Ilaria. "Generalization bounds for random samples in Hilbert spaces." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0026/document.

Abstract:
This thesis focuses on obtaining generalization bounds for random samples in reproducing kernel Hilbert spaces. The approach consists in first obtaining non-asymptotic dimension-free bounds in finite-dimensional spaces using some PAC-Bayesian inequalities related to Gaussian perturbations and then in generalizing the results in a separable Hilbert space. We first investigate the question of estimating the Gram operator by a robust estimator from an i. i. d. sample and we present uniform bounds that hold under weak moment assumptions. These results allow us to qualify principal component analysis independently of the dimension of the ambient space and to propose stable versions of it. In the last part of the thesis we present a new algorithm for spectral clustering. It consists in replacing the projection on the eigenvectors associated with the largest eigenvalues of the Laplacian matrix by a power of the normalized Laplacian. This iteration, justified by the analysis of clustering in terms of Markov chains, performs a smooth truncation. We prove nonasymptotic bounds for the convergence of our spectral clustering algorithm applied to a random sample of points in a Hilbert space that are deduced from the bounds for the Gram operator in a Hilbert space. Experiments are done in the context of image analysis
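The iterated-Laplacian idea can be sketched in a few lines of numpy. The Gaussian affinity, the power m and the final clustering of the embedding rows are generic choices made for illustration, not the calibrated algorithm of the thesis.

import numpy as np

def embedding_by_laplacian_power(X, gamma=10.0, m=20):
    # Gaussian affinity matrix
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    W = np.exp(-gamma * d2)
    # symmetrically normalised adjacency D^{-1/2} W D^{-1/2} = I - L_sym
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    # iterating S acts as a smooth spectral truncation: eigenvalues near 1
    # are preserved while the rest decay geometrically
    E = np.linalg.matrix_power(S, m)
    norms = np.linalg.norm(E, axis=1, keepdims=True)
    return E / np.maximum(norms, 1e-12)   # normalised rows serve as the embedding

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(1, 0.1, (50, 2))])
emb = embedding_by_laplacian_power(X)
# clustering the rows of `emb` with any standard method (e.g. k-means) recovers the two groups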
9

Paiva, António R. C. "Reproducing kernel Hilbert spaces for point processes, with applications to neural activity analysis." [Gainesville, Fla.] : University of Florida, 2008. http://purl.fcla.edu/fcla/etd/UFE0022471.

10

Sabree, Aqeeb A. "Positive definite kernels, harmonic analysis, and boundary spaces: Drury-Arveson theory, and related." Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/7023.

Abstract:
A reproducing kernel Hilbert space (RKHS) is a Hilbert space $\mathscr{H}$ of functions with the property that the values $f(x)$ for $f \in \mathscr{H}$ are reproduced from the inner product in $\mathscr{H}$. Recent applications are found in stochastic processes (Ito calculus), harmonic analysis, complex analysis, learning theory, and machine learning algorithms. This research began with the study of applications of RKHSs to areas such as learning theory, sampling theory, and harmonic analysis. From the Moore-Aronszajn theorem, we have an explicit correspondence between reproducing kernel Hilbert spaces (RKHS) and reproducing kernel functions—also called positive definite kernels or positive definite functions. The focus here is on the duality between positive definite functions and their boundary spaces; these boundary spaces often lead to the study of Gaussian processes or Brownian motion. It is known that every reproducing kernel Hilbert space has an associated generalized boundary probability space. The Arveson (reproducing) kernel is $K(z,w) = \frac{1}{1-\langle z,w\rangle_{\mathbb{C}^d}}$, $z,w \in \mathbb{B}_d$, and Arveson showed that this kernel does not follow the boundary analysis we were finding in other RKHSs. Thus, we were led to define a new reproducing kernel on the unit ball in complex $n$-space, and naturally this led to the study of a new reproducing kernel Hilbert space. This reproducing kernel Hilbert space stems from boundary analysis of the Arveson kernel. The construction of the new RKHS resolves the problem we faced while researching “natural” boundary spaces (for the Drury-Arveson RKHS) that yield boundary factorizations: \[K(z,w) = \int_{\mathcal{B}} K^{\mathcal{B}}_z(b)\overline{K^{\mathcal{B}}_w(b)}\,d\mu(b), \;\;\; z,w \in \mathbb{B}_d \text{ and } b \in \mathcal{B} \tag*{(Factorization of $K$).}\] Results from classical harmonic analysis on the disk (the Hardy space) are generalized and extended to the new RKHS. In particular, our main theorem proves that, by relaxing the criterion from an isometry to the contractive property, we can carry out the generalization that Arveson's paper showed is not possible under the isometry requirement.

Books on the topic "Kernel Hilbert Spaces"

1

Berlinet, Alain, and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4419-9096-9.

2

Christine, Thomas-Agnan, ed. Reproducing kernel Hilbert spaces in probability and statistics. Boston: Kluwer Academic, 2004.

3

Dym, H. J Contractive Matrix Functions, Reproducing Kernel Hilbert Spaces and Interpolation. Providence, R.I.: Published for the Conference Board of the Mathematical Sciences by the American Mathematical Society, 1989.

4

Pereverzyev, Sergei. An Introduction to Artificial Intelligence Based on Reproducing Kernel Hilbert Spaces. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98316-1.

5

Cui, Minggen, and Yingzhen Lin, eds. Nonlinear numerical analysis in the reproducing kernel space. Hauppauge, N.Y.: Nova Science Publishers, 2008.

6

Príncipe, J. C. Kernel adaptive filtering: A comprehensive introduction. Hoboken, N.J: Wiley, 2010.

7

Fennell, Robert E., and Roland B. Minton, eds. Structured hereditary systems. New York: Marcel Dekker, 1987.

8

Christensen, Jens Gerlach. Trends in harmonic analysis and its applications: AMS special session on harmonic analysis and its applications : March 29-30, 2014, University of Maryland, Baltimore County, Baltimore, MD. Providence, Rhode Island: American Mathematical Society, 2015.


Book chapters on the topic "Kernel Hilbert Spaces"

1

Suzuki, Joe. "Hilbert Spaces." In Kernel Methods for Machine Learning with Math and R, 27–57. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0398-4_2.

2

Suzuki, Joe. "Hilbert Spaces." In Kernel Methods for Machine Learning with Math and Python, 29–59. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0401-1_2.

3

Christensen, Ronald. "Reproducing Kernel Hilbert Spaces." In Springer Texts in Statistics, 87–123. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29164-8_3.

4

Montesinos López, Osval Antonio, Abelardo Montesinos López, and Jose Crossa. "Reproducing Kernel Hilbert Spaces Regression and Classification Methods." In Multivariate Statistical Machine Learning Methods for Genomic Prediction, 251–336. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89010-0_8.

Abstract:
The fundamentals of Reproducing Kernel Hilbert Spaces (RKHS) regression methods are described in this chapter. We first point out the virtues of RKHS regression methods and why these methods are gaining a lot of acceptance in statistical machine learning. Key elements for the construction of RKHS regression methods are provided, the kernel trick is explained in some detail, and the main kernel functions for building kernels are provided. This chapter explains some loss functions under a fixed model framework with examples of Gaussian, binary, and categorical response variables. We illustrate the use of mixed models with kernels by providing examples for continuous response variables. Practical issues for tuning the kernels are illustrated. We expand the RKHS regression methods under a Bayesian framework with practical examples applied to continuous and categorical response variables and by including in the predictor the main effects of environments, genotypes, and the genotype × environment interaction. We show examples of multi-trait RKHS regression methods for continuous response variables. Finally, some practical issues of kernel compression methods are provided, which are important for reducing the computational cost of implementing conventional RKHS methods.
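As a minimal, generic illustration of the RKHS regression machinery described above (plain numpy kernel ridge regression with a Gaussian kernel; an assumed toy example, not the chapter's genomic-prediction code):

import numpy as np

def rbf(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Z
    d2 = ((X[:, None, :] - Z[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=0.1, gamma=1.0):
    # alpha = (K + lam I)^{-1} y minimises sum_i (y_i - f(x_i))^2 + lam ||f||_H^2
    K = rbf(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 3))
y = X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(100)   # continuous response
alpha = krr_fit(X, y)
y_hat = krr_predict(X, alpha, X)
# for a binary response, the same machinery can be run on a 0/1 target and the
# prediction thresholded at 0.5 (a crude stand-in for the loss functions in the chapter)

The 'kernel trick' is visible in krr_fit: only the kernel matrix K is ever computed, never an explicit feature expansion.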
5

Sawano, Yoshihiro. "Pasting Reproducing Kernel Hilbert Spaces." In Trends in Mathematics, 401–7. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-48812-7_51.

6

Pillonetto, Gianluigi, Tianshi Chen, Alessandro Chiuso, Giuseppe De Nicolao, and Lennart Ljung. "Regularization in Reproducing Kernel Hilbert Spaces." In Regularized System Identification, 181–246. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95860-2_6.

Abstract:
Methods for obtaining a function g in a relationship $y = g(x)$ from observed samples of y and x are the building blocks for black-box estimation. The classical parametric approach discussed in the previous chapters uses a function model that depends on a finite-dimensional vector, like, e.g., a polynomial model. We have seen that an important issue is the model order choice. This chapter describes some regularization approaches which permit reconciling flexibility of the model class with well-posedness of the solution, exploiting an alternative paradigm to traditional parametric estimation. Instead of constraining the unknown function to a specific parametric structure, the function will be searched over a possibly infinite-dimensional functional space. Overfitting and ill-posedness are circumvented by using reproducing kernel Hilbert spaces as hypothesis spaces and related norms as regularizers. Such kernel-based approaches thus permit casting all the regularized estimators based on quadratic penalties encountered in the previous chapters as special cases of a more general theory.
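The regularised estimator referred to above has a familiar closed form, stated here generically (the chapter develops it in the system-identification setting):
\[
\hat g = \arg\min_{g \in \mathcal{H}} \sum_{i=1}^{N} \big(y_i - g(x_i)\big)^2 + \gamma \|g\|_{\mathcal{H}}^2
\quad \Longrightarrow \quad
\hat g(x) = \sum_{i=1}^{N} c_i\, K(x, x_i), \qquad c = (\mathbf{K} + \gamma I)^{-1} y,
\]
where $\mathbf{K}_{ij} = K(x_i, x_j)$ is the kernel (Gram) matrix; by the representer theorem the search over a possibly infinite-dimensional space reduces to a finite linear system.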
7

Gualtierotti, Antonio F. "Reproducing Kernel Hilbert Spaces: The Rudiments." In Detection of Random Signals in Dependent Gaussian Noise, 3–123. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-22315-5_1.

8

Gualtierotti, Antonio F. "Relations Between Reproducing Kernel Hilbert Spaces." In Detection of Random Signals in Dependent Gaussian Noise, 217–305. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-22315-5_3.

9

Gualtierotti, Antonio F. "Reproducing Kernel Hilbert Spaces and Discrimination." In Detection of Random Signals in Dependent Gaussian Noise, 329–430. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-22315-5_5.

10

Ball, Joseph A., and Victor Vinnikov. "Formal Reproducing Kernel Hilbert Spaces: The Commutative and Noncommutative Settings." In Reproducing Kernel Spaces and Applications, 77–134. Basel: Birkhäuser Basel, 2003. http://dx.doi.org/10.1007/978-3-0348-8077-0_3.


Conference papers on the topic "Kernel Hilbert Spaces"

1

Tuia, Devis, Gustavo Camps-Valls, and Manel Martinez-Ramon. "Explicit recursivity into reproducing kernel Hilbert spaces." In ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947266.

2

Suquet, Charles. "Reproducing Kernel Hilbert Spaces and Random Measures." In Proceedings of the 5th International ISAAC Congress. World Scientific, 2009. http://dx.doi.org/10.1142/9789812835635_0013.

3

Bobade, Parag, Suprotim Majumdar, Savio Pereira, Andrew J. Kurdila, and John B. Ferris. "Adaptive estimation in reproducing kernel Hilbert spaces." In 2017 American Control Conference (ACC). IEEE, 2017. http://dx.doi.org/10.23919/acc.2017.7963839.

4

Paiva, Antonio R. C., Il Park, and Jose C. Principe. "Reproducing kernel Hilbert spaces for spike train analysis." In ICASSP 2008 - 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/icassp.2008.4518834.

5

Hasanbelliu, Erion, and Jose C. Principe. "Content addressable memories in reproducing Kernel Hilbert spaces." In 2008 IEEE Workshop on Machine Learning for Signal Processing (MLSP) (Formerly known as NNSP). IEEE, 2008. http://dx.doi.org/10.1109/mlsp.2008.4685447.

6

Tanaka, Akira, Hideyuki Imai, and Koji Takamiya. "Variance analyses for kernel regressors with nested reproducing kernel hilbert spaces." In ICASSP 2012 - 2012 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2012. http://dx.doi.org/10.1109/icassp.2012.6288300.

7

Zhang, Xiao, and Shizhong Liao. "Hypothesis Sketching for Online Kernel Selection in Continuous Kernel Space." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/346.

Abstract:
Online kernel selection in continuous kernel space is more complex than that in discrete kernel set. But existing online kernel selection approaches for continuous kernel spaces have linear computational complexities at each round with respect to the current number of rounds and lack sublinear regret guarantees due to the continuously many candidate kernels. To address these issues, we propose a novel hypothesis sketching approach to online kernel selection in continuous kernel space, which has constant computational complexities at each round and enjoys a sublinear regret bound. The main idea of the proposed hypothesis sketching approach is to maintain the orthogonality of the basis functions and the prediction accuracy of the hypothesis sketches in a time-varying reproducing kernel Hilbert space. We first present an efficient dependency condition to maintain the basis functions of the hypothesis sketches under a computational budget. Then we update the weights and the optimal kernels by minimizing the instantaneous loss of the hypothesis sketches using the online gradient descent with a compensation strategy. We prove that the proposed hypothesis sketching approach enjoys a regret bound of order O(√T) for online kernel selection in continuous kernel space, which is optimal for convex loss functions, where T is the number of rounds, and reduces the computational complexities at each round from linear to constant with respect to the number of rounds. Experimental results demonstrate that the proposed hypothesis sketching approach significantly improves the efficiency of online kernel selection in continuous kernel space while retaining comparable predictive accuracies.
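A highly simplified numpy sketch of the budgeted-dictionary idea underlying such methods is given below: an approximate-linear-dependence test decides whether a new basis function is added, and online gradient descent updates the coefficients. The class name, the threshold rule and the kernel are generic assumptions; the paper's actual hypothesis-sketching algorithm, its kernel-parameter updates and its regret analysis are not reproduced here.

import numpy as np

def k(x, z, sigma=1.0):
    # Gaussian kernel; broadcasts over leading axes, sums over the feature axis
    return np.exp(-np.sum((x - z)**2, axis=-1) / (2 * sigma**2))

class BudgetedOnlineKernel:
    def __init__(self, eta=0.1, nu=1e-2, sigma=1.0):
        self.eta, self.nu, self.sigma = eta, nu, sigma   # step size, dependence threshold, bandwidth
        self.centers, self.w = [], []                    # dictionary of basis centres and their weights

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(np.dot(self.w, k(np.array(self.centers), x, self.sigma)))

    def update(self, x, y):
        g = self.predict(x) - y                          # gradient of 0.5 * (f(x) - y)^2 w.r.t. f(x)
        if not self.centers:
            self.centers.append(x); self.w.append(-self.eta * g); return
        D = np.array(self.centers)
        K = k(D[:, None, :], D[None, :, :], self.sigma) + 1e-8 * np.eye(len(D))
        kx = k(D, x, self.sigma)
        a = np.linalg.solve(K, kx)                       # best expansion of k(., x) over the dictionary
        residual = k(x, x, self.sigma) - kx @ a          # how much of k(., x) the dictionary misses
        if residual > self.nu:                           # nearly new direction: grow the dictionary
            self.centers.append(x); self.w.append(-self.eta * g)
        else:                                            # otherwise spread the update over existing bases
            self.w = list(np.array(self.w) - self.eta * g * a)

model = BudgetedOnlineKernel()
rng = np.random.default_rng(4)
for _ in range(200):
    x = rng.standard_normal(3)
    model.update(x, np.sin(x[0]) + 0.1 * rng.standard_normal())

Because the dictionary grows only when the dependence test fails, the per-round cost is governed by the dictionary size rather than the number of rounds, which is the flavour of constant per-round complexity the paper aims for.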
8

Bouboulis, Pantelis, Sergios Theodoridis, and Konstantinos Slavakis. "Edge Preserving Image Denoising in Reproducing Kernel Hilbert Spaces." In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.652.

9

Deng, Zhaoda, J. Gregory, and A. Kurdila. "Learning theory with consensus in reproducing kernel Hilbert spaces." In 2012 American Control Conference - ACC 2012. IEEE, 2012. http://dx.doi.org/10.1109/acc.2012.6315086.

10

Kurdila, Andrew, and Yu Lei. "Adaptive control via embedding in reproducing kernel Hilbert spaces." In 2013 American Control Conference (ACC). IEEE, 2013. http://dx.doi.org/10.1109/acc.2013.6580354.


Reports on the topic "Kernel Hilbert Spaces"

1

Fukumizu, Kenji, Francis R. Bach, and Michael I. Jordan. Dimensionality Reduction for Supervised Learning With Reproducing Kernel Hilbert Spaces. Fort Belvoir, VA: Defense Technical Information Center, May 2003. http://dx.doi.org/10.21236/ada446572.

