Dissertations / Theses on the topic 'Kernel Hilbert Spaces'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Kernel Hilbert Spaces.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Tipton, James Edward. "Reproducing Kernel Hilbert spaces and complex dynamics." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2284.
Bhujwalla, Yusuf. "Nonlinear System Identification with Kernels: Applications of Derivatives in Reproducing Kernel Hilbert Spaces." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0315/document.
This thesis will focus exclusively on the application of kernel-based nonparametric methods to nonlinear identification problems. As for other nonlinear methods, two key questions in kernel-based identification are how to define a nonlinear model (kernel selection) and how to tune the complexity of the model (regularisation). The following chapter will discuss how these questions are usually dealt with in the literature. The principal contribution of this thesis is the presentation and investigation of two optimisation criteria (one existing in the literature and one novel proposition) for structural approximation and complexity tuning in kernel-based nonlinear system identification. Both methods are based on the idea of incorporating feature-based complexity constraints into the optimisation criterion, by penalising derivatives of functions. Essentially, such methods offer the user flexibility in the definition of a kernel function and the choice of regularisation term, which opens new possibilities with respect to how nonlinear models can be estimated in practice. Both methods bear strong links with other methods from the literature, which will be examined in detail in Chapters 2 and 3 and will form the basis of the subsequent developments of the thesis. Whilst analogy will be made with parallel frameworks, the discussion will be rooted in the framework of Reproducing Kernel Hilbert Spaces (RKHS). Using RKHS methods will allow analysis of the methods presented from both a theoretical and a practical point of view. Furthermore, the methods developed will be applied to several identification 'case studies', comprising both simulation and real-data examples, notably:
• Structural detection in static nonlinear systems.
• Controlling smoothness in LPV models.
• Complexity tuning using structural penalties in NARX systems.
• Internet traffic modelling using kernel methods.
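As background for the entry above: the baseline estimator in kernel-based nonlinear identification is kernel ridge regression, which the thesis extends with derivative-based penalties. The following is a minimal sketch, not the thesis's method; the Gaussian kernel, its bandwidth, the regularisation weight and the synthetic system are all illustrative assumptions, and the derivative penalties themselves are omitted.

```python
import numpy as np

def gaussian_kernel(X, Y, width=0.3):
    # Gram matrix of the Gaussian (RBF) kernel between row sets X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))                      # input samples
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(60)  # noisy nonlinear response

lam = 1e-3                                   # regularisation weight (illustrative)
K = gaussian_kernel(X, X)
# Representer theorem: the estimate is f(.) = sum_i alpha_i k(x_i, .)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xnew):
    return gaussian_kernel(Xnew, X) @ alpha

Xtest = np.linspace(-0.9, 0.9, 50)[:, None]
err = np.max(np.abs(predict(Xtest) - np.sin(3 * Xtest[:, 0])))
```

A derivative penalty of the kind studied in the thesis would replace the `lam * np.eye(...)` term with a penalty matrix built from kernel derivatives, changing the linear system but not the overall structure.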
Struble, Dale William. "Wavelets on manifolds and multiscale reproducing kernel Hilbert spaces." 2007. http://proquest.umi.com/pqdweb?did=1407687581&sid=1&Fmt=2&clientId=3739&RQT=309&VName=PQD.
Quiggin, Peter Philip. "Generalisations of Pick's theorem to reproducing Kernel Hilbert spaces." Thesis, Lancaster University, 1994. http://eprints.lancs.ac.uk/61962/.
Marx, Gregory. "The Complete Pick Property and Reproducing Kernel Hilbert Spaces." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/24783.
Master of Science
Giménez, Febrer Pere Joan. "Matrix completion with prior information in reproducing kernel Hilbert spaces." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671718.
In matrix completion, the goal is to recover a matrix from a subset of observable entries. The most effective methods rely on the idea that the unknown matrix is of low rank. Being low-rank, its entries are a function of a few coefficients that can be estimated provided there are enough observations. Thus, in matrix completion the solution is obtained as the minimum-rank matrix that best fits the visible entries. Besides low rank, the unknown matrix may have other structural properties that can be exploited in the recovery process. In a smooth matrix, entries at nearby positions can be expected to have similar values. Likewise, groups of columns or rows may be known to be similar. This relational information is provided through various means such as covariance matrices or graphs, with the drawback that these cannot be derived from the data matrix since it is incomplete. This thesis deals with matrix completion with prior information, and presents methodologies that can be applied to a variety of situations. In the first part, the columns of the unknown matrix are identified as signals on a graph that is known beforehand. The adjacency matrix of the graph is then used to compute an initial point for a proximal gradient algorithm, in order to reduce the number of iterations needed to reach the solution. Next, under the assumption that the signals are smooth, the graph Laplacian is incorporated into the problem formulation in order to enforce smoothness in the solution. This results in noise reduction in the observed matrix and a lower error, which is demonstrated through theoretical analysis and numerical simulations. The second part of the thesis introduces tools to exploit prior information by means of reproducing kernel Hilbert spaces.
Since a kernel measures the similarity between two points in a space, it can encode any type of information, such as feature vectors, dictionaries or graphs. By associating each column and row of the unknown matrix with an element in a set, and defining a pair of kernels that measure similarity between columns or rows, the unknown entries can be extrapolated by means of the kernel functions. A method based on kernel regression is presented, with two additional variants that reduce the computational cost. The proposed methods prove to be competitive with existing techniques, especially when the number of observations is very low. In addition, an analysis of the mean squared error and the generalization error is detailed. For the generalization error, the transductive setting is adopted, which measures the ability of an algorithm to transfer information from a labeled sample set to an unlabeled one. Error bounds are then derived for the proposed and existing algorithms using Rademacher complexity, and numerical tests confirming the theoretical results are presented. Finally, the thesis explores the question of how to choose the observable entries of the matrix so as to minimize the recovery error of the complete matrix. A passive sampling strategy is proposed, meaning that no labels need to be known in order to design the sampling distribution; only the kernel functions are needed. The method is based on building the best Nyström approximation to the kernel matrix by sampling the columns according to their leverage scores, a metric that arises naturally in the theoretical analysis.
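The passive sampling idea closing this abstract can be sketched as follows: sample columns of a kernel Gram matrix according to leverage scores, then form the Nyström approximation from the sampled columns. This is a minimal sketch under illustrative assumptions (exact ridge leverage scores on a small synthetic problem), not the thesis's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # RBF Gram matrix

lam = 1e-2
# ridge leverage scores: diagonal of K (K + lam I)^{-1}
scores = np.diag(K @ np.linalg.inv(K + lam * np.eye(100)))
p = scores / scores.sum()            # sampling distribution over columns

m = 30
idx = rng.choice(100, size=m, replace=False, p=p)   # leverage-score sampling
C = K[:, idx]                        # sampled columns
W = K[np.ix_(idx, idx)]              # intersection block
K_nys = C @ np.linalg.pinv(W) @ C.T  # Nystrom approximation of K

rel_err = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
```

Note that the scores here are computed from the full Gram matrix for clarity; in practice approximate leverage scores are used precisely to avoid forming K.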
Dieuleveut, Aymeric. "Stochastic approximation in Hilbert spaces." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE059/document.
The goal of supervised machine learning is to infer relationships between a phenomenon one seeks to predict and "explanatory" variables. To that end, multiple occurrences of the phenomenon are observed, from which a prediction rule is constructed. The last two decades have witnessed the appearance of very large data-sets, both in terms of the number of observations (e.g., in image analysis) and in terms of the number of explanatory variables (e.g., in genetics). This has raised two challenges: first, avoiding the pitfall of over-fitting, especially when the number of explanatory variables is much higher than the number of observations; and second, dealing with the computational constraints, such as when the mere resolution of a linear system becomes a difficulty of its own. Algorithms that take their roots in stochastic approximation methods tackle both of these difficulties simultaneously: these stochastic methods dramatically reduce the computational cost, without degrading the quality of the proposed prediction rule, and they can naturally avoid over-fitting. As a consequence, the core of this thesis will be the study of stochastic gradient methods. The popular parametric methods give predictors which are linear functions of a set of explanatory variables. However, they often result in an imprecise approximation of the underlying statistical structure. In the non-parametric setting, which is paramount in this thesis, this restriction is lifted. The class of functions from which the predictor is proposed depends on the observations. In practice, these methods have multiple purposes, and are essential for learning with non-vectorial data, which can be mapped onto a vector in a functional space using a positive definite kernel. This makes it possible to use algorithms designed for vectorial data, but requires the analysis to be made in the associated non-parametric space: the reproducing kernel Hilbert space.
Moreover, the analysis of non-parametric regression also sheds some light on the parametric setting when the number of predictors is much larger than the number of observations. The first contribution of this thesis is to provide a detailed analysis of stochastic approximation in the non-parametric setting, precisely in reproducing kernel Hilbert spaces. This analysis proves optimal convergence rates for the averaged stochastic gradient descent algorithm. As we take special care in using minimal assumptions, it applies to numerous situations, and covers both the setting in which the number of observations is known a priori and situations in which the learning algorithm works in an on-line fashion. The second contribution is an algorithm based on acceleration, which converges at the optimal speed both from the optimization point of view and from the statistical one. In the non-parametric setting, this can improve the convergence rate up to optimality, even in particular regimes for which the first algorithm remains sub-optimal. Finally, the third contribution of the thesis consists of an extension of the framework beyond the least-squares loss. The stochastic gradient descent algorithm is analyzed as a Markov chain. This point of view leads to an intuitive and insightful interpretation that outlines the differences between the quadratic setting and the more general setting. A simple method resulting in provable improvements in the convergence is then proposed.
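The averaged stochastic gradient scheme analysed above can be sketched for least-squares regression in an RKHS, with one observation processed per step as in the online setting. The constant step size, Gaussian kernel and synthetic data are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
X = rng.uniform(-1, 1, size=(n, 1))                       # streaming inputs
y = np.cos(2 * X[:, 0]) + 0.05 * rng.standard_normal(n)   # noisy responses

def k(x, Z):
    # Gaussian kernel between a single point x and the rows of Z
    return np.exp(-((x - Z) ** 2).sum(-1) / 0.5)

alpha = np.zeros(n)      # coefficients of the current iterate f_t = sum_i alpha_i k(x_i, .)
alpha_bar = np.zeros(n)  # coefficients of the averaged iterate
gamma = 0.5              # constant step size (illustrative)

for t in range(n):
    # predict at the newly arrived point, then take one SGD step
    pred = alpha[:t] @ k(X[t], X[:t]) if t > 0 else 0.0
    alpha[t] = gamma * (y[t] - pred)                      # adds one new kernel atom
    alpha_bar = alpha_bar * (t / (t + 1.0)) + alpha / (t + 1.0)  # running average

def f_bar(x):
    return alpha_bar @ k(x, X)

grid = np.linspace(-1, 1, 50)
mse = np.mean([(f_bar(np.array([g])) - np.cos(2 * g)) ** 2 for g in grid])
```

The averaging step is the key ingredient: the final predictor is the running mean of all iterates, which is what yields the optimal rates discussed in the abstract.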
Giulini, Ilaria. "Generalization bounds for random samples in Hilbert spaces." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0026/document.
This thesis focuses on obtaining generalization bounds for random samples in reproducing kernel Hilbert spaces. The approach consists in first obtaining non-asymptotic dimension-free bounds in finite-dimensional spaces using some PAC-Bayesian inequalities related to Gaussian perturbations, and then in generalizing the results to a separable Hilbert space. We first investigate the question of estimating the Gram operator by a robust estimator from an i.i.d. sample, and we present uniform bounds that hold under weak moment assumptions. These results allow us to qualify principal component analysis independently of the dimension of the ambient space and to propose stable versions of it. In the last part of the thesis we present a new algorithm for spectral clustering. It consists in replacing the projection on the eigenvectors associated with the largest eigenvalues of the Laplacian matrix by a power of the normalized Laplacian. This iteration, justified by the analysis of clustering in terms of Markov chains, performs a smooth truncation. We prove non-asymptotic bounds for the convergence of our spectral clustering algorithm applied to a random sample of points in a Hilbert space, deduced from the bounds for the Gram operator in a Hilbert space. Experiments are done in the context of image analysis.
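The smooth-truncation idea above, replacing projection on top eigenvectors by a power of the normalized Laplacian (equivalently, of the random-walk transition matrix), can be sketched on two well-separated point clouds. The data, the power and the threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
A_pts = 0.2 * rng.standard_normal((30, 2))                    # cluster A
B_pts = 0.2 * rng.standard_normal((30, 2)) + [5.0, 0.0]       # cluster B, far away
X = np.vstack([A_pts, B_pts])

W = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))   # affinity matrix
P = W / W.sum(1, keepdims=True)                               # random-walk normalization

M = np.linalg.matrix_power(P, 20)   # the matrix power acts as a smooth spectral truncation
# after many steps, each row's mass stays inside its own (near-disconnected) block;
# thresholding the mass a row puts on the first block recovers the two clusters
mass_on_A = M[:, :30].sum(1)
labels = (mass_on_A > 0.5).astype(int)
```

In the hard-truncation variant one would instead compute the top eigenvectors of P and run k-means on them; the power iteration smoothly damps the small eigenvalues rather than cutting them off.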
Paiva, António R. C. "Reproducing kernel Hilbert spaces for point processes, with applications to neural activity analysis." [Gainesville, Fla.] : University of Florida, 2008. http://purl.fcla.edu/fcla/etd/UFE0022471.
Full textSabree, Aqeeb A. "Positive definite kernels, harmonic analysis, and boundary spaces: Drury-Arveson theory, and related." Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/7023.
Full textBarbian, Christoph [Verfasser], and Jörg [Akademischer Betreuer] Eschmeier. "Beurling-type representation of invariant subspaces in reproducing kernel Hilbert spaces / Christoph Barbian. Betreuer: Jörg Eschmeier." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2011. http://d-nb.info/1051285119/34.
Full textBarbosa, Victor Simões. "Universalidade e ortogonalidade em espaços de Hilbert de reprodução." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55135/tde-18032013-142251/.
We analyze the role of feature maps of a positive definite kernel K acting on a Hausdorff topological space E in two specific properties: the universality of K and the orthogonality, in the reproducing kernel Hilbert space of K, of functions with disjoint supports. Feature maps always exist but may not be unique. A feature map may be interpreted as a kernel-based procedure that maps the data from the original input space E into a potentially higher-dimensional "feature space" in which linear methods may then be used. Both properties, universality and orthogonality from disjoint supports, make sense under continuity of the kernel. Universality of K is equivalent to the fundamentality of {K(·, y) : y ∈ X} in the space of all continuous functions on X, with the topology of uniform convergence, for all nonempty compact subsets X of E. One of the main results in this work is a characterization of the universality of K from a similar concept for the feature map. Orthogonality from disjoint supports seeks the orthogonality of any two functions in the reproducing kernel Hilbert space of K when the functions have disjoint supports.
Plumlee, Matthew. "Fast methods for identifying high dimensional systems using observations." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53544.
Full textFerreira, José Claudinei. "Operadores integrais positivos e espaços de Hilbert de reprodução." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/55/55135/tde-17082010-100716/.
In this work we study theoretical properties of positive integral operators on L²(X; u), in the case when X is a topological space, either locally compact or first countable, and u is a strictly positive measure. The analysis is directed to spectral properties of the operator which are related to some extensions of Mercer's Theorem and to the study of the reproducing kernel Hilbert spaces involved. As applications, we deduce decay rates for the eigenvalues of the operators in a special but relevant case. We also consider smoothness properties for functions in the reproducing kernel Hilbert spaces when X is a subset of the Euclidean space and u is the Lebesgue measure of the space.
Niedzialomski, Robert. "Extension of positive definite functions." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/2595.
Full textKingravi, Hassan. "Reduced-set models for improving the training and execution speed of kernel methods." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51799.
Full textSantana, Júnior Ewaldo éder Carvalho. "EXTRAÇÃO CEGA DE SINAIS COM ESTRUTURAS TEMPORAIS UTILIZANDO ESPAÇOS DE HILBERT REPRODUZIDOS POR KERNEIS." Universidade Federal do Maranhão, 2012. http://tedebc.ufma.br:8080/jspui/handle/tede/476.
This work derives and evaluates a nonlinear method for Blind Source Extraction (BSE) in a Reproducing Kernel Hilbert Space (RKHS) framework. To extract the desired signal from a mixture, a priori information about the autocorrelation function of that signal is translated into a linear transformation of the Gram matrix of the data nonlinearly mapped into the Hilbert space. Our method proved to be more robust than methods presented in the BSE literature with respect to ambiguities in the available a priori information on the signal to be extracted. The approach introduced here can also be seen as a generalization of Kernel Principal Component Analysis to the analysis of autocorrelation matrices at specific time lags. Since the method presented here is a kernelization of Dependent Component Analysis, it is called Kernel Dependent Component Analysis (KDCA). This dissertation also presents an Information-Theoretic Learning perspective on the analysis, studying the transformations in the probability density functions of the extracted signals while linear operations are computed in the RKHS.
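The linear precursor of KDCA described above can be sketched as follows: extract a source with known temporal structure by jointly diagonalizing the zero-lag covariance and the lag-tau autocovariance of the mixtures (an AMUSE/DCA-style step); the kernelized version replaces the data with its image in an RKHS. Signals, lag and mixing matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
t = np.arange(n)
s1 = np.sin(2 * np.pi * t / 20)        # desired source: strong lag-20 structure
s2 = rng.standard_normal(n)            # interferer: temporally white
A = np.array([[1.0, 0.7],
              [0.4, 1.0]])             # mixing matrix (illustrative)
Xm = A @ np.vstack([s1, s2])           # observed mixtures
Xm = Xm - Xm.mean(axis=1, keepdims=True)

C0 = Xm @ Xm.T / n                     # zero-lag covariance of the mixtures
tau = 20                               # a priori known lag of the desired source
Ct = Xm[:, :-tau] @ Xm[:, tau:].T / (n - tau)
Ct = (Ct + Ct.T) / 2                   # symmetrized lag-tau autocovariance

# generalized eigenproblem Ct w = lambda C0 w; the top eigenvector picks out
# the direction with the strongest lag-tau autocorrelation
vals, vecs = np.linalg.eig(np.linalg.solve(C0, Ct))
w = vecs[:, np.argmax(vals.real)].real
est = w @ Xm
corr = abs(np.corrcoef(est, s1)[0, 1])  # recovery up to sign and scale
```

KDCA performs the analogous transformation on the Gram matrix of nonlinearly mapped data, which is what allows nonlinear mixtures to be handled.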
Benelmadani, Djihad. "Contribution à la régression non paramétrique avec un processus erreur d'autocovariance générale et application en pharmacocinétique." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM034/document.
In this thesis, we consider the fixed design regression model with repeated measurements, where the errors form a process with general autocovariance function, i.e. a second order process (stationary or nonstationary) with a non-differentiable covariance function along the diagonal. We are interested, among other problems, in the nonparametric estimation of the regression function of this model. We first consider the well-known kernel regression estimator proposed by Gasser and Müller. We study its asymptotic performance when the number of experimental units and the number of observations tend to infinity. For a regular sequence of designs, we improve the higher rates of convergence of the variance and the bias. We also prove the asymptotic normality of this estimator in the case of correlated errors. Second, we propose a new kernel estimator of the regression function based on a projection property. This estimator is constructed through the autocovariance function of the errors and a specific function belonging to the Reproducing Kernel Hilbert Space (RKHS) associated with the autocovariance function. We study its asymptotic performance using the RKHS properties, which allow us to obtain the optimal convergence rate of the variance. We also prove its asymptotic normality. We show that this new estimator has a smaller asymptotic variance than that of Gasser and Müller. A simulation study is conducted to confirm this theoretical result. Third, we propose a new kernel estimator for the regression function, constructed through the trapezoidal numerical approximation of the kernel regression estimator based on continuous observations. We study its asymptotic performance and prove its asymptotic normality. Moreover, this estimator allows us to obtain the asymptotically optimal sampling design for the estimation of the regression function.
We run a simulation study to test the performance of the proposed estimator in a finite sample setting, where we observe its good performance in terms of Integrated Mean Squared Error (IMSE). In addition, we show the reduction of the IMSE obtained by using the optimal sampling design instead of the uniform design in a finite sample setting. Finally, we consider an application of regression function estimation to pharmacokinetics problems. We propose to use nonparametric kernel methods for the concentration-time curve estimation instead of the classical parametric ones. We demonstrate their good performance via a simulation study and real data analysis. We also investigate the problem of estimating the Area Under the concentration Curve (AUC), for which we introduce a new kernel estimator obtained by integrating the regression function estimator. We show, using a simulation study, that the proposed estimators outperform the classical one in terms of Mean Squared Error. The crucial problem of finding the optimal sampling design for the AUC estimation is investigated using the Generalized Simulated Annealing algorithm.
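The Gasser-Müller estimator at the center of this entry can be sketched on a fixed design: each observation is weighted by the integral of a scaled kernel over the design cell around its design point. Bandwidth, kernel and target function below are illustrative assumptions, and the cell integrals are approximated by midpoint values.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = np.linspace(0, 1, n)                              # fixed design points
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

def gm_estimate(t, h=0.05):
    # Gasser-Muller weights: (1/h) * integral of K((t - u)/h) over each design
    # cell, approximated here by cell width times the kernel at the midpoint
    edges = np.concatenate([[0.0], (x[1:] + x[:-1]) / 2, [1.0]])
    widths = np.diff(edges)
    mids = (edges[1:] + edges[:-1]) / 2
    gauss = lambda u: np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)
    w = widths * gauss((t - mids) / h) / h
    return np.sum(w * y)

grid = np.linspace(0.1, 0.9, 30)                      # evaluate away from the boundary
mse = np.mean([(gm_estimate(t) - np.sin(2 * np.pi * t)) ** 2 for t in grid])
```

Unlike the Nadaraya-Watson estimator, the weights here sum to (approximately) one by construction of the cell integrals, which is what gives the estimator its favorable bias behavior on fixed designs.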
Varagnolo, Damiano. "Distributed Parametric-Nonparametric Estimation in Networked Control Systems." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3421610.
This thesis introduces and analyzes distributed parametric and nonparametric regression algorithms, based on consensus techniques and parametrized by an estimate of the number of sensors in the network. The parametric algorithms assume prior knowledge about the quantities to be estimated, while the nonparametric ones use a reproducing kernel Hilbert space as hypothesis space. From the analysis of the proposed distributed estimators, sufficient conditions are derived which, when satisfied, guarantee that the distributed estimators perform better than the local ones (using the variance of the estimation error as a metric). The same analysis also characterizes the performance loss incurred by using the distributed estimators instead of the centralized, optimal ones (using the Euclidean distance between the two estimates as a metric). In addition, a new algorithm is offered that computes, in a distributed fashion, quality certificates guaranteeing the soundness of the results obtained with the distributed estimators. It is also shown that the proposed distributed nonparametric estimator is in fact an approximated version of the so-called Regularization Networks, and that it requires few computational, memory and inter-sensor communication resources. The case of spatially distributed sensors subject to unknown time delays is then analyzed. It is shown how both the function seen by the sensors and the corresponding delays can be estimated by minimizing suitable functions of inner products in the Hilbert spaces considered earlier.
Because of the importance of knowing the number of agents in the previously proposed algorithms, a new methodology is proposed for developing distributed algorithms that estimate this number, based on the following idea: as a first step, the agents locally generate some numbers at random, drawn from a probability density known to all. The sensors then exchange and modify these data using consensus algorithms such as the average or the maximum; finally, through statistical analysis of the final distribution of the modified data, information can be obtained on how many agents took part in the consensus-and-modification process. One feature of this approach is that the algorithms are completely distributed, since they require no leader-election steps. Another is that the sensors are not required to transmit sensitive information such as identification codes, so the strategy can be implemented even in the presence of privacy concerns. After a rigorous formulation of the paradigm, some practical examples are analyzed and fully characterized from a statistical point of view, and some general theoretical results and asymptotic analyses are offered.
Ren, Haobo. "Functional inverse regression and reproducing kernel Hilbert space." Diss., Texas A&M University, 2005. http://hdl.handle.net/1969.1/4203.
Full textKamari, Halaleh. "Qualité prédictive des méta-modèles construits sur des espaces de Hilbert à noyau auto-reproduisant et analyse de sensibilité des modèles complexes." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASE010.
In this work, the problem of estimating a meta-model of a complex model, denoted m, is considered. The model m depends on d input variables X1, ..., Xd that are independent and have a known law. The meta-model, denoted f∗, approximates the Hoeffding decomposition of m and allows its Sobol indices to be estimated. It belongs to a reproducing kernel Hilbert space (RKHS), denoted H, which is constructed as a direct sum of Hilbert spaces (Durrande et al. (2013)). The estimator of the meta-model, denoted f^, is calculated by minimizing a least-squares criterion penalized by the sum of the Hilbert norm and the empirical L2-norm (Huet and Taupin (2017)). This procedure, called RKHS ridge group sparse, allows one both to select and to estimate the terms in the Hoeffding decomposition, and therefore to select the Sobol indices that are non-zero and estimate them. It makes it possible to estimate even high-order Sobol indices, a point known to be difficult in practice. This work consists of a theoretical part and a practical part. In the theoretical part, I established upper bounds on the empirical L2 risk and the L2 risk of the estimator f^, that is, upper bounds, with respect to the L2-norm and the empirical L2-norm, on the distance between the model m and its estimator f^ in the RKHS H. In the practical part, I developed an R package, called RKHSMetaMod, that implements the RKHS ridge group sparse procedure and a special case of it called the RKHS group lasso procedure. This package can be applied to a known model that is calculable at all points or to an unknown regression model. In order to optimize the execution time and the storage memory, all of the functions of the RKHSMetaMod package, except for one function that is written in R, are written using the C++ libraries GSL and Eigen. These functions are then interfaced with the R environment in order to propose a user-friendly package.
The performance of the package functions, in terms of the predictive quality of the estimator and the estimation of the Sobol indices, is validated by a simulation study.
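As a reminder of what the Sobol indices in this entry quantify: the first-order index of input i is Si = Var(E[m(X) | Xi]) / Var(m(X)). A minimal Monte Carlo sketch using the pick-freeze scheme on a simple test model (the model, sample size and inputs are illustrative; the thesis estimates the indices from the RKHS meta-model instead). For this model the exact values are S = (3/16, 3/4, 0).

```python
import numpy as np

rng = np.random.default_rng(8)
N = 20000

def m(X):
    # simple test model with known Sobol indices (illustrative, not the thesis's)
    return X[:, 0] + 2.0 * X[:, 1] + X[:, 0] * X[:, 2]

X = rng.uniform(-1, 1, size=(N, 3))    # independent inputs with known law
Xp = rng.uniform(-1, 1, size=(N, 3))   # independent copy for pick-freeze
fX = m(X)
var = fX.var()

S = []
for i in range(3):
    Xi = Xp.copy()
    Xi[:, i] = X[:, i]                 # "freeze" coordinate i, resample the others
    S.append(np.mean(fX * m(Xi)) - fX.mean() ** 2)
S = np.array(S) / var                  # estimated first-order Sobol indices
```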
Amaya, Austin J. "Beurling-Lax Representations of Shift-Invariant Spaces, Zero-Pole Data Interpolation, and Dichotomous Transfer Function Realizations: Half-Plane/Continuous-Time Versions." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/27636.
Ph.D.
Das, Suporna. "Frames and reproducing kernels in a Hilbert space." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ54293.pdf.
Full textZhang, Xinhua, and xinhua zhang cs@gmail com. "Graphical Models: Modeling, Optimization, and Hilbert Space Embedding." The Australian National University. ANU College of Engineering and Computer Sciences, 2010. http://thesis.anu.edu.au./public/adt-ANU20100729.072500.
Full textLeon, Ralph Daniel. "Module structure of a Hilbert space." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2469.
Full textJordão, Thaís. "Diferenciabilidade em espaços de Hilbert de reprodução sobre a esfera." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55135/tde-29032012-103159/.
A reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions constructed in a unique manner from a fixed positive definite generating kernel. The values of a function in a reproducing kernel Hilbert space can be reproduced through an elementary operation involving the function itself, the generating kernel and the inner product of the space. In this work, we consider reproducing kernel Hilbert spaces generated by a positive definite kernel on the usual m-dimensional sphere. The main goal is to analyze differentiability properties inherited by the functions in the space when the generating kernel carries a differentiability assumption. That is done in two different cases: using the usual notion of differentiability on the sphere, and using another one defined through multiplicative operators. The second case includes the Laplace-Beltrami derivative and fractional derivatives as well. In both cases we consider specific properties of the embeddings of the reproducing kernel Hilbert space into spaces of smooth functions induced by the notion of differentiability used.
Tullo, Alessandra. "Apprendimento automatico con metodo kernel." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23200/.
Full textAgrawal, Devanshu. "The Complete Structure of Linear and Nonlinear Deformations of Frames on a Hilbert Space." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3003.
Full textWang, Roy Chih Chung. "Adaptive Kernel Functions and Optimization Over a Space of Rank-One Decompositions." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36975.
Full textGräf, Manuel. "Efficient Algorithms for the Computation of Optimal Quadrature Points on Riemannian Manifolds." Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-115287.
Full textPan, Yue. "Currents- and varifolds-based registration of lung vessels and lung surfaces." Thesis, University of Iowa, 2016. https://ir.uiowa.edu/etd/2257.
Full textShin, Hyejin. "Infinite dimensional discrimination and classification." Texas A&M University, 2003. http://hdl.handle.net/1969.1/5832.
Full textBui, Thi Thien Trang. "Modèle de régression pour des données non-Euclidiennes en grande dimension. Application à la classification de taxons en anatomie computationnelle." Thesis, Toulouse, INSA, 2019. http://www.theses.fr/2019ISAT0021.
Full textIn this thesis, we study a regression model with distribution entries and the testing hypothesis problem for signal detection in a regression model. We aim to apply these models in hearing sensitivity measured by the transient evoked otoacoustic emissions (TEOAEs) data to improve our knowledge in the auditory investigation. In the first part, a new distribution regression model for probability distributions is introduced. This model is based on a Reproducing Kernel Hilbert Space (RKHS) regression framework, where universal kernels are built using Wasserstein distances for distributions belonging to \Omega) and \Omega is a compact subspace of the real space. We prove the universal kernel property of such kernels and use this setting to perform regressions on functions. Different regression models are first compared with the proposed one on simulated functional data. We then apply our regression model to transient evoked otoascoutic emission (TEOAE) distribution responses and real predictors of the age. This part is a joint work with Loubes, J-M., Risser, L. and Balaresque, P..In the second part, considering a regression model, we address the question of testing the nullity of the regression function. The testing procedure is available when the variance of the observations is unknown and does not depend on any prior information on the alternative. We first propose a single testing procedure based on a general symmetric kernel and an estimation of the variance of the observations. The corresponding critical values are constructed to obtain non asymptotic level \alpha tests. We then introduce an aggregation procedure to avoid the difficult choice of the kernel and of the parameters of the kernel. The multiple tests satisfy non-asymptotic properties and are adaptive in the minimax sense over several classes of regular alternatives
Ke, Chenlu. "A NEW INDEPENDENCE MEASURE AND ITS APPLICATIONS IN HIGH DIMENSIONAL DATA ANALYSIS." UKnowledge, 2019. https://uknowledge.uky.edu/statistics_etds/41.
Full text
Henchiri, Yousri. "L'approche Support Vector Machines (SVM) pour le traitement des données fonctionnelles." Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20187/document.
Full text
Functional Data Analysis is an important and dynamic area of statistics. It offers effective new tools and proposes new methodological and theoretical developments in the presence of functional-type data (functions, curves, surfaces, ...). The work outlined in this dissertation provides a new contribution to the themes of statistical learning and quantile regression when data can be considered as functions. Special attention is devoted to the Support Vector Machines (SVM) technique, which involves the notion of a Reproducing Kernel Hilbert Space. In this context, the main goal is to extend this nonparametric estimation technique to conditional models that take functional data into account. We investigated the theoretical and practical aspects of the proposed and adapted technique for the following regression models. The first model is the conditional quantile functional model, where the covariate takes its values in a bounded subspace of an infinite-dimensional functional space, the response variable takes its values in a compact subset of the real line, and the observations are i.i.d. The second model is the functional additive quantile regression model, where the response variable depends on a vector of functional covariates. The last model is the conditional quantile functional model in the dependent functional data case. We obtained the weak consistency and a convergence rate of these estimators. Simulation studies are performed to evaluate the performance of the inference procedures. Applications to chemometrics, environmental and climatic data analysis are considered. The good behavior of the SVM estimator is thus highlighted.
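All three models above rest on conditional quantile estimation, which minimizes the pinball (check) loss. A self-contained toy check (data and parameter values are illustrative, not from the thesis) that the empirical pinball-risk minimizer recovers the empirical quantile:

```python
import numpy as np

def pinball(u, tau):
    """Pinball (check) loss: tau*u for u >= 0, (tau-1)*u otherwise."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(2)
y = rng.standard_normal(2000)
tau = 0.8

# The constant q minimizing the empirical pinball risk is the
# empirical tau-quantile of the sample.
grid = np.linspace(-3.0, 3.0, 601)
risk = np.array([pinball(y - q, tau).sum() for q in grid])
q_hat = grid[np.argmin(risk)]
```

In kernel (SVM-type) quantile regression, the same loss is minimized over an RKHS ball instead of over constants.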
Saide, Chafic. "Filtrage adaptatif à l’aide de méthodes à noyau : application au contrôle d’un palier magnétique actif." Thesis, Troyes, 2013. http://www.theses.fr/2013TROY0018/document.
Full text
Function approximation methods based on reproducing kernel Hilbert spaces are of great importance in kernel-based regression. However, the order of the model equals the number of observations, which makes this method inappropriate for online identification. To overcome this drawback, many sparsification methods have been proposed to control the order of the model. The coherence criterion is one of these sparsification methods: it has been shown that a subset of the most relevant past input vectors can be selected to form a dictionary used to identify the model. A kernel function, once introduced into the dictionary, remains unchanged even if the non-stationarity of the system makes it less influential in estimating the output of the model. This observation leads to the idea of adapting the elements of the dictionary to obtain an improved one, with the objective of minimizing the resulting instantaneous mean square error and/or controlling the order of the model. The first part deals with adaptive algorithms using the coherence criterion. The adaptation of the elements of the dictionary using a stochastic gradient method is presented for two types of kernel functions. This part also covers the implementation of adaptive algorithms using the coherence criterion to identify multiple-output models. The second part briefly introduces the active magnetic bearing (AMB). A method is proposed to control an AMB with an adaptive algorithm using kernel methods, replacing an existing method based on neural networks.
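The coherence criterion described above can be sketched in a few lines. The following is a simplified kernel-LMS-style variant (not the thesis' exact algorithms; the kernel, step size, and threshold are illustrative assumptions): a new input joins the dictionary only if its maximal kernel value against the current dictionary stays below a threshold mu0.

```python
import numpy as np

def gauss_k(x, c, sigma=0.5):
    """Gaussian kernel between a scalar input and dictionary elements."""
    return np.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))

def klms_coherence(X, d, mu0=0.5, eta=0.2, sigma=0.5):
    """Kernel LMS with a coherence-based dictionary (illustrative sketch)."""
    dico = [X[0]]
    alpha = np.array([eta * d[0]])
    errs = []
    for x, dn in zip(X[1:], d[1:]):
        k_vec = gauss_k(x, np.array(dico), sigma)
        err = dn - alpha @ k_vec
        errs.append(err)
        if np.max(k_vec) <= mu0:      # coherence test: x is novel enough
            dico.append(x)
            alpha = np.append(alpha, eta * err)
        else:                         # gradient step on existing weights
            alpha = alpha + eta * err * k_vec
    return dico, alpha, np.array(errs)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, 400)
d = np.sin(X)
dico, alpha, errs = klms_coherence(X, d)
```

The dictionary stays small (its size is controlled by mu0, not by the number of samples), which is exactly what makes online identification tractable; the thesis goes further by also adapting the dictionary elements themselves.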
Ammanouil, Rita. "Contributions au démélange non-supervisé et non-linéaire de données hyperspectrales." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4079/document.
Full text
Spectral unmixing has been an active field of research since the earliest days of hyperspectral remote sensing. It is concerned with the case where various materials are found in the spatial extent of a pixel, resulting in a spectrum that is a mixture of the signatures of those materials. Unmixing then reduces to estimating the pure spectral signatures and their corresponding proportions in every pixel. In the hyperspectral unmixing jargon, the pure signatures are known as the endmembers and their proportions as the abundances. This thesis focuses on spectral unmixing of remotely sensed hyperspectral data. In particular, it is aimed at improving the accuracy of the extraction of compositional information from hyperspectral data. This is done through the development of new unmixing techniques in two main contexts, namely the unsupervised and nonlinear cases. In particular, we propose a new technique for blind unmixing, we incorporate spatial information in (linear and nonlinear) unmixing, and we finally propose a new nonlinear mixing model. More precisely, first, an unsupervised unmixing approach based on collaborative sparse regularization is proposed, where the library of endmember candidates is built from the observations themselves. This approach is then extended in order to take into account the presence of noise among the endmember candidates. Second, within the unsupervised unmixing framework, two graph-based regularizations are used in order to incorporate prior local and nonlocal contextual information. Next, within a supervised nonlinear unmixing framework, a new nonlinear mixing model based on vector-valued functions in a reproducing kernel Hilbert space (RKHS) is proposed. The aforementioned model makes it possible to consider different nonlinear functions at different bands, to regularize the discrepancies between these functions, and to account for neighboring nonlinear contributions. Finally, the vector-valued kernel framework is used in order to promote spatial smoothness of the nonlinear part in a kernel-based nonlinear mixing model. Simulations on synthetic and real data show the effectiveness of all the proposed techniques.
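For background, the linear mixing model that underlies the unmixing problem, and a simple nonnegativity-constrained abundance estimate for known endmembers, can be sketched as follows (toy data; the thesis' blind and nonlinear methods are far more elaborate):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_bands, n_endmembers = 50, 3

# Hypothetical endmember signatures (one column per material) and
# true abundances that sum to one.
E = rng.random((n_bands, n_endmembers))
a_true = np.array([0.6, 0.3, 0.1])

# Linear mixing model: pixel spectrum = E @ abundances + noise.
pixel = E @ a_true + 0.001 * rng.standard_normal(n_bands)

# Least-squares abundance estimate under the nonnegativity constraint.
a_hat, _ = nnls(E, pixel)
a_hat = a_hat / a_hat.sum()  # sum-to-one constraint, enforced post hoc
```

The sum-to-one constraint is enforced post hoc here for brevity; dedicated unmixing solvers handle both constraints jointly.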
Kuo, David, and 郭立維. "Reproducing Kernel Hilbert Spaces and Feichtinger’s Conjecture." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/76783081720520823585.
Full text
Feng Chia University (逢甲大學)
Department of Applied Mathematics (應用數學所)
93
We prove a theorem on a class of unitary operators between a Hilbert space and a related reproducing kernel Hilbert space. Using this theorem, we then prove that every bounded frame is a finite union of Riesz sequences, as conjectured by Feichtinger. This in turn implies that for any WH-frame the modulation constant must be irrational.
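For reference, the standard textbook definitions behind this abstract (not taken from the thesis itself): a sequence (f_k) in a Hilbert space H is a frame, respectively a Riesz sequence, if

```latex
% Frame: there exist constants 0 < A <= B < \infty such that
A\,\lVert f\rVert^2 \;\le\; \sum_k \lvert\langle f, f_k\rangle\rvert^2 \;\le\; B\,\lVert f\rVert^2
\qquad \text{for all } f \in H.
% The frame is *bounded* if, in addition, \inf_k \lVert f_k \rVert > 0.

% Riesz sequence: there exist 0 < c <= C such that, for every finite
% scalar sequence (a_k),
c \sum_k \lvert a_k\rvert^2 \;\le\; \Bigl\lVert \sum_k a_k f_k \Bigr\rVert^2 \;\le\; C \sum_k \lvert a_k\rvert^2 .
```

Feichtinger’s conjecture is the statement that every bounded frame can be partitioned into finitely many Riesz sequences.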
Evgeniou, Theodoros, and Massimiliano Pontil. "On the V_γ Dimension for Regression in Reproducing Kernel Hilbert Spaces." 1999. http://hdl.handle.net/1721.1/7262.
Full text
Girosi, Federico. "An Equivalence Between Sparse Approximation and Support Vector Machines." 1997. http://hdl.handle.net/1721.1/7289.
Full text
Bhattacharjee, Monojit. "Analytic Models, Dilations, Wandering Subspaces and Inner Functions." Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4241.
Full text
Zhang, Xinhua. "Graphical Models: Modeling, Optimization, and Hilbert Space Embedding." PhD thesis, 2010. http://hdl.handle.net/1885/49340.
Full textHota, Tapan Kumar. "Subnormality and Moment Sequences." Thesis, 2012. http://etd.iisc.ac.in/handle/2005/3242.
Full textHota, Tapan Kumar. "Subnormality and Moment Sequences." Thesis, 2012. http://hdl.handle.net/2005/3242.
Full textRieger, Christian. "Sampling Inequalities and Applications." Doctoral thesis, 2008. http://hdl.handle.net/11858/00-1735-0000-0006-B3B9-0.
Full text
Vito, Ernesto De, and Andrea Caponnetto. "Risk Bounds for Regularized Least-squares Algorithm with Operator-valued kernels." 2005. http://hdl.handle.net/1721.1/30543.
Full text
Scheuerer, Michael. "A Comparison of Models and Methods for Spatial Interpolation in Statistics and Numerical Analysis." Doctoral thesis, 2009. http://hdl.handle.net/11858/00-1735-0000-0006-B3D5-1.
Full text
Oesting, Marco. "Spatial Interpolation and Prediction of Gaussian and Max-Stable Processes." Doctoral thesis, 2012. http://hdl.handle.net/11858/00-1735-0000-000D-F069-0.
Full text
Caponnetto, Andrea, and Ernesto De Vito. "Fast Rates for Regularized Least-squares Algorithm." 2005. http://hdl.handle.net/1721.1/30539.
Full text
Lessig, Christian. "Modern Foundations of Light Transport Simulation." Thesis, 2012. http://hdl.handle.net/1807/32808.
Full text