Dissertations / Theses on the topic 'Kernel Hilbert Spaces'

Consult the top 50 dissertations / theses for your research on the topic 'Kernel Hilbert Spaces.'


1

Tipton, James Edward. "Reproducing Kernel Hilbert spaces and complex dynamics." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2284.

Abstract:
Both complex dynamics and the theory of reproducing kernel Hilbert spaces have found widespread application over the last few decades. Although complex dynamics started over a century ago, the gravity of its importance was only recently realized, owing to B.B. Mandelbrot's work in the 1980s. B.B. Mandelbrot demonstrated to the world that fractals, which are chaotic patterns containing a high degree of self-similarity, often serve as better models of nature than conventional smooth models. The theory of reproducing kernel Hilbert spaces, also having started over a century ago, did not pick up until N. Aronszajn's classic paper was written in 1950. Since then, the theory has found widespread application in fields including machine learning, quantum mechanics, and harmonic analysis. In the paper Infinite Product Representations of Kernel Functions and Iterated Function Systems, the authors, D. Alpay, P. Jorgensen, I. Lewkowicz, and I. Martiziano, show how a kernel function can be constructed on an attracting set of an iterated function system. Furthermore, they show that when certain conditions are met, one can construct an orthonormal basis of the associated Hilbert space via certain pull-back and multiplier operators. In this thesis we take as our iterated function system the family of iterates of a given rational map. Thus we investigate for which rational maps their kernel construction holds, as well as their orthonormal basis construction. We are able to show that the kernel construction applies to any rational map conjugate to a polynomial with an attracting fixed point at 0. Within such rational maps, we are able to find a family of polynomials for which the orthonormal basis construction holds. It is then natural to ask how the orthonormal basis changes as the polynomial within a given family varies. We are able to determine, for certain families of polynomials, that the dynamics of the corresponding orthonormal basis is well behaved. Finally, we conclude with some possible avenues of future investigation.
2

Bhujwalla, Yusuf. "Nonlinear System Identification with Kernels : Applications of Derivatives in Reproducing Kernel Hilbert Spaces." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0315/document.

Abstract:
This thesis will focus exclusively on the application of kernel-based nonparametric methods to nonlinear identification problems. As for other nonlinear methods, two key questions in kernel-based identification are how to define a nonlinear model (kernel selection) and how to tune the complexity of the model (regularisation). The following chapter will discuss how these questions are usually dealt with in the literature. The principal contribution of this thesis is the presentation and investigation of two optimisation criteria (one existing in the literature and one novel proposition) for structural approximation and complexity tuning in kernel-based nonlinear system identification. Both methods are based on the idea of incorporating feature-based complexity constraints into the optimisation criterion, by penalising derivatives of functions. Essentially, such methods offer the user flexibility in the definition of a kernel function and the choice of regularisation term, which opens new possibilities with respect to how nonlinear models can be estimated in practice. Both methods bear strong links with other methods from the literature, which will be examined in detail in Chapters 2 and 3 and will form the basis of the subsequent developments of the thesis. Whilst analogy will be made with parallel frameworks, the discussion will be rooted in the framework of Reproducing Kernel Hilbert Spaces (RKHS). Using RKHS methods will allow analysis of the methods presented from both a theoretical and a practical point of view. Furthermore, the methods developed will be applied to several identification 'case studies', comprising both simulation and real-data examples, notably:
• Structural detection in static nonlinear systems.
• Controlling smoothness in LPV models.
• Complexity tuning using structural penalties in NARX systems.
• Internet traffic modelling using kernel methods.
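The derivative-penalisation idea described above can be illustrated with a toy smoother. Below is a rough sketch, assuming a Gaussian kernel and a quadratic penalty on the model's first derivative evaluated on a grid; the data, kernel width, and weights lam and mu are all illustrative, not the thesis's actual criteria.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth nonlinear function (illustrative data).
n = 60
x = np.sort(rng.uniform(-2, 2, size=n))
y = np.tanh(2 * x) + 0.1 * rng.normal(size=n)

s2 = 0.3 ** 2
k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * s2))
dk = lambda g, b: -(g[:, None] - b[None, :]) / s2 * k(g, b)  # d/dg of k(g, x_j)

K = k(x, x)
g = np.linspace(-2, 2, 100)
B = dk(g, x)                     # B @ alpha = values of f' on the grid

lam, mu = 1e-3, 1e-1             # RKHS-norm and derivative-penalty weights
# Minimize ||y - K a||^2 + lam * a'K a + mu * ||B a||^2 / len(g).
A = K.T @ K + lam * K + mu * B.T @ B / len(g)
alpha = np.linalg.solve(A, K.T @ y)

print("fit at x = 0, 1:", np.round(k(np.array([0.0, 1.0]), x) @ alpha, 3))
```

Increasing mu flattens the estimate by directly taxing the derivative, which is the structural role such penalties play in the criteria discussed above.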
3

Struble, Dale William. "Wavelets on manifolds and multiscale reproducing kernel Hilbert spaces." Related electronic resource:, 2007. http://proquest.umi.com/pqdweb?did=1407687581&sid=1&Fmt=2&clientId=3739&RQT=309&VName=PQD.

4

Quiggin, Peter Philip. "Generalisations of Pick's theorem to reproducing Kernel Hilbert spaces." Thesis, Lancaster University, 1994. http://eprints.lancs.ac.uk/61962/.

Abstract:
Pick's theorem states that there exists a function in H^∞, which is bounded by 1 and takes given values at given points, if and only if a certain matrix is positive. H^∞ is the space of multipliers of H^2, and this theorem has a natural generalisation when H^∞ is replaced by the space of multipliers of a general reproducing kernel Hilbert space H(K) (where K is the reproducing kernel). J. Agler showed that this generalised theorem is true when H(K) is a certain Sobolev space or the Dirichlet space. This thesis widens Agler's approach to cover reproducing kernel Hilbert spaces in general and derives sufficient (and usable) conditions on the kernel K for the generalised Pick's theorem to be true for H(K). These conditions are then used to prove Pick's theorem for certain weighted Hardy and Sobolev spaces and for a functional Hilbert space introduced by Saitoh. The reproducing kernel approach is then used to derive results for several related problems. These include the uniqueness of the optimal interpolating multiplier, the case of operator-valued functions, and a proof of the Adamyan-Arov-Krein theorem.
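The matrix in Pick's theorem is easy to exhibit numerically. A minimal sketch, assuming numpy and illustrative interpolation data: for the Szegő kernel of H^2 one builds the classical Pick matrix and tests positive semidefiniteness, which by Pick's theorem decides whether an interpolant in the unit ball of H^∞ exists.

```python
import numpy as np

# Hypothetical interpolation data: seek f in the unit ball of H^infinity
# with f(z_k) = w_k, where the z_k lie in the open unit disk.
z = np.array([0.1 + 0.2j, -0.3j, 0.5])
w = np.array([0.0, 0.2, 0.4 - 0.1j])

# Szego kernel K(z_j, z_k) = 1 / (1 - z_j * conj(z_k)); the Pick matrix is
# P[j, k] = (1 - w_j * conj(w_k)) * K(z_j, z_k).
K = 1.0 / (1.0 - np.outer(z, z.conj()))
P = (1.0 - np.outer(w, w.conj())) * K

# Pick's theorem: an interpolant exists iff P is positive semidefinite.
eigvals = np.linalg.eigvalsh(P)          # P is Hermitian by construction
print("Pick matrix eigenvalues:", np.round(eigvals, 6))
print("solvable:", bool(np.all(eigvals >= -1e-12)))
```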
5

Marx, Gregory. "The Complete Pick Property and Reproducing Kernel Hilbert Spaces." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/24783.

Abstract:
We present two approaches towards a characterization of the complete Pick property. We first discuss the lurking isometry method used in a paper by J.A. Ball, T.T. Trent, and V. Vinnikov. They show that a nondegenerate, positive kernel has the complete Pick property if $1/k$ has one positive square. We also look at the one-point extension approach developed by P. Quiggin, which leads to a necessary and sufficient condition for a positive kernel to have the complete Pick property. We conclude by connecting the two characterizations of the complete Pick property.
Master of Science
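The "one positive square" condition can be probed numerically. A minimal sketch with illustrative sample points: for the Szegő kernel k(z,w) = 1/(1 − z w̄), the matrix with entries 1 − 1/k(z_i, z_j) equals [z_i z̄_j], a positive kernel of rank one, i.e. 1/k has one positive square.

```python
import numpy as np

# Sample points in the unit disk (illustrative choice).
z = np.array([0.1, 0.4 + 0.3j, -0.2 + 0.5j, 0.6j])

# Szego kernel k(z_i, z_j) = 1 / (1 - z_i * conj(z_j)).
k = 1.0 / (1.0 - np.outer(z, z.conj()))

# For this complete Pick kernel, 1 - 1/k is a positive kernel of rank one.
M = 1.0 - 1.0 / k                          # equals outer(z, conj(z)) here
eig = np.linalg.eigvalsh(M)
print("eigenvalues of 1 - 1/k:", np.round(eig, 12))
print("rank:", np.linalg.matrix_rank(M))   # expect 1: one positive square
```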
6

Giménez, Febrer Pere Joan. "Matrix completion with prior information in reproducing kernel Hilbert spaces." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671718.

Abstract:
In matrix completion, the objective is to recover an unknown matrix from a small subset of observed entries. Most successful methods for recovering the unknown entries are based on the assumption that the unknown full matrix has low rank. By having low rank, each of its entries is obtained as a function of a small number of coefficients which can be accurately estimated provided that there are enough available observations. Hence, in low-rank matrix completion the estimate is given by the matrix of minimum rank that fits the observed entries. Besides low rank, the unknown matrix might exhibit other structural properties which can be leveraged in the recovery process. In a smooth matrix, it can be expected that entries that are close in index distance will have similar values. Similarly, groups of rows or columns can be known to contain similarly valued entries according to certain relational structures. This relational information is conveyed through different means such as covariance matrices or graphs, with the inconvenience that these cannot be derived from the data matrix itself since it is incomplete. Hence, any knowledge on how the matrix entries are related to one another must be derived from prior information. This thesis deals with matrix completion with prior information, and presents an outlook that generalizes to many situations. In the first part, the columns of the unknown matrix are cast as graph signals with a graph known beforehand. Here, the adjacency matrix of the graph is used to calculate an initial point for a proximal gradient algorithm in order to reduce the iterations needed to converge to a solution. Then, under the assumption that the graph signals are smooth, the graph Laplacian is incorporated into the problem formulation with the aim of enforcing smoothness on the solution. This results in an effective denoising of the observed matrix and reduced error, which is shown through theoretical analysis of the proximal gradient coupled with Laplacian regularization, and through numerical tests. The second part of the thesis introduces a framework to exploit prior information through reproducing kernel Hilbert spaces. Since a kernel measures similarity between two points in an input set, it enables the encoding of any prior information such as feature vectors, dictionaries or connectivity on a graph. By associating each column and row of the unknown matrix with an item in a set, and defining a pair of kernels measuring similarity between columns or rows, the missing entries can be extrapolated by means of the kernel functions. A method based on kernel regression is presented, with two additional variants aimed at reducing computational cost, and an online implementation. These methods prove to be competitive with existing techniques, especially when the number of observations is very small. Furthermore, mean-square error and generalization error analyses are carried out, shedding light on the factors impacting algorithm performance. For the generalization error analysis, the focus is on the transductive case, which measures the ability of an algorithm to transfer knowledge from a set of labelled inputs to an unlabelled set. Here, bounds are derived for the proposed and existing algorithms by means of the transductive Rademacher complexity, and numerical tests confirming the theoretical findings are presented. Finally, the thesis explores the question of how to choose the observed entries of a matrix in order to minimize the recovery error of the full matrix.
A passive sampling approach is presented, which entails that no labelled inputs are needed to design the sampling distribution; only the input set and kernel functions are required. The approach is based on building the best Nyström approximation to the kernel matrix by sampling the columns according to their leverage scores, a metric that arises naturally in the theoretical analysis to find an optimal sampling distribution.
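A toy version of the kernel-based completion idea can be written in a few lines. The sketch below is purely illustrative (synthetic side information, a Gaussian kernel, and a plain ridge fit of F = Kr A Kc on the observed entries); it is not the thesis's estimator or its faster variants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical side information: feature vectors for rows and columns.
n_r, n_c = 8, 6
xr = rng.normal(size=(n_r, 2))
xc = rng.normal(size=(n_c, 2))

gauss = lambda X: np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
Kr, Kc = gauss(xr), gauss(xc)             # row and column kernel matrices

# Ground-truth smooth matrix and a small set of observed entries.
M = np.outer(np.sin(xr[:, 0]), np.cos(xc[:, 0]))
mask = rng.random((n_r, n_c)) < 0.4       # ~40% of entries observed

# Model F = Kr @ A @ Kc; fit A by ridge regression on observed entries.
# Column-major vectorization gives vec(F) = (Kc kron Kr) vec(A).
Phi = np.kron(Kc, Kr)[mask.ravel(order="F")]
y = M.ravel(order="F")[mask.ravel(order="F")]
lam = 1e-3
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
F = Kr @ a.reshape((n_r, n_c), order="F") @ Kc

print("RMSE on unobserved entries:",
      np.sqrt(np.mean((F - M)[~mask] ** 2)))
```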
7

Dieuleveut, Aymeric. "Stochastic approximation in Hilbert spaces." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE059/document.

Abstract:
The goal of supervised machine learning is to infer relationships between a phenomenon one seeks to predict and "explanatory" variables. To that end, multiple occurrences of the phenomenon are observed, from which a prediction rule is constructed. The last two decades have witnessed the appearance of very large data-sets, both in terms of the number of observations (e.g., in image analysis) and in terms of the number of explanatory variables (e.g., in genetics). This has raised two challenges: first, avoiding the pitfall of over-fitting, especially when the number of explanatory variables is much higher than the number of observations; and second, dealing with the computational constraints, such as when the mere resolution of a linear system becomes a difficulty of its own. Algorithms that take their roots in stochastic approximation methods tackle both of these difficulties simultaneously: these stochastic methods dramatically reduce the computational cost, without degrading the quality of the proposed prediction rule, and they can naturally avoid over-fitting. As a consequence, the core of this thesis will be the study of stochastic gradient methods. The popular parametric methods give predictors which are linear functions of a set of explanatory variables. However, they often result in an imprecise approximation of the underlying statistical structure. In the non-parametric setting, which is paramount in this thesis, this restriction is lifted. The class of functions from which the predictor is proposed depends on the observations. In practice, these methods have multiple purposes, and are essential for learning with non-vectorial data, which can be mapped onto a vector in a functional space using a positive definite kernel. This makes it possible to use algorithms designed for vectorial data, but requires the analysis to be made in the associated non-parametric space: the reproducing kernel Hilbert space. Moreover, the analysis of non-parametric regression also sheds some light on the parametric setting when the number of predictors is much larger than the number of observations. The first contribution of this thesis is to provide a detailed analysis of stochastic approximation in the non-parametric setting, precisely in reproducing kernel Hilbert spaces. This analysis proves optimal convergence rates for the averaged stochastic gradient descent algorithm. As we take special care in using minimal assumptions, it applies to numerous situations, and covers both the settings in which the number of observations is known a priori and situations in which the learning algorithm works in an on-line fashion. The second contribution is an algorithm based on acceleration, which converges at optimal speed, both from the optimization point of view and from the statistical one. In the non-parametric setting, this can improve the convergence rate up to optimality, even in particular regimes for which the first algorithm remains sub-optimal. Finally, the third contribution of the thesis consists in an extension of the framework beyond the least-squares loss. The stochastic gradient descent algorithm is analyzed as a Markov chain. This point of view leads to an intuitive and insightful interpretation that outlines the differences between the quadratic setting and the more general setting. A simple method resulting in provable improvements in the convergence is then proposed.
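The averaged stochastic gradient algorithm whose rates are analyzed here has a compact kernelized form. A minimal sketch for least-squares regression with a Gaussian kernel, with illustrative data and a constant step size (not the thesis's exact tuning): each sample adds one expansion coefficient, and the Polyak-Ruppert average of the iterates is returned.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-d regression data (illustrative).
n = 200
X = rng.uniform(-1, 1, size=n)
y = np.sin(3 * X) + 0.1 * rng.normal(size=n)

k = lambda a, b: np.exp(-((a - b) ** 2) / (2 * 0.2 ** 2))  # Gaussian kernel

gamma = 0.5                     # constant step size (illustrative choice)
alpha = np.zeros(n)             # coefficient of k(X[i], .) in the iterate

for t in range(n):
    # Prediction of the current (non-averaged) iterate at the new point.
    pred = alpha[:t] @ k(X[:t], X[t])
    alpha[t] = -gamma * (pred - y[t])   # stochastic gradient step, squared loss

# Polyak-Ruppert averaging of f_1, ..., f_n: the coefficient introduced at
# step t appears in n - t of the averaged iterates.
alpha_avg = alpha * (n - np.arange(n)) / n

x_test = np.linspace(-1, 1, 5)
f_avg = np.array([alpha_avg @ k(X, xt) for xt in x_test])
print(np.round(f_avg, 3), np.round(np.sin(3 * x_test), 3))
```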
8

Giulini, Ilaria. "Generalization bounds for random samples in Hilbert spaces." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0026/document.

Abstract:
This thesis focuses on obtaining generalization bounds for random samples in reproducing kernel Hilbert spaces. The approach consists in first obtaining non-asymptotic dimension-free bounds in finite-dimensional spaces, using some PAC-Bayesian inequalities related to Gaussian perturbations, and then in generalizing the results to a separable Hilbert space. We first investigate the question of estimating the Gram operator by a robust estimator from an i.i.d. sample, and we present uniform bounds that hold under weak moment assumptions. These results allow us to qualify principal component analysis independently of the dimension of the ambient space and to propose stable versions of it. In the last part of the thesis we present a new algorithm for spectral clustering. It consists in replacing the projection on the eigenvectors associated with the largest eigenvalues of the Laplacian matrix by a power of the normalized Laplacian. This iteration, justified by the analysis of clustering in terms of Markov chains, performs a smooth truncation. We prove non-asymptotic bounds for the convergence of our spectral clustering algorithm applied to a random sample of points in a Hilbert space, deduced from the bounds for the Gram operator in a Hilbert space. Experiments are done in the context of image analysis.
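The modified spectral clustering step is short in code. A rough sketch, assuming a Gaussian affinity, three synthetic clusters, and an illustrative power q: iterating the normalized operator raises its eigenvalues to a power, which acts as a smooth truncation of the spectrum, and the effective rank then suggests the number of clusters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three well-separated 2-d blobs (illustrative data).
X = np.concatenate([rng.normal(c, 0.1, size=(30, 2))
                    for c in ([0, 0], [3, 0], [0, 3])])

# Gaussian affinity and symmetric normalization W -> D^{-1/2} W D^{-1/2}.
d2 = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
W = np.exp(-d2 / (2 * 0.5 ** 2))
Dinv = 1.0 / np.sqrt(W.sum(axis=1))
A = Dinv[:, None] * W * Dinv[None, :]     # eigenvalues lie in [-1, 1]

# A power of A smoothly truncates the spectrum: eigenvalues near 1
# (roughly one per cluster) survive, the rest decay geometrically.
q = 20
Aq = np.linalg.matrix_power(A, q)

# The effective rank of A^q then estimates the number of clusters.
eig = np.linalg.eigvalsh(Aq)
print("number of clusters ~", int(np.sum(eig > 0.5 * eig.max())))
```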
9

Paiva, António R. C. "Reproducing kernel Hilbert spaces for point processes, with applications to neural activity analysis." [Gainesville, Fla.] : University of Florida, 2008. http://purl.fcla.edu/fcla/etd/UFE0022471.

10

Sabree, Aqeeb A. "Positive definite kernels, harmonic analysis, and boundary spaces: Drury-Arveson theory, and related." Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/7023.

Abstract:
A reproducing kernel Hilbert space (RKHS) is a Hilbert space $\mathscr{H}$ of functions with the property that the values $f(x)$ for $f \in \mathscr{H}$ are reproduced from the inner product in $\mathscr{H}$. Recent applications are found in stochastic processes (Itô calculus), harmonic analysis, complex analysis, learning theory, and machine learning algorithms. This research began with the study of applications of RKHSs to areas such as learning theory, sampling theory, and harmonic analysis. From the Moore-Aronszajn theorem, we have an explicit correspondence between reproducing kernel Hilbert spaces (RKHS) and reproducing kernel functions—also called positive definite kernels or positive definite functions. The focus here is on the duality between positive definite functions and their boundary spaces; these boundary spaces often lead to the study of Gaussian processes or Brownian motion. It is known that every reproducing kernel Hilbert space has an associated generalized boundary probability space. The Arveson (reproducing) kernel is $K(z,w) = \frac{1}{1-\langle z,w\rangle_{\mathbb{C}^d}}$, $z,w \in \mathbb{B}_d$, and Arveson showed that this kernel does not follow the boundary analysis we were finding in other RKHSs. Thus, we were led to define a new reproducing kernel on the unit ball in complex $n$-space, and naturally this led to the study of a new reproducing kernel Hilbert space. This reproducing kernel Hilbert space stems from boundary analysis of the Arveson kernel. The construction of the new RKHS resolves the problem we faced while researching "natural" boundary spaces (for the Drury-Arveson RKHS) that yield boundary factorizations: \[K(z,w) = \int_{\mathcal{B}} K^{\mathcal{B}}_z(b)\overline{K^{\mathcal{B}}_w(b)}\,d\mu(b), \quad z,w \in \mathbb{B}_d,\ b \in \mathcal{B} \qquad \textit{(Factorization of } K\textit{)}.\] Results from classical harmonic analysis on the disk (the Hardy space) are generalized and extended to the new RKHS. In particular, our main theorem proves that, relaxing the criterion to the contractive property, we can carry out the generalization that Arveson's paper showed is not possible when the criterion is an isometry.
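For concreteness, the Drury-Arveson kernel can be evaluated and its positivity checked on sample points. A small sketch, with illustrative random points in the unit ball of C^2:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random points in the open unit ball of C^2.
d, m = 2, 6
Z = rng.normal(size=(m, d)) + 1j * rng.normal(size=(m, d))
Z *= rng.uniform(0.1, 0.9, size=(m, 1)) / np.linalg.norm(Z, axis=1, keepdims=True)

# Drury-Arveson kernel K(z, w) = 1 / (1 - <z, w>_{C^d}).
G = 1.0 / (1.0 - Z @ Z.conj().T)

# Positive definiteness shows up as a PSD Gram matrix.
print("min eigenvalue:", np.linalg.eigvalsh(G).min().round(6))
```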
11

Barbian, Christoph. "Beurling-type representation of invariant subspaces in reproducing kernel Hilbert spaces." Doctoral thesis, supervised by Jörg Eschmeier. Saarbrücken: Saarländische Universitäts- und Landesbibliothek, 2011. http://d-nb.info/1051285119/34.

12

Barbosa, Victor Simões. "Universalidade e ortogonalidade em espaços de Hilbert de reprodução." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55135/tde-18032013-142251/.

Abstract:
We analyze the role of feature maps of a positive definite kernel K acting on a Hausdorff topological space E in two specific properties: the universality of K and the orthogonality in the reproducing kernel Hilbert space of K from disjoint supports. Feature maps always exist but may not be unique. A feature map may be interpreted as a kernel-based procedure that maps the data from the original input space E into a potentially higher-dimensional "feature space" in which linear methods may then be used. Both properties, universality and orthogonality from disjoint supports, make sense under continuity of the kernel. Universality of K is equivalent to the fundamentality of {K(·, y) : y ∈ X} in the space of all continuous functions on X, with the topology of uniform convergence, for all nonempty compact subsets X of E. One of the main results in this work is a characterization of the universality of K from a similar concept for the feature map. Orthogonality from disjoint supports seeks the orthogonality of any two functions in the reproducing kernel Hilbert space of K when the functions have disjoint supports.
13

Plumlee, Matthew. "Fast methods for identifying high dimensional systems using observations." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53544.

Abstract:
This thesis proposes new analysis tools for simulation models in the presence of data. To achieve a representation close to reality, simulation models are typically endowed with a set of inputs, termed parameters, that represent several controllable, stochastic or unknown components of the system. Because these models often utilize computationally expensive procedures, even modern supercomputers require a nontrivial amount of time, money, and energy to run them for complex systems. Existing statistical frameworks avoid repeated evaluations of deterministic models through an emulator, constructed by conducting an experiment on the code. In high dimensional scenarios, the traditional framework for emulator-based analysis can fail due to the computational burden of inference. This thesis proposes a new class of experiments where inference from half a million observations is possible in seconds versus the days required for the traditional technique. In a case study presented in this thesis, the parameter of interest is a function as opposed to a scalar or a set of scalars, meaning the problem exists in the high dimensional regime. This work develops a new modeling strategy to nonparametrically study the functional parameter using Bayesian inference. Stochastic simulations are also investigated in the thesis. I describe the development of emulators through a framework termed quantile kriging, which allows for nonparametric representations of the stochastic behavior of the output, whereas previous work has focused on normally distributed outputs. Furthermore, this work studied asymptotic properties of this methodology that yielded practical insights. Under certain regularity conditions, the following result holds: by using an experiment that has the appropriate ratio of replications to sets of different inputs, we can achieve an optimal rate of convergence. Additionally, this method provides the basic tool for the study of defect patterns, and a case study is explored.
14

Ferreira, José Claudinei. "Operadores integrais positivos e espaços de Hilbert de reprodução." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/55/55135/tde-17082010-100716/.

Abstract:
In this work we study theoretical properties of positive integral operators on 'L POT. 2'(X; u), in the case when X is a topological space, either locally compact or first countable, and u is a strictly positive measure. The analysis is directed to spectral properties of the operator which are related to some extensions of Mercer's Theorem and to the study of the reproducing kernel Hilbert spaces involved. As applications, we deduce decay rates for the eigenvalues of the operators in a special but relevant case. We also consider smoothness properties for functions in the reproducing kernel Hilbert spaces when X is a subset of the Euclidean space and u is the Lebesgue measure of the space
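The spectral quantities studied here can be approximated numerically: eigenvalues of scaled kernel Gram matrices converge to those of the integral operator (a Nyström-type approximation). A rough sketch with the Gaussian kernel on [0, 1] and Lebesgue measure, chosen purely to illustrate the eigenvalue decay:

```python
import numpy as np

# Quadrature grid on X = [0, 1] with Lebesgue measure u.
n = 400
x = (np.arange(n) + 0.5) / n

# A positive definite kernel generating the integral operator.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))

# Eigenvalues of (1/n) K approximate those of f -> int K(., y) f(y) dy.
eigs = np.linalg.eigvalsh(K / n)[::-1]
print("leading eigenvalues:", np.round(eigs[:6], 5))
print("ratios:", np.round(eigs[1:6] / eigs[:5], 3))   # fast spectral decay
```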
15

Niedzialomski, Robert. "Extension of positive definite functions." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/2595.

Abstract:
Let $\Omega\subset\mathbb{R}^n$ be an open and connected subset of $\mathbb{R}^n$. We say that a function $F\colon \Omega-\Omega\to\mathbb{C}$, where $\Omega-\Omega=\{x-y\colon x,y\in\Omega\}$, is positive definite if for any $x_1,\ldots,x_m\in\Omega$ and any $c_1,\ldots,c_m\in \mathbb{C}$ we have that $\sum_{j,k=1}^m F(x_j-x_k)c_j\overline{c_k}\geq 0$. Let $F\colon\Omega-\Omega\to\mathbb{C}$ be a continuous positive definite function. We give necessary and sufficient conditions for $F$ to have an extension to a continuous and positive definite function defined on the entire Euclidean space $\mathbb{R}^n$. The conditions are formulated in terms of strong commutativity of certain self-adjoint operators defined on a Hilbert space associated to our positive definite function.
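The defining inequality can be checked directly for a concrete example. A small sketch using the Gaussian F(x) = exp(−|x|²/2), which is positive definite on all of R^n (so its extension problem is trivial), with random illustrative points and coefficients:

```python
import numpy as np

rng = np.random.default_rng(4)

F = lambda t: np.exp(-0.5 * np.sum(t ** 2, axis=-1))   # positive definite on R^n

# Points x_1, ..., x_m in Omega (here a box in R^2) and arbitrary c in C^m.
X = rng.uniform(-1, 1, size=(8, 2))
c = rng.normal(size=8) + 1j * rng.normal(size=8)

# sum_{j,k} F(x_j - x_k) c_j conj(c_k) >= 0 for a positive definite F.
M = F(X[:, None] - X[None, :])
q = c @ M @ c.conj()
print("quadratic form:", q.real.round(6), "| imag ~ 0:", abs(q.imag) < 1e-12)
```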
16

Kingravi, Hassan. "Reduced-set models for improving the training and execution speed of kernel methods." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51799.

Abstract:
This thesis aims to contribute to the area of kernel methods, which are a class of machine learning methods known for their wide applicability and state-of-the-art performance, but which suffer from high training and evaluation complexity. The work in this thesis utilizes the notion of reduced-set models to alleviate the training and testing complexities of these methods in a unified manner. In the first part of the thesis, we use recent results in kernel smoothing and integral-operator learning to design a generic strategy to speed up various kernel methods. In Chapter 3, we present a method to speed up kernel PCA (KPCA), which is one of the fundamental kernel methods for manifold learning, by using reduced-set density estimates (RSDE) of the data. The proposed method induces an integral operator that is an approximation of the ideal integral operator associated to KPCA. It is shown that the error between the ideal and approximate integral operators is related to the error between the ideal and approximate kernel density estimates of the data. In Chapter 4, we derive similar approximation algorithms for Gaussian process regression, diffusion maps, and kernel embeddings of conditional distributions. In the second part of the thesis, we use reduced-set models for kernel methods to tackle online learning in model-reference adaptive control (MRAC). In Chapter 5, we relate the properties of the feature spaces induced by Mercer kernels to make a connection between persistency-of-excitation and the budgeted placement of kernels to minimize tracking and modeling error. In Chapter 6, we use a Gaussian process (GP) formulation of the modeling error to accommodate a larger class of errors, and design a reduced-set algorithm to learn a GP model of the modeling error. Proofs of stability for all the algorithms are presented, and simulation results on a challenging control problem validate the methods.
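As background for the reduced-set speedups, the baseline KPCA computation that the thesis approximates is the eigendecomposition of the doubly centered Gram matrix. A compact sketch with a Gaussian kernel on toy data (the RSDE acceleration itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy data on a noisy circle; KPCA with a Gaussian kernel.
theta = rng.uniform(0, 2 * np.pi, size=100)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(100, 2))

d2 = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
K = np.exp(-d2 / (2 * 0.5 ** 2))

# Double-center the Gram matrix (centering in feature space).
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J

# Principal components: top eigenvectors scaled by 1/sqrt(eigenvalue).
w, V = np.linalg.eigh(Kc)
w, V = w[::-1], V[:, ::-1]
alphas = V[:, :2] / np.sqrt(w[:2])      # expansion coefficients
proj = Kc @ alphas                      # projections of the training data
print("first two KPCA components, first rows:\n", np.round(proj[:3], 3))
```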
17

Santana, Júnior Ewaldo éder Carvalho. "EXTRAÇÃO CEGA DE SINAIS COM ESTRUTURAS TEMPORAIS UTILIZANDO ESPAÇOS DE HILBERT REPRODUZIDOS POR KERNEIS." Universidade Federal do Maranhão, 2012. http://tedebc.ufma.br:8080/jspui/handle/tede/476.

Abstract:
This work derives and evaluates a nonlinear method for Blind Source Extraction (BSE) in a Reproducing Kernel Hilbert Space (RKHS) framework. To extract the desired signal from a mixture, a priori information about the autocorrelation function of that signal is translated into a linear transformation of the Gram matrix of the data nonlinearly transformed into the Hilbert space. Our method proved to be more robust than methods presented in the BSE literature with respect to ambiguities in the available a priori information on the signal to be extracted. The approach introduced here can also be seen as a generalization of Kernel Principal Component Analysis to the analysis of autocorrelation matrices at specific time lags. Since the method presented here is a kernelization of Dependent Component Analysis, it is called Kernel Dependent Component Analysis (KDCA). This dissertation also presents an Information-Theoretic Learning perspective on the analysis, studying the transformations in the probability density functions of the extracted signals while linear operations are calculated in the RKHS.
18

Benelmadani, Djihad. "Contribution à la régression non paramétrique avec un processus erreur d'autocovariance générale et application en pharmacocinétique." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM034/document.

Abstract:
In this thesis, we consider the fixed design regression model with repeated measurements, where the errors form a process with general autocovariance function, i.e. a second order process (stationary or nonstationary) with a non-differentiable covariance function along the diagonal. We are interested, among other problems, in the nonparametric estimation of the regression function of this model. We first consider the well-known kernel regression estimator proposed by Gasser and Müller. We study its asymptotic performance when the number of experimental units and the number of observations tend to infinity. For a regular sequence of designs, we improve the higher-order convergence rates of the variance and the bias. We also prove the asymptotic normality of this estimator in the case of correlated errors. Second, we propose a new kernel estimator of the regression function based on a projection property. This estimator is constructed through the autocovariance function of the errors and a specific function belonging to the Reproducing Kernel Hilbert Space (RKHS) associated with the autocovariance function. We study its asymptotic performance using the RKHS properties, which allow us to obtain the optimal convergence rate of the variance. We also prove its asymptotic normality, and we show that this new estimator has a smaller asymptotic variance than that of Gasser and Müller. A simulation study is conducted to confirm this theoretical result. Third, we propose a new kernel estimator for the regression function, constructed through the trapezoidal numerical approximation of the kernel regression estimator based on continuous observations. We study its asymptotic performance and prove its asymptotic normality. Moreover, this estimator allows us to obtain the asymptotically optimal sampling design for the estimation of the regression function. We run a simulation study to test the performance of the proposed estimator in finite samples, where we observe its good performance in terms of Integrated Mean Squared Error (IMSE). In addition, we show the reduction of the IMSE obtained by using the optimal sampling design instead of the uniform design in finite samples. Finally, we consider an application of regression function estimation to pharmacokinetics problems. We propose to use nonparametric kernel methods for the estimation of the concentration-time curve, instead of the classical parametric ones, and we demonstrate their good performance via a simulation study and real data analysis. We also investigate the problem of estimating the Area Under the concentration Curve (AUC), for which we introduce a new kernel estimator, obtained by integrating the regression function estimator. We show, using a simulation study, that the proposed estimator outperforms the classical one in terms of Mean Squared Error. The crucial problem of finding the optimal sampling design for the AUC estimation is investigated using the Generalized Simulated Annealing algorithm.
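The Gasser-Müller estimator taken as the starting point here is simple to implement. A minimal sketch for a single experimental unit on a fixed design, with an illustrative Gaussian kernel and bandwidth (the repeated-measurements averaging of the thesis is omitted): the weight of each observation is the kernel mass over its design interval.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(5)

# Fixed design with noisy observations of a smooth regression function.
n = 100
t = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * t) + 0.2 * rng.normal(size=n)

h = 0.05                                                       # bandwidth
Phi = np.vectorize(lambda u: 0.5 * (1 + erf(u / np.sqrt(2))))  # Gaussian CDF

# Interval endpoints s_0 <= ... <= s_n partitioning [0, 1] at midpoints.
s = np.concatenate(([0.0], (t[:-1] + t[1:]) / 2, [1.0]))

def gasser_muller(x):
    # Weight of y_i is the integral of K_h(x - u) over (s_{i-1}, s_i].
    w = Phi((x - s[:-1]) / h) - Phi((x - s[1:]) / h)
    return w @ y

x0 = 0.3
print(gasser_muller(x0), np.sin(2 * np.pi * x0))
```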
19

Varagnolo, Damiano. "Distributed Parametric-Nonparametric Estimation in Networked Control Systems." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3421610.

Abstract:
In the framework of parametric and nonparametric distributed estimation, we introduce and mathematically analyze some consensus-based regression strategies parametrized by a guess of the number of agents in the network. The parametric estimators assume a-priori information about the finite set of parameters to be estimated, while the nonparametric ones use a reproducing kernel Hilbert space as the hypothesis space. The analysis of the proposed distributed regressors offers sufficient conditions ensuring that the estimators perform better, in terms of the variance of the estimation error, than locally optimal ones. Moreover, it characterizes, in terms of Euclidean distance, the performance losses of the distributed estimators with respect to centralized optimal ones. We also offer a novel on-line algorithm that distributedly computes certificates of quality attesting the goodness of the estimation results, and show that the nonparametric distributed regressor is an approximate distributed Regularization Network requiring small computational, communication and data storage efforts. We then analyze the problem of estimating a function from different noisy data sets collected by spatially distributed sensors and subject to unknown temporal shifts, and perform time delay estimation through the minimization of functions of inner products in reproducing kernel Hilbert spaces. Due to the importance of the knowledge of the number of agents in the previously analyzed algorithms, we also propose a design methodology for its distributed estimation. This methodology is based on the following paradigm: some locally and randomly generated values are exchanged among the various sensors, and are then modified by known consensus-based strategies. Statistical analysis of the post-consensus values allows the estimation of the number of sensors participating in the process. The first main feature of this approach is that the algorithms are completely distributed, since they do not require leader-election steps. Moreover, sensors are not requested to transmit authenticating information such as identification numbers or similar data, and thus the strategy can be implemented even when privacy concerns arise. After a rigorous formulation of the paradigm we analyze some practical examples, fully characterize them from a statistical point of view, and finally provide some general theoretical results along with asymptotic analyses.
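The size-estimation paradigm admits a classic instance: each agent draws uniform random values, max-consensus leaves every agent holding the entrywise maxima, and the number of agents is recovered by maximum likelihood. A standalone sketch in which the consensus exchange is abstracted into a single max (the distributed message-passing itself is not simulated):

```python
import numpy as np

rng = np.random.default_rng(6)

N = 25        # true number of agents (unknown to the agents themselves)
M = 50        # number of locally generated random values per agent

# Each agent draws M values in (0, 1); max-consensus leaves every agent
# holding the entrywise maximum over the whole network.
local = rng.uniform(size=(N, M))
maxima = local.max(axis=0)

# MLE of N from M independent maxima of N uniforms (each max ~ Beta(N, 1)):
# N_hat = -M / sum(log maxima).
N_hat = -M / np.sum(np.log(maxima))
print("estimated network size:", N_hat)
```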
20

Ren, Haobo. "Functional inverse regression and reproducing kernel Hilbert space." Diss., Texas A&M University, 2005. http://hdl.handle.net/1969.1/4203.

Abstract:
The basic philosophy of Functional Data Analysis (FDA) is to think of the observed data functions as elements of a possibly infinite-dimensional function space. Most of the current research topics on FDA focus on advancing theoretical tools and extending existing multivariate techniques to accommodate the infinite-dimensional nature of data. This dissertation reports contributions on both fronts, where a unifying inverse regression theory for both the multivariate setting (Li 1991) and functional data is developed from a Reproducing Kernel Hilbert Space (RKHS) perspective. We proposed a functional multiple-index model which models a real response variable as a function of a few predictor variables called indices. These indices are random elements of the Hilbert space spanned by a second order stochastic process and they constitute the so-called Effective Dimensional Reduction Space (EDRS). To conduct inference on the EDRS, we discovered a fundamental result which reveals the geometrical association between the EDRS and the RKHS of the process. Two inverse regression procedures, a "slicing" approach and a kernel approach, were introduced to estimate the counterpart of the EDRS in the RKHS. Further, the estimate of the EDRS was achieved via the transformation from the RKHS to the original Hilbert space. To construct an asymptotic theory, we introduced an isometric mapping from the empirical RKHS to the theoretical RKHS, which can be used to measure the distance between the estimator and the target. Some general computational issues of FDA were discussed, which led to the smoothed versions of the functional inverse regression methods. Simulation studies were performed to evaluate the performance of the inference procedures and applications to biological and chemometrical data analysis were illustrated.
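As a point of reference, the multivariate "slicing" step of Li (1991), which the dissertation lifts to the RKHS setting, looks as follows on a toy single-index model (illustrative data; the functional version replaces these covariances by their kernel counterparts):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy single-index model: y depends on X only through b'X.
n, p = 2000, 5
X = rng.normal(size=(n, p))
b = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
y = np.tanh(X @ b) + 0.1 * rng.normal(size=n)

# Standardize X, slice the ordered response, average Z within slices.
Linv = np.linalg.inv(np.linalg.cholesky(np.cov(X.T)))
Z = (X - X.mean(0)) @ Linv.T
slices = np.array_split(np.argsort(y), 10)
means = np.array([Z[idx].mean(axis=0) for idx in slices])

# Top principal direction of the slice means spans the (here 1-d) EDRS.
_, _, Vt = np.linalg.svd(means)
beta = Linv.T @ Vt[0]                       # back to original coordinates
print("recovered direction (sign arbitrary):",
      np.round(beta / np.linalg.norm(beta), 3))
print("true direction:                     ",
      np.round(b / np.linalg.norm(b), 3))
```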
APA, Harvard, Vancouver, ISO, and other styles
21

Kamari, Halaleh. "Qualité prédictive des méta-modèles construits sur des espaces de Hilbert à noyau auto-reproduisant et analyse de sensibilité des modèles complexes." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASE010.

Full text
Abstract:
This work addresses the problem of estimating a meta-model of a complex model, denoted m. The model m depends on d input variables X1,...,Xd that are independent and have a known law. The meta-model, denoted f∗, approximates the Hoeffding decomposition of m and allows its Sobol indices to be estimated. It belongs to a reproducing kernel Hilbert space (RKHS), denoted H, which is constructed as a direct sum of Hilbert spaces (Durrande et al. (2013)). The estimator of the meta-model, denoted f^, is calculated by minimizing a least-squares criterion penalized by the sum of the Hilbert norm and the empirical L2-norm (Huet and Taupin (2017)). This procedure, called RKHS ridge group sparse, makes it possible both to select and to estimate the terms of the Hoeffding decomposition, and therefore to select the non-zero Sobol indices and estimate them. It allows Sobol indices to be estimated even at high order, a point known to be difficult in practice. This work consists of a theoretical part and a practical part. In the theoretical part, I established upper bounds on the empirical L2 risk and the L2 risk of the estimator f^ in a regression model where the error is non-Gaussian and unbounded, that is, upper bounds with respect to the empirical L2-norm and the L2-norm for the distance between the model m and its estimate f^ in the RKHS H. In the practical part, I developed an R package, called RKHSMetaMod, that implements the RKHS ridge group sparse procedure and a special case of it called the RKHS group lasso procedure. This package applies equally to the case where the model m is calculable at all points and to the case of a regression model. In order to optimize execution time and storage memory, all of the functions of this package, except for one function written in R, are written using the C++ libraries GSL and Eigen; they are then interfaced with the R environment in order to offer users an easy-to-use package. The performance of the package's functions, in terms of the predictive quality of the estimator and of the estimation of the Sobol indices, is validated by a simulation study.
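As a rough illustration of the estimation target, the following sketch fits a plain additive kernel ridge meta-model (without the group-sparse penalty that RKHSMetaMod actually implements) and reads off empirical first-order Sobol indices from the fitted components; the toy model, bandwidth and regularization are invented for the example.

    import numpy as np

    def gauss_kernel(a, b, h=0.5):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * h ** 2))

    rng = np.random.default_rng(0)
    n, d = 300, 3
    X = rng.uniform(0, 1, (n, d))
    y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 \
        + 0.05 * rng.standard_normal(n)

    # Additive ANOVA-style meta-model: f(x) = sum_v f_v(x_v), each f_v in an RKHS
    Ks = [gauss_kernel(X[:, v], X[:, v]) for v in range(d)]
    lam = 1e-2
    alpha = np.linalg.solve(sum(Ks) + n * lam * np.eye(n), y)

    # Fitted components and empirical first-order Sobol indices
    components = [Kv @ alpha for Kv in Ks]
    total_var = np.var(sum(components))
    sobol = [np.var(c) / total_var for c in components]
    print([round(s, 3) for s in sobol])   # index of the inactive X3 is near zero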
APA, Harvard, Vancouver, ISO, and other styles
22

Amaya, Austin J. "Beurling-Lax Representations of Shift-Invariant Spaces, Zero-Pole Data Interpolation, and Dichotomous Transfer Function Realizations: Half-Plane/Continuous-Time Versions." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/27636.

Full text
Abstract:
Given a full-range simply-invariant shift-invariant subspace M of the vector-valued L2 space on the unit circle, the classical Beurling-Lax-Halmos (BLH) theorem obtains a unitary operator-valued function W so that M may be represented as the image of the Hardy space H2 on the disc under multiplication by W. The work of Ball-Helton later extended this result to find a single function representing a so-called dual shift-invariant pair of subspaces (M, M×) which together form a direct-sum decomposition of L2. In the case where the pair (M, M×) are finite-dimensional perturbations of the Hardy space H2 and its orthogonal complement, Ball-Gohberg-Rodman obtained a transfer function realization for the representing function W; this realization was parameterized in terms of zero-pole data computed from the pair (M, M×). Later work by Ball-Raney extended this analysis to the case of nonrational functions W where the zero-pole data is taken in an infinite-dimensional operator theoretic sense. The current work obtains analogues of these various results for arbitrary dual shift-invariant pairs (M, M×) of the L2 spaces on the real line; here, shift-invariance refers to invariance under the translation group. These new results rely on recent advances in the understanding of continuous-time infinite-dimensional input-state-output linear systems which have been codified in the book by Staffans.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
23

Das, Suporna. "Frames and reproducing kernels in a Hilbert space." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ54293.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Xinhua. "Graphical Models: Modeling, Optimization, and Hilbert Space Embedding." The Australian National University. ANU College of Engineering and Computer Sciences, 2010. http://thesis.anu.edu.au./public/adt-ANU20100729.072500.

Full text
Abstract:
Over the past two decades graphical models have been widely used as powerful tools for compactly representing distributions. On the other hand, kernel methods have been used extensively to come up with rich representations. This thesis aims to combine graphical models with kernels to produce compact models with rich representational abilities. Graphical models are a powerful underlying formalism in machine learning. Their graph theoretic properties provide both an intuitive modular interface to model the interacting factors, and a data structure facilitating efficient learning and inference. The probabilistic nature ensures the global consistency of the whole framework, and allows convenient interface of models to data. Kernel methods, on the other hand, provide an effective means of representing rich classes of features for general objects, and at the same time allow efficient search for the optimal model. Recently, kernels have been used to characterize distributions by embedding them into high dimensional feature space. Interestingly, graphical models again decompose this characterization and lead to novel and direct ways of comparing distributions based on samples. Among the many uses of graphical models and kernels, this thesis is devoted to the following four areas.

Conditional random fields for multi-agent reinforcement learning: Conditional random fields (CRFs) are graphical models for modelling the probability of labels given the observations. They have traditionally been trained using a set of observation and label pairs. Underlying all CRFs is the assumption that, conditioned on the training data, the label sequences of different training examples are independent and identically distributed (iid). We extended the use of CRFs to a class of temporal learning algorithms, namely policy gradient reinforcement learning (RL). Now the labels are no longer iid: they are actions that update the environment and affect the next observation. From an RL point of view, CRFs provide a natural way to model joint actions in a decentralized Markov decision process. They define how agents can communicate with each other to choose the optimal joint action. We tested our framework on a synthetic network alignment problem, a distributed sensor network, and a road traffic control system. Using tree sampling by Hamze & de Freitas (2004) for inference, the RL methods employing CRFs clearly outperform those which do not model the proper joint policy.

Bayesian online multi-label classification: Gaussian density filtering (GDF) provides fast and effective inference for graphical models (Maybeck, 1982). Based on this natural online learner, we propose a Bayesian online multi-label classification (BOMC) framework which learns a probabilistic model of the linear classifier. The training labels are incorporated to update the posterior of the classifiers via a graphical model similar to TrueSkill (Herbrich et al., 2007), and inference is based on GDF with expectation propagation. Using samples from the posterior, we label the test data by maximizing the expected F-score. Our experiments on the Reuters RCV1-v2 dataset show that BOMC delivers significantly higher macro-averaged F-score than state-of-the-art online maximum margin learners such as LaSVM (Bordes et al., 2005) and passive aggressive online learning (Crammer et al., 2006). The online nature of BOMC also allows us to efficiently use a large amount of training data.

Hilbert space embedding of distributions: Graphical models are also an essential tool in kernel measures of independence for non-iid data. Traditional information theory often requires density estimation, which makes it not ideal for statistical estimation. Motivated by the fact that distributions often appear in machine learning via expectations, we can characterize the distance between distributions in terms of distances between means, especially means in reproducing kernel Hilbert spaces, which are called kernel embeddings. Under this framework, the undirected graphical models further allow us to factorize the kernel embeddings onto cliques, which yields efficient measures of independence for non-iid data (Zhang et al., 2009). We show the effectiveness of this framework for ICA and sequence segmentation, and a number of further applications and research questions are identified.

Optimization in maximum margin models for structured data: Maximum margin estimation for structured data, e.g. (Taskar et al., 2004), is an important task in machine learning where graphical models also play a key role. They are special cases of regularized risk minimization, for which bundle methods (BMRM, Teo et al., 2007) and the closely related SVMStruct (Tsochantaridis et al., 2005) are state-of-the-art general purpose solvers. Smola et al. (2007b) proved that BMRM requires O(1/ε) iterations to converge to an ε-accurate solution, and we further show that this rate hits the lower bound. By utilizing the structure of the objective function, we devised an algorithm for the structured loss which converges to an ε-accurate solution in O(1/√ε) iterations. This algorithm originates from Nesterov's optimal first order methods (Nesterov, 2003, 2005b).
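The distance between RKHS means that underlies the kernel embedding framework has a standard unbiased sample estimate, the squared maximum mean discrepancy (MMD). A minimal version with a Gaussian kernel of assumed bandwidth h:

    import numpy as np

    def mmd2_unbiased(X, Y, h=1.0):
        """Unbiased estimate of ||mu_P - mu_Q||^2 in the RKHS of a
        Gaussian kernel, from samples X ~ P and Y ~ Q."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * h ** 2))
        m, n = len(X), len(Y)
        Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
        term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))  # drop diagonal
        term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
        return term_x + term_y - 2 * Kxy.mean()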
APA, Harvard, Vancouver, ISO, and other styles
25

Leon, Ralph Daniel. "Module structure of a Hilbert space." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2469.

Full text
Abstract:
This paper demonstrates the properties of a Hilbert structure. In order to have a Hilbert structure it is necessary to satisfy certain properties or axioms. The main body of the paper is centered on six questions that develop these ideas.
APA, Harvard, Vancouver, ISO, and other styles
26

Jordão, Thaís. "Diferenciabilidade em espaços de Hilbert de reprodução sobre a esfera." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55135/tde-29032012-103159/.

Full text
Abstract:
A reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions constructed in a specific and unique manner from a fixed positive definite generating kernel. The functions in an RKHS have the following peculiarity: their values can be reproduced through an elementary operation involving the function itself, the generating kernel and the inner product of the space. In this work, we consider reproducing kernel Hilbert spaces generated by positive definite kernels on the usual m-dimensional unit sphere. The main goal is to analyze which differentiability properties are inherited by the functions in the space when the generating kernel carries a differentiability assumption. That is done in two different settings: with the usual notion of differentiability on the sphere, and with a notion of differentiability defined through a generic multiplicative operation. The latter includes fractional derivatives and the strong Laplace-Beltrami derivative as particular cases. In each case we also consider specific properties of the embedding of the RKHS into spaces of smooth functions defined by the notion of differentiability used.
APA, Harvard, Vancouver, ISO, and other styles
27

Tullo, Alessandra. "Apprendimento automatico con metodo kernel." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23200/.

Full text
Abstract:
The aim of this work is the study of kernel methods in machine learning. Starting from the definition of reproducing kernel Hilbert spaces, kernel functions and kernel methods are examined; in particular, the kernel trick and the representer theorem are analyzed. Finally, an example of a supervised machine learning problem, kernel linear regression, is given and solved by means of the representer theorem.
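A minimal sketch of the regression problem the abstract refers to: by the representer theorem, the regularized least-squares solution over the RKHS is a finite kernel expansion on the training points, so fitting reduces to one linear solve. Kernel choice, bandwidth and regularization below are arbitrary.

    import numpy as np

    def fit_kernel_ridge(X, y, lam=1e-2, h=0.1):
        """By the representer theorem, the minimizer of
        sum_i (y_i - f(x_i))^2 + lam * ||f||_H^2
        has the form f(t) = sum_i a_i k(x_i, t)."""
        k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * h ** 2))
        a = np.linalg.solve(k(X, X) + lam * np.eye(len(X)), y)
        return lambda t: k(t, X) @ a

    X = np.linspace(0, 1, 40)
    y = np.sin(2 * np.pi * X) + 0.1 * np.random.default_rng(0).standard_normal(40)
    f = fit_kernel_ridge(X, y)
    print(np.abs(f(X) - np.sin(2 * np.pi * X)).max())   # small fitting error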
APA, Harvard, Vancouver, ISO, and other styles
28

Agrawal, Devanshu. "The Complete Structure of Linear and Nonlinear Deformations of Frames on a Hilbert Space." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3003.

Full text
Abstract:
A frame is a possibly linearly dependent set of vectors in a Hilbert space that facilitates the decomposition and reconstruction of vectors. A Parseval frame is a frame that acts as its own dual frame. A Gabor frame comprises all translations and phase modulations of an appropriate window function. We show that the space of all frames on a Hilbert space indexed by a common measure space can be fibrated into orbits under the action of invertible linear deformations and that any maximal set of unitarily inequivalent Parseval frames is a complete set of representatives of the orbits. We show that all such frames are connected by transformations that are linear in the larger Hilbert space of square-integrable functions on the indexing space. We apply our results to frames on finite-dimensional Hilbert spaces and to the discretization of the Gabor frame with a band-limited window function.
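A concrete finite-dimensional instance of a frame that acts as its own dual is the three-vector “Mercedes-Benz” Parseval frame in R^2; the check below is standard textbook material, not code from the thesis.

    import numpy as np

    # Three equiangular vectors in R^2, scaled so the frame is Parseval:
    # sum_j <x, f_j> f_j = x for every x (the frame operator is the identity).
    angles = 2 * np.pi * np.arange(3) / 3
    F = np.sqrt(2 / 3) * np.column_stack([np.cos(angles), np.sin(angles)])  # 3 x 2
    S = F.T @ F                               # frame operator
    print(np.allclose(S, np.eye(2)))          # True: Parseval
    x = np.array([0.3, -1.2])
    print(np.allclose(F.T @ (F @ x), x))      # analysis then synthesis recovers x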
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Roy Chih Chung. "Adaptive Kernel Functions and Optimization Over a Space of Rank-One Decompositions." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36975.

Full text
Abstract:
The representer theorem from the reproducing kernel Hilbert space theory is the origin of many kernel-based machine learning and signal modelling techniques that are popular today. Most kernel functions used in practical applications behave in a homogeneous manner across the domain of the signal of interest, and they are called stationary kernels. One open problem in the literature is the specification of a non-stationary kernel that is computationally tractable. Some recent works solve large-scale optimization problems to obtain such kernels, and they often suffer from non-identifiability issues in their optimization problem formulation. Many practical problems can benefit from using application-specific prior knowledge on the signal of interest. For example, if one can adequately encode the prior assumption that edge contours are smooth, one does not need to learn a finite-dimensional dictionary from a database of sampled image patches that each contains a circular object in order to up-convert images that contain circular edges. In the first portion of this thesis, we present a novel method for constructing non-stationary kernels that incorporates prior knowledge. A theorem is presented that ensures the result of this construction yields a symmetric and positive-definite kernel function. This construction does not require one to solve any non-identifiable optimization problems. It does require one to manually design some portions of the kernel while deferring the specification of the remaining portions to when an observation of the signal is available. In this sense, the resultant kernel is adaptive to the data observed. We give two examples of this construction technique via the grayscale image up-conversion task where we chose to incorporate the prior assumption that edge contours are smooth. Both examples use a novel local analysis algorithm that summarizes the p-most dominant directions for a given grayscale image patch. The non-stationary properties of these two types of kernels are empirically demonstrated on the Kodak image database that is popular within the image processing research community. Tensors and tensor decomposition methods are gaining popularity in the signal processing and machine learning literature, and most of the recently proposed tensor decomposition methods are based on the tensor power and alternating least-squares algorithms, which were both originally devised over a decade ago. The algebraic approach for the canonical polyadic (CP) symmetric tensor decomposition problem is an exception. This approach exploits the bijective relationship between symmetric tensors and homogeneous polynomials. The solution of a CP symmetric tensor decomposition problem is a set of p rank-one tensors, where p is fixed. In this thesis, we refer to such a set of tensors as a rank-one decomposition with cardinality p. Existing works show that the CP symmetric tensor decomposition problem is non-unique in the general case, so there is no bijective mapping between a rank-one decomposition and a symmetric tensor. However, a proposition in this thesis shows that a particular space of rank-one decompositions, SE, is isomorphic to a space of moment matrices that are called quasi-Hankel matrices in the literature. Optimization over Riemannian manifolds is an area of optimization literature that is also gaining popularity within the signal processing and machine learning community. 
Under some settings, one can formulate optimization problems over differentiable manifolds where each point is an equivalence class. Such manifolds are called quotient manifolds. This type of formulation can reduce or eliminate some of the sources of non-identifiability issues for certain optimization problems. An example is the learning of a basis for a subspace by formulating the solution space as a type of quotient manifold called the Grassmann manifold, while the conventional formulation is to optimize over a space of full column rank matrices. The second portion of this thesis is about the development of a general-purpose numerical optimization framework over SE. A general-purpose numerical optimizer can solve different approximations or regularized versions of the CP decomposition problem, and they can be applied to tensor-related applications that do not use a tensor decomposition formulation. The proposed optimizer uses many concepts from the Riemannian optimization literature. We present a novel formulation of SE as an embedded differentiable submanifold of the space of real-valued matrices with full column rank, and as a quotient manifold. Riemannian manifold structures and tangent space projectors are derived as well. The CP symmetric tensor decomposition problem is used to empirically demonstrate that the proposed scheme is indeed a numerical optimization framework over SE. Future investigations will concentrate on extending the proposed optimization framework to handle decompositions that correspond to non-symmetric tensors.
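One elementary closure property behind such constructions is that scaling a stationary kernel by a data-dependent factor preserves symmetry and positive semidefiniteness; the sketch below only illustrates this generic mechanism, not the thesis's adaptive kernel, and the scaling function g is an invented stand-in for encoded prior knowledge.

    import numpy as np

    def nonstationary_kernel(A, B, g, h=1.0):
        """k(x, y) = g(x) g(y) k0(x, y) is symmetric positive semidefinite
        whenever k0 is, since its Gram matrix is D K0 D with D diagonal."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        K0 = np.exp(-d2 / (2 * h ** 2))       # stationary Gaussian base kernel
        return g(A)[:, None] * K0 * g(B)[None, :]

    # Example: amplitude that decays away from the origin (an assumed "prior")
    g = lambda Z: 1.0 / (1.0 + (Z ** 2).sum(-1))
    X = np.random.default_rng(1).normal(size=(50, 2))
    w = np.linalg.eigvalsh(nonstationary_kernel(X, X, g))
    print(w.min() >= -1e-10)                  # PSD up to round-off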
APA, Harvard, Vancouver, ISO, and other styles
30

Gräf, Manuel. "Efficient Algorithms for the Computation of Optimal Quadrature Points on Riemannian Manifolds." Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-115287.

Full text
Abstract:
We consider the problem of numerical integration, where one aims to approximate an integral of a given continuous function from the function values at given sampling points, also known as quadrature points. A useful framework for such an approximation process is provided by the theory of reproducing kernel Hilbert spaces and the concept of the worst case quadrature error. However, the computation of optimal quadrature points, which minimize the worst case quadrature error, is in general a challenging task and requires efficient algorithms, in particular for large numbers of points. The focus of this thesis is on the efficient computation of optimal quadrature points on the torus T^d, the sphere S^d, and the rotation group SO(3). For that reason we present a general framework for the minimization of the worst case quadrature error on Riemannian manifolds, in order to construct numerically such quadrature points. Therefore, we consider, for N quadrature points on a manifold M, the worst case quadrature error as a function defined on the product manifold M^N. For the optimization on such high dimensional manifolds we make use of the method of steepest descent, the Newton method, and the conjugate gradient method, where we propose two efficient evaluation approaches for the worst case quadrature error and its derivatives. The first evaluation approach follows ideas from computational physics, where we interpret the quadrature error as a pairwise potential energy. These ideas allow us to reduce for certain instances the complexity of the evaluations from O(M^2) to O(M log(M)). For the second evaluation approach we express the worst case quadrature error in Fourier domain. This enables us to utilize the nonequispaced fast Fourier transforms for the torus T^d, the sphere S^2, and the rotation group SO(3), which reduce the computational complexity of the worst case quadrature error for polynomial spaces with degree N from O(N^k M) to O(N^k log^2(N) + M), where k is the dimension of the corresponding manifold. For the usual choice N^k ~ M we achieve the complexity O(M log^2(M)) instead of O(M^2). In conjunction with the proposed conjugate gradient method on Riemannian manifolds we arrive at a particular efficient optimization approach for the computation of optimal quadrature points on the torus T^d, the sphere S^d, and the rotation group SO(3). Finally, with the proposed optimization methods we are able to provide new lists with quadrature formulas for high polynomial degrees N on the sphere S^2, and the rotation group SO(3). Further applications of the proposed optimization framework are found due to the interesting connections between worst case quadrature errors, discrepancies and potential energies. Especially, discrepancies provide us with an intuitive notion for describing the uniformity of point distributions and are of particular importance for high dimensional integration in quasi-Monte Carlo methods. A generalized form of uniform point distributions arises in applications of image processing and computer graphics, where one is concerned with the problem of distributing points in an optimal way accordingly to a prescribed density function. We will show that such problems can be naturally described by the notion of discrepancy, and thus fit perfectly into the proposed framework. A typical application is halftoning of images, where nonuniform distributions of black dots create the illusion of gray toned images. 
We will see that the proposed optimization methods compete with state-of-the-art halftoning methods.
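The pairwise potential-energy interpretation can be prototyped directly: for a rotation-invariant positive definite kernel and equal weights, the worst case quadrature error reduces to a double sum over point pairs, which projected gradient descent on (S^2)^N can decrease. The toy kernel k(t) = e^t and the step size below are placeholders for the kernels and Riemannian solvers used in the thesis.

    import numpy as np

    def quadrature_energy(P):
        """Equal-weight worst case error proxy: (1/N^2) sum_ij k(<p_i, p_j>)."""
        return np.exp(np.clip(P @ P.T, -1.0, 1.0)).mean()

    rng = np.random.default_rng(0)
    P = rng.normal(size=(100, 3))
    P /= np.linalg.norm(P, axis=1, keepdims=True)      # random points on S^2
    print(quadrature_energy(P))
    for _ in range(200):                               # projected gradient descent
        G = np.exp(np.clip(P @ P.T, -1.0, 1.0))
        P -= 0.5 * (2 * G @ P / len(P) ** 2)           # gradient of the energy
        P /= np.linalg.norm(P, axis=1, keepdims=True)  # project back onto S^2
    print(quadrature_energy(P))                        # lower: well-spread points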
APA, Harvard, Vancouver, ISO, and other styles
31

Pan, Yue. "Currents- and varifolds-based registration of lung vessels and lung surfaces." Thesis, University of Iowa, 2016. https://ir.uiowa.edu/etd/2257.

Full text
Abstract:
This thesis compares and contrasts currents- and varifolds-based diffeomorphic image registration approaches for registering tree-like structures in the lung and the surface of the lung. In these approaches, curve-like structures in the lung (for example, the skeletons of vessel and airway segmentations) and the surface of the lung are represented by currents or varifolds in the dual space of a Reproducing Kernel Hilbert Space (RKHS). Currents and varifolds representations are discretized and parameterized via a collection of momenta. A momentum corresponds to a line segment via the coordinates of the center of the line segment and the tangent direction of the line segment at the center. A momentum corresponds to a mesh face via the coordinates of the center of the face and the normal direction of the face at the center. The magnitudes of the tangent vector for the line segment and of the normal vector for the mesh face are the length of the line segment and the area of the face respectively. A varifolds-based registration approach is similar to currents except that two varifolds representations are aligned independently of the tangent (normal) vector orientation. An advantage of varifolds over currents is that the orientation of the tangent vectors can be difficult to determine, especially when the vessel and airway trees are not connected. In this thesis, we examine the image registration sensitivity and accuracy of currents- and varifolds-based registration as a function of the number and location of momenta used to represent tree-like structures in the lung and the surface of the lung. The registrations presented in this thesis were generated using the Deformetrica software package, which is publicly available at www.deformetrica.org.
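The momenta parameterization makes the currents inner product between two discretized curves explicit: segments interact through a kernel on their centers, weighted by the inner products of their length-weighted tangents. A bare-bones version with a Gaussian kernel of assumed width:

    import numpy as np

    def momenta(curve):
        """Centers and length-weighted tangents of a polyline's segments."""
        centers = 0.5 * (curve[1:] + curve[:-1])
        tangents = curve[1:] - curve[:-1]     # magnitude = segment length
        return centers, tangents

    def currents_inner(curve_a, curve_b, h=1.0):
        ca, ta = momenta(curve_a)
        cb, tb = momenta(curve_b)
        d2 = ((ca[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (h ** 2))
        return np.einsum('ij,id,jd->', K, ta, tb)   # sum_ij K_ij <t_i, t_j>

    # Dissimilarity in the currents norm: d^2 = <A,A> + <B,B> - 2<A,B>.
    # Flipping a tangent's sign changes the value, which is exactly the
    # orientation sensitivity that the varifolds representation removes.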
APA, Harvard, Vancouver, ISO, and other styles
32

Shin, Hyejin. "Infinite dimensional discrimination and classification." Texas A&M University, 2003. http://hdl.handle.net/1969.1/5832.

Full text
Abstract:
Modern data collection methods are now frequently returning observations that should be viewed as the result of digitized recording or sampling from stochastic processes rather than vectors of finite length. In spite of great demands, only a few classification methodologies for such data have been suggested and supporting theory is quite limited. The focus of this dissertation is on discrimination and classification in this infinite dimensional setting. The methodology and theory we develop are based on the abstract canonical correlation concept of Eubank and Hsing (2005), and motivated by the fact that Fisher's discriminant analysis method is intimately tied to canonical correlation analysis. Specifically, we have developed a theoretical framework for discrimination and classification of sample paths from stochastic processes through use of the Loeve-Parzen isomorphism that connects a second order process to the reproducing kernel Hilbert space generated by its covariance kernel. This approach provides a seamless transition between the finite and infinite dimensional settings and lends itself well to computation via smoothing and regularization. In addition, we have developed a new computational procedure and illustrated it with simulated data and Canadian weather data.
APA, Harvard, Vancouver, ISO, and other styles
33

Bui, Thi Thien Trang. "Modèle de régression pour des données non-Euclidiennes en grande dimension. Application à la classification de taxons en anatomie computationnelle." Thesis, Toulouse, INSA, 2019. http://www.theses.fr/2019ISAT0021.

Full text
Abstract:
In this thesis, we study a regression model with distribution entries and the hypothesis testing problem for signal detection in a regression model. We aim to apply these models to hearing sensitivity measured by transient evoked otoacoustic emissions (TEOAEs), a biological measurement potentially containing side information about the individual (age, sex, population/species), to improve our knowledge of the auditory investigation. In the first part, a new distribution regression model for probability distributions is introduced. This model is based on a Reproducing Kernel Hilbert Space (RKHS) regression framework, where universal kernels are built using Wasserstein distances for distributions belonging to the Wasserstein space of \Omega, where \Omega is a compact subspace of the real space. We prove the universal kernel property of such kernels and use this setting to perform regressions on functions. Different regression models are first compared with the proposed one on simulated functional data. We then apply our regression model to transient evoked otoacoustic emission (TEOAE) distribution responses with real age predictors. This part is joint work with Loubes, J-M., Risser, L. and Balaresque, P. In the second part, considering a regression model, we address the question of testing the nullity of the regression function. The testing procedure is available when the variance of the observations is unknown and does not depend on any prior information on the alternative. We first propose a single testing procedure based on a general symmetric kernel and an estimate of the variance of the observations. The corresponding critical values are constructed to obtain non-asymptotic level \alpha tests. We then introduce an aggregation procedure to avoid the difficult choice of the kernel and of the parameters of the kernel. The multiple tests satisfy non-asymptotic properties and are adaptive in the minimax sense over several classes of regular alternatives.
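For distributions on the real line, the Wasserstein distance these universal kernels are built on is an L2 distance between quantile functions, so the kernel on distributions fits in a few lines; the quantile grid and the Gaussian-type form exp(-gamma W^2) are illustrative choices consistent with, but not copied from, the thesis.

    import numpy as np

    def w2_1d(sample_a, sample_b, q=np.linspace(0.01, 0.99, 99)):
        """2-Wasserstein distance between 1-D empirical distributions,
        computed as the L2 distance between quantile functions."""
        qa, qb = np.quantile(sample_a, q), np.quantile(sample_b, q)
        return np.sqrt(np.mean((qa - qb) ** 2))

    def wasserstein_gram(samples, gamma=1.0):
        n = len(samples)
        W = np.array([[w2_1d(samples[i], samples[j]) for j in range(n)]
                      for i in range(n)])
        # On the real line W2 embeds isometrically in a Hilbert space (via the
        # quantile map), so exp(-gamma * W^2) is a positive definite kernel.
        return np.exp(-gamma * W ** 2)

    # Distribution regression then reduces to kernel ridge with this Gram
    # matrix: alpha = solve(K + lam * I, y).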
APA, Harvard, Vancouver, ISO, and other styles
34

Ke, Chenlu. "A NEW INDEPENDENCE MEASURE AND ITS APPLICATIONS IN HIGH DIMENSIONAL DATA ANALYSIS." UKnowledge, 2019. https://uknowledge.uky.edu/statistics_etds/41.

Full text
Abstract:
This dissertation has three consecutive topics. First, we propose a novel class of independence measures for testing independence between two random vectors based on the discrepancy between the conditional and the marginal characteristic functions. If one of the variables is categorical, our asymmetric index extends the typical ANOVA to a kernel ANOVA that can test a more general hypothesis of equal distributions among groups. The index is also applicable when both variables are continuous. Second, we develop a sufficient variable selection procedure based on the new measure in a large p small n setting. Our approach incorporates marginal information between each predictor and the response as well as joint information among predictors. As a result, our method is more capable of selecting all truly active variables than marginal selection methods. Furthermore, our procedure can handle both continuous and discrete responses with mixed-type predictors. We establish the sure screening property of the proposed approach under mild conditions. Third, we focus on a model-free sufficient dimension reduction approach using the new measure. Our method does not require strong assumptions on predictors and responses. An algorithm is developed to find dimension reduction directions using sequential quadratic programming. We illustrate the advantages of our new measure and its two applications in high dimensional data analysis by numerical studies across a variety of settings.
APA, Harvard, Vancouver, ISO, and other styles
35

Henchiri, Yousri. "L'approche Support Vector Machines (SVM) pour le traitement des données fonctionnelles." Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20187/document.

Full text
Abstract:
Functional Data Analysis is an important and dynamic area of statistics. It offers effective tools and proposes new methodological and theoretical developments in the presence of functional-type data (functions, curves, surfaces, ...). The work presented in this dissertation provides a new contribution to the themes of statistical learning and conditional quantiles when the data can be considered as functions. Special attention is devoted to the Support Vector Machines (SVM) technique, which involves the notion of a Reproducing Kernel Hilbert Space. In this context, the main goal is to extend this nonparametric estimation technique to conditional models that take into account functional data. We investigated the theoretical aspects and the practical behavior of the proposed and adapted technique for the following regression models. The first model is the functional quantile regression model when the response variable is real-valued, the covariate takes its values in a functional space of infinite dimension, and the observations are i.i.d. The second model is the functional additive quantile regression model, where the real variable of interest depends on a vector of functional covariates. The last model is the functional quantile regression model in the case of dependent functional observations. We obtained consistency results and convergence rates for the estimators in these models. Simulation studies were performed to evaluate the performance of the inference procedures, and applications to chemometric, environmental and climatic data analysis were considered, highlighting the good behavior of the SVM estimator.
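The conditional quantile estimation underlying all three models can be prototyped with a kernel expansion and the pinball loss; the subgradient descent below is a crude stand-in for the SVM-type solvers studied in the thesis, with every hyperparameter invented.

    import numpy as np

    def kernel_quantile_fit(X, y, tau=0.9, lam=1e-3, h=1.0, steps=2000, lr=0.01):
        """Minimize sum_i pinball_tau(y_i - f(x_i)) + lam/2 * ||f||_H^2
        over f = sum_j a_j k(x_j, .) by subgradient descent."""
        K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * h ** 2))
        a = np.zeros(len(X))
        for _ in range(steps):
            r = y - K @ a
            # Subgradient of the pinball loss with respect to f(x_i)
            g = np.where(r > 0, -tau, 1 - tau)
            a -= lr * (K @ g + lam * K @ a)   # gradient in the RKHS coefficients
        return a                              # f(x) = sum_j a_j k(x_j, x)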
APA, Harvard, Vancouver, ISO, and other styles
36

Saide, Chafic. "Filtrage adaptatif à l’aide de méthodes à noyau : application au contrôle d’un palier magnétique actif." Thesis, Troyes, 2013. http://www.theses.fr/2013TROY0018/document.

Full text
Abstract:
L’estimation fonctionnelle basée sur les espaces de Hilbert à noyau reproduisant demeure un sujet de recherche actif pour l’identification des systèmes non linéaires. L'ordre du modèle croit avec le nombre de couples entrée-sortie, ce qui rend cette méthode inadéquate pour une identification en ligne. Le critère de cohérence est une méthode de parcimonie pour contrôler l’ordre du modèle. Le modèle est donc défini à partir d'un dictionnaire de faible taille qui est formé par les fonctions noyau les plus pertinentes.Une fonction noyau introduite dans le dictionnaire y demeure même si la non-stationnarité du système rend sa contribution faible dans l'estimation de la sortie courante. Il apparaît alors opportun d'adapter les éléments du dictionnaire pour réduire l'erreur quadratique instantanée et/ou mieux contrôler l'ordre du modèle.La première partie traite le sujet des algorithmes adaptatifs utilisant le critère de cohérence. L'adaptation des éléments du dictionnaire en utilisant une méthode de gradient stochastique est abordée pour deux familles de fonctions noyau. Cette partie a un autre objectif qui est la dérivation des algorithmes adaptatifs utilisant le critère de cohérence pour identifier des modèles à sorties multiples.La deuxième partie introduit d'une manière abrégée le palier magnétique actif (PMA). La proposition de contrôler un PMA par un algorithme adaptatif à noyau est présentée pour remplacer une méthode utilisant les réseaux de neurones à couches multiples
Function approximation methods based on reproducing kernel Hilbert spaces are of great importance in kernel-based regression. However, the order of the model is equal to the number of observations, which makes this method inappropriate for online identification. To overcome this drawback, many sparsification methods have been proposed to control the order of the model. The coherence criterion is one of these sparsification methods. It has been shown possible to select a subset of the most relevant passed input vectors to form a dictionary to identify the model.A kernel function, once introduced into the dictionary, remains unchanged even if the non-stationarity of the system makes it less influent in estimating the output of the model. This observation leads to the idea of adapting the elements of the dictionary to obtain an improved one with an objective to minimize the resulting instantaneous mean square error and/or to control the order of the model.The first part deals with adaptive algorithms using the coherence criterion. The adaptation of the elements of the dictionary using a stochastic gradient method is presented for two types of kernel functions. Another topic is covered in this part which is the implementation of adaptive algorithms using the coherence criterion to identify Multiple-Outputs models.The second part introduces briefly the active magnetic bearing (AMB). A proposed method to control an AMB by an adaptive algorithm using kernel methods is presented to replace an existing method using neural networks
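The coherence criterion itself is just a thresholded kernel evaluation, so a toy kernel least-mean-squares filter with coherence-based dictionary growth fits in a few lines; the threshold mu0 and step size eta are placeholders, and the dictionary-adaptation step that is the thesis's contribution is deliberately omitted.

    import numpy as np

    def klms_coherence(stream, k, mu0=0.9, eta=0.1):
        """Online kernel LMS: a new input joins the dictionary only if its
        maximal coherence with the stored atoms stays below mu0."""
        dictionary, weights = [], []
        for u, d in stream:                   # (input vector, desired output)
            kvec = np.array([k(u, v) for v in dictionary])
            y = kvec @ weights if dictionary else 0.0
            err = d - y
            if not dictionary or np.max(np.abs(kvec)) <= mu0:
                dictionary.append(u)          # low coherence: grow the model
                weights = np.append(weights, eta * err)
            else:
                weights = weights + eta * err * kvec   # update existing weights
        return dictionary, weights

    # Example kernel (unit-norm Gaussian, so |k| <= 1 and mu0 is a coherence):
    # k = lambda u, v: np.exp(-np.sum((u - v) ** 2))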
APA, Harvard, Vancouver, ISO, and other styles
37

Ammanouil, Rita. "Contributions au démélange non-supervisé et non-linéaire de données hyperspectrales." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4079/document.

Full text
Abstract:
Spectral unmixing has been an active field of research since the earliest days of hyperspectral remote sensing. It is concerned with the case where various materials are found in the spatial extent of a pixel, resulting in a spectrum that is a mixture of the signatures of those materials. Unmixing then reduces to estimating the pure spectral signatures and their corresponding proportions in every pixel. In the hyperspectral unmixing jargon, the pure signatures are known as the endmembers and their proportions as the abundances. This thesis focuses on spectral unmixing of remotely sensed hyperspectral data. In particular, it is aimed at improving the accuracy of the extraction of compositional information from hyperspectral data. This is done through the development of new unmixing techniques in two main contexts, namely the unsupervised and the nonlinear case. In particular, we propose a new technique for blind unmixing, we incorporate spatial information in (linear and nonlinear) unmixing, and we finally propose a new nonlinear mixing model. More precisely, first, an unsupervised unmixing approach based on collaborative sparse regularization is proposed, where the library of endmember candidates is built from the observations themselves. This approach is then extended in order to take into account the presence of noise among the endmember candidates. Second, within the unsupervised unmixing framework, two graph-based regularizations are used in order to incorporate prior local and nonlocal contextual information. Next, within a supervised nonlinear unmixing framework, a new nonlinear mixing model based on vector-valued functions in a reproducing kernel Hilbert space (RKHS) is proposed. The interest of these functions compared to scalar-valued ones is that they make it possible to consider different nonlinear functions at different bands, to regularize the discrepancies between these functions, and to account for neighboring nonlinear contributions. Finally, the vector-valued kernel framework is used in order to promote spatial smoothness of the nonlinear part in a kernel-based nonlinear mixing model. Simulations on synthetic and real data show the effectiveness of all the proposed techniques compared with state-of-the-art methods; this improvement translates into a better data reconstruction error.
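The collaborative sparse regularization of the first contribution can be illustrated on the linear mixing model: all pixels share the same few active library members, which an l21 penalty with a row-wise soft-thresholding proximal step encodes. Everything below (library E, step size, penalty weight) is a synthetic sketch, not the thesis's algorithm.

    import numpy as np

    def collaborative_unmixing(Y, E, lam=0.1, steps=500):
        """Minimize 0.5 * ||Y - E A||_F^2 + lam * sum_rows ||A_row||_2
        by proximal gradient; the l21 penalty zeroes whole rows of A,
        i.e. removes library members for all pixels at once."""
        L = np.linalg.norm(E, 2) ** 2         # Lipschitz constant of the gradient
        A = np.zeros((E.shape[1], Y.shape[1]))
        for _ in range(steps):
            B = A - E.T @ (E @ A - Y) / L     # gradient step
            norms = np.linalg.norm(B, axis=1, keepdims=True)
            shrink = np.maximum(1 - (lam / L) / np.maximum(norms, 1e-12), 0)
            A = shrink * B                    # row-wise soft thresholding
        return A                              # nonzero rows = selected endmembers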
APA, Harvard, Vancouver, ISO, and other styles
38

Kuo, David, and 郭立維. "Reproducing Kernel Hilbert Spaces and Feichtinger’s Conjecture." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/76783081720520823585.

Full text
Abstract:
Master's thesis
Feng Chia University
Institute of Applied Mathematics
93
We prove a theorem on a class of unitary operators between a Hilbert space and an associated reproducing kernel Hilbert space. Using this theorem, we then prove that every bounded frame is a finite union of Riesz sequences, as conjectured by Feichtinger. This in turn implies that for any WH-frame the modulation constant must be irrational.
APA, Harvard, Vancouver, ISO, and other styles
39

Evgeniou, Theodoros, and Massimiliano Pontil. "On the V(subscript gamma) Dimension for Regression in Reproducing Kernel Hilbert Spaces." 1999. http://hdl.handle.net/1721.1/7262.

Full text
Abstract:
This paper presents a computation of the $V_\gamma$ dimension for regression in bounded subspaces of Reproducing Kernel Hilbert Spaces (RKHS) for the Support Vector Machine (SVM) regression $\epsilon$-insensitive loss function, and for general $L_p$ loss functions. Finiteness of the $V_\gamma$ dimension is shown, which also proves uniform convergence in probability for regression machines in RKHS subspaces that use the $L_\epsilon$ or general $L_p$ loss functions. This paper presents a novel proof of this result also for the case that a bias is added to the functions in the RKHS.
APA, Harvard, Vancouver, ISO, and other styles
40

Girosi, Federico. "An Equivalence Between Sparse Approximation and Support Vector Machines." 1997. http://hdl.handle.net/1721.1/7289.

Full text
Abstract:
In the first part of this paper we show a similarity between the principle of Structural Risk Minimization Principle (SRM) (Vapnik, 1982) and the idea of Sparse Approximation, as defined in (Chen, Donoho and Saunders, 1995) and Olshausen and Field (1996). Then we focus on two specific (approximate) implementations of SRM and Sparse Approximation, which have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and they require the solution of the same quadratic programming problem.
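In notation chosen here (not the paper's), the two problems being related can be displayed side by side; the claimed equivalence is that, under the paper's conditions, both are solved by the same kernel expansion f(x) = sum_i a_i K(x, x_i):

    \min_{f \in \mathcal{H}_K} \; C \sum_{i=1}^{\ell} \left| y_i - f(x_i) \right|_{\varepsilon} \;+\; \tfrac{1}{2}\,\|f\|_{K}^{2}
    \quad \text{(SVM regression with the } \varepsilon\text{-insensitive loss)}

    \min_{a \in \mathbb{R}^{\ell}} \; \tfrac{1}{2}\,\Bigl\| f_0 - \sum_{i=1}^{\ell} a_i\, K(\cdot, x_i) \Bigr\|_{K}^{2} \;+\; \varepsilon\, \|a\|_{1}
    \quad \text{(sparse approximation of a target } f_0 \text{)}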
APA, Harvard, Vancouver, ISO, and other styles
41

Bhattacharjee, Monojit. "Analytic Models, Dilations, Wandering Subspaces and Inner Functions." Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4241.

Full text
Abstract:
This thesis concerns dilation theory, analytic models, joint invariant subspaces, reproducing kernel Hilbert spaces and multipliers associated to commuting tuples of bounded linear operators on Hilbert spaces. The main contribution of this thesis is twofold: dilation and analytic model theory for n-tuples of (1) commuting contractions (in the setting of the unit polydisc), and (2) commuting row contractions (in the setting of the unit ball). On n-tuples of commuting contractions: We study analytic models of operators with some positivity assumptions and quotient modules of function Hilbert spaces over the polydisc. We prove that for an m-hypercontraction T in the class C·0 on a Hilbert space H, there exist Hilbert spaces E and E∗, and a partially isometric multiplier θ in M(H²_E(D), A²_m(E∗)), such that H ≅ Q_θ = A²_m(E∗) ⊖ θH²_E(D) and T ≅ P_{Q_θ} M_z |_{Q_θ}, where A²_m(E∗) is the E∗-valued weighted Bergman space and H²_E(D) is the E-valued Hardy space over the unit disc D. We then proceed to study and develop analytic models for doubly commuting n-tuples of operators and investigate their applications to joint shift co-invariant subspaces of reproducing kernel Hilbert spaces over the polydisc. In particular, we completely analyze doubly commuting quotient modules of a large class of reproducing kernel Hilbert modules, in the sense of Arazy and Englis, over the unit polydisc Dⁿ. On commuting row contractions: We study wandering subspaces for commuting tuples of bounded operators on Hilbert spaces. We prove that for a large class of analytic functional Hilbert spaces H_K on the unit ball in Cⁿ, wandering subspaces for restrictions of the multiplication tuple M_z = (M_{z1}, ..., M_{zn}) can be described in terms of suitable H_K-inner functions. We prove that H_K-inner functions are contractive multipliers and deduce a result on the multiplier norm of quasi-homogeneous polynomials as an application. Along the way we prove a refinement of a result of Arveson on the uniqueness of the minimal dilations of pure row contractions.
APA, Harvard, Vancouver, ISO, and other styles
42

Zhang, Xinhua. "Graphical Models: Modeling, Optimization, and Hilbert Space Embedding." Phd thesis, 2010. http://hdl.handle.net/1885/49340.

Full text
Abstract:
Over the past two decades graphical models have been widely used as a powerful tool for compactly representing distributions. On the other hand, kernel methods have also been used extensively to come up with rich representations. This thesis aims to combine graphical models with kernels to produce compact models with rich representational abilities. The following four areas are our focus. 1. Conditional random fields for multi-agent reinforcement learning. Conditional random fields (CRFs) are graphical models for modeling the probability of labels given the observations. They have traditionally assumed that, conditioned on the training data, the label sequences of different training examples are independent and identically distributed (iid). We extended the use of CRFs to a class of temporal learning algorithms, namely policy gradient reinforcement learning (RL). Now the labels are no longer iid. They are actions that update the environment and affect the next observation. From an RL point of view, CRFs provide a natural way to model joint actions in a decentralized Markov decision process. Using tree sampling for inference, our experiment shows the RL methods employing CRFs clearly outperform those which do not model the proper joint policy. 2. Bayesian online multi-label classification. Gaussian density filtering provides fast and effective inference for graphical models (Maybeck, 1982). Based on it, we propose a Bayesian online multi-label classification (BOMC) framework which learns a probabilistic model of the linear classifier. The training labels are incorporated to update the posterior of the classifiers via a graphical model similar to TrueSkill (Herbrich et al, 2007). Using samples from the posterior, we label the test data by maximizing the expected F1-score. In our experiments, BOMC delivers significantly higher macro-averaged F1-score than the state-of-the-art online maximum margin learners. 3. Hilbert space embedment of distributions. Graphical models are also an essential tool in kernel measures of independence for non-iid data. Traditional information theory often requires density estimation, which makes it unideal for statistical estimation. Motivated by the fact that distributions often appear in machine learning via expectations, we can characterize the distance between distributions in terms of distances between means, especially means in reproducing kernel Hilbert spaces which are called kernel embeddings. Under this framework, the undirected graphical models further allow us to factorize the kernel embeddings onto cliques, which yields efficient measures of independence for non-iid data (Zhang et al, 2009). 4. Optimization in maximum margin models for structured data. Maximum margin estimation for structured data is an important task where graphical models also play a key role. They are special cases of regularized risk minimization, for which bundle methods (BMRM, Teo et al, 2007) are a state-of-the-art general purpose solver. Smola et al (2007) proved that BMRM requires O(1/epsilon) iterations to converge to an epsilon accurate solution, and we further show that this rate hits the lower bound. Motivated by (Nesterov 2003, 2005), we utilized the composite structure of the objective function and devised an algorithm for the structured loss which converges to an epsilon accurate solution in O(1/sqrt{epsilon}) iterations.
APA, Harvard, Vancouver, ISO, and other styles
43

Hota, Tapan Kumar. "Subnormality and Moment Sequences." Thesis, 2012. http://etd.iisc.ac.in/handle/2005/3242.

Full text
Abstract:
In this report we survey some recent developments on the relationship between Hausdorff moment sequences and the subnormality of a unilateral weighted shift operator. Although the discrete convolution of two Hausdorff moment sequences may not be a Hausdorff moment sequence, the Hausdorff convolution of two moment sequences is always a moment sequence. Observing from the Berg and Durán result that the multiplication operator is subnormal, we discuss further work on the subnormality of the multiplication operator on a reproducing kernel Hilbert space whose kernel is a point-wise product of two diagonal positive kernels. The relationship between infinitely divisible matrices and moment sequences is discussed and some open problems are listed.
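A Hausdorff moment sequence can be recognized numerically: (s_k) are the moments of a positive measure on [0,1] exactly when the Hankel matrices built from (s_k) and from the differenced sequence (s_k - s_{k+1}) are positive semidefinite. A quick check for s_k = 1/(k+1), the moments of Lebesgue measure; this is standard material, not code from the report.

    import numpy as np

    def hankel(seq, n):
        return np.array([[seq[i + j] for j in range(n)] for i in range(n)])

    def is_hausdorff_moment(seq, n):
        """Hausdorff's criterion for moments of a measure on [0,1]:
        the Hankel matrices of (s_k) and of (s_k - s_{k+1}) are PSD."""
        s = np.asarray(seq, dtype=float)
        psd = lambda M: np.linalg.eigvalsh(M).min() >= -1e-10
        return psd(hankel(s, n)) and psd(hankel(s[:-1] - s[1:], n))

    s = [1.0 / (k + 1) for k in range(12)]   # moments of Lebesgue measure on [0,1]
    print(is_hausdorff_moment(s, 5))         # True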
APA, Harvard, Vancouver, ISO, and other styles
44

Hota, Tapan Kumar. "Subnormality and Moment Sequences." Thesis, 2012. http://hdl.handle.net/2005/3242.

Full text
Abstract:
In this report we survey some recent developments on the relationship between Hausdorff moment sequences and the subnormality of a unilateral weighted shift operator. Although the discrete convolution of two Hausdorff moment sequences may not be a Hausdorff moment sequence, the Hausdorff convolution of two moment sequences is always a moment sequence. Observing from the Berg and Durán result that the multiplication operator is subnormal, we discuss further work on the subnormality of the multiplication operator on a reproducing kernel Hilbert space whose kernel is a point-wise product of two diagonal positive kernels. The relationship between infinitely divisible matrices and moment sequences is discussed and some open problems are listed.
APA, Harvard, Vancouver, ISO, and other styles
44

Rieger, Christian. "Sampling Inequalities and Applications." Doctoral thesis, 2008. http://hdl.handle.net/11858/00-1735-0000-0006-B3B9-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

De Vito, Ernesto, and Andrea Caponnetto. "Risk Bounds for Regularized Least-squares Algorithm with Operator-valued Kernels." 2005. http://hdl.handle.net/1721.1/30543.

Full text
Abstract:
We show that recent results in [3] on risk bounds for regularized least-squares on reproducing kernel Hilbert spaces can be straightforwardly extended to the vector-valued regression setting. We first briefly introduce central concepts of operator-valued kernels. Then we show how the risk bounds can be expressed in terms of a generalization of the effective dimension.
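For reference, the effective dimension invoked here is standardly defined through the integral operator T associated with the kernel (notation assumed on our part; in the vector-valued setting, T is the integral operator of the operator-valued kernel):

\[ \mathcal{N}(\lambda) = \operatorname{Tr}\big[ (T + \lambda I)^{-1} T \big], \qquad \lambda > 0. \]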
APA, Harvard, Vancouver, ISO, and other styles
46

Scheuerer, Michael. "A Comparison of Models and Methods for Spatial Interpolation in Statistics and Numerical Analysis." Doctoral thesis, 2009. http://hdl.handle.net/11858/00-1735-0000-0006-B3D5-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Oesting, Marco. "Spatial Interpolation and Prediction of Gaussian and Max-Stable Processes." Doctoral thesis, 2012. http://hdl.handle.net/11858/00-1735-0000-000D-F069-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Caponnetto, Andrea, and Ernesto De Vito. "Fast Rates for Regularized Least-squares Algorithm." 2005. http://hdl.handle.net/1721.1/30539.

Full text
Abstract:
We develop a theoretical analysis of the generalization performance of regularized least-squares on reproducing kernel Hilbert spaces for supervised learning. We show that the concept of effective dimension of an integral operator plays a central role in the definition of a criterion for choosing the regularization parameter as a function of the number of samples. In fact, a minimax analysis is performed which shows the asymptotic optimality of this criterion.
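A minimal numerical sketch of the quantity driving such a criterion: the empirical effective dimension computed from the eigenvalues of a normalized kernel Gram matrix. The Gaussian kernel, data, and lambda grid below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def effective_dimension(eigvals, lam):
    """Empirical effective dimension N(lambda) = sum_i s_i / (s_i + lambda),
    with s_i the eigenvalues of the normalized Gram matrix (a plug-in proxy
    for Tr[(T + lambda I)^{-1} T])."""
    return float(np.sum(eigvals / (eigvals + lam)))

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(300, 1))
K = np.exp(-(X - X.T) ** 2 / 0.5) / len(X)   # normalized Gaussian Gram matrix
s = np.linalg.eigvalsh(K)
s = s[s > 1e-12]                             # discard numerical zeros
for lam in (1e-1, 1e-2, 1e-3):
    print(lam, effective_dimension(s, lam))  # N(lambda) grows as lambda shrinks
```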
APA, Harvard, Vancouver, ISO, and other styles
49

Lessig, Christian. "Modern Foundations of Light Transport Simulation." Thesis, 2012. http://hdl.handle.net/1807/32808.

Full text
Abstract:
Light transport simulation aims at the numerical computation of the propagation of visible electromagnetic energy in macroscopic environments. In this thesis, we develop the foundations for a modern theory of light transport simulation, unveiling the geometric structure of the continuous theory and providing a formulation of computational techniques that achieves remarkable efficacy with only local information.

Utilizing recent results from various communities, we develop the physical and mathematical structure of light transport from Maxwell's equations by studying a lifted representation of electromagnetic theory on the cotangent bundle. At the short wavelength limit, this yields a Hamiltonian description on six-dimensional phase space, with the classical formulation over the space of "positions and directions" resulting from a reduction to the five-dimensional cosphere bundle. We establish the connection between light transport and geometrical optics by a non-canonical Legendre transform, and we derive classical concepts from radiometry, such as radiance and irradiance, by considering measurements of the light energy density. We also show that in idealized environments light transport is a Lie-Poisson system for the group of symplectic diffeomorphisms, unveiling a tantalizing similarity between light transport and fluid dynamics. Using Stone's theorem, we also derive a functional analytic description of light transport. This bridges the gap to existing formulations in the literature and naturally leads to computational questions.

We then address one of the central challenges for light transport simulation in everyday environments with scattering surfaces: how are efficient computations possible when the light energy density can only be evaluated pointwise? Using biorthogonal and possibly overcomplete bases formed by reproducing kernel functions, we develop a comprehensive theory for computational techniques that are restricted to pointwise information, subsuming for example sampling theorems, interpolation formulas, quadrature rules, density estimation schemes, and Monte Carlo integration. The use of overcomplete representations thereby makes the techniques robust to imperfect information, which is often unavoidable in practical applications, and numerical optimization of the sampling locations leads to close-to-optimal techniques, with performance that considerably improves over the state of the art in the literature.
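To illustrate the "pointwise information only" theme of the last paragraph, here is a minimal sketch of a reproducing-kernel quadrature rule, whose weights are chosen so that the rule is exact on the span of the kernel sections at the nodes. The Brownian-motion kernel min(x, y) on [0, 1] and the node set are illustrative assumptions on our part, not the biorthogonal/overcomplete construction developed in the thesis:

```python
import numpy as np

# Kernel quadrature on [0, 1] with k(x, y) = min(x, y):
# solve K w = z with z_i = int_0^1 k(x_i, t) dt = x_i - x_i^2 / 2,
# so the rule integrates every function in span{k(x_i, .)} exactly.
x = np.linspace(0.05, 0.95, 10)          # sampling locations (nodes)
K = np.minimum.outer(x, x)               # Gram matrix, positive definite here
z = x - x**2 / 2                         # exact integrals of the kernel sections
w = np.linalg.solve(K, z)                # quadrature weights

f = lambda t: np.sin(np.pi * t)
print(w @ f(x))                          # kernel-quadrature estimate
print(2.0 / np.pi)                       # exact value of int_0^1 sin(pi t) dt
```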
APA, Harvard, Vancouver, ISO, and other styles