A selection of scholarly literature on the topic "Kernel Inference"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Kernel Inference".

Next to each work in the list you will find an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of a publication in .pdf format and read its abstract online, where these are available in the metadata.

Journal articles on the topic "Kernel Inference":

1

Nishiyama, Yu, Motonobu Kanagawa, Arthur Gretton, and Kenji Fukumizu. "Model-based kernel sum rule: kernel Bayesian inference with probabilistic models." Machine Learning 109, no. 5 (January 2, 2020): 939–72. http://dx.doi.org/10.1007/s10994-019-05852-9.

Abstract:
Kernel Bayesian inference is a principled approach to nonparametric inference in probabilistic graphical models, where probabilistic relationships between variables are learned from data in a nonparametric manner. Various algorithms of kernel Bayesian inference have been developed by combining kernelized basic probabilistic operations such as the kernel sum rule and kernel Bayes’ rule. However, the current framework is fully nonparametric, and it does not allow a user to flexibly combine nonparametric and model-based inferences. This is inefficient when there are good probabilistic models (or simulation models) available for some parts of a graphical model; this is in particular true in scientific fields where “models” are the central topic of study. Our contribution in this paper is to introduce a novel approach, termed the model-based kernel sum rule (Mb-KSR), to combine a probabilistic model and kernel Bayesian inference. By combining the Mb-KSR with the existing kernelized probabilistic rules, one can develop various algorithms for hybrid (i.e., nonparametric and model-based) inferences. As an illustrative example, we consider Bayesian filtering in a state space model, where typically there exists an accurate probabilistic model for the state transition process. We propose a novel filtering method that combines model-based inference for the state transition process and data-driven, nonparametric inference for the observation generating process. We empirically validate our approach with synthetic and real-data experiments, the latter being the problem of vision-based mobile robot localization in robotics, which illustrates the effectiveness of the proposed hybrid approach.
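
For orientation, here is a minimal sketch of the fully nonparametric kernel sum rule that the Mb-KSR generalizes: a prior embedding, represented by weights over sample points, is pushed through a conditional embedding operator learned from joint samples. The Gaussian kernel, the Tikhonov regularizer, and all names below are illustrative, not the authors' implementation.

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    """Gaussian kernel Gram matrix: k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_sum_rule(X, U, alpha, gamma=1.0, lam=1e-3):
    """Push the prior embedding m_pi = sum_i alpha_i k(., U_i) through the
    conditional embedding estimated from joint samples (X_j, Y_j).
    Returns weights beta with mu_Y ~= sum_j beta_j k(., Y_j), so that
    E[g(Y)] ~= beta @ g(Y_samples) for any function g of interest."""
    n = len(X)
    G_X = rbf_gram(X, X, gamma)    # Gram matrix on the conditioning variable
    K_XU = rbf_gram(X, U, gamma)   # cross-kernel to the prior's sample points
    return np.linalg.solve(G_X + n * lam * np.eye(n), K_XU @ alpha)
```

The model-based variant described in the paper replaces this sample-based conditional step with expectations computed under a known probabilistic model for p(Y | X).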
2

Rogers, Mark F., Colin Campbell, and Yiming Ying. "Probabilistic Inference of Biological Networks via Data Integration." BioMed Research International 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/707453.

Abstract:
There is significant interest in inferring the structure of subcellular networks of interaction. Here we consider supervised interactive network inference in which a reference set of known network links and nonlinks is used to train a classifier for predicting new links. Many types of data are relevant to inferring functional links between genes, motivating the use of data integration. We use pairwise kernels to predict novel links, along with multiple kernel learning to integrate distinct sources of data into a decision function. We evaluate various pairwise kernels to establish which are most informative and compare individual kernel accuracies with accuracies for weighted combinations. By associating a probability measure with classifier predictions, we enable cautious classification, which can increase accuracy by restricting predictions to high-confidence instances, and data cleaning, which can mitigate the influence of mislabeled training instances. Although one pairwise kernel (the tensor product pairwise kernel) appears to work best, different kernels may contribute complementary information about interactions: experiments in S. cerevisiae (yeast) reveal that a weighted combination of pairwise kernels applied to different types of data yields the highest predictive accuracy. Combined with cautious classification and data cleaning, we can achieve predictive accuracies of up to 99.6%.
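
A minimal sketch of the tensor product pairwise kernel (TPPK) the study found most informative, assuming a Gaussian base kernel on per-gene feature vectors; the symmetrization makes the comparison independent of the order within each gene pair.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian base kernel on per-gene feature vectors."""
    return float(np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2)))

def tppk(pair1, pair2, base=rbf):
    """Tensor product pairwise kernel: compares gene pairs (a, b) and (c, d)
    so that unordered pairs match regardless of orientation."""
    (a, b), (c, d) = pair1, pair2
    return base(a, c) * base(b, d) + base(a, d) * base(b, c)
```

A Gram matrix of TPPK values over the reference links and nonlinks can be passed to any kernel classifier; multiple kernel learning then fits a weighted sum of such Gram matrices, one per data source.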
3

Lugo-Martinez, Jose, and Predrag Radivojac. "Generalized graphlet kernels for probabilistic inference in sparse graphs." Network Science 2, no. 2 (August 2014): 254–76. http://dx.doi.org/10.1017/nws.2014.14.

Abstract:
Graph kernels for learning and inference on sparse graphs have been widely studied. However, the problem of designing robust kernel functions that can effectively compare graph neighborhoods in the presence of noisy and complex data remains less explored. Here we propose a novel graph-based kernel method referred to as an edit distance graphlet kernel. The method was designed to add flexibility in capturing similarities between local graph neighborhoods as a means of probabilistically annotating vertices in sparse and labeled graphs. We report experiments on nine real-life data sets from molecular biology and social sciences and provide evidence that the new kernels perform favorably compared to established approaches. However, when both performance accuracy and run time are considered, we suggest that edit distance kernels are best suited for inference on graphs derived from protein structures. Finally, we demonstrate that the new approach facilitates simple and principled ways of integrating domain knowledge into classification and point out that our methodology extends beyond classification, e.g., to applications such as kernel-based clustering of graphs or approximate motif finding. Availability: www.sourceforge.net/projects/graphletkernels/
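
As a point of reference, a toy version of the count-based graphlet comparison that the edit distance graphlet kernel refines (the paper's variant additionally credits near-matching graphlets); enumerating all vertex triples is cubic and is meant only for small sparse graphs.

```python
import numpy as np
from itertools import combinations

def graphlet3_counts(A):
    """Count 3-node graphlet types (empty, one edge, path, triangle) over all
    vertex triples of an undirected graph with 0/1 adjacency matrix A."""
    counts = np.zeros(4)
    for i, j, k in combinations(range(A.shape[0]), 3):
        counts[int(A[i, j] + A[i, k] + A[j, k])] += 1  # edge count = type
    return counts

def graphlet_kernel(A1, A2):
    """Cosine similarity of graphlet-count vectors of two graphs."""
    c1, c2 = graphlet3_counts(A1), graphlet3_counts(A2)
    return float(c1 @ c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))
```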
4

Lazarus, Eben, Daniel J. Lewis, and James H. Stock. "The Size‐Power Tradeoff in HAR Inference." Econometrica 89, no. 5 (2021): 2497–516. http://dx.doi.org/10.3982/ecta15404.

Abstract:
Heteroskedasticity‐ and autocorrelation‐robust (HAR) inference in time series regression typically involves kernel estimation of the long‐run variance. Conventional wisdom holds that, for a given kernel, the choice of truncation parameter trades off a test's null rejection rate and power, and that this tradeoff differs across kernels. We formalize this intuition: using higher‐order expansions, we provide a unified size‐power frontier for both kernel and weighted orthonormal series tests using nonstandard “fixed‐b” critical values. We also provide a frontier for the subset of these tests for which the fixed‐b distribution is t or F. These frontiers are respectively achieved by the QS kernel and equal‐weighted periodogram. The frontiers have simple closed‐form expressions, which show that the price paid for restricting attention to tests with t and F critical values is small. The frontiers are derived for the Gaussian multivariate location model, but simulations suggest the qualitative findings extend to stochastic regressors.
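
To fix ideas, a sketch of the kernel long-run variance estimator underlying HAR tests, using the quadratic spectral (QS) kernel that attains the paper's frontier; the fixed-b parameter below is the bandwidth-to-sample-size ratio, and the scalar setting is for illustration only.

```python
import numpy as np

def qs_weight(x):
    """Quadratic spectral kernel weight at lag ratio x."""
    if x == 0.0:
        return 1.0
    z = 6.0 * np.pi * x / 5.0
    return 3.0 / z**2 * (np.sin(z) / z - np.cos(z))

def lrv_qs(u, b):
    """Kernel estimate of the long-run variance of a scalar series u,
    with fixed-b bandwidth S = b * T."""
    u = np.asarray(u, float) - np.mean(u)
    T = len(u)
    S = b * T
    omega = np.sum(u * u) / T                     # lag-0 autocovariance
    for j in range(1, T):
        gamma_j = np.sum(u[j:] * u[:-j]) / T      # lag-j autocovariance
        omega += 2.0 * qs_weight(j / S) * gamma_j
    return omega
```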
5

Billio, M. "Kernel-Based Indirect Inference." Journal of Financial Econometrics 1, no. 3 (September 1, 2003): 297–326. http://dx.doi.org/10.1093/jjfinec/nbg014.

6

Zhang, Li Lyna, Shihao Han, Jianyu Wei, Ningxin Zheng, Ting Cao, and Yunxin Liu. "nn-Meter." GetMobile: Mobile Computing and Communications 25, no. 4 (March 30, 2022): 19–23. http://dx.doi.org/10.1145/3529706.3529712.

Abstract:
Inference latency has become a crucial metric for running Deep Neural Network (DNN) models on various mobile and edge devices. To this end, latency prediction of DNN inference is highly desirable for the many tasks where measuring latency on real devices is infeasible or too costly. Yet prediction is very challenging, and existing approaches fail to achieve high accuracy because runtime optimizations on diverse edge devices cause model-inference latency to vary. In this paper, we propose and develop nn-Meter, a novel and efficient system that accurately predicts DNN inference latency on diverse edge devices. The key idea of nn-Meter is to divide a whole model inference into kernels, i.e., the execution units on a device, and to conduct kernel-level prediction. nn-Meter builds atop two key techniques: (i) kernel detection, to automatically detect the execution units of model inference via a set of well-designed test cases; and (ii) adaptive sampling, to efficiently sample the most beneficial configurations from a large space and build accurate kernel-level latency predictors. nn-Meter achieves high prediction accuracy on four types of edge devices.
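
The kernel-level decomposition can be pictured as below. This is a hypothetical sketch, not nn-Meter's actual API: `predictors`, `op_type`, and `config` are illustrative names, assuming one regressor per detected execution unit with a scikit-learn-style `.predict`.

```python
def predict_model_latency(kernels, predictors):
    """Sum per-kernel latency predictions over a model's detected execution
    units. `kernels` is a list of (op_type, config) pairs, where config is a
    feature vector describing the kernel's configuration."""
    total = 0.0
    for op_type, config in kernels:
        total += float(predictors[op_type].predict([config])[0])
    return total
```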
7

Robinson, P. M. "INFERENCE ON NONPARAMETRICALLY TRENDING TIME SERIES WITH FRACTIONAL ERRORS." Econometric Theory 25, no. 6 (December 2009): 1716–33. http://dx.doi.org/10.1017/s0266466609990302.

Abstract:
The central limit theorem for nonparametric kernel estimates of a smooth trend, with linearly generated errors, indicates asymptotic independence and homoskedasticity across fixed points, irrespective of whether disturbances have short memory, long memory, or antipersistence. However, the asymptotic variance depends on the kernel function in a way that varies across these three circumstances, and in the latter two it involves a double integral that cannot necessarily be evaluated in closed form. For a particular class of kernels, we obtain analytic formulas. We discuss extensions to more general settings, including ones involving possible cross-sectional or spatial dependence.
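
For concreteness, a minimal kernel trend estimate of the kind the paper analyzes, with a Gaussian kernel on rescaled time; the choice of kernel and bandwidth is illustrative, and the paper's results concern how the estimator's asymptotic variance depends on these choices under short memory, long memory, or antipersistence.

```python
import numpy as np

def kernel_trend(y, b):
    """Kernel estimate of a smooth trend g(t/T) from observations y_1..y_T,
    with bandwidth b on the rescaled time axis [0, 1]."""
    T = len(y)
    t = np.arange(T) / T
    ghat = np.empty(T)
    for i in range(T):
        w = np.exp(-0.5 * ((t - t[i]) / b) ** 2)  # Gaussian kernel weights
        ghat[i] = np.sum(w * y) / np.sum(w)
    return ghat
```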
8

Yuan, Ao. "Semiparametric inference with kernel likelihood." Journal of Nonparametric Statistics 21, no. 2 (February 2009): 207–28. http://dx.doi.org/10.1080/10485250802553382.

9

Cheng, Yansong, and Surajit Ray. "Multivariate Modality Inference Using Gaussian Kernel." Open Journal of Statistics 4, no. 5 (2014): 419–34. http://dx.doi.org/10.4236/ojs.2014.45041.

10

Agbokou, Komi, and Yaogan Mensah. "INFERENCE ON THE REPRODUCING KERNEL HILBERT SPACES." Universal Journal of Mathematics and Mathematical Sciences 15 (October 10, 2021): 11–29. http://dx.doi.org/10.17654/2277141722002.


Dissertations on the topic "Kernel Inference":

1

Fouchet, Arnaud. "Kernel methods for gene regulatory network inference." Thesis, Evry-Val d'Essonne, 2014. http://www.theses.fr/2014EVRY0058/document.

Abstract:
New technologies in molecular biology, in particular DNA microarrays, have greatly increased the quantity of available data. In this context, methods from mathematics and computer science have been actively developed to extract information from large datasets. In particular, the problem of gene regulatory network inference has been tackled using many different mathematical and statistical models, from the most basic (correlation, Boolean or linear models) to the most elaborate (regression trees, Bayesian models with latent variables). Despite their qualities when applied to similar problems, kernel methods have scarcely been used for gene network inference, because of their lack of interpretability. In this thesis, two approaches are developed to obtain interpretable kernel methods. First, from a theoretical point of view, we show that kernel methods can consistently estimate a transition function and its partial derivatives from a learning dataset. On realistic examples, these estimates of partial derivatives identify the gene regulatory network better than standard methods. Second, an interpretable kernel method based on multiple kernel learning is developed. This method, called LocKNI, provides state-of-the-art results on real and realistically simulated networks.
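
A minimal sketch of the first idea, assuming kernel ridge regression with a Gaussian kernel: fit each target gene's expression as a function of the candidate regulators, then read large partial derivatives as evidence of regulation. Names and hyperparameters are illustrative, not the thesis's implementation.

```python
import numpy as np

def krr_fit(X, y, gamma=1.0, lam=1e-2):
    """Kernel ridge regression with a Gaussian kernel; returns dual weights."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_gradient(X, alpha, x, gamma=1.0):
    """Gradient of the fitted function f(x) = sum_i alpha_i k(x, x_i) at x;
    a large |df/dx_g| suggests regulator g influences the target gene."""
    k = np.exp(-gamma * ((X - x) ** 2).sum(-1))   # k(x, x_i) for all i
    return -2.0 * gamma * ((x - X) * (alpha * k)[:, None]).sum(axis=0)
```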
2

Chan, Karen Pui-Shan. "Kernel density estimation, Bayesian inference and random effects model." Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/13350.

Abstract:
This thesis contains results of a study in kernel density estimation, Bayesian inference and random effects models, with application to forensic problems. Estimation of the Bayes' factor in a forensic science problem involved the derivation of predictive distributions in non-standard situations. The distribution of the values of a characteristic of interest among different items in forensic science problems is often non-Normal. Background, or training, data were available to assist in the estimation of the distribution for measurements on cat and dog hairs. An informative prior, based on the kernel method of density estimation, was used to derive the appropriate predictive distributions. The training data may be considered to be derived from a random effects model. This was taken into consideration in modelling the Bayes' factor. The usual assumption of the random factor being Normally distributed is unrealistic, so a kernel density estimate was used as the distribution of the unknown random factor. Two kernel methods were employed: the ordinary and adaptive kernel methods. The adaptive kernel method allowed for the longer tail, where little information was available. Formulae for the Bayes' factor in a forensic science context were derived assuming the training data were grouped or not grouped (for example, hairs from one cat would be thought of as belonging to the same group), and that the within-group variance was or was not known. The Bayes' factor, assuming known within-group variance, for the training data, grouped or not grouped, was extended to the multivariate case. The method was applied to a practical example in a bivariate situation. Similar modelling of the Bayes' factor was derived to cope with a particular form of mixture data. Boundary effects were also taken into consideration. Application of kernel density estimation to make inferences about the variance components under the random effects model was studied. Employing the maximum likelihood estimation method, it was shown that the between-group variance and the smoothing parameter in the kernel density estimation were related: they were not separately identifiable. With the smoothing parameter fixed at some predetermined value, the within- and between-group variance estimates from the proposed model were equivalent to the usual ANOVA estimates. Within the Bayesian framework, posterior distributions for the variance components were derived using various prior distributions for the parameters, incorporating kernel density functions. The modes of these posterior distributions were used as estimates for the variance components. A Student-t distribution within a Bayesian framework was derived after the introduction of a prior for the smoothing parameter. Two methods of obtaining hyper-parameters for the prior were suggested, both involving empirical Bayes methods: a modified leave-one-out maximum likelihood method, and a method of moments based on the optimum smoothing parameter determined under a Normality assumption.
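
A minimal sketch of the adaptive (variable-bandwidth) kernel density estimate used for the longer tails, following the standard Abramson construction: a fixed-bandwidth pilot estimate sets local factors that widen the kernel where data are sparse. The one-dimensional Gaussian setting is for illustration.

```python
import numpy as np

def adaptive_kde(data, h, alpha=0.5):
    """Return an adaptive Gaussian kernel density estimate f(x)."""
    data = np.asarray(data, float)
    c = 1.0 / np.sqrt(2.0 * np.pi)
    pilot = np.array([np.mean(np.exp(-0.5 * ((x - data) / h) ** 2)) * c / h
                      for x in data])                # fixed-bandwidth pilot
    g = np.exp(np.mean(np.log(pilot)))               # geometric mean
    lam = (pilot / g) ** (-alpha)                    # local bandwidth factors
    def f(x):
        z = (x - data) / (h * lam)
        return float(np.mean(np.exp(-0.5 * z ** 2) * c / (h * lam)))
    return f
```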
3

Araya, Valdivia Ernesto. "Kernel spectral learning and inference in random geometric graphs." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM020.

Abstract:
This thesis has two main objectives. The first is to investigate the concentration properties of random kernel matrices, which are central to the study of kernel methods. The second is to study statistical inference problems on random geometric graphs. The two objectives are connected by the graphon formalism, which allows a graph to be represented by a kernel function. We briefly recall the basics of the graphon model in the first chapter. In chapter two, we present a set of sharp concentration inequalities for the individual eigenvalues of a kernel matrix; our main contribution is to obtain inequalities that scale with the eigenvalue in consideration, implying convergence rates that are faster than parametric and often exponential, which hitherto had only been established under assumptions too restrictive for graph applications. We specialize our results to the case of dot-product kernels, highlighting their relation to the random geometric graph model. In chapter three, we study the problem of latent distance estimation for random geometric graphs on the Euclidean sphere. We propose an efficient spectral algorithm that uses the adjacency matrix to construct an estimator of the latent distances, and we prove finite-sample guarantees for the estimation error, establishing its convergence rate. In chapter four, we extend the method developed in the previous chapter to random geometric graphs on the Euclidean ball, a model that, despite its formal similarities with the spherical case, is more flexible for modelling purposes. In particular, we prove that for certain parameter choices its degree profile follows a power-law distribution, which has been observed in many real-life networks. All the theoretical findings of the last two chapters are verified and complemented by numerical experiments.
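
A rough sketch of the spectral idea in chapter three: treat the truncated eigendecomposition of the adjacency matrix as a surrogate Gram matrix of the latent positions and read off pairwise distances. The thesis's estimator adds the scaling corrections and finite-sample guarantees this toy version omits.

```python
import numpy as np

def latent_distances(A, d):
    """Estimate pairwise latent distances from the adjacency matrix A of a
    random geometric graph with latent dimension d (top d+1 eigenpairs)."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(vals))[: d + 1]         # dominant eigenpairs
    G = vecs[:, idx] @ np.diag(vals[idx]) @ vecs[:, idx].T
    g = np.diag(G)
    D2 = np.clip(g[:, None] + g[None, :] - 2.0 * G, 0.0, None)
    return np.sqrt(D2)
```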
4

Jitkrittum, Wittawat. "Kernel-based distribution features for statistical tests and Bayesian inference." Thesis, University College London (University of London), 2017. http://discovery.ucl.ac.uk/10037987/.

Abstract:
The kernel mean embedding is known to provide a data representation which preserves the full information of the data distribution. While typically computationally costly, its nonparametric nature has the advantage of requiring no explicit model specification of the data. At the other extreme are approaches which summarize data distributions into a finite-dimensional vector of hand-picked summary statistics. This explicit finite-dimensional representation offers a computationally cheaper alternative. Clearly, there is a trade-off between cost and sufficiency of the representation, and it is of interest to have a computationally efficient technique which can produce a data-driven representation, thus combining the advantages of both extremes. The main focus of this thesis is on the development of linear-time mean-embedding-based methods to automatically extract informative features of data distributions, for statistical tests and Bayesian inference. In the first part, on statistical tests, several new linear-time techniques are developed. These include a new kernel-based distance measure for distributions, a new linear-time nonparametric dependence measure, and a linear-time discrepancy measure between a probabilistic model and a sample, based on a Stein operator. These new measures give rise to linear-time and consistent tests of homogeneity, independence, and goodness of fit, respectively. The key idea behind these new tests is to explicitly learn distribution-characterizing feature vectors, by maximizing a proxy for the probability of correctly rejecting the null hypothesis. We theoretically show that these new tests are consistent for any finite number of features. In the second part, we explore the use of random Fourier features to construct approximate kernel mean embeddings for representing messages in the expectation propagation (EP) algorithm. The goal is to learn a message operator which predicts EP outgoing messages from incoming messages. We derive a novel two-layer random feature representation of the input messages, allowing online learning of the operator during EP inference.
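
For flavor, a sketch of a linear-time two-sample statistic built from mean embeddings evaluated at a few test locations, in the spirit of the thesis's learned-feature tests. Here the locations T are taken as given (the thesis optimizes them), equal sample sizes are assumed, and the Gaussian kernel is illustrative.

```python
import numpy as np

def me_statistic(X, Y, T, gamma=1.0):
    """Hotelling-style statistic on witness values mu_X(t_j) - mu_Y(t_j) at J
    locations T; approximately chi-squared with J dof under the null.
    Assumes X and Y have the same number of samples (paired differences)."""
    def feats(Z):
        d2 = ((Z[:, None, :] - T[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)                   # n x J feature matrix
    diff = feats(X) - feats(Y)
    mean = diff.mean(axis=0)
    cov = np.cov(diff.T) + 1e-8 * np.eye(len(T))     # regularized covariance
    return len(X) * mean @ np.linalg.solve(cov, mean)
```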
5

Hsu, Yuan-Shuo Kelvin. "Bayesian Perspectives on Conditional Kernel Mean Embeddings: Hyperparameter Learning and Probabilistic Inference." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24309.

Abstract:
This thesis presents the narrative of a particular journey towards discovering and developing Bayesian perspectives on conditional kernel mean embeddings. It is motivated by the desire and need to learn flexible and richer representations of conditional distributions for probabilistic inference in various contexts. While conditional kernel mean embeddings are able to achieve such representations, it is unclear how their hyperparameters can be learned for probabilistic inference in various settings. These hyperparameters govern the space of possible representations, and critically influence the degree of inference accuracy. At its core, this thesis argues that Bayesian perspectives lead to principled ways of formulating frameworks that provide a holistic treatment of model, learning, and inference. The story begins by emulating required properties of Bayesian frameworks via learning-theoretic bounds. This is carried out through the lens of a probabilistic multiclass setting, resulting in the multiclass conditional embedding framework. By establishing convergence to multiclass probabilities and deriving learning-theoretic and Rademacher complexity bounds, the framework arrives at an expected risk bound whose realizations exhibit desirable properties for hyperparameter learning, such as the ever-crucial balance between data-fit error and model complexity, emulating marginal likelihoods. The probabilistic nature of this bound enables batch learning for scalability, and the generality of the model allows various model architectures to be used and learned end-to-end. The narrative unfolds into forming approximate Bayesian inference frameworks directly for the likelihood-free Bayesian inference problem, leading to the kernel embedding likelihood-free inference framework. The core motivation centers on the natural suitability of conditional kernel mean embeddings for forming surrogate probabilistic models. By leveraging the structure of the likelihood-free Bayesian inference problem, surrogate models for both hyperparameter learning and posterior inference are developed. Finally, the journey concludes with a Bayesian regression framework that aligns the learning and inference with both the problem and the model. This begins with a careful formulation of the conditional mean and the novel deconditional mean problem, thereby revealing the novel deconditional mean embeddings as core elements of the wider kernel mean embedding framework. They can further be established as a nonparametric Bayes' rule with applications towards Bayesian inference. Crucially, by introducing the task transformed regression problem, they can be extended to the novel task transformed Gaussian processes as their Bayesian form, whose marginal likelihood can be used to learn hyperparameters in various forms and contexts. The perspectives and frameworks developed in this thesis shed light on creative ways conditional kernel mean embeddings can be learned and applied in existing problem domains, and further inspire elegant solutions in novel problem settings.
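
As a concrete anchor, a minimal estimator of the conditional kernel mean embedding around which the thesis revolves; `gamma` and `lam` below are exactly the kind of hyperparameters whose learning the thesis addresses, and their values here are illustrative.

```python
import numpy as np

def cme_weights(X, x_star, gamma=1.0, lam=1e-3):
    """Weights w representing the embedding of Y given X = x_star, i.e.
    mu_{Y|x*} ~= sum_i w_i k(., y_i); then E[g(Y) | x*] ~= w @ g(Y_samples)
    for any function g, given joint samples (x_i, y_i)."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                              # Gram matrix on X
    k_star = np.exp(-gamma * ((X - x_star) ** 2).sum(-1))
    return np.linalg.solve(K + n * lam * np.eye(n), k_star)
```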
6

Adams, R. P. "Kernel methods for nonparametric Bayesian inference of probability densities and point processes." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.595350.

Abstract:
I propose two new kernel-based models that enable an exact generative procedure: the Gaussian process density sampler (GPDS) for probability density functions, and the sigmoidal Gaussian Cox process (SGCP) for the Poisson process. With generative priors, I show how it is now possible to construct two different kinds of Markov chains for inference in these models. These Markov chains have the desired posterior distribution as their equilibrium distributions, and, despite a parameter space with uncountably many dimensions, require only a finite amount of computation to simulate. The GPDS and SGCP, and the associated inference procedures, are the first kernel-based nonparametric Bayesian methods that allow inference without a finite-dimensional approximation. I also present several additional kernel-based models for data that extend the Gaussian process density sampler and sigmoidal Gaussian Cox process to other situations. The Archipelago model extends the GPDS to address the task of semi-supervised learning, where a flexible density estimate can improve the performance of a classifier when unlabelled data are available. I also generalise the SGCP to enable a nonparametric inhomogeneous Neyman-Scott process, and present a soft-core generalisation of the Matérn repulsive process that similarly allows non-approximate inference via Markov chain Monte Carlo.
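
The SGCP's exact generative step can be illustrated by Poisson thinning: sample a homogeneous process at the intensity bound, then keep each point with the sigmoid-squashed value of the latent function. Here a fixed function g stands in for a Gaussian-process draw, so this is a sketch of the construction rather than the full model.

```python
import numpy as np

def sample_sgcp_1d(lam_max, T, g, seed=0):
    """Draw from an inhomogeneous Poisson process on [0, T] with intensity
    lam_max * sigmoid(g(s)), by thinning a rate-lam_max homogeneous process."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam_max * T)
    s = rng.uniform(0.0, T, size=n)                   # candidate points
    keep = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-g(s)))
    return np.sort(s[keep])
```

For example, `sample_sgcp_1d(5.0, 10.0, np.sin)` generates points whose density tracks sigmoid(sin s).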
7

Gogolashvili, Davit. "Global and local Kernel methods for dataset shift, scalable inference and optimization." Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS363v2.pdf.

Abstract:
In many real-world problems, the training data and the test data have different distributions; this situation is commonly referred to as dataset shift. The most common dataset-shift settings considered in the literature are covariate shift and target shift. In this thesis, we investigate nonparametric models applied to the dataset-shift scenario. We develop a novel framework to accelerate Gaussian process regression (GPR). In particular, we consider localization kernels at each data point to down-weigh the contributions from other data points that are far away, and we derive the GPR model stemming from the application of this localization operation. Through a series of experiments, we demonstrate the competitive performance of the proposed approach compared to full GPR, other localized models, and deep Gaussian processes. Crucially, these performances are obtained with considerable speedups over standard global GPR, owing to the sparsification of the Gram matrix induced by the localization operation. We also propose a new method for estimating the minimizer and the minimum value of a smooth, strongly convex regression function from observations contaminated by random noise.
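
A minimal sketch of the sparsification effect, assuming the localization is implemented as a compactly supported taper on a Gaussian kernel (a Wendland-style stand-in for the thesis's localization kernels): pairs farther apart than the localization radius get exactly zero covariance, so the Gram matrix that GPR must factorize becomes sparse.

```python
import numpy as np

def localized_gram(X, gamma=1.0, radius=1.0):
    """Gaussian kernel Gram matrix multiplied by a compactly supported
    localization factor; entries vanish beyond `radius`."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                          # dense Gaussian kernel
    taper = np.clip(1.0 - np.sqrt(d2) / radius, 0.0, None) ** 2
    return K * taper                                 # sparse beyond radius
```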
8

Maity, Arnab. "Efficient inference in general semiparametric regression models." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3075.

9

Minnier, Jessica. "Inference and Prediction for High Dimensional Data via Penalized Regression and Kernel Machine Methods." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10327.

Abstract:
Analysis of high dimensional data often seeks to identify a subset of important features and assess their effects on the outcome. Furthermore, the ultimate goal is often to build a prediction model with these features that accurately assesses risk for future subjects. Such statistical challenges arise in the study of genetic associations with health outcomes. However, accurate inference and prediction with genetic information remains challenging, in part due to the complexity in the genetic architecture of human health and disease. A valuable approach for improving prediction models with a large number of potential predictors is to build a parsimonious model that includes only important variables. Regularized regression methods are useful, though they often pose challenges for inference due to nonstandard limiting distributions or finite sample distributions that are difficult to approximate. In Chapter 1 we propose and theoretically justify a perturbation-resampling method to derive confidence regions and covariance estimates for marker effects estimated from regularized procedures with a general class of objective functions and concave penalties. Our methods outperform their asymptotic-based counterparts, even when effects are estimated as zero. In Chapters 2 and 3 we focus on genetic risk prediction. The difficulty of accurate risk assessment with genetic studies can in part be attributed to several potential obstacles: sparsity in marker effects, a large number of weak signals, and non-linear effects. Single marker analyses often lack power to select informative markers and typically do not account for non-linearity. One approach to gain predictive power and efficiency is to group markers based on biological knowledge such as genetic pathways or gene structure. In Chapter 2 we propose and theoretically justify a multi-stage method for risk assessment that imposes a naive Bayes kernel machine (KM) model to estimate gene-set specific risk models, and then aggregates information across all gene-sets by adaptively estimating gene-set weights via a regularization procedure. In Chapter 3 we extend these methods to meta-analyses by introducing sampling-based weights in the KM model. This permits building risk prediction models with multiple studies that have heterogeneous sampling schemes.
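
A minimal sketch of the first stage of such a multi-stage model, assuming kernel ridge regression as the per-gene-set kernel machine; the second stage (not shown) would regress the outcome on these scores with a sparsity penalty to learn adaptive gene-set weights. All names are illustrative, not the thesis's implementation.

```python
import numpy as np

def gene_set_scores(X, y, gene_sets, gamma=1.0, lam=1e-2):
    """Fit one kernel-machine risk score per gene set and return the matrix
    of in-sample scores (one column per gene set). `gene_sets` is a list of
    column-index arrays, one per gene set."""
    n = len(X)
    scores = []
    for idx in gene_sets:
        Z = X[:, idx]                                 # markers in this set
        d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        K = np.exp(-gamma * d2)
        alpha = np.linalg.solve(K + lam * np.eye(n), y)
        scores.append(K @ alpha)
    return np.column_stack(scores)
```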
10

Weller, Jennifer N. "Bayesian Inference In Forecasting Volcanic Hazards: An Example From Armenia." [Tampa, Fla.]: University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000485.


Books on the topic "Kernel Inference":

1

Fauzi, Rizky Reza, and Yoshihiko Maesono. Statistical Inference Based on Kernel Distribution Function Estimators. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1862-1.

2

Silva, Catarina. Inductive inference for large scale text classification: Kernel approaches and techniques. Berlin: Springer, 2010.

3

Wand, M. P. Kernel smoothing. London: Chapman & Hall, 1995.

4

Causal Inference from Statistical Data. Berlin, Germany: Logos-Verlag Berlin, 2008.

5

Silva, Catarina, and Bernadete Ribeiro. Inductive Inference for Large Scale Text Classification: Kernel Approaches and Techniques. Springer, 2012.

6

Jones, M. C., and M. P. Wand. Kernel Smoothing. Taylor & Francis Group, 1994.

7

Jones, M. C., and M. P. Wand. Kernel Smoothing. Taylor & Francis Group, 1994.


Book chapters on the topic "Kernel Inference":

1

Vovk, Vladimir. "Kernel Ridge Regression." In Empirical Inference, 105–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41136-6_11.

2

Fauzi, Rizky Reza, and Yoshihiko Maesono. "Kernel Quantile Estimation." In Statistical Inference Based on Kernel Distribution Function Estimators, 29–44. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1862-1_3.

3

Fauzi, Rizky Reza, and Yoshihiko Maesono. "Kernel Density Function Estimator." In Statistical Inference Based on Kernel Distribution Function Estimators, 1–16. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1862-1_1.

4

Fauzi, Rizky Reza, and Yoshihiko Maesono. "Kernel Distribution Function Estimator." In Statistical Inference Based on Kernel Distribution Function Estimators, 17–28. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1862-1_2.

5

Fauzi, Rizky Reza, and Yoshihiko Maesono. "Kernel-Based Nonparametric Tests." In Statistical Inference Based on Kernel Distribution Function Estimators, 67–96. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1862-1_5.

6

Silva, Catarina, and Bernardete Ribeiro. "Kernel Machines for Text Classification." In Inductive Inference for Large Scale Text Classification, 31–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-04533-2_2.

7

Vert, Jean-Philippe. "Classification of Biological Sequences with Kernel Methods." In Grammatical Inference: Algorithms and Applications, 7–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11872436_2.

8

Christmann, Andreas, and Robert Hable. "On the Consistency of the Bootstrap Approach for Support Vector Machines and Related Kernel-Based Methods." In Empirical Inference, 231–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41136-6_20.

9

Fukumizu, Kenji. "Nonparametric Bayesian Inference with Kernel Mean Embedding." In Modern Methodology and Applications in Spatial-Temporal Modeling, 1–24. Tokyo: Springer Japan, 2015. http://dx.doi.org/10.1007/978-4-431-55339-7_1.

10

Fauzi, Rizky Reza, and Yoshihiko Maesono. "Mean Residual Life Estimator." In Statistical Inference Based on Kernel Distribution Function Estimators, 45–65. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1862-1_4.


Conference papers on the topic "Kernel Inference":

1

Krajsek, Kai, and Hanno Scharr. "Bayesian inference in kernel feature space." In BAYESIAN INFERENCE AND MAXIMUM ENTROPY METHODS IN SCIENCE AND ENGINEERING: 31st International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP, 2012. http://dx.doi.org/10.1063/1.3703633.

2

Sigal, L., R. Memisevic, and D. J. Fleet. "Shared Kernel Information Embedding for discriminative inference." In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2009. http://dx.doi.org/10.1109/cvprw.2009.5206576.

3

Castro, Ivan, Cristobal Silva, and Felipe Tobar. "Initialising kernel adaptive filters via probabilistic inference." In 2017 22nd International Conference on Digital Signal Processing (DSP). IEEE, 2017. http://dx.doi.org/10.1109/icdsp.2017.8096055.

4

Sigal, Leonid, Roland Memisevic, and David J. Fleet. "Shared Kernel Information Embedding for discriminative inference." In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). IEEE, 2009. http://dx.doi.org/10.1109/cvpr.2009.5206576.

5

Doherty, Kevin, Jinkun Wang, and Brendan Englot. "Bayesian generalized kernel inference for occupancy map prediction." In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017. http://dx.doi.org/10.1109/icra.2017.7989356.

6

Moschitti, Alessandro, Daniele Pighin, and Roberto Basili. "Semantic role labeling via tree kernel joint inference." In the Tenth Conference. Morristown, NJ, USA: Association for Computational Linguistics, 2006. http://dx.doi.org/10.3115/1596276.1596289.

7

Seki, Hirosato, Fuhito Mizuguchi, Satoshi Watanabe, Hiroaki Ishii, and Masaharu Mizumoto. "SIRMs connected fuzzy inference method using kernel method." In 2008 IEEE International Conference on Systems, Man and Cybernetics (SMC). IEEE, 2008. http://dx.doi.org/10.1109/icsmc.2008.4811546.

8

Kekatos, Vassilis, Yu Zhang, and Georgios B. Giannakis. "Low-rank kernel learning for electricity market inference." In 2013 Asilomar Conference on Signals, Systems and Computers. IEEE, 2013. http://dx.doi.org/10.1109/acssc.2013.6810605.

9

Perez-Suay, Adrian, and Gustau Camps-Valls. "Causal inference in geosciences with kernel sensitivity maps." In 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). IEEE, 2017. http://dx.doi.org/10.1109/igarss.2017.8127064.

10

Guevara, Jorge, Jerry M. Mendel, and R. Hirata. "Connections Between Fuzzy Inference Systems and Kernel Machines." In 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, 2020. http://dx.doi.org/10.1109/fuzz48607.2020.9177604.

