Theses on the topic « Kernel Inference »
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles.
Consult the 26 best theses for your research on the topic « Kernel Inference ».
Next to each source in the list of references there is an « Add to bibliography » button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a pdf and read its abstract online, whenever this information is included in the metadata.
Browse theses on a wide variety of disciplines and organise your bibliography correctly.
Fouchet, Arnaud. « Kernel methods for gene regulatory network inference ». Thesis, Evry-Val d'Essonne, 2014. http://www.theses.fr/2014EVRY0058/document.
Full text
New technologies in molecular biology, in particular DNA microarrays, have greatly increased the quantity of available data. In this context, methods from mathematics and computer science have been actively developed to extract information from large datasets. In particular, the problem of gene regulatory network inference has been tackled using many different mathematical and statistical models, from the most basic ones (correlation, Boolean or linear models) to the most elaborate (regression trees, Bayesian models with latent variables). Despite their qualities when applied to similar problems, kernel methods have scarcely been used for gene network inference because of their lack of interpretability. In this thesis, two approaches are developed to obtain interpretable kernel methods. First, from a theoretical point of view, some kernel methods are shown to consistently estimate a transition function and its partial derivatives from a learning dataset. These estimated partial derivatives allow the gene regulatory network to be inferred better than with previous methods on realistic gene regulatory networks. Second, an interpretable kernel method based on multiple kernel learning is presented. This method, called LocKNI, provides state-of-the-art results on real and realistically simulated datasets.
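The derivative-based idea in this abstract, recovering regulatory edges from the partial derivatives of a kernel-estimated transition function, can be sketched on a toy system. The dynamics, kernel width, and ridge constant below are invented for the illustration; this is not the thesis's actual estimator or the LocKNI method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: gene 0 regulates gene 1; gene 2 is independent noise.
T, n_genes = 200, 3
X = np.zeros((T, n_genes))
X[0] = rng.normal(size=n_genes)
for t in range(T - 1):
    X[t + 1, 0] = 0.9 * X[t, 0] + 0.1 * rng.normal()
    X[t + 1, 1] = np.tanh(X[t, 0]) + 0.1 * rng.normal()
    X[t + 1, 2] = 0.1 * rng.normal()

X_in, X_out = X[:-1], X[1:]

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge estimate of the transition function x_{t+1} = f(x_t).
K = rbf(X_in, X_in)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(K)), X_out)

def f_hat(x):
    return rbf(x[None, :], X_in) @ alpha  # predicted next state, shape (1, n_genes)

# Finite-difference partial derivatives of f_hat, averaged over sampled states.
eps = 1e-4
scores = np.zeros((n_genes, n_genes))  # scores[j, i]: estimated influence of gene j on gene i
for x in X_in[::10]:
    for j in range(n_genes):
        e = np.zeros(n_genes); e[j] = eps
        scores[j] += np.abs((f_hat(x + e) - f_hat(x - e)) / (2 * eps)).ravel()

np.fill_diagonal(scores, 0.0)  # ignore self-regulation
src, dst = np.unravel_index(scores.argmax(), scores.shape)
print(f"strongest inferred edge: gene {src} -> gene {dst}")
```

In this toy run the largest off-diagonal score should point from gene 0 to gene 1, the simulated regulation.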
Chan, Karen Pui-Shan. « Kernel density estimation, Bayesian inference and random effects model ». Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/13350.
Full text
Araya Valdivia, Ernesto. « Kernel spectral learning and inference in random geometric graphs ». Thesis, Université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM020.
Full text
This thesis has two main objectives. The first is to investigate the concentration properties of random kernel matrices, which are central to the study of kernel methods. The second is to study statistical inference problems on random geometric graphs. The two objectives are connected by the graphon formalism, which makes it possible to represent a graph by a kernel function. We briefly recall the basics of the graphon model in the first chapter. In chapter two, we present a set of accurate concentration inequalities for the individual eigenvalues of the kernel matrix; our main contribution is to obtain inequalities that scale with the eigenvalue in question, implying convergence rates that are faster than parametric and often exponential, which had hitherto been established only under assumptions too restrictive for graph applications. We specialize our results to the case of dot-product kernels, highlighting their relation to the random geometric graph model. In chapter three, we study the problem of latent distance estimation for random geometric graphs on the Euclidean sphere. We propose an efficient spectral algorithm that uses the adjacency matrix to construct an estimator of the latent distances, and we prove finite-sample guarantees for the estimation error, establishing its convergence rate. In chapter four, we extend the method developed in the previous chapter to random geometric graphs on the Euclidean ball, a model that, despite its formal similarities with the spherical case, is more flexible for modelling purposes. In particular, we prove that for certain parameter choices its degree profile follows a power law, as observed in many real-life networks. All the theoretical findings of the last two chapters are verified and complemented by numerical experiments.
Jitkrittum, Wittawat. « Kernel-based distribution features for statistical tests and Bayesian inference ». Thesis, University College London (University of London), 2017. http://discovery.ucl.ac.uk/10037987/.
Full text
Hsu, Yuan-Shuo Kelvin. « Bayesian Perspectives on Conditional Kernel Mean Embeddings: Hyperparameter Learning and Probabilistic Inference ». Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24309.
Full text
Adams, R. P. « Kernel methods for nonparametric Bayesian inference of probability densities and point processes ». Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.595350.
Full text
Gogolashvili, Davit. « Global and local kernel methods for dataset shift, scalable inference and optimization ». Electronic Thesis or Diss., Sorbonne Université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS363v2.pdf.
Full text
In many real-world problems, the training data and the test data have different distributions. The most common dataset-shift settings considered in the literature are covariate shift and target shift. In this thesis, we investigate nonparametric models applied to the dataset-shift scenario. We develop a novel framework to accelerate Gaussian process regression (GPR). In particular, we consider localization kernels at each data point to down-weight the contributions of data points that are far away, and we derive the GPR model stemming from this localization operation. We also propose a new method for estimating the minimizer and the minimum value of a smooth, strongly convex regression function from observations contaminated by random noise.
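A crude way to see the localization idea: restrict each GPR prediction to training points near the query, so far-away points contribute nothing and each prediction solves a small local system. The hard cut-off below is a simplification of the smooth localization kernels the thesis derives; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(-3, 3, 120))
y = np.sin(2 * X) + 0.1 * rng.normal(size=X.size)

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def localized_gp_predict(x_star, radius=1.0, noise=1e-2):
    # Localization: keep only training points within `radius` of the query,
    # so each prediction solves a small local GP system instead of an n x n one.
    mask = np.abs(X - x_star) < radius
    Xl, yl = X[mask], y[mask]
    K = rbf(Xl, Xl) + noise * np.eye(Xl.size)
    k = rbf(np.array([x_star]), Xl)
    return (k @ np.linalg.solve(K, yl))[0]

grid = np.linspace(-2.5, 2.5, 50)
preds = np.array([localized_gp_predict(g) for g in grid])
mse = np.mean((preds - np.sin(2 * grid)) ** 2)
print(round(mse, 4))
```

Each local solve costs O(m^3) for the m points in the window rather than O(n^3) for the full dataset, which is the source of the acceleration.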
Maity, Arnab. « Efficient inference in general semiparametric regression models ». [College Station, Tex.] : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3075.
Full text
Minnier, Jessica. « Inference and Prediction for High Dimensional Data via Penalized Regression and Kernel Machine Methods ». Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10327.
Full text
Weller, Jennifer N. « Bayesian Inference In Forecasting Volcanic Hazards : An Example From Armenia ». [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000485.
Full text
El Ghouch, Anouar. « Nonparametric statistical inference for dependent censored data ». Université catholique de Louvain, 2007. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-09262007-123927/.
Full text
Razavian, Narges Sharif. « Continuous Graphical Models for Static and Dynamic Distributions : Application to Structural Biology ». Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/340.
Full text
Boussaid, Haithem. « Efficient inference and learning in graphical models for multi-organ shape segmentation ». Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2015. http://www.theses.fr/2015ECAP0002/document.
Full text
This thesis explores the use of discriminatively trained deformable contour models (DCMs) for shape-based segmentation in medical images. We make contributions on two fronts: the learning problem, where the model is trained from a set of annotated images, and the inference problem, whose aim is to segment an image given a model. We demonstrate the merit of our techniques on a large X-ray image segmentation benchmark, where we obtain systematic improvements in accuracy and speedups over the current state of the art. For learning, we formulate training of the DCM scoring function as large-margin structured prediction and construct a training objective that aims to give the highest score to the ground-truth contour configuration. We incorporate a loss function adapted to DCM-based structured prediction; in particular, we consider training with the Mean Contour Distance (MCD) performance measure. Using this loss function during training amounts to scoring each candidate contour according to its Mean Contour Distance to the ground-truth configuration. Training DCMs using structured prediction with the standard zero-one loss already outperforms the current state-of-the-art method [Seghers et al. 2007] on the considered medical benchmark [Shiraishi et al. 2000, van Ginneken et al. 2006]. We demonstrate that training with the MCD structured loss further improves on the generic zero-one loss results by a statistically significant amount. For inference, we propose efficient solvers adapted to combinatorial problems with discretized spatial variables. Our contributions are threefold: first, we consider inference for loopy graphical models, making no assumption about the underlying graph topology. We use an efficient decomposition-coordination algorithm to solve the resulting optimization problem: we decompose the model's graph into a set of open, chain-structured graphs, and employ the Alternating Direction Method of Multipliers (ADMM) to fix the potential inconsistencies of the individual solutions. Even though ADMM is an approximate inference scheme, we show empirically that our implementation delivers the exact solution for the considered examples. Second, we accelerate the optimization of chain-structured graphical models by using the Hierarchical A∗ search algorithm of [Felzenszwalb & McAllester 2007] coupled with the pruning techniques developed in [Kokkinos 2011a]. We achieve a speedup of one order of magnitude on average over the state-of-the-art technique based on Dynamic Programming (DP) coupled with Generalized Distance Transforms (GDTs) [Felzenszwalb & Huttenlocher 2004]. Third, we incorporate the Hierarchical A∗ algorithm into the ADMM scheme to guarantee efficient optimization of the underlying chain-structured subproblems. The resulting algorithm is naturally adapted to solve the loss-augmented inference problem in structured prediction learning, and is hence used during both training and inference. In Appendix A, we consider the case of 3D data and develop an efficient method to find the mode of a 3D kernel density distribution. Our algorithm has guaranteed convergence to the global optimum and scales logarithmically in the volume size by recursively subdividing the search space. We use this method to rapidly initialize 3D brain tumor segmentation, where we demonstrate substantial acceleration with respect to a standard mean-shift implementation. In Appendix B, we describe in more detail our extension of the Hierarchical A∗ search algorithm of [Felzenszwalb & McAllester 2007] to inference on chain-structured graphs.
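The decomposition-coordination scheme (split a loopy graph into chain-structured pieces, then reconcile their solutions with ADMM) can be mimicked on a toy quadratic model over a cycle graph. This consensus-ADMM sketch is only an analogue for intuition, not the thesis's DCM solver; the objective and all constants are invented.

```python
import numpy as np

n, mu, rho = 8, 1.0, 1.0
a = np.cos(np.arange(n, dtype=float))          # per-node observations

def chain_laplacian(edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

# Split the cycle's edges into two chain-structured (matching) subsets.
cycle = [(i, (i + 1) % n) for i in range(n)]
L1 = chain_laplacian(cycle[0::2])
L2 = chain_laplacian(cycle[1::2])

# Global objective: 0.5||x - a||^2 + 0.5*mu*x'Lx, with L = L1 + L2 (loopy graph).
x_star = np.linalg.solve(np.eye(n) + mu * (L1 + L2), a)

# Consensus ADMM: each copy handles half the data term and one chain subset.
z = np.zeros(n)
u = [np.zeros(n), np.zeros(n)]
x = [np.zeros(n), np.zeros(n)]
for _ in range(300):
    for k, Lk in enumerate((L1, L2)):
        A_k = 0.5 * np.eye(n) + mu * Lk + rho * np.eye(n)
        x[k] = np.linalg.solve(A_k, 0.5 * a + rho * (z - u[k]))
    z = 0.5 * ((x[0] + u[0]) + (x[1] + u[1]))   # consensus (coordination) step
    for k in range(2):
        u[k] = u[k] + x[k] - z                   # dual updates fix inconsistencies

print(round(float(np.max(np.abs(z - x_star))), 6))
```

Because each subproblem is quadratic here, its solve is exact and the consensus iterate converges to the global minimizer of the loopy objective.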
Bon, Joshua J. « Advances in sequential Monte Carlo methods ». Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/235897/1/Joshua%2BBon%2BThesis%284%29.pdf.
Full text
Priddle, Jacob William. « Efficient and flexible Bayesian synthetic likelihood via transformations ». Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/205902/1/Jacob_Priddle_Thesis.pdf.
Full text
Jeunesse, Paulien. « Estimation non paramétrique du taux de mort dans un modèle de population générale : Théorie et applications. A new inference strategy for general population mortality tables Nonparametric adaptive inference of birth and death models in a large population limit Nonparametric inference of age-structured models in a large population limit with interactions, immigration and characteristics Nonparametric test of time dependance of age-structured models in a large population limit ». Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED013.
Full text
In this thesis, we study the mortality rate in different population models, with applications to demography and biology. The mathematical framework involves statistics of processes, nonparametric estimation and analysis. In a first part, an algorithm is proposed to estimate mortality tables. This problem comes from actuarial science, and the aim is to apply our results in the insurance field. The algorithm is based on a deterministic population model. The new estimates we obtain improve on current results; their advantage is to take the global population dynamics into account, so that births are used in our model to compute the mortality rate. Finally, these estimates are linked with previous work, a point of great importance in actuarial science. In a second part, we are interested in estimating the mortality rate in a stochastic population model, using tools from nonparametric estimation and statistics of processes, since the mortality rate is a function of two parameters, time and age. We propose minimax-optimal and adaptive estimators for the mortality rate and the population density. We also prove non-asymptotic concentration inequalities that quantify the deviation between the stochastic process and the deterministic limit used in the first part. We show that our estimators remain optimal in a model where mortality is influenced by interactions, as is the case, for example, for logistic populations. In a third part, we consider the problem of testing for the existence of interactions. The test is in fact designed to detect time dependence of the mortality rate; under the assumption that any time dependence in the mortality rate comes only from interactions, it detects the presence of interactions. Finally, we propose an algorithm to perform this test.
Verbyla, Petras. « Network inference using independence criteria ». Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277912.
Texte intégralMassaroppe, Lucas. « Estimação da causalidade de Granger no caso de interação não-linear ». Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-20122016-083110/.
Full text
This work examines the problem of detecting connectivity between time series in the Granger sense when the nonlinear nature of the interactions makes their determination impossible via linear vector autoregressive models, but feasible with the aid of the so-called kernel methods that are popular in machine learning. The kernelization approach allows generalized versions of Granger tests, partial directed coherence and the directed transfer function to be defined; simulation of some examples shows that the asymptotic detection results originally deduced for linear estimators can also be employed under kernelization if suitably adapted.
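A kernelized Granger-style comparison can be sketched by fitting kernel ridge regressions with and without the candidate driver's past, then comparing residual variances. This toy test (simulated series, invented constants) is not the generalized PDC/DTF estimators the thesis develops.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 400
x = np.zeros(T); y = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = 0.6 * x[t] + 0.3 * rng.normal()
    y[t + 1] = 0.4 * y[t] + np.cos(x[t]) * x[t] + 0.3 * rng.normal()  # nonlinear drive x -> y

def krr_residual_var(Z, target, lam=1e-1, gamma=1.0):
    # Kernel ridge regression with an RBF kernel; in-sample residual variance.
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), target)
    return np.var(target - K @ alpha)

past_y = y[:-1, None]
past_xy = np.stack([x[:-1], y[:-1]], axis=1)
target = y[1:]

v_restricted = krr_residual_var(past_y, target)   # predict y from its own past only
v_full = krr_residual_var(past_xy, target)        # predict y from the past of both series
print(round(v_restricted, 4), round(v_full, 4))
```

A clear drop in residual variance when x's past is added is the kernel analogue of "x Granger-causes y"; a linear VAR would miss the cos(x)·x drive.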
Akcin, Haci Mustafa. « NONPARAMETRIC INFERENCES FOR THE HAZARD FUNCTION WITH RIGHT TRUNCATION ». Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/math_diss/12.
Full text
Semolini, Robinson. « Support vector machines, inferencia transdutiva e o problema de classificação ». [s.n.], 2002. http://repositorio.unicamp.br/jspui/handle/REPOSIP/262026.
Texte intégralDissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Dumora, Christophe. « Estimation de paramètres clés liés à la gestion d'un réseau de distribution d'eau potable : Méthode d'inférence sur les noeuds d'un graphe ». Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0325.
Full text
The rise of data generated by sensors and operational tools for water distribution network (WDN) management makes these systems more and more complex and, in general, events more difficult to predict. The history of data related to the quality of distributed water, crossed with knowledge of network assets, contextual data and temporal parameters, leads to the study of a complex system, owing both to its volume and to the interactions between these various types of data, which may vary in time and space. This great variety of data is brought together using mathematical graphs, which can represent a WDN as a whole together with all the events that may arise in it or influence its proper functioning. Graph theory then enables a structural and spectral analysis of WDNs to answer specific needs and enhance existing processes. These graphs are used to address the problem of inference on the nodes of a large graph from data observed on a small number of nodes. An optimization-based approach constructs a flow variable at every node of the graph (hence at any point of the physical network) using flow algorithms and data measured in real time by flow meters. A kernel prediction approach based on a Ridge estimator, which raises spectral-analysis problems for a large sparse matrix, then allows a signal measured on specific nodes of a graph to be inferred at any point of a WDN.
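The node-inference step can be illustrated with a Laplacian-regularized Ridge estimator on a toy path graph, inferring a signal at every node from a few observed "sensor" nodes. The graph, signal, and constants are invented for the sketch; the thesis's actual kernels and network differ.

```python
import numpy as np

rng = np.random.default_rng(6)
# Path graph of n nodes: a crude stand-in for a water-network graph.
n = 60
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(1)) - A                      # graph Laplacian

f_true = np.sin(np.arange(n) / 8.0)            # smooth signal on the nodes
obs = rng.choice(n, size=10, replace=False)    # only 10 sensor nodes are observed
y = f_true[obs] + 0.05 * rng.normal(size=obs.size)

# Laplacian-regularized least squares: the penalty f'Lf enforces smoothness
# across edges, playing the role of the Ridge (kernel) estimator on graph nodes.
S = np.zeros((obs.size, n)); S[np.arange(obs.size), obs] = 1.0
lam = 1e-1
f_hat = np.linalg.solve(S.T @ S + lam * L, S.T @ y)

rmse = np.sqrt(np.mean((f_hat - f_true) ** 2))
print(round(float(rmse), 3))
```

The linear system fills in unobserved nodes by diffusing the sensor values along the graph, which is the essence of inferring a network-wide variable from a handful of flowmeters.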
Allain, Cédric. « Temporal point processes and scalable convolutional dictionary learning : a unified framework for M/EEG signal analysis in neuroscience ». Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG008.
Full text
In the field of non-invasive brain imaging, magnetoencephalography and electroencephalography (M/EEG) offer invaluable insights into neural activity. The recorded data consist of multivariate time series that provide information about cognitive processes and are often complemented by auxiliary details related to the experimental paradigm, such as timestamps of external stimuli or actions undertaken by the subjects. Additionally, the dataset may include recordings from multiple subjects, facilitating population-level analyses. This doctoral research presents a novel framework for M/EEG signal analysis that synergizes Convolutional Dictionary Learning (CDL) and Temporal Point Processes (TPPs). The work is segmented into two primary components: temporal modeling advancements and computational scalability. For temporal modeling, two novel point process models are introduced with efficient inference methods to capture task-specific neural activities. The proposed Fast Discretized Inference for Hawkes Processes (FaDIn) method also has implications for broader applications. Additionally, this work addresses the computational challenges of large-scale CDL-based analysis of M/EEG data by introducing a novel Stochastic Robust Windowing CDL algorithm, which makes it possible to process artifact-ridden signals as well as large population studies efficiently. Population CDL was then applied to the large open-access dataset Cam-CAN, shedding light on age-related neural activity.
Song, Song. « Confidence bands in quantile regression and generalized dynamic semiparametric factor models ». Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2010. http://dx.doi.org/10.18452/16341.
Full text
In many applications it is necessary to know the stochastic fluctuation of the maximal deviations of nonparametric quantile estimates, e.g. for checking various parametric models. Uniform confidence bands are therefore constructed for nonparametric quantile estimates of regression functions. The first method is based on strong approximations of the empirical process and extreme value theory; the strong uniform consistency rate is also established under general conditions. The second method is based on bootstrap resampling; it is proved that the bootstrap approximation provides a substantial improvement. The case of multidimensional and discrete regressor variables is dealt with using a partial linear model, and a labor market analysis is provided to illustrate the method. High-dimensional time series which reveal nonstationary and possibly periodic behavior occur frequently in many fields of science, e.g. macroeconomics, meteorology, medicine and financial engineering. A common approach is to separate the modeling of a high-dimensional time series into the time propagation of a low-dimensional time series and high-dimensional, time-invariant functions via dynamic factor analysis. We propose a two-step estimation procedure. In the first step, we detrend the time series by incorporating a time basis selected by a group Lasso-type technique and choose the space basis based on smoothed functional principal component analysis. We show properties of this estimator under the dependent scenario. In the second step, we obtain the detrended low-dimensional stochastic process (stationary).
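The bootstrap-band construction (resample, recompute the estimate, take a high quantile of the maximal deviation) can be sketched as follows. A Nadaraya-Watson mean estimate stands in for the nonparametric quantile estimate, so this is only an analogue of the thesis's method, with invented data and bandwidth.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=n)
grid = np.linspace(0.05, 0.95, 40)

def nw(xq, xs, ys, h=0.05):
    # Nadaraya-Watson kernel regression estimate on query points xq.
    w = np.exp(-0.5 * ((xq[:, None] - xs[None, :]) / h) ** 2)
    return (w * ys).sum(1) / w.sum(1)

f_hat = nw(grid, x, y)

# Bootstrap the maximal deviation to calibrate a uniform (simultaneous) band.
B = 200
dev = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    dev[b] = np.max(np.abs(nw(grid, x[idx], y[idx]) - f_hat))
half_width = np.quantile(dev, 0.95)
lower, upper = f_hat - half_width, f_hat + half_width

truth = np.sin(2 * np.pi * grid)
coverage = np.mean((truth >= lower) & (truth <= upper))
print(round(float(coverage), 2))
```

Because the band width is calibrated on the maximum over the whole grid, it holds simultaneously rather than pointwise, which is what "uniform confidence band" means here.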
Pawlowski, Filip Igor. « High-performance dense tensor and sparse matrix kernels for machine learning ». Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN081.
Full text
In this thesis, we develop high-performance algorithms for certain computations involving dense tensors and sparse matrices. We address kernel operations that are useful for machine learning tasks, such as inference with deep neural networks (DNNs). We develop data structures and techniques to reduce memory use, to improve data locality and hence to improve cache reuse of the kernel operations. We design both sequential and shared-memory parallel algorithms. In the first part of the thesis we focus on dense tensor kernels. Tensor kernels include the tensor-vector multiplication (TVM), tensor-matrix multiplication (TMM), and tensor-tensor multiplication (TTM). Among these, TVM is the most bandwidth-bound and constitutes a building block for many algorithms. We focus on this operation and develop a data structure and sequential and parallel algorithms for it. We propose a novel data structure which stores the tensor as blocks ordered along the space-filling curve known as the Morton curve (or Z-curve). The key idea consists of dividing the tensor into blocks small enough to fit in cache, and storing them in Morton order while keeping a simple, multi-dimensional order on the individual elements within them. Thus, high-performance BLAS routines can be used as microkernels for each block. We evaluate our techniques in a set of experiments. The results not only demonstrate superior performance of the proposed approach over state-of-the-art variants by up to 18%, but also show that the proposed approach induces 71% less sample standard deviation for the TVM across the d possible modes. We also show that our data structure naturally extends to other tensor kernels by demonstrating that it yields up to 38% higher performance for the higher-order power method. Finally, we investigate shared-memory parallel TVM algorithms which use the proposed data structure. Several alternative parallel algorithms are characterized theoretically and implemented using OpenMP to compare them experimentally. Our results on systems with up to 8 sockets show near-peak performance for the proposed algorithm on 2-, 3-, 4-, and 5-dimensional tensors. In the second part of the thesis, we explore sparse computations in neural networks, focusing on the high-performance sparse deep inference problem. Sparse DNN inference is the task of using sparse DNN networks to classify a batch of data elements forming, in our case, a sparse feature matrix. The performance of sparse inference hinges on efficient parallelization of the sparse matrix-sparse matrix multiplication (SpGEMM) repeated for each layer in the inference function. We first characterize efficient sequential SpGEMM algorithms for our use case. We then introduce model-parallel inference, which uses a two-dimensional partitioning of the weight matrices obtained with hypergraph-partitioning software; this variant uses barriers to synchronize at layers. Finally, we introduce tiling model-parallel and tiling hybrid algorithms, which increase cache reuse between the layers and use a weak synchronization module to hide load imbalance and synchronization costs. We evaluate our techniques on the large network data from the IEEE HPEC 2019 Graph Challenge on shared-memory systems and report up to 2x speedup versus the baseline.
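The Morton-order blocking idea can be shown in two dimensions, where a matrix is stored as cache-sized blocks ordered along the Z-curve and the TVM reduces to a blocked matrix-vector product. Block size and dimensions below are arbitrary; the thesis's scheme generalizes this to d-dimensional tensors.

```python
import numpy as np

def morton2(i, j):
    # Interleave the bits of (i, j): the 2-D Morton (Z-curve) index.
    code = 0
    for b in range(16):
        code |= ((i >> b) & 1) << (2 * b + 1) | ((j >> b) & 1) << (2 * b)
    return code

# Store a 2-D tensor (a matrix) as B x B blocks ordered along the Z-curve.
B = 4
A = np.arange(16 * 16, dtype=float).reshape(16, 16)
nb = 16 // B
blocks = sorted((morton2(bi, bj), bi, bj) for bi in range(nb) for bj in range(nb))
storage = [A[bi*B:(bi+1)*B, bj*B:(bj+1)*B].copy() for _, bi, bj in blocks]

# Tensor-vector multiply (here: matrix-vector) over the blocked storage;
# each small dense block multiply is where a BLAS microkernel would be called.
x = np.linspace(0, 1, 16)
y = np.zeros(16)
for (code, bi, bj), blk in zip(blocks, storage):
    y[bi*B:(bi+1)*B] += blk @ x[bj*B:(bj+1)*B]

assert np.allclose(y, A @ x)   # blocked result matches the direct product
```

Traversing blocks in Z-curve order keeps consecutive blocks close in both index dimensions, which is what improves cache reuse relative to plain row-major block order.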
Chang, Chia-Hung (張嘉宏). « Design of an Inference Accelerator for CNN with Sparse Row-wise Kernel ». Thesis, 2019. http://ndltd.ncl.edu.tw/handle/vgqj7n.
Texte intégralSu, Wanhua. « Efficient Kernel Methods for Statistical Detection ». Thesis, 2008. http://hdl.handle.net/10012/3598.
Full text