Academic literature on the topic "Apprentissage non-paramétrique"
Below are thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Apprentissage non-paramétrique".
Journal articles on the topic "Apprentissage non-paramétrique"
Mohammed, Mokhtar Riad, Abdelkader Maizia, Mohamed Mhamed Salaheddine Seddiki, and Lakhdar Mokhtari. "Les effets de l’intégration de la simulation sur l’apprentissage des gestes procéduraux de base et de l’examen physique en stage hospitalier dans le cursus pré-gradué des études médicales d’une faculté de médecine en Algérie". Pédagogie Médicale 21, no. 2 (2020): 83–89. http://dx.doi.org/10.1051/pmed/2020034.
Theses on the topic "Apprentissage non-paramétrique"
Knefati, Muhammad Anas. "Estimation non-paramétrique du quantile conditionnel et apprentissage semi-paramétrique : applications en assurance et actuariat". Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2280/document.
The thesis consists of two parts: one on the estimation of conditional quantiles and the other on supervised learning. The conditional quantile part is organized into three chapters. Chapter 1 introduces local linear regression and then presents the methods most commonly used in the literature to estimate the smoothing parameter. Chapter 2 addresses nonparametric estimation methods for the conditional quantile and reports numerical experiments on simulated and real data. Chapter 3 is devoted to a new conditional quantile estimator that we propose, based on kernels that are asymmetric with respect to x. We show, under some hypotheses, that this new estimator is more efficient than the estimators already in use. The supervised learning part also comprises three chapters: Chapter 4 gives an introduction to statistical learning, recalling the basic concepts used in this part. Chapter 5 discusses conventional methods of supervised classification. Chapter 6 proposes a method for transferring a semiparametric model. The performance of this method is demonstrated by numerical experiments on morphometric and credit-scoring data.
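As a rough illustration of the kind of estimator discussed above, the following minimal sketch computes a kernel-weighted conditional quantile with a standard symmetric Gaussian kernel, not the asymmetric kernels proposed in the thesis. The function name, the bandwidth h, and the toy data are illustrative assumptions, not the author's code.

```python
import numpy as np

def conditional_quantile(x0, X, Y, tau=0.5, h=0.3):
    """Kernel-weighted estimate of the tau-th conditional quantile of Y given X = x0.

    Each observation is weighted by a Gaussian kernel in x; the estimate is the
    weighted empirical quantile, i.e. the minimizer of the weighted pinball loss.
    """
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # symmetric Gaussian kernel weights
    w = w / w.sum()
    order = np.argsort(Y)                    # sort responses,
    cum_w = np.cumsum(w[order])              # accumulate their weights,
    idx = np.searchsorted(cum_w, tau)        # take the first Y reaching level tau
    return Y[order][min(idx, len(Y) - 1)]

# Toy example with heteroscedastic noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, 500)
Y = np.sin(X) + rng.normal(0, 0.2 + 0.1 * X, 500)
print(conditional_quantile(2.0, X, Y, tau=0.9))
```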
Lahbib, Dhafer. "Préparation non paramétrique des données pour la fouille de données multi-tables". PhD thesis, Université de Cergy Pontoise, 2012. http://tel.archives-ouvertes.fr/tel-00854142.
Solnon, Matthieu. "Apprentissage statistique multi-tâches". PhD thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00911498.
Texto completoScornet, Erwan. "Apprentissage et forêts aléatoires". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066533/document.
This thesis is devoted to a nonparametric estimation method called random forests, introduced by Breiman in 2001. Extensively used in a variety of areas, random forests exhibit good empirical performance and can handle massive data sets. However, the mathematical forces driving the algorithm remain largely unknown. After reviewing the theoretical literature, we focus on the link between infinite forests (analyzed in theory) and finite forests (used in practice), aiming to narrow the gap between theory and practice. In particular, we propose a way to select the number of trees such that the errors of the finite and infinite forests are similar. We also study quantile forests, a type of algorithm close in spirit to Breiman's forests. In this context, we prove the benefit of tree aggregation: while each tree of a quantile forest is not consistent, the forest is, provided a proper subsampling step is used. Next, we show the connection between forests and certain kernel estimates, which can be made explicit in some cases, and we establish upper bounds on the rate of convergence of these kernel estimates. We then prove two theorems on the consistency of both pruned and unpruned Breiman forests, stressing the importance of subsampling for the consistency of the unpruned forests. Finally, we present the results of a DREAM challenge whose goal was to predict the toxicity of several compounds for several patients based on their genetic profiles.
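The remark on choosing the number of trees so that the finite forest behaves like the infinite one can be illustrated, very loosely, by monitoring how the out-of-bag error stabilizes as trees are added. The sketch below uses scikit-learn's RandomForestRegressor on synthetic data; it is a practical analogue under assumed settings, not the selection criterion analyzed in the thesis.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=1.0, random_state=0)

# Grow forests of increasing size and watch the out-of-bag error curve:
# once it has roughly flattened, adding trees no longer changes the forest much.
for n_trees in range(10, 210, 40):
    rf = RandomForestRegressor(n_estimators=n_trees, oob_score=True,
                               bootstrap=True, random_state=0)
    rf.fit(X, y)
    print(f"{n_trees:4d} trees  OOB error {1.0 - rf.oob_score_:.4f}")  # 1 - R^2 on OOB samples
```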
Lasserre, Marvin. "Apprentissages dans les réseaux bayésiens à base de copules non-paramétriques". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS029.
Modeling multivariate continuous distributions is a task of central interest in statistics and machine learning, with many applications in science and engineering. However, high-dimensional distributions are difficult to handle and can lead to intractable computations. Copula Bayesian Networks (CBNs) take advantage of both Bayesian networks (BNs) and copula theory to compactly represent such multivariate distributions. Bayesian networks rely on conditional independences to reduce the complexity of the problem, while copula functions model the dependence relations between random variables. The goal of this thesis is to provide a common framework for both domains and to propose new learning algorithms for copula Bayesian networks. To do so, we use the fact that CBNs share the graphical language of BNs, which allows us to adapt BN learning methods to this model. Moreover, by using the empirical Bernstein copula both to design conditional independence tests and to estimate copulas from data, we avoid parametric assumptions, which gives our methods greater generality.
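To make the empirical Bernstein copula concrete, here is a minimal sketch of its standard construction: the empirical copula of the normalized ranks, smoothed by Bernstein polynomials of order K. The order K, the helper names, and the toy data are illustrative choices, not the thesis code.

```python
import numpy as np
from scipy.stats import rankdata
from scipy.special import comb

def bernstein_copula(x, y, u, v, K=10):
    """Empirical Bernstein copula of order K evaluated at (u, v) in [0, 1]^2."""
    n = len(x)
    U = rankdata(x) / (n + 1)                    # pseudo-observations (normalized ranks)
    V = rankdata(y) / (n + 1)
    total = 0.0
    for k in range(K + 1):
        for l in range(K + 1):
            c_kl = np.mean((U <= k / K) & (V <= l / K))     # empirical copula on the grid
            p_k = comb(K, k) * u**k * (1 - u) ** (K - k)    # Bernstein basis in u
            p_l = comb(K, l) * v**l * (1 - v) ** (K - l)    # Bernstein basis in v
            total += c_kl * p_k * p_l
    return total

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = 0.7 * x + rng.normal(size=300)               # positively dependent toy data
print(bernstein_copula(x, y, 0.5, 0.5))          # > 0.25 signals positive dependence
```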
Genuer, Robin. "Forêts aléatoires : aspects théoriques, sélection de variables et applications". PhD thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00550989.
Patra, Benoît. "Apprentissage à "grande échelle" : contribution à l'étude d'algorithmes de clustering répartis asynchrones". Paris 6, 2012. http://www.theses.fr/2012PA066040.
The subjects addressed in this thesis are inspired by research problems encountered by the company Lokad, which are summarized in the first chapter. Chapter 2 deals with a nonparametric method for forecasting the quantiles of a real-valued time series; in particular, we establish a consistency result for this technique under minimal assumptions. The remainder of the dissertation is devoted to the analysis of distributed asynchronous clustering algorithms (DALVQ). Chapter 3 first proposes a mathematical description of the models and then offers a theoretical analysis, in which the existence of an asymptotic consensus and the almost sure convergence towards critical points of the distortion are proved. In the next chapter, we provide a thorough discussion as well as experiments on parallelization schemes for a practical deployment of DALVQ algorithms. Finally, Chapter 5 contains an effective implementation of DALVQ on the cloud computing platform Microsoft Windows Azure. We study, among other topics, the speed-ups brought by the algorithm as more parallel computing resources are added, and we compare it with Lloyd's method, also distributed and deployed on Windows Azure.
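The local building block of such distributed clustering schemes is an online vector-quantization update. The toy sketch below shows only that sequential, single-worker update with a decreasing step size; the asynchronous averaging across workers and the Windows Azure deployment described in the thesis are not reproduced, and all names and constants are illustrative.

```python
import numpy as np

def online_vq(data, n_centroids=3, steps=5000, seed=0):
    """Online vector quantization via stochastic competitive-learning updates.

    At each step a random sample pulls its nearest centroid towards it with a
    decreasing learning rate -- the per-worker step that distributed
    asynchronous variants run in parallel before merging centroid versions.
    """
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), n_centroids, replace=False)].copy()
    for t in range(1, steps + 1):
        x = data[rng.integers(len(data))]
        nearest = np.argmin(np.linalg.norm(centroids - x, axis=1))
        centroids[nearest] += (1.0 / t) * (x - centroids[nearest])
    return centroids

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc=m, scale=0.3, size=(200, 2)) for m in (0, 2, 4)])
print(online_vq(data))
```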
Dallaire, Patrick. "Bayesian nonparametric latent variable models". Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/26848.
One of the important problems in machine learning is determining the complexity of the model to learn. Too much complexity leads to overfitting, which finds structures that do not actually exist in the data, while too little complexity leads to underfitting, meaning that the expressiveness of the model is insufficient to capture all the structures present in the data. For some probabilistic models, the complexity depends on the introduction of one or more latent variables whose role is to explain the generative process of the data. There are various approaches to identifying the appropriate number of latent variables of a model. This thesis covers various Bayesian nonparametric methods capable of determining the number of latent variables to be used and their dimensionality. The popularization of Bayesian nonparametric statistics in the machine learning community is fairly recent. Their main attraction is that they offer highly flexible models whose complexity scales appropriately with the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms, and new applications. This thesis presents our contributions to these three topics in the context of learning latent variable models. First, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm to discover the latent components of the model and evaluate it on two practical robotics applications. Our results demonstrate that the proposed approach outperforms traditional learning approaches in both performance and flexibility. Second, we propose the extended cascading Indian buffet process, a Bayesian nonparametric prior on the space of directed acyclic graphs. In the context of Bayesian networks, this prior is used to identify the presence of latent variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is presented and evaluated on structure identification as well as density estimation problems. Lastly, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process for learning graphs and orders. The advantage of the new model is that it accepts connections among observable variables and takes the order of the variables into account. We also present a reversible-jump Markov chain Monte Carlo inference algorithm which jointly learns graphs and orders. Experiments are conducted on density estimation problems and on testing independence hypotheses. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with completely arbitrary graph structures.
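As a small illustration of the nonparametric priors mentioned above, the following sketch samples cluster assignments from the two-parameter (Pitman-Yor) Chinese restaurant process that underlies the Pitman-Yor mixture of Gaussians. The concentration and discount values are arbitrary illustrative choices, and no posterior inference over mixture components is shown.

```python
import numpy as np

def pitman_yor_crp(n, alpha=1.0, d=0.3, seed=0):
    """Sample table (cluster) assignments from a Pitman-Yor Chinese restaurant process.

    Customer i joins an existing table k with probability proportional to
    (n_k - d) and opens a new table with probability proportional to
    (alpha + d * K), where n_k is the table size and K the number of tables.
    """
    rng = np.random.default_rng(seed)
    assignments, counts = [0], [1]
    for _ in range(1, n):
        K = len(counts)
        probs = np.array([c - d for c in counts] + [alpha + d * K], dtype=float)
        probs /= probs.sum()
        k = rng.choice(K + 1, p=probs)
        if k == K:
            counts.append(1)          # a new table = a new mixture component
        else:
            counts[k] += 1
        assignments.append(int(k))
    return assignments, counts

assignments, counts = pitman_yor_crp(500)
print(len(counts), "clusters, sizes:", counts)
```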
Arlot, Sylvain. "Rééchantillonnage et Sélection de modèles". PhD thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00198803.
Most of this thesis is devoted to the precise calibration of model selection methods that are optimal in practice, for the prediction problem. We study V-fold cross-validation (very widely used, but poorly understood in theory, in particular regarding the choice of V) and several penalization methods. We propose methods for the precise calibration of penalties, both for their general form and for the multiplicative constants. The use of resampling makes it possible to solve difficult problems, in particular regression with a variable noise level. We validate these methods theoretically from a non-asymptotic point of view, by proving oracle inequalities and adaptation properties. These results rely, among other things, on concentration inequalities.
A second problem we address is that of confidence regions and multiple testing when the observations are high-dimensional, with general and unknown correlations. Resampling methods make it possible to escape the curse of dimensionality and to "learn" these correlations. We propose two main methods and prove, for each of them, a non-asymptotic control of their level.
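A minimal sketch of plain V-fold cross-validation for model selection (here choosing a ridge penalty with scikit-learn on synthetic data) may help fix ideas; the penalty calibration and resampling penalties that the thesis actually develops are not reproduced, and V = 5 is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=300, n_features=20, noise=5.0, random_state=0)

# V-fold cross-validation: estimate the prediction error of each candidate
# model by averaging over V held-out folds, then keep the minimizer.
V = 5
alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
cv_errors = []
for alpha in alphas:
    fold_errors = []
    for train_idx, test_idx in KFold(n_splits=V, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=alpha).fit(X[train_idx], y[train_idx])
        fold_errors.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
    cv_errors.append(np.mean(fold_errors))

print("selected alpha:", alphas[int(np.argmin(cv_errors))])
```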
Averyanov, Yaroslav. "Concevoir et analyser de nouvelles règles d’arrêt prématuré pour économiser les ressources de calcul". Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I048.
This work develops and analyzes strategies for constructing instances of so-called early stopping rules applied to iterative learning algorithms for estimating the regression function. Such rules are data-driven and indicate when to stop the iterative learning process so as to reach a trade-off between computational cost and statistical precision. Unlike a large part of the existing literature on early stopping, where such rules depend on the data only in a "weak" manner, we provide data-driven solutions to this problem without using validation data. The crucial idea exploited here is the minimum discrepancy principle (MDP), which indicates when to stop an iterative learning algorithm. To the best of our knowledge, this idea dates back to the work of Vladimir A. Morozov in the 1960s-1970s, who studied linear ill-posed problems and their regularization, mostly inspired by problems from mathematical physics. Among the applications of this line of work, the so-called spectral filter estimators such as spectral cut-off, Landweber iterations, and Tikhonov (ridge) regularization have received considerable attention (e.g., in statistical inverse problems). The minimum discrepancy principle consists in controlling the residuals of an estimator (which are iteratively minimized) and setting a threshold for them such that some (minimax) optimality can be achieved. The first part of this thesis is dedicated to theoretical guarantees for stopping rules based on the minimum discrepancy principle applied to gradient descent and to Tikhonov (ridge) regression in the framework of reproducing kernel Hilbert spaces (RKHS). We show that this principle provides a minimax optimal functional estimator of the regression function when the rank of the kernel is finite. However, for infinite-rank reproducing kernels, the resulting estimator is only suboptimal. While looking for a solution, we turned to the so-called polynomial smoothing of residuals, a strategy that (combined with MDP) has been proved optimal for the spectral cut-off estimator in the linear Gaussian sequence model. We borrow this strategy, modify the stopping rule accordingly, and prove that the smoothed minimum discrepancy principle yields a minimax optimal functional estimator over a range of function spaces, including the well-known Sobolev class. Our second contribution consists in exploring the theoretical properties of the minimum discrepancy stopping rule applied to the more general family of linear estimators. The main difficulty is that, unlike the spectral filter estimators considered earlier, linear estimators no longer lead to monotonic quantities (the bias and variance terms); this is also the case for well-known algorithms such as stochastic gradient descent. Motivated by further practical applications, we work with the widely used k-NN regression estimator as a reliable first example.
We prove that the aforementioned stopping rule leads to a minimax optimal functional estimator, in particular over the class of Lipschitz functions on a bounded domain. The third contribution consists in illustrating through empirical experiments that, for choosing the tuning parameter of a linear estimator (k-NN regression, Nadaraya-Watson, and variable selection estimators), the MDP-based early stopping rule performs comparably well to other widely used model selection criteria.
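To illustrate the minimum discrepancy principle in its simplest form, the sketch below runs kernel gradient descent on a toy regression problem and stops as soon as the residual norm falls below sqrt(n) times the noise level, which is assumed known here. The Gaussian kernel, its bandwidth, and the step size are illustrative assumptions, and the polynomial residual smoothing studied in the thesis is omitted.

```python
import numpy as np

def mdp_gradient_descent(K, y, sigma, max_iter=10_000):
    """Kernel gradient descent stopped by the minimum discrepancy principle.

    Iterates f <- f + lr * K @ (y - f) and stops the first time the residual
    norm ||y - f|| drops below the noise level sqrt(n) * sigma.
    """
    n = len(y)
    lr = 1.0 / np.linalg.eigvalsh(K).max()          # step size keeping the iteration stable
    f = np.zeros(n)
    threshold = np.sqrt(n) * sigma
    for t in range(max_iter):
        residual = y - f
        if np.linalg.norm(residual) <= threshold:   # discrepancy reached: stop early
            return f, t
        f = f + lr * (K @ residual)
    return f, max_iter

# Toy regression with a Gaussian kernel and a known noise level
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
sigma = 0.3
y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=200)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1) ** 2)
f_hat, stopped_at = mdp_gradient_descent(K, y, sigma)
print("stopped after", stopped_at, "iterations")
```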