Selected scientific literature on the topic "Borne de généralisation"
Below is a list of current articles, books, theses, conference proceedings and other scientific sources relevant to the topic "Borne de généralisation".
Journal articles on the topic "Borne de généralisation"
Grenot, Nicolas. "La généralisation du tiers-payant, une fausse bonne idée !" Revue du Podologue 9, no. 54 (November 2013): 1. http://dx.doi.org/10.1016/j.revpod.2013.10.004.
Lenté, Christophe, and Jean-Louis Bouquard. "Généralisation Max-Plus des bornes de Lageweg, Lenstra et Rinnooy Kan". RAIRO - Operations Research 37, no. 4 (October 2003): 273–89. http://dx.doi.org/10.1051/ro:2004006.
Fabiani, Jean-Louis. "La généralisation dans les sciences historiques". Annales. Histoire, Sciences Sociales 62, no. 1 (February 2007): 9–28. http://dx.doi.org/10.1017/s0395264900020199.
Augustin, Jean-Pierre, and Yaya K. Drabo. "«Au sport, citoyens !»". Politique africaine 33, no. 1 (1989): 59–65. http://dx.doi.org/10.3406/polaf.1989.5248.
Creissels, Denis. "Typologie linguistique et description des langues en danger". Histoire Épistémologie Langage 39, no. 1 (2017): 25–35. http://dx.doi.org/10.3406/hel.2017.3585.
Dufour, Jean-Marie, and Malika Neifar. "Méthodes d’inférence exactes pour un modèle de régression avec erreurs AR(2) gaussiennes". Articles 80, no. 4 (26 January 2006): 593–618. http://dx.doi.org/10.7202/012129ar.
ARCHIMÈDE, H., D. BASTIANELLI, M. BOVAL, G. TRAN and D. SAUVANT. "Ressources tropicales : disponibilité et valeur alimentaire". INRAE Productions Animales 24, no. 1 (4 March 2011): 23–40. http://dx.doi.org/10.20870/productions-animales.2011.24.1.3235.
Calle, Allicia, Florencia Montagnini and Andrès Felipe Zuluaga. "Perception paysannes de la promotion de systèmes sylvo-pastoraux à Quindio, Colombie". BOIS & FORETS DES TROPIQUES 300, no. 300 (1 June 2009): 79. http://dx.doi.org/10.19182/bft2009.300.a20417.
Chagny, Odile. "Allemagne : en quête de nouvelles modalités de partage de la valeur ajoutée". Revue de l'OFCE 61, no. 2 (1 June 1997): 165–200. http://dx.doi.org/10.3917/reof.p1997.61n1.0165.
Ficheux, Guillaume, Jean-Paul Niguet, Thierry Van der Linden, Hélène Bulckaen, Marie-Laure Charkaluk, Pierrette Perimenis, Françoise Roy Saint-Georges, Élodie Hernandez and Mathieu Lorenzo. "Dans quelle mesure les examens cliniques objectifs structurés (ECOS) sont-ils un outil valide pour l’évaluation des performances cliniques à la fin du second cycle des études médicales ? Analyse d’une expérience lilloise selon le modèle de Kane". Pédagogie Médicale, 2023, 200086. http://dx.doi.org/10.1051/pmed/2023007.
Theses on the topic "Borne de généralisation"
Audibert, Jean-Yves. "Théorie statistique de l'apprentissage : une approche PAC-Bayésienne". Paris 6, 2004. http://www.theses.fr/2004PA066003.
Testo completoWade, Modou. "Apprentissage profond pour les processus faiblement dépendants". Electronic Thesis or Diss., CY Cergy Paris Université, 2024. http://www.theses.fr/2024CYUN1299.
Testo completoThis thesis focuses on deep learning for weakly dependent processes. We consider a class of deep neural network estimators with sparsity regularisation and/or penalty regularisation.Chapter1 is a summary of the work. It presents the deep learning framework and reviews the main results obtained in chapters 2, 3, 4, 5 and 6.Chapter 2 considers deep learning for psi-weakly dependent processes. We have established the convergence rate of the empirical risk minimization (ERM) algorithm on the class of deep neural network (DNN) estimators. For these estimators, we have provided a generalization bound and an asymptotic learning rate of order O(n^{-1/alpha}) for all alpha > 2 is obtained. A bound of the excess risk for a large class of target predictors is also established. Chapter 3 presents the sparse-penalized deep neural networks estimator under weak dependence. We consider nonparametric regression and classification problems for weakly dependent processes. We use a method of regularization by penalization. For nonparametric regression and binary classification, we establish an oracle inequality for the excess risk of the sparse-penalized deep neural networks (SPDNN) estimator. We have also provided a convergence rate for these estimators.Chapter 4 focuses on the penalized deep neural networks estimator with a general loss function under weak dependence. We consider the psi-weak dependence structure and, in the specific case where the observations are bounded, we deal with the theta_{infty}-weak dependence. For learning psi and theta_{infty}-weakly dependent processes, we have established an oracle inequality for the excess risks of the sparse-penalized deep neural networks estimator. We have shown that when the target function is sufficiently smooth, the convergence rate of these excess risks is close to O(n^{-1/3}).Chapter 5 presents robust deep learning from weakly dependent data. We assume that the output variable has finite r moments, with r >= 1. 
For learning strong mixing and psi-weakly dependent processes, a non-asymptotic bound for the expected excess risk of the deep neural networks estimator is established. We have shown that when the target function belongs to the class of H"older smooth functions, the convergence rate of the expected excess risk for exponentially strongly mixing data is close to or equal to that obtained with an independent and identically distributed sample. Chapter 6 focuses on deep learning for strongly mixing observation with sparse-penalized regularization and minimax optimality. We have provided an oracle inequality and a bound on the class of H"older smooth functions for the expected excess risk of the deep neural network estimator. We have also considered the problem of nonparametric regression from strongly mixing data with sub-exponential noise. When the target function belongs to the class of H"older composition functions, we have established an upper bound for the oracle inequality of the L_2 error. In the specific case of autoregressive regression with standard Laplace or normal error, we have provided a lower bound for the L_2 error in this classe, which matches up to a logarithmic factor the upper bound; thus the deep neural network estimator achieves optimal convergence rate
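To make the kind of statement summarized in this abstract concrete, an excess-risk bound of the type described in chapter 2 can be written schematically as follows. This is only an illustrative template, not the thesis's exact theorem: the constant and the precise dependence on the class of estimators are omitted.

```latex
% Schematic excess-risk bound for an ERM estimator \hat{f}_n over a DNN class \mathcal{F}.
% The rate n^{-1/\alpha}, \alpha > 2, is the one quoted in the abstract;
% all constants are placeholders.
\mathbb{E}\bigl[ R(\hat{f}_n) \bigr] \;-\; \inf_{f \in \mathcal{F}} R(f)
  \;=\; \mathcal{O}\!\bigl( n^{-1/\alpha} \bigr),
  \qquad \text{for all } \alpha > 2.
```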
Sérié, Emmanuel. "Théories de jauge en géométrie non commutative et généralisation du modèle de Born-Infeld". PhD thesis, Université Pierre et Marie Curie - Paris VI, 2005. http://tel.archives-ouvertes.fr/tel-00010487.
Testo completoEl, Hosseiny Hany. "Problème de Fatou ponctuel pour les quotients harmoniques et généralisations". Université Joseph Fourier (Grenoble), 1994. http://www.theses.fr/1994GRE10109.
Testo completoGlaria, Arnaud. "Généralisation d'une approche de synthèse par voie organométallique, à température ambiante, de nanoparticules monocristallines d'oxydes métalliquess : étude de leurs propriétés optiques ou magnétiques". Toulouse 3, 2007. http://thesesups.ups-tlse.fr/157/.
Testo completoOur group has recently developed an organometallic synthetic approach for the synthesis of ZnO nanoparticles with controlled size and shape. The aim of this thesis is to generalise this method to other metal oxide nanoparticles with a size smaller than 5 nm. This work deals with the synthesis and the study of metal-oxide nanoparticles exhibiting either luminescent (ZnO) or magnetic properties (Gamma-Fe2O3, Co3O4, CoFe2O4, FeO or CoO). In the first part of this thesis, we show that adding an organolithium precursor during the ZnO nanoparticles synthesis modifies their growth mechanism. Therefore, the size of the particles is directly related to the amount of the organolithium precursor and varies from 2. 5 to 4. 3 nm. In this way, colloidal solutions and nanoparticles in the solid state are obtained which display a luminescence in the visible range from yellow to blue through white. In the second part, we show the generalisation of this approach to magnetic nanoparticles such as Gamma-Fe2O3 and CoFe2O4. We show therefore the variation of the interactions between the particles depending on the experimental conditions. Finally, we show that we can also adapt this approach to the synthesis of unstable phases such as FeO and CoO. The last part of this manuscript deals with the synthesis of metallic particles (Fe, Co, Zn) using an amine-borane complex as reducing agent of our organometallic complexes
Usunier, Nicolas. "Apprentissage de fonctions d'ordonnancement : une étude théorique de la réduction à la classification et deux applications à la recherche d'information". Paris 6, 2006. http://www.theses.fr/2006PA066425.
Bellet, Aurélien. "Supervised metric learning with generalization guarantees". PhD thesis, Université Jean Monnet - Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00770627.
Testo completoPeel, Thomas. "Algorithmes de poursuite stochastiques et inégalités de concentration empiriques pour l'apprentissage statistique". Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4769/document.
Testo completoThe first part of this thesis introduces new algorithms for the sparse encoding of signals. Based on Matching Pursuit (MP) they focus on the following problem : how to reduce the computation time of the selection step of MP. As an answer, we sub-sample the dictionary in line and column at each iteration. We show that this theoretically grounded approach has good empirical performances. We then propose a bloc coordinate gradient descent algorithm for feature selection problems in the multiclass classification setting. Thanks to the use of error-correcting output codes, this task can be seen as a simultaneous sparse encoding of signals problem. The second part exposes new empirical Bernstein inequalities. Firstly, they concern the theory of the U-Statistics and are applied in order to design generalization bounds for ranking algorithms. These bounds take advantage of a variance estimator and we propose an efficient algorithm to compute it. Then, we present an empirical version of the Bernstein type inequality for martingales by Freedman [1975]. Again, the strength of our result lies in the variance estimator computable from the data. This allows us to propose generalization bounds for online learning algorithms which improve the state of the art and pave the way to a new family of learning algorithms taking advantage of this empirical information