Academic literature on the topic 'Borne de généralisation'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Borne de généralisation.'
You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Borne de généralisation"
Grenot, Nicolas. "La généralisation du tiers-payant, une fausse bonne idée !" Revue du Podologue 9, no. 54 (November 2013): 1. http://dx.doi.org/10.1016/j.revpod.2013.10.004.
Lenté, Christophe, and Jean-Louis Bouquard. "Généralisation Max-Plus des bornes de Lageweg, Lenstra et Rinnooy Kan." RAIRO - Operations Research 37, no. 4 (October 2003): 273–89. http://dx.doi.org/10.1051/ro:2004006.
Fabiani, Jean-Louis. "La généralisation dans les sciences historiques." Annales. Histoire, Sciences Sociales 62, no. 1 (February 2007): 9–28. http://dx.doi.org/10.1017/s0395264900020199.
Augustin, Jean-Pierre, and Yaya K. Drabo. "«Au sport, citoyens !»." Politique africaine 33, no. 1 (1989): 59–65. http://dx.doi.org/10.3406/polaf.1989.5248.
Creissels, Denis. "Typologie linguistique et description des langues en danger." Histoire Épistémologie Langage 39, no. 1 (2017): 25–35. http://dx.doi.org/10.3406/hel.2017.3585.
Dufour, Jean-Marie, and Malika Neifar. "Méthodes d’inférence exactes pour un modèle de régression avec erreurs AR(2) gaussiennes." Articles 80, no. 4 (January 26, 2006): 593–618. http://dx.doi.org/10.7202/012129ar.
Archimède, H., D. Bastianelli, M. Boval, G. Tran, and D. Sauvant. "Ressources tropicales : disponibilité et valeur alimentaire." INRAE Productions Animales 24, no. 1 (March 4, 2011): 23–40. http://dx.doi.org/10.20870/productions-animales.2011.24.1.3235.
Calle, Allicia, Florencia Montagnini, and Andrès Felipe Zuluaga. "Perception paysannes de la promotion de systèmes sylvo-pastoraux à Quindio, Colombie." BOIS & FORETS DES TROPIQUES 300, no. 300 (June 1, 2009): 79. http://dx.doi.org/10.19182/bft2009.300.a20417.
Chagny, Odile. "Allemagne : en quête de nouvelles modalités de partage de la valeur ajoutée." Revue de l'OFCE 61, no. 2 (June 1, 1997): 165–200. http://dx.doi.org/10.3917/reof.p1997.61n1.0165.
Ficheux, Guillaume, Jean-Paul Niguet, Thierry Van der Linden, Hélène Bulckaen, Marie-Laure Charkaluk, Pierrette Perimenis, Françoise Roy Saint-Georges, Élodie Hernandez, and Mathieu Lorenzo. "Dans quelle mesure les examens cliniques objectifs structurés (ECOS) sont-ils un outil valide pour l’évaluation des performances cliniques à la fin du second cycle des études médicales ? Analyse d’une expérience lilloise selon le modèle de Kane." Pédagogie Médicale, 2023, 200086. http://dx.doi.org/10.1051/pmed/2023007.
Dissertations / Theses on the topic "Borne de généralisation"
Audibert, Jean-Yves. "Théorie statistique de l'apprentissage : une approche PAC-Bayésienne." Paris 6, 2004. http://www.theses.fr/2004PA066003.
Full textWade, Modou. "Apprentissage profond pour les processus faiblement dépendants." Electronic Thesis or Diss., CY Cergy Paris Université, 2024. http://www.theses.fr/2024CYUN1299.
Full textThis thesis focuses on deep learning for weakly dependent processes. We consider a class of deep neural network estimators with sparsity regularisation and/or penalty regularisation.Chapter1 is a summary of the work. It presents the deep learning framework and reviews the main results obtained in chapters 2, 3, 4, 5 and 6.Chapter 2 considers deep learning for psi-weakly dependent processes. We have established the convergence rate of the empirical risk minimization (ERM) algorithm on the class of deep neural network (DNN) estimators. For these estimators, we have provided a generalization bound and an asymptotic learning rate of order O(n^{-1/alpha}) for all alpha > 2 is obtained. A bound of the excess risk for a large class of target predictors is also established. Chapter 3 presents the sparse-penalized deep neural networks estimator under weak dependence. We consider nonparametric regression and classification problems for weakly dependent processes. We use a method of regularization by penalization. For nonparametric regression and binary classification, we establish an oracle inequality for the excess risk of the sparse-penalized deep neural networks (SPDNN) estimator. We have also provided a convergence rate for these estimators.Chapter 4 focuses on the penalized deep neural networks estimator with a general loss function under weak dependence. We consider the psi-weak dependence structure and, in the specific case where the observations are bounded, we deal with the theta_{infty}-weak dependence. For learning psi and theta_{infty}-weakly dependent processes, we have established an oracle inequality for the excess risks of the sparse-penalized deep neural networks estimator. We have shown that when the target function is sufficiently smooth, the convergence rate of these excess risks is close to O(n^{-1/3}).Chapter 5 presents robust deep learning from weakly dependent data. We assume that the output variable has finite r moments, with r >= 1. 
For learning strong mixing and psi-weakly dependent processes, a non-asymptotic bound for the expected excess risk of the deep neural networks estimator is established. We have shown that when the target function belongs to the class of H"older smooth functions, the convergence rate of the expected excess risk for exponentially strongly mixing data is close to or equal to that obtained with an independent and identically distributed sample. Chapter 6 focuses on deep learning for strongly mixing observation with sparse-penalized regularization and minimax optimality. We have provided an oracle inequality and a bound on the class of H"older smooth functions for the expected excess risk of the deep neural network estimator. We have also considered the problem of nonparametric regression from strongly mixing data with sub-exponential noise. When the target function belongs to the class of H"older composition functions, we have established an upper bound for the oracle inequality of the L_2 error. In the specific case of autoregressive regression with standard Laplace or normal error, we have provided a lower bound for the L_2 error in this classe, which matches up to a logarithmic factor the upper bound; thus the deep neural network estimator achieves optimal convergence rate
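For readers new to the topic of this bibliography, the kind of guarantee these abstracts refer to can be illustrated with the simplest case: a Hoeffding-style generalization bound for a single fixed hypothesis with a [0, 1]-bounded loss. The following sketch is purely illustrative and is not taken from any of the works listed here:

```python
import math

def hoeffding_bound(n, delta):
    """Deviation term of the classical Hoeffding generalization bound:
    with probability >= 1 - delta, true risk <= empirical risk + sqrt(ln(2/delta) / (2n)),
    for a single fixed hypothesis and a loss bounded in [0, 1]."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def empirical_risk(predictions, labels):
    """Average 0-1 loss on the sample."""
    return sum(p != y for p, y in zip(predictions, labels)) / len(labels)

# Toy sample: 2 mistakes out of 8 predictions.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 0, 1, 0, 1, 1, 0]
risk = empirical_risk(preds, labels)            # 0.25
bound = risk + hoeffding_bound(n=8, delta=0.05)
```

With δ = 0.05 the deviation term is about 0.48, so the bound is vacuous at n = 8; it shrinks at rate O(1/√n), which is the baseline that the sharper rates discussed in the theses above improve on under dependence and smoothness assumptions.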
Sérié, Emmanuel. "Théories de jauge en géométrie non commutative et généralisation du modèle de Born-Infeld." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2005. http://tel.archives-ouvertes.fr/tel-00010487.
Full textEl, Hosseiny Hany. "Problème de Fatou ponctuel pour les quotients harmoniques et généralisations." Université Joseph Fourier (Grenoble), 1994. http://www.theses.fr/1994GRE10109.
Full textGlaria, Arnaud. "Généralisation d'une approche de synthèse par voie organométallique, à température ambiante, de nanoparticules monocristallines d'oxydes métalliquess : étude de leurs propriétés optiques ou magnétiques." Toulouse 3, 2007. http://thesesups.ups-tlse.fr/157/.
Full textOur group has recently developed an organometallic synthetic approach for the synthesis of ZnO nanoparticles with controlled size and shape. The aim of this thesis is to generalise this method to other metal oxide nanoparticles with a size smaller than 5 nm. This work deals with the synthesis and the study of metal-oxide nanoparticles exhibiting either luminescent (ZnO) or magnetic properties (Gamma-Fe2O3, Co3O4, CoFe2O4, FeO or CoO). In the first part of this thesis, we show that adding an organolithium precursor during the ZnO nanoparticles synthesis modifies their growth mechanism. Therefore, the size of the particles is directly related to the amount of the organolithium precursor and varies from 2. 5 to 4. 3 nm. In this way, colloidal solutions and nanoparticles in the solid state are obtained which display a luminescence in the visible range from yellow to blue through white. In the second part, we show the generalisation of this approach to magnetic nanoparticles such as Gamma-Fe2O3 and CoFe2O4. We show therefore the variation of the interactions between the particles depending on the experimental conditions. Finally, we show that we can also adapt this approach to the synthesis of unstable phases such as FeO and CoO. The last part of this manuscript deals with the synthesis of metallic particles (Fe, Co, Zn) using an amine-borane complex as reducing agent of our organometallic complexes
Usunier, Nicolas. "Apprentissage de fonctions d'ordonnancement : une étude théorique de la réduction à la classification et deux applications à la recherche d'information." Paris 6, 2006. http://www.theses.fr/2006PA066425.
Full textBellet, Aurélien. "Supervised metric learning with generalization guarantees." Phd thesis, Université Jean Monnet - Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00770627.
Full textPeel, Thomas. "Algorithmes de poursuite stochastiques et inégalités de concentration empiriques pour l'apprentissage statistique." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4769/document.
The first part of this thesis introduces new algorithms for the sparse encoding of signals. Based on Matching Pursuit (MP), they address the following problem: how to reduce the computation time of the selection step of MP. Our answer is to sub-sample the dictionary in rows and columns at each iteration. We show that this theoretically grounded approach performs well empirically. We then propose a block coordinate gradient descent algorithm for feature-selection problems in the multiclass classification setting. Thanks to the use of error-correcting output codes, this task can be cast as a problem of simultaneous sparse encoding of signals. The second part presents new empirical Bernstein inequalities. The first ones concern the theory of U-statistics and are applied to derive generalization bounds for ranking algorithms. These bounds take advantage of a variance estimator, and we propose an efficient algorithm to compute it. We then present an empirical version of the Bernstein-type inequality for martingales by Freedman [1975]. Again, the strength of our result lies in a variance estimator computable from the data. This allows us to propose generalization bounds for online learning algorithms that improve on the state of the art and pave the way for a new family of learning algorithms exploiting this empirical information.
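The dictionary sub-sampling idea summarized in this abstract can be pictured with a toy greedy loop. The sketch below is hypothetical (names such as `sample_frac` are mine, not from the thesis), samples only columns (atoms), and assumes unit-norm atoms:

```python
import random

def matching_pursuit(signal, dictionary, n_iter, sample_frac=1.0, seed=0):
    """Greedy Matching Pursuit over a dictionary given as a list of unit-norm atoms.
    At each step, only a random fraction of the atoms is scanned, mimicking the
    column sub-sampling used to cut the cost of the selection step."""
    rng = random.Random(seed)
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        k = max(1, int(sample_frac * len(dictionary)))
        candidates = rng.sample(range(len(dictionary)), k)
        # Select the sampled atom most correlated with the current residual.
        best = max(candidates,
                   key=lambda j: abs(sum(r * a for r, a in zip(residual, dictionary[j]))))
        atom = dictionary[best]
        corr = sum(r * a for r, a in zip(residual, atom))
        coeffs[best] = coeffs.get(best, 0.0) + corr
        # Subtract the projection onto the chosen atom.
        residual = [r - corr * a for r, a in zip(residual, atom)]
    return coeffs, residual
```

On an orthonormal dictionary (e.g. the standard basis) with `sample_frac=1.0`, the loop recovers the signal exactly; with `sample_frac < 1` each selection step scans fewer atoms, trading a little greediness for speed, which is the trade-off the thesis analyses.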