Selected scientific literature on the topic "Borne de généralisation"
Consult the list of current articles, books, theses, conference proceedings, and other relevant scholarly sources on the topic "Borne de généralisation" (generalization bound).
Journal articles on the topic "Borne de généralisation"
Grenot, Nicolas. "La généralisation du tiers-payant, une fausse bonne idée !" Revue du Podologue 9, no. 54 (November 2013): 1. http://dx.doi.org/10.1016/j.revpod.2013.10.004.

Lenté, Christophe, and Jean-Louis Bouquard. "Généralisation Max-Plus des bornes de Lageweg, Lenstra et Rinnooy Kan". RAIRO - Operations Research 37, no. 4 (October 2003): 273–89. http://dx.doi.org/10.1051/ro:2004006.

Fabiani, Jean-Louis. "La généralisation dans les sciences historiques". Annales. Histoire, Sciences Sociales 62, no. 1 (February 2007): 9–28. http://dx.doi.org/10.1017/s0395264900020199.

Augustin, Jean-Pierre, and Yaya K. Drabo. "«Au sport, citoyens !»". Politique africaine 33, no. 1 (1989): 59–65. http://dx.doi.org/10.3406/polaf.1989.5248.

Creissels, Denis. "Typologie linguistique et description des langues en danger". Histoire Épistémologie Langage 39, no. 1 (2017): 25–35. http://dx.doi.org/10.3406/hel.2017.3585.

Dufour, Jean-Marie, and Malika Neifar. "Méthodes d’inférence exactes pour un modèle de régression avec erreurs AR(2) gaussiennes". Articles 80, no. 4 (January 26, 2006): 593–618. http://dx.doi.org/10.7202/012129ar.

Archimède, H., D. Bastianelli, M. Boval, G. Tran, and D. Sauvant. "Ressources tropicales : disponibilité et valeur alimentaire". INRAE Productions Animales 24, no. 1 (March 4, 2011): 23–40. http://dx.doi.org/10.20870/productions-animales.2011.24.1.3235.

Calle, Allicia, Florencia Montagnini, and Andrès Felipe Zuluaga. "Perception paysannes de la promotion de systèmes sylvo-pastoraux à Quindio, Colombie". Bois & Forêts des Tropiques 300, no. 300 (June 1, 2009): 79. http://dx.doi.org/10.19182/bft2009.300.a20417.

Chagny, Odile. "Allemagne : en quête de nouvelles modalités de partage de la valeur ajoutée". Revue de l'OFCE 61, no. 2 (June 1, 1997): 165–200. http://dx.doi.org/10.3917/reof.p1997.61n1.0165.

Ficheux, Guillaume, Jean-Paul Niguet, Thierry Van der Linden, Hélène Bulckaen, Marie-Laure Charkaluk, Pierrette Perimenis, Françoise Roy Saint-Georges, Élodie Hernandez, and Mathieu Lorenzo. "Dans quelle mesure les examens cliniques objectifs structurés (ECOS) sont-ils un outil valide pour l’évaluation des performances cliniques à la fin du second cycle des études médicales ? Analyse d’une expérience lilloise selon le modèle de Kane". Pédagogie Médicale, 2023, 200086. http://dx.doi.org/10.1051/pmed/2023007.
Theses / dissertations on the topic "Borne de généralisation"
Audibert, Jean-Yves. "Théorie statistique de l'apprentissage : une approche PAC-Bayésienne". Paris 6, 2004. http://www.theses.fr/2004PA066003.
Texto completo da fonteWade, Modou. "Apprentissage profond pour les processus faiblement dépendants". Electronic Thesis or Diss., CY Cergy Paris Université, 2024. http://www.theses.fr/2024CYUN1299.
This thesis focuses on deep learning for weakly dependent processes. We consider a class of deep neural network estimators with sparsity regularisation and/or penalty regularisation.

Chapter 1 summarises the work: it presents the deep learning framework and reviews the main results of Chapters 2 to 6. Chapter 2 considers deep learning for psi-weakly dependent processes. We establish the convergence rate of the empirical risk minimization (ERM) algorithm over the class of deep neural network (DNN) estimators, provide a generalization bound for these estimators, and obtain an asymptotic learning rate of order O(n^{-1/alpha}) for all alpha > 2. A bound on the excess risk for a large class of target predictors is also established. Chapter 3 presents the sparse-penalized deep neural network estimator under weak dependence. We consider nonparametric regression and classification problems for weakly dependent processes and use regularization by penalization. For nonparametric regression and binary classification, we establish an oracle inequality for the excess risk of the sparse-penalized deep neural network (SPDNN) estimator, together with a convergence rate. Chapter 4 focuses on the penalized deep neural network estimator with a general loss function under weak dependence. We consider the psi-weak dependence structure and, in the specific case of bounded observations, the theta_{infty}-weak dependence. For learning psi- and theta_{infty}-weakly dependent processes, we establish an oracle inequality for the excess risk of the sparse-penalized deep neural network estimator and show that, when the target function is sufficiently smooth, the convergence rate of the excess risk is close to O(n^{-1/3}).

Chapter 5 presents robust deep learning from weakly dependent data. We assume that the output variable has finite r-th moments, with r >= 1. For learning strong-mixing and psi-weakly dependent processes, a non-asymptotic bound on the expected excess risk of the deep neural network estimator is established. We show that, when the target function belongs to the class of Hölder-smooth functions, the convergence rate of the expected excess risk for exponentially strongly mixing data is close or equal to the rate obtained with an independent and identically distributed sample. Chapter 6 focuses on deep learning for strongly mixing observations with sparse-penalized regularization and minimax optimality. We provide an oracle inequality and a bound over the class of Hölder-smooth functions for the expected excess risk of the deep neural network estimator. We also consider nonparametric regression from strongly mixing data with sub-exponential noise. When the target function belongs to the class of Hölder composition functions, we establish an upper bound in the oracle inequality for the L_2 error. In the specific case of autoregressive regression with standard Laplace or normal errors, we provide a lower bound on the L_2 error in this class that matches the upper bound up to a logarithmic factor; the deep neural network estimator thus achieves the optimal convergence rate.
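For orientation, the generalization and excess-risk bounds summarised in this abstract instantiate the classical high-probability template, shown here in its i.i.d. form; the constant C and the capacity measure comp(F) are generic placeholders, not the thesis's exact quantities, and the thesis establishes analogues of this template under weak dependence:

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% empirical risk minimization \hat{f}_n over a class \mathcal{F} satisfies
R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f)
  \;\le\; 2\,\sup_{f \in \mathcal{F}} \bigl|\, R(f) - \hat{R}_n(f) \,\bigr|
  \;\le\; C\,\sqrt{\frac{\mathrm{comp}(\mathcal{F}) + \log(1/\delta)}{n}}
```

Here R is the population risk and \hat{R}_n the empirical risk; under the dependence structures considered above, the n^{-1/2} rate degrades, e.g. to the O(n^{-1/alpha}) learning rate of Chapter 2.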
Sérié, Emmanuel. "Théories de jauge en géométrie non commutative et généralisation du modèle de Born-Infeld". PhD thesis, Université Pierre et Marie Curie - Paris VI, 2005. http://tel.archives-ouvertes.fr/tel-00010487.
El Hosseiny, Hany. "Problème de Fatou ponctuel pour les quotients harmoniques et généralisations". Université Joseph Fourier (Grenoble), 1994. http://www.theses.fr/1994GRE10109.
Glaria, Arnaud. "Généralisation d'une approche de synthèse par voie organométallique, à température ambiante, de nanoparticules monocristallines d'oxydes métalliques : étude de leurs propriétés optiques ou magnétiques". Toulouse 3, 2007. http://thesesups.ups-tlse.fr/157/.
Our group has recently developed an organometallic approach for the synthesis of ZnO nanoparticles with controlled size and shape. The aim of this thesis is to generalise this method to other metal oxide nanoparticles with a size smaller than 5 nm. This work deals with the synthesis and study of metal oxide nanoparticles exhibiting either luminescent (ZnO) or magnetic properties (gamma-Fe2O3, Co3O4, CoFe2O4, FeO or CoO). In the first part of this thesis, we show that adding an organolithium precursor during the ZnO nanoparticle synthesis modifies the growth mechanism: the size of the particles is directly related to the amount of the organolithium precursor and varies from 2.5 to 4.3 nm. In this way, colloidal solutions and nanoparticles in the solid state are obtained that display luminescence in the visible range from yellow to blue through white. In the second part, we generalise this approach to magnetic nanoparticles such as gamma-Fe2O3 and CoFe2O4, and show how the interactions between the particles vary with the experimental conditions. We also show that the approach can be adapted to the synthesis of unstable phases such as FeO and CoO. The last part of this manuscript deals with the synthesis of metallic particles (Fe, Co, Zn) using an amine-borane complex as the reducing agent of our organometallic complexes.
Usunier, Nicolas. "Apprentissage de fonctions d'ordonnancement : une étude théorique de la réduction à la classification et deux applications à la recherche d'information". Paris 6, 2006. http://www.theses.fr/2006PA066425.
Bellet, Aurélien. "Supervised metric learning with generalization guarantees". PhD thesis, Université Jean Monnet - Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00770627.
Peel, Thomas. "Algorithmes de poursuite stochastiques et inégalités de concentration empiriques pour l'apprentissage statistique". Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4769/document.
The first part of this thesis introduces new algorithms for the sparse encoding of signals. Based on Matching Pursuit (MP), they address the following problem: how to reduce the computation time of the selection step of MP. As an answer, we sub-sample the rows and columns of the dictionary at each iteration. We show that this theoretically grounded approach performs well empirically. We then propose a block coordinate gradient descent algorithm for feature selection in the multiclass classification setting. Thanks to the use of error-correcting output codes, this task can be cast as a simultaneous sparse signal encoding problem. The second part presents new empirical Bernstein inequalities. The first ones concern the theory of U-statistics and are applied to derive generalization bounds for ranking algorithms. These bounds exploit a variance estimator, and we propose an efficient algorithm to compute it. We then present an empirical version of the Bernstein-type inequality for martingales of Freedman [1975]. Again, the strength of our result lies in a variance estimator computable from the data. This allows us to derive generalization bounds for online learning algorithms that improve on the state of the art and pave the way for a new family of learning algorithms exploiting this empirical information.
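The dictionary sub-sampling idea described in the first part can be illustrated as follows. This is a minimal sketch, assuming unit-norm dictionary atoms; the function name and parameters (`atom_frac`, `n_iter`) are hypothetical, and it is not the exact algorithm of the thesis:

```python
import numpy as np

def subsampled_matching_pursuit(signal, dictionary, n_iter=10, atom_frac=0.5, seed=0):
    """Greedy sparse coding in the spirit of Matching Pursuit, scoring only a
    random subset of dictionary atoms at each iteration to cut the cost of
    the selection step. Assumes the columns of `dictionary` have unit norm."""
    rng = np.random.default_rng(seed)
    n_atoms = dictionary.shape[1]
    coeffs = np.zeros(n_atoms)
    residual = np.asarray(signal, dtype=float).copy()
    k = max(1, int(atom_frac * n_atoms))
    for _ in range(n_iter):
        # Selection step on a random sub-dictionary instead of the full one.
        candidates = rng.choice(n_atoms, size=k, replace=False)
        scores = dictionary[:, candidates].T @ residual
        best = candidates[int(np.argmax(np.abs(scores)))]
        # Projection coefficient on the chosen (unit-norm) atom.
        step = float(dictionary[:, best] @ residual)
        coeffs[best] += step
        residual = residual - step * dictionary[:, best]
    return coeffs, residual
```

With `atom_frac=1.0` this reduces to plain Matching Pursuit; smaller fractions trade selection quality for speed, which is the trade-off the thesis analyses.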