Selected scientific literature on the topic "Optimisation et apprentissage distribués"
Cite a source in APA, MLA, Chicago, Harvard, and many other styles
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Optimisation et apprentissage distribués".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read its abstract (summary) online, if one is available in the metadata.
Journal articles on the topic "Optimisation et apprentissage distribués"
Tendon, Julie. "L’APPORT DES NEUROSCIENCES POUR FAVORISER LES APPRENTISSAGES CHEZ LES 15-20 ANS PRÉSENTANT DES DIFFICULTÉS D’APPRENTISSAGE". Cortica 2, no. 1 (20 March 2023): 51–78. http://dx.doi.org/10.26034/cortica.2023.3798.
Marin, Didier, Lionel Rigoux, and Olivier Sigaud. "Apprentissage et optimisation de politiques pour un bras articulé actionné par des muscles". Revue d'intelligence artificielle 27, no. 2 (30 April 2013): 195–215. http://dx.doi.org/10.3166/ria.27.195-215.
ALI-BENCHERIF, Mohammed Zakaria. "L’enseignement des langues en Algérie à l’épreuve du plurilinguisme : Quelles stratégies adopter ?" Revue plurilingue : Études des Langues, Littératures et Cultures 3, no. 1 (15 November 2019): 31–42. http://dx.doi.org/10.46325/ellic.v3i1.40.
FEYEN, LUC, MILAN KALAS, and JASPER A. VRUGT. "Semi-distributed parameter optimization and uncertainty assessment for large-scale streamflow simulation using global optimization / Optimisation de paramètres semi-distribués et évaluation de l'incertitude pour la simulation de débits à grande échelle par l'utilisation d'une optimisation globale". Hydrological Sciences Journal 53, no. 2 (April 2008): 293–308. http://dx.doi.org/10.1623/hysj.53.2.293.
Lacroix, Paul-Maxime, Paul Commeil, Dominique Chauveaux, and Thierry Fabre. "Apprentissage et optimisation des nœuds arthroscopiques : évaluation chez l’interne en formation par simulation procédurale". Revue de Chirurgie Orthopédique et Traumatologique, June 2021. http://dx.doi.org/10.1016/j.rcot.2021.04.020.
Theses on the topic "Optimisation et apprentissage distribués"
Martinez, Medina Lourdes. "Optimisation des requêtes distribuées par apprentissage". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM015.
Distributed data systems are becoming increasingly complex. They interconnect devices (e.g. smartphones, tablets) that are heterogeneous, autonomous, either static or mobile, and physically constrained. Such devices run applications (e.g. virtual games, social networks) in which users interact online, producing and consuming data on demand or continuously. The characteristics of these systems add new dimensions to the query optimization problem, such as multiple optimization criteria, scarce information on data, and the lack of a global system view, among others. Traditional query optimization techniques target semi-autonomous (or non-autonomous) systems: they rely on information about data and make strong assumptions about system behavior. Moreover, most of these techniques optimize execution time only. The difficulty of evaluating queries efficiently in today's applications motivates this work to revisit traditional query optimization techniques. This thesis faces these challenges by adapting the Case-Based Reasoning (CBR) paradigm to query processing, providing a way to optimize queries when there is no prior knowledge of the data. It focuses on optimizing queries using cases generated from the evaluation of similar past queries. A query case comprises: (i) the query, (ii) the query plan, and (iii) the measures (computational resources consumed) of the query plan. The thesis also addresses how the CBR process interacts with query plan generation. This process uses classical heuristics and makes decisions randomly (e.g. for join ordering, algorithm selection, or routing protocols when no statistics are available). It also (re)uses cases (existing query plans) for similar query parts, improving query optimization and therefore evaluation efficiency. The propositions of this thesis have been validated within the CoBRa optimizer developed in the context of the UBIQUEST project.
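As a rough illustration of the case structure this abstract describes (the class fields follow the abstract's (query, plan, measures) triple, but the names, the Jaccard similarity, and the toy data are invented for the example, not taken from the thesis), retrieval of a similar past case might look like:

```python
from dataclasses import dataclass

@dataclass
class QueryCase:
    """A case as in the abstract: the query, its plan, and the measured cost."""
    query: frozenset   # components of the query (relations, predicates, ...)
    plan: str          # the plan that was executed for it
    cost: float        # computational resources consumed by that plan

def jaccard(a: frozenset, b: frozenset) -> float:
    """Similarity of two queries via overlap of their components."""
    return len(a & b) / len(a | b) if a | b else 1.0

def best_case(base, query):
    """Retrieve the most similar past case; its plan can seed the new plan."""
    return max(base, key=lambda c: jaccard(c.query, query), default=None)

base = [
    QueryCase(frozenset({"R", "S", "join(R,S)"}), "hash_join(R,S)", 12.5),
    QueryCase(frozenset({"S", "T", "join(S,T)"}), "nl_join(S,T)", 40.0),
]
hit = best_case(base, frozenset({"R", "S", "T", "join(R,S)"}))
print(hit.plan)  # → hash_join(R,S)
```

A full CBR loop would then adapt the retrieved plan to the new query and store the result, with its measured cost, as a new case.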
Jankee, Christopher. "Optimisation par métaheuristique adaptative distribuée en environnement de calcul parallèle". Thesis, Littoral, 2018. http://www.theses.fr/2018DUNK0480/document.
To solve black-box discrete optimization problems, many stochastic algorithms such as evolutionary algorithms and metaheuristics exist, and each proves particularly effective depending on the problem to be solved. Choosing the most relevant algorithm according to the observed properties of the problem is itself a difficult problem. In the novel setting of parallel and distributed computing environments, we propose and analyze different adaptive strategies for selecting an optimization algorithm. These selection strategies are based on automatic reinforcement learning methods from the field of artificial intelligence, and on information sharing between computing nodes. We compare and analyze the selection strategies in different situations. Two types of synchronous distributed computing environment are discussed: the island model and the master-slave model. At each iteration, synchronously across the set of nodes, the adaptive selection strategy chooses an algorithm according to the state of the search. In the first part, two problems, OneMax and NK, one unimodal and the other multimodal, are used as benchmarks for this work. Then, to better understand and improve the design of adaptive selection strategies, we propose a model of the optimization problem and its local search operator. In this model, an important characteristic is the average gain of an operator as a function of the fitness of the candidate solution. The model is used in the synchronous framework of the master-slave model. A selection strategy breaks down into three main components: the aggregation of the exchanged rewards, the learning scheme, and the distribution of the algorithms over the computing nodes. In the final part, we study three scenarios and give keys to understanding when adaptive selection strategies are preferable to naïve ones. In the framework of the master-slave model, we study the different ways of aggregating the rewards on the master node, the distribution of the optimization algorithms over the computing nodes, and the communication time. This thesis ends with perspectives in the field of distributed adaptive stochastic optimization.
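The core idea of adaptive selection in this abstract, choosing among several operators by reinforcement from observed fitness gains, can be sketched with a minimal single-node epsilon-greedy bandit on OneMax (a toy illustration under assumed parameters: the two mutation operators, the 10% exploration rate, and the reward definition are all invented for the example, not taken from the thesis):

```python
import random

random.seed(0)

def onemax(bits):
    """Fitness: number of ones in the bit string."""
    return sum(bits)

def flip_one(bits):
    """Mutation operator 1: flip a single random bit."""
    b = bits[:]
    b[random.randrange(len(b))] ^= 1
    return b

def flip_three(bits):
    """Mutation operator 2: flip three distinct random bits."""
    b = bits[:]
    for i in random.sample(range(len(b)), 3):
        b[i] ^= 1
    return b

ops = [flip_one, flip_three]
value = [0.0, 0.0]   # running average reward (fitness gain) per operator
count = [0, 0]
x = [0] * 50

for _ in range(2000):
    # epsilon-greedy selection: explore 10% of the time, else exploit the
    # operator with the best average reward so far
    if random.random() < 0.1:
        k = random.randrange(len(ops))
    else:
        k = max(range(len(ops)), key=lambda j: value[j])
    y = ops[k](x)
    reward = max(0, onemax(y) - onemax(x))   # positive fitness gain as reward
    count[k] += 1
    value[k] += (reward - value[k]) / count[k]
    if onemax(y) >= onemax(x):               # keep non-worsening moves
        x = y

print(onemax(x))
```

In the thesis's master-slave setting, the rewards would instead be gathered from the slave nodes and aggregated on the master before the selection step; this sketch keeps only the learning scheme.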
Mhanna, Elissa. "Beyond gradients : zero-order approaches to optimization and learning in multi-agent environments". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG123.
The rise of connected devices and the data they produce has driven the development of large-scale applications. These devices form distributed networks with decentralized data processing. As the number of devices grows, challenges like communication overhead and computational costs increase, requiring optimization methods that work under strict resource constraints, especially where derivatives are unavailable or costly. This thesis focuses on zero-order (ZO) optimization methods, which are ideal for scenarios where explicit function derivatives are inaccessible. ZO methods estimate gradients from function evaluations alone, making them highly suitable for distributed and federated learning environments where devices collaborate to solve global optimization tasks with limited information and noisy data. In the first chapter, we address distributed ZO optimization of strongly convex functions across multiple agents in a network. We propose a distributed zero-order projected gradient descent algorithm that uses one-point gradient estimates: the function is queried only once per stochastic realization, and the gradient is estimated from noisy function evaluations. The chapter establishes the almost sure convergence of the algorithm and derives theoretical upper bounds on the convergence rate. With constant step sizes, the algorithm achieves a linear convergence rate; this is the first time this rate has been established for one-point (and even two-point) gradient estimates. We also analyze the effect of diminishing step sizes, establishing a convergence rate that matches the lower bounds of centralized ZO methods. The second chapter addresses the challenges of federated learning (FL), which is often hindered by the communication bottleneck: the high cost of transmitting large amounts of data over limited-bandwidth networks.
To address this, we propose a novel zero-order federated learning (ZOFL) algorithm that reduces communication overhead using one-point gradient estimates. Devices transmit scalar values instead of large gradient vectors, reducing the amount of data sent over the network. Moreover, the algorithm incorporates wireless communication disturbances directly into the optimization process, eliminating the need for explicit knowledge of the channel state. This approach is the first to integrate wireless channel properties into a learning algorithm, making it resilient to real-world communication issues. We prove the almost sure convergence of this method in nonconvex optimization settings, establish its convergence rate, and validate its effectiveness through experiments. In the final chapter, we extend the ZOFL algorithm to two-point gradient estimates. Unlike one-point estimates, which rely on a single function evaluation, two-point estimates query the function twice, providing a more accurate gradient approximation and enhancing the convergence rate. This method retains the communication efficiency of one-point estimates (only scalar values are transmitted) and relaxes the assumption that the objective function must be bounded. The chapter demonstrates that the proposed two-point ZO method achieves linear convergence rates for strongly convex and smooth objective functions. For nonconvex problems, the method shows improved convergence speed; in particular, when the objective function is smooth and K-gradient-dominated, a linear rate is also achieved. We also analyze the impact of constant versus diminishing step sizes and provide numerical results showing the method's communication efficiency compared to other federated learning techniques.
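The one-point and two-point estimators this abstract contrasts admit a compact sketch. The one-point estimator queries the function once per iteration, g = (d/δ) f(x + δu) u; the two-point estimator uses the symmetric difference (d/(2δ)) (f(x + δu) − f(x − δu)) u. Below is a minimal illustration on a toy quadratic objective (the objective, step size, and smoothing radius δ are invented for the example; this is not code from the thesis, and there is no projection step, network, or wireless channel model here):

```python
import math
import random

random.seed(1)

def sphere(x):
    """Toy smooth, strongly convex objective (not from the thesis)."""
    return sum(v * v for v in x)

def rand_unit(d):
    """Uniform random direction on the unit sphere in R^d."""
    u = [random.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(v * v for v in u))
    return [v / n for v in u]

def zo_grad_one_point(f, x, delta):
    """One-point estimate: a single (possibly noisy) query per iteration."""
    d, u = len(x), rand_unit(len(x))
    fx = f([xi + delta * ui for xi, ui in zip(x, u)])
    return [(d / delta) * fx * ui for ui in u]

def zo_grad_two_point(f, x, delta):
    """Two-point estimate: symmetric difference, lower variance."""
    d, u = len(x), rand_unit(len(x))
    fp = f([xi + delta * ui for xi, ui in zip(x, u)])
    fm = f([xi - delta * ui for xi, ui in zip(x, u)])
    return [(d / (2 * delta)) * (fp - fm) * ui for ui in u]

# Plain ZO gradient descent with the two-point estimator on the toy objective.
x = [1.0, -2.0, 0.5]
for _ in range(3000):
    g = zo_grad_two_point(sphere, x, delta=1e-3)
    x = [xi - 0.01 * gi for xi, gi in zip(x, g)]
print(sphere(x) < 1e-6)  # the iterate converges to the minimizer at the origin
```

In the federated setting the abstract describes, each device would transmit only the scalar function evaluations (not the d-dimensional gradient vector), which is what makes these estimators communication-efficient.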
Vicard, Annie. "Formalisation et optimisation des systèmes informatiques distribués temps réel embarqués". Paris 13, 1999. http://www.theses.fr/1999PA132032.
Mériaux, François. "Théorie des jeux et apprentissage pour les réseaux sans fil distribués". PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00952069.
Zerrik, El Hassan. "Contrôlabilité et observabilité régionales d'une classe de systèmes distribués". Perpignan, 1994. http://www.theses.fr/1994PERP0176.
Van Grieken, Milagros. "Optimisation pour l'apprentissage et apprentissage pour l'optimisation". PhD thesis, Université Paul Sabatier - Toulouse III, 2004. http://tel.archives-ouvertes.fr/tel-00010106.
Berny, Arnaud. "Apprentissage et optimisation statistiques. Application à la radiotéléphonie mobile". Nantes, 2000. http://www.theses.fr/2000NANT2081.
Le Lann, Marie-Véronique. "Commande prédictive et commande par apprentissage : étude d'une unité pilote d'extraction, optimisation par apprentissage". Toulouse, INPT, 1988. http://www.theses.fr/1988INPT023G.
Le Lann, Marie-Véronique. "Commande prédictive et commande par apprentissage : étude d'une unité pilote d'extraction, optimisation par apprentissage". Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb37615168p.
Books on the topic "Optimisation et apprentissage distribués"
Chelouah and Siarry. Optimisation et Apprentissage. ISTE Editions Ltd., 2023.
Miclet, Laurent, Yves Kodratoff, Antoine Cornuéjols, and Tom Mitchell. Apprentissage artificiel : Concepts et algorithmes. Eyrolles, 2002.
Book chapters on the topic "Optimisation et apprentissage distribués"
DE’ FAVERI TRON, Alvise. "La détection d’intrusion au moyen des réseaux de neurones : un tutoriel". In Optimisation et apprentissage, 211–47. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9071.ch8.
YAHYAOUI, Khadidja. "Approche hybride pour la navigation autonome des robots mobiles". In Optimisation et apprentissage, 173–209. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9071.ch7.
SASSI, Mohamed. "Résolution de problèmes de sélection de caractéristiques à l’aide de métaheuristiques". In Optimisation et apprentissage, 59–92. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9071.ch3.
DRIF, Ahlem, Saad Eddine SELMANI, and Hocine CHERIFI. "Réseau interactif et apprentissage automatique pour les recommandations". In Optimisation et apprentissage, 123–51. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9071.ch5.
BELKHARROUBI, Lakhdar, and Khadidja YAHYAOUI. "La résolution du problème d’équilibrage d’une chaîne de montage à modèles mixtes". In Optimisation et apprentissage, 93–119. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9071.ch4.
SBAI, Ines, and Saoussen KRICHEN. "Tournées de véhicules avec contraintes de chargement : des méthodes de résolution". In Optimisation et apprentissage, 7–27. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9071.ch1.
FLEURY SOARES, Gustavo, and Induraj PUDHUPATTU RAMAMURTHY. "Comparaison de modèles d’apprentissage automatique et d’apprentissage profond". In Optimisation et apprentissage, 153–71. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9071.ch6.
MOKNI, Marwa, and Sonia YASSA. "Ordonnancement du flux de travail IoT basé sur la qualité de service". In Optimisation et apprentissage, 29–57. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9071.ch2.