Contents
Selection of scholarly literature on the topic "Optimisation convexe en ligne"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Optimisation convexe en ligne".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.
Journal articles on the topic "Optimisation convexe en ligne"
Hilout, Saïd. "Stabilité en optimisation convexe non différentiable". Comptes Rendus de l'Académie des Sciences - Series I - Mathematics 329, no. 11 (December 1999): 1027–32. http://dx.doi.org/10.1016/s0764-4442(00)88631-0.
Kouada, I. "Sur la dualité en optimisation vectorielle convexe". RAIRO - Operations Research 28, no. 3 (1994): 255–81. http://dx.doi.org/10.1051/ro/1994280302551.
Boyer, R. "Algorithmes de type F.A.C. en optimisation convexe". ESAIM: Mathematical Modelling and Numerical Analysis 28, no. 1 (1994): 95–119. http://dx.doi.org/10.1051/m2an/1994280100951.
Rodriguez, Pedro, and Didier Dumur. "Robustification d'une commande GPC par optimisation convexe du paramètre de Youla". Journal Européen des Systèmes Automatisés 37, no. 1 (January 30, 2003): 109–34. http://dx.doi.org/10.3166/jesa.37.109-134.
Abbas-Turki, Mohamed, Gilles Duc, and Benoît Clément. "Retouche de correcteurs par optimisation convexe. Application au pilotage d'un lanceur spatial". Journal Européen des Systèmes Automatisés 40, no. 9-10 (December 30, 2006): 997–1017. http://dx.doi.org/10.3166/jesa.40.997-1017.
Belkeziz, K., and A. Metrane. "Optimisation d'une fonction linéaire sur l'ensemble des solutions efficaces d'un problème multicritère quadratique convexe". Annales mathématiques Blaise Pascal 11, no. 1 (2004): 19–33. http://dx.doi.org/10.5802/ambp.182.
Kouada, A. Issoufou. "Sur la propriété de domination et l'existence de points Pareto-efficaces en optimisation vectorielle convexe". RAIRO - Operations Research 28, no. 1 (1994): 77–84. http://dx.doi.org/10.1051/ro/1994280100771.
Barbet, C., H. Longuet, P. Gatault, N. Rabot, and J. M. Halimi. "Ligne directe ville–hôpital en néphrologie : optimisation du parcours de soins". Néphrologie & Thérapeutique 12, no. 5 (September 2016): 402–3. http://dx.doi.org/10.1016/j.nephro.2016.07.120.
Clément, Benoît. "Analyse par intervalles et optimisation convexe pour résoudre un problème général de faisabilité d'une contrainte robuste". Journal Européen des Systèmes Automatisés 46, no. 4-5 (July 30, 2012): 381–95. http://dx.doi.org/10.3166/jesa.46.381-395.
F. Aziz, Rahma, and Maha S. Younis. "A New Hybrid Conjugate Gradient Method with Global Convergence Properties". Wasit Journal for Pure Sciences 3, no. 3 (September 30, 2024): 58–68. http://dx.doi.org/10.31185/wjps.453.
Dissertations on the topic "Optimisation convexe en ligne"
Fernandez, Camila. "Contributions and applications to survival analysis". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS230.
Survival analysis has attracted interest from a wide range of disciplines, spanning from medicine and predictive maintenance to various industrial applications. Its growing popularity can be attributed to significant advancements in computational power and the increased availability of data. Diverse approaches have been developed to address the challenge of censored data, from classical statistical tools to contemporary machine learning techniques. However, there is still considerable room for improvement. This thesis aims to introduce innovative approaches that provide deeper insights into survival distributions and to propose new methods with theoretical guarantees that enhance prediction accuracy. Notably, we observe a lack of models able to handle sequential data, a setting that is relevant due to its ability to adapt quickly to new information and its efficiency in handling large data streams without requiring significant memory resources. The first contribution of this thesis is a theoretical framework for modeling online survival data. We model the hazard function as a parametric exponential that depends on the covariates, and we use online convex optimization algorithms to minimize the negative log-likelihood of our model, an approach that is novel in this field. We propose a new adaptive second-order algorithm, SurvONS, which ensures robustness in hyperparameter selection while maintaining fast regret bounds. Additionally, we introduce a stochastic approach that enhances the convexity properties to achieve faster convergence rates. The second contribution of this thesis is a detailed comparison of diverse survival models, including semi-parametric, parametric, and machine learning models. We study the dataset characteristics that influence the methods' performance, and we propose an aggregation procedure that enhances prediction accuracy and robustness.
Finally, we apply the different approaches discussed throughout the thesis to an industrial case study: predicting employee attrition, a fundamental issue in modern business. Additionally, we study the impact of employee characteristics on attrition predictions using permutation feature importance and Shapley values.
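The core modeling step described in this abstract (an exponential hazard exp(&lt;theta, x&gt;) fitted by minimizing the negative log-likelihood one observation at a time) can be sketched with plain online gradient descent. This is an illustrative toy under assumed data and step sizes, not the thesis's second-order SurvONS algorithm; the gradient clipping is a practical safeguard I add, not part of the described method.

```python
import math
import random

def nll_grad(theta, x, t, delta):
    # Exponential hazard lam = exp(<theta, x>); for an observation with
    # time t and event indicator delta, the negative log-likelihood is
    # lam * t - delta * log(lam), with gradient (lam * t - delta) * x.
    lam = math.exp(sum(a * b for a, b in zip(theta, x)))
    s = lam * t - delta
    return [s * xi for xi in x]

def online_fit(stream, dim, lr=0.1, clip=5.0):
    # Online gradient descent with decaying steps; clipping tames
    # heavy-tailed early updates.
    theta = [0.0] * dim
    for k, (x, t, delta) in enumerate(stream, start=1):
        g = nll_grad(theta, x, t, delta)
        norm = math.sqrt(sum(gi * gi for gi in g))
        if norm > clip:
            g = [gi * clip / norm for gi in g]
        step = lr / math.sqrt(k)
        theta = [th - step * gi for th, gi in zip(theta, g)]
    return theta

# Synthetic uncensored stream generated from a known parameter.
random.seed(0)
true_theta = [0.8, -0.5]
stream = []
for _ in range(10000):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    lam = math.exp(sum(a * b for a, b in zip(true_theta, x)))
    stream.append((x, random.expovariate(lam), 1))  # delta = 1: observed
theta_hat = online_fit(stream, dim=2)
```

On this synthetic stream the online estimate drifts toward the generating parameter, which is the behaviour the regret bounds in the thesis quantify far more precisely.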
Reiffers-Masson, Alexandre. "Compétition sur la visibilité et la popularité dans les réseaux sociaux en ligne". Thesis, Avignon, 2016. http://www.theses.fr/2016AVIG0210/document.
This Ph.D. is dedicated to applying game theory to understand user behaviour in online social networks (OSNs). Its three main questions are: "How to maximize content popularity?"; "How to model the distribution of messages across sources and topics in OSNs?"; and "How to minimize gossip propagation and maximize content diversity?". After a survey of prior research on these questions in Chapter 1, Chapter 2 studies a competition over visibility. In Chapter 3, we model and provide insight into the posting behaviour of publishers in OSNs using the stochastic approximation framework. Chapter 4 describes a popularity competition using a differential game formulation. Chapter 5 is dedicated to formulating two convex optimization problems in the context of online social networks. Finally, conclusions and perspectives are given in Chapter 6.
Akhavanfoomani, Aria. "Derivative-free stochastic optimization, online learning and fairness". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG001.
In this thesis, we first study the problem of zero-order optimization in the active setting for three different classes of smooth functions: i) functions that satisfy the Polyak-Łojasiewicz condition, ii) strongly convex functions, and iii) the larger class of highly smooth non-convex functions. Furthermore, we propose a novel algorithm based on l1-type randomization and study its properties for Lipschitz convex functions in an online optimization setting. Our analysis relies on a new Poincaré-type inequality for the uniform measure on the l1-sphere with explicit constants. Then, we study the zero-order optimization problem in the passive scheme. We propose a new method for estimating the minimizer and the minimum value of a smooth and strongly convex regression function f, derive upper bounds for this algorithm, and prove minimax lower bounds for this setting. In the end, we study the linear contextual bandit problem under fairness constraints, where an agent has to select one candidate from a pool and each candidate belongs to a sensitive group. We propose a novel notion of fairness that is practical in the aforementioned example. We design a greedy policy that computes an estimate of the relative rank of each candidate using the empirical cumulative distribution function, and we prove its optimality.
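Active-setting zero-order methods of the kind studied in this thesis typically build a gradient estimate from two function evaluations along a random direction. The sketch below uses the classical l2-sphere randomization on a toy strongly convex quadratic; the thesis's l1-type randomization and its analysis are not reproduced here, and the test function and step size are illustrative assumptions.

```python
import math
import random

def sphere_sample(dim):
    # Uniform direction on the Euclidean (l2) unit sphere; a stand-in
    # for the l1-sphere sampling studied in the thesis.
    v = [random.gauss(0, 1) for _ in range(dim)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def zo_gradient(f, x, h=1e-3):
    # Two-point zero-order gradient estimate:
    #   g = (d / (2h)) * (f(x + h*u) - f(x - h*u)) * u,
    # unbiased for quadratics since E[d * u * u^T] = I.
    d = len(x)
    u = sphere_sample(d)
    fp = f([xi + h * ui for xi, ui in zip(x, u)])
    fm = f([xi - h * ui for xi, ui in zip(x, u)])
    return [(d / (2 * h)) * (fp - fm) * ui for ui in u]

def zo_minimize(f, x0, steps=2000, lr=0.05):
    # Gradient descent driven only by function evaluations.
    x = list(x0)
    for _ in range(steps):
        g = zo_gradient(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

random.seed(1)
# Strongly convex quadratic with minimizer (1, -2).
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
x_star = zo_minimize(f, [0.0, 0.0])
```

Each iteration touches the objective only twice, which is the defining constraint of the zero-order (derivative-free) setting.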
Ho, Vinh Thanh. "Techniques avancées d'apprentissage automatique basées sur la programmation DC et DCA". Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0289/document.
In this dissertation, we develop advanced machine learning techniques in the framework of online learning and reinforcement learning (RL). The backbone of our approach is DC (Difference of Convex functions) programming and DCA (DC Algorithm), together with their online versions, which are best known as powerful nonsmooth, nonconvex optimization tools. The dissertation is composed of two parts: the first studies online machine learning techniques, and the second concerns RL in both batch and online modes. The first part includes two chapters, on online classification (Chapter 2) and prediction with expert advice (Chapter 3). These two chapters present a unified DC approximation approach to different online learning algorithms whose observed objective functions are 0-1 loss functions. We thoroughly study how to develop efficient online DCA algorithms from both theoretical and computational standpoints. The second part consists of four chapters (Chapters 4, 5, 6, 7). After a brief introduction to RL and related work in Chapter 4, Chapter 5 provides effective batch-mode RL techniques based on DC programming and DCA. In particular, we first consider four different DC optimization formulations, for which corresponding DCA-based algorithms are developed, then carefully address the key issues of DCA, and finally show the computational efficiency of these algorithms through various experiments. Continuing this study, in Chapter 6 we develop DCA-based RL techniques in online mode and propose alternating versions of them. As an application, we tackle the stochastic shortest path (SSP) problem in Chapter 7. Notably, a particular class of SSP problems can be reformulated in two directions: as a cardinality minimization formulation and as an RL formulation. First, the cardinality formulation involves the zero-norm in the objective and binary variables. We propose a DCA-based algorithm that exploits a DC approximation of the zero-norm and an exact penalty technique for the binary variables. Second, we make use of the aforementioned DCA-based batch RL algorithm. All proposed algorithms are tested on artificial road networks.
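DCA itself is compact to state: write the objective as f = g - h with both g and h convex, then repeatedly linearize h at the current iterate and minimize the resulting convex surrogate. A minimal one-dimensional sketch (the function f(x) = x**4 - x**2 and the starting point are illustrative choices, not taken from the dissertation):

```python
def dca_step(x):
    # DC split of f(x) = x**4 - x**2: g(x) = x**4 and h(x) = x**2,
    # both convex. Linearize h at x via y = h'(x) = 2*x, then minimize
    # the convex surrogate g(z) - y*z, whose stationarity condition
    # is 4*z**3 = y.
    y = 2.0 * x
    return (y / 4.0) ** (1.0 / 3.0) if y >= 0 else -((-y / 4.0) ** (1.0 / 3.0))

x = 1.0
for _ in range(40):
    x = dca_step(x)
# x converges linearly to the critical point 1/sqrt(2) ≈ 0.7071
```

Each iteration solves an easy convex subproblem, which is why DCA scales to the nonsmooth, nonconvex objectives (0-1 losses, zero-norm penalties) that the dissertation targets.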
Weiss, Pierre. "Algorithmes rapides d'optimisation convexe. Applications à la reconstruction d'images et à la détection de changements". PhD thesis, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00349452.
Karimi, Belhal. "Non-Convex Optimization for Latent Data Models : Algorithms, Analysis and Applications". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX040/document.
Many problems in machine learning pertain to the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks, and sparse logistic regression. The optimization methods used to solve these problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements in non-convex optimization procedures for large datasets. This thesis addresses these two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and by improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective and illustrate our findings with numerous applications. In the second part, we focus on the maximization of a non-convex likelihood using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new variants aimed at speeding up the convergence of the estimated parameters.
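The EM algorithm mentioned in this abstract alternates an expectation step over the latent variables with a maximization step over the parameters. Below is a minimal batch EM for a two-component Gaussian mixture, a canonical latent data model; the unit variances, equal weights, and synthetic data are simplifying assumptions, and the thesis's stochastic and incremental variants are not implemented here.

```python
import math
import random

def em_gmm_1d(data, mu0, mu1, iters=50):
    # EM for a 1D mixture 0.5*N(mu0, 1) + 0.5*N(mu1, 1):
    # E-step computes each point's responsibility for component 1,
    # M-step re-estimates the means as responsibility-weighted averages.
    for _ in range(iters):
        r = []
        for x in data:
            p0 = math.exp(-0.5 * (x - mu0) ** 2)
            p1 = math.exp(-0.5 * (x - mu1) ** 2)
            r.append(p1 / (p0 + p1))
        s1 = sum(r)
        s0 = len(data) - s1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / s1
        mu0 = sum((1 - ri) * x for ri, x in zip(r, data)) / s0
    return mu0, mu1

# Synthetic sample from two well-separated components.
random.seed(2)
data = [random.gauss(-2, 1) for _ in range(300)] + \
       [random.gauss(2, 1) for _ in range(300)]
mu0, mu1 = em_gmm_1d(data, mu0=-0.5, mu1=0.5)
```

Each full E-step touches every sample, which is precisely the per-update cost that the incremental and stochastic variants studied in the thesis aim to reduce.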
DANIILIDIS, Aris. "Analyse convexe et quasi-convexe ; applications en optimisation". Habilitation à diriger des recherches, Université de Pau et des Pays de l'Adour, 2002. http://tel.archives-ouvertes.fr/tel-00001355.
Bahraoui, Mohamed-Amin. "Suites diagonalement stationnaires en optimisation convexe". Montpellier 2, 1994. http://www.theses.fr/1994MON20153.
Yagoubi, Mohamed. "Commande robuste structurée et optimisation convexe". Nantes, 2003. http://www.theses.fr/2003NANT2027.
Books on the topic "Optimisation convexe en ligne"
Willem, Michel. Analyse convexe et optimisation. [S.l.]: CIACO, 1987.
Willem, Michel. Analyse convexe et optimisation. 3rd ed. Louvain-la-Neuve: CIACO, 1989.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. Optimisation convexe et inéquations variationnelles monotones. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5.
Hiriart-Urruty, Jean-Baptiste. Optimisation et analyse convexe: Exercices et problèmes corrigés, avec rappels de cours. Les Ulis: EDP sciences, 2009.
Grötschel, Martin. Geometric algorithms and combinatorial optimization. 2nd ed. Berlin: Springer-Verlag, 1993.
Grötschel, Martin. Geometric algorithms and combinatorial optimization. Berlin: Springer-Verlag, 1988.
Bampis, Evripidis, Klaus Jansen, and Claire Kenyon, eds. Efficient approximation and online algorithms: Recent progress on classical combinatorial optimization problems and new applications. New York: Springer, 2006.
Hiriart-Urruty, Jean-Baptiste. Optimisation et analyse convexe. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0.
Optimisation et analyse convexe. Presses Universitaires de France - PUF, 1998.
Book chapters on the topic "Optimisation convexe en ligne"
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Monotonie et maximale monotonie". In Optimisation convexe et inéquations variationnelles monotones, 117–44. Cham: Springer Nature Switzerland, 2012. http://dx.doi.org/10.1007/978-3-031-30681-5_4.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Ensembles et fonctions convexes". In Optimisation convexe et inéquations variationnelles monotones, 1–36. Cham: Springer Nature Switzerland, 2012. http://dx.doi.org/10.1007/978-3-031-30681-5_1.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Inéquations Variationnelles". In Optimisation convexe et inéquations variationnelles monotones, 145–62. Cham: Springer Nature Switzerland, 2012. http://dx.doi.org/10.1007/978-3-031-30681-5_5.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Dualité et Inéquations Variationnelles". In Optimisation convexe et inéquations variationnelles monotones, 163–80. Cham: Springer Nature Switzerland, 2012. http://dx.doi.org/10.1007/978-3-031-30681-5_6.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Dualité, Lagrangien, Points de Selle". In Optimisation convexe et inéquations variationnelles monotones, 65–116. Cham: Springer Nature Switzerland, 2012. http://dx.doi.org/10.1007/978-3-031-30681-5_3.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Dualité et Sous-Différentiabilité". In Optimisation convexe et inéquations variationnelles monotones, 37–63. Cham: Springer Nature Switzerland, 2012. http://dx.doi.org/10.1007/978-3-031-30681-5_2.
"Frontmatter". In Optimisation et analyse convexe, i–ii. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-fm.
"V.2 Optimisation à données affines (Programmation linéaire)". In Optimisation et analyse convexe, 168–71. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-016.
"IV.3. Premiers pas dans la théorie de la dualité". In Optimisation et analyse convexe, 129–64. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-014.
"VII.1. La transformation de Legendre-Fenchel". In Optimisation et analyse convexe, 271–73. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-021.
Conference papers on the topic "Optimisation convexe en ligne"
Lourenço, Pedro, Hugo Costa, João Branco, Pierre-Loïc Garoche, Arash Sadeghzadeh, Jonathan Frey, Gianluca Frison, Anthea Comellini, Massimo Barbero, and Valentin Preda. "Verification & validation of optimisation-based control systems: methods and outcomes of VV4RTOS". In ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques. ESA, 2023. http://dx.doi.org/10.5270/esa-gnc-icatt-2023-155.
Peschel, U., A. Shipulin, G. Onishukov, and F. Lederer. "Optimisation of a Raman Frequency Converter based on highly Ge-doped fibres". In The European Conference on Lasers and Electro-Optics. Washington, D.C.: Optica Publishing Group, 1996. http://dx.doi.org/10.1364/cleo_europe.1996.cwg3.
Field, David. "Sand Fill Clean-Out on Wireline Enables Access to Additional Perforation Zones in Gas Well Producer". In International Petroleum Technology Conference. IPTC, 2023. http://dx.doi.org/10.2523/iptc-23048-ea.
Murchie, Stuart William, Bård Martin Tinnen, Arne Motland, Bjarte Bore, and Peter Gaballa. "Highly Instrumented Electric Line Deployed Intervention Technology Platform Provides Precise, Controlled High Expansion Completion Manipulation Capabilities". In SPE/ICoTA Well Intervention Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/208987-ms.