Scientific literature on the topic "Optimisation convexe en ligne" (online convex optimization)
Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles
Consult the topical lists of journal articles, books, theses, conference reports, and other scholarly sources on the topic "Optimisation convexe en ligne".
Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever these details are included in the metadata.
Journal articles on the topic "Optimisation convexe en ligne"
Hilout, Saïd. "Stabilité en optimisation convexe non différentiable." Comptes Rendus de l'Académie des Sciences - Series I - Mathematics 329, no. 11 (December 1999): 1027–32. http://dx.doi.org/10.1016/s0764-4442(00)88631-0.
Kouada, I. "Sur la dualité en optimisation vectorielle convexe." RAIRO - Operations Research 28, no. 3 (1994): 255–81. http://dx.doi.org/10.1051/ro/1994280302551.
Boyer, R. "Algorithmes de type F.A.C. en optimisation convexe." ESAIM: Mathematical Modelling and Numerical Analysis 28, no. 1 (1994): 95–119. http://dx.doi.org/10.1051/m2an/1994280100951.
Rodriguez, Pedro, and Didier Dumur. "Robustification d'une commande GPC par optimisation convexe du paramètre de Youla." Journal Européen des Systèmes Automatisés 37, no. 1 (January 30, 2003): 109–34. http://dx.doi.org/10.3166/jesa.37.109-134.
Abbas-Turki, Mohamed, Gilles Duc, and Benoît Clément. "Retouche de correcteurs par optimisation convexe. Application au pilotage d'un lanceur spatial." Journal Européen des Systèmes Automatisés 40, no. 9-10 (December 30, 2006): 997–1017. http://dx.doi.org/10.3166/jesa.40.997-1017.
Belkeziz, K., and A. Metrane. "Optimisation d'une fonction linéaire sur l'ensemble des solutions efficaces d'un problème multicritère quadratique convexe." Annales mathématiques Blaise Pascal 11, no. 1 (2004): 19–33. http://dx.doi.org/10.5802/ambp.182.
Kouada, A. Issoufou. "Sur la propriété de domination et l'existence de points Pareto-efficaces en optimisation vectorielle convexe." RAIRO - Operations Research 28, no. 1 (1994): 77–84. http://dx.doi.org/10.1051/ro/1994280100771.
Barbet, C., H. Longuet, P. Gatault, N. Rabot, and J. M. Halimi. "Ligne directe ville–hôpital en néphrologie : optimisation du parcours de soins." Néphrologie & Thérapeutique 12, no. 5 (September 2016): 402–3. http://dx.doi.org/10.1016/j.nephro.2016.07.120.
Clément, Benoît. "Analyse par intervalles et optimisation convexe pour résoudre un problème général de faisabilité d'une contrainte robuste." Journal Européen des Systèmes Automatisés 46, no. 4-5 (July 30, 2012): 381–95. http://dx.doi.org/10.3166/jesa.46.381-395.
F. Aziz, Rahma, and Maha S. Younis. "A New Hybrid Conjugate Gradient Method with Global Convergence Properties." Wasit Journal for Pure Sciences 3, no. 3 (September 30, 2024): 58–68. http://dx.doi.org/10.31185/wjps.453.
Theses on the topic "Optimisation convexe en ligne"
Fernandez, Camila. "Contributions and applications to survival analysis." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS230.
Survival analysis has attracted interest from a wide range of disciplines, spanning from medicine and predictive maintenance to various industrial applications. Its growing popularity can be attributed to significant advancements in computational power and the increased availability of data. Diverse approaches have been developed to address the challenge of censored data, from classical statistical tools to contemporary machine learning techniques. However, there is still considerable room for improvement. This thesis aims to introduce innovative approaches that provide deeper insights into survival distributions and to propose new methods with theoretical guarantees that enhance prediction accuracy. Notably, we observe a lack of models able to treat sequential data, a setting that is relevant due to its ability to adapt quickly to new information and its efficiency in handling large data streams without requiring significant memory resources. The first contribution of this thesis is a theoretical framework for modeling online survival data. We model the hazard function as a parametric exponential that depends on the covariates, and we use online convex optimization algorithms to minimize the negative log-likelihood of our model, an approach that is novel in this field. We propose a new adaptive second-order algorithm, SurvONS, which ensures robustness in hyperparameter selection while maintaining fast regret bounds. Additionally, we introduce a stochastic approach that enhances the convexity properties to achieve faster convergence rates. The second contribution of this thesis is a detailed comparison of diverse survival models, including semi-parametric, parametric, and machine learning models. We study the dataset characteristics that influence the methods' performance, and we propose an aggregation procedure that enhances prediction accuracy and robustness.
Finally, we apply the different approaches discussed throughout the thesis to an industrial case study: predicting employee attrition, a fundamental issue in modern business. Additionally, we study the impact of employee characteristics on attrition predictions using permutation feature importance and Shapley values.
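The online approach this abstract describes can be illustrated with a minimal sketch. The code below is a hedged toy, not the SurvONS algorithm itself (which is second-order and adaptive): it runs plain online gradient descent on the per-observation negative log-likelihood of an exponential hazard model lambda(x) = exp(theta . x) with right-censored observations; all names and constants are illustrative.

```python
import numpy as np

def nll(theta, x, t, d):
    """Negative log-likelihood of one right-censored observation under an
    exponential hazard lam = exp(theta . x): nll = lam * t - d * log(lam)."""
    s = theta @ x
    return np.exp(s) * t - d * s

def ogd_survival(stream, dim, lr=0.1):
    """Online gradient descent over a stream of (x, t, d) observations,
    with a decaying step size and gradient clipping for stability."""
    theta = np.zeros(dim)
    for i, (x, t, d) in enumerate(stream, start=1):
        lam = np.exp(theta @ x)
        grad = np.clip((t * lam - d) * x, -5.0, 5.0)  # d(nll)/d(theta), clipped
        theta -= lr / np.sqrt(i) * grad
    return theta

# Toy data: true theta = [1.0, -0.5], exponential event times, random censoring.
rng = np.random.default_rng(0)
true_theta = np.array([1.0, -0.5])
stream = []
for _ in range(2000):
    x = rng.normal(size=2)
    lam = np.exp(true_theta @ x)
    t_event = rng.exponential(1.0 / lam)
    t_cens = rng.exponential(2.0)
    stream.append((x, min(t_event, t_cens), int(t_event <= t_cens)))

theta_hat = ogd_survival(stream, dim=2)
```

Under the exponential model the per-round loss is convex in theta, which is what makes online convex optimization machinery applicable; a second-order method in the spirit of online Newton step would replace the scalar step size with a preconditioner.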
Reiffers-Masson, Alexandre. "Compétition sur la visibilité et la popularité dans les réseaux sociaux en ligne." Thesis, Avignon, 2016. http://www.theses.fr/2016AVIG0210/document.
This Ph.D. is dedicated to the application of game theory to the understanding of user behaviour in Online Social Networks. The three main questions of this Ph.D. are: "How to maximize content popularity?"; "How to model the distribution of messages across sources and topics in OSNs?"; "How to minimize gossip propagation and how to maximize content diversity?". After a survey of the research on these questions in Chapter 1, we study a competition over visibility in Chapter 2. In Chapter 3, we model and provide insight into the posting behaviour of publishers in OSNs using the stochastic approximation framework. In Chapter 4, a popularity competition is described using a differential game formulation. Chapter 5 is dedicated to the formulation of two convex optimization problems in the context of Online Social Networks. Finally, conclusions and perspectives are given in Chapter 6.
Akhavanfoomani, Aria. "Derivative-free stochastic optimization, online learning and fairness." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG001.
In this thesis, we first study the problem of zero-order optimization in the active setting for three different classes of smooth functions: i) functions that satisfy the Polyak-Łojasiewicz condition, ii) strongly convex functions, and iii) the larger class of highly smooth non-convex functions. Furthermore, we propose a novel algorithm based on l1-type randomization, and we study its properties for Lipschitz convex functions in an online optimization setting. Our analysis relies on a new Poincaré-type inequality for the uniform measure on the l1-sphere with explicit constants. Then, we study the zero-order optimization problem in the passive scheme. We propose a new method for estimating the minimizer and the minimum value of a smooth and strongly convex regression function f. We derive upper bounds for this algorithm and prove minimax lower bounds for such a setting. In the end, we study the linear contextual bandit problem under fairness constraints, where an agent has to select one candidate from a pool, and each candidate belongs to a sensitive group. We propose a novel notion of fairness which is practical in the aforementioned example. We design a greedy policy that computes an estimate of the relative rank of each candidate using the empirical cumulative distribution function, and we prove its optimality.
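The zero-order (derivative-free) setting studied here can be sketched with a standard two-point gradient estimator whose directions are sampled uniformly on the Euclidean sphere. This is a generic construction from this literature, used only as an illustration; the thesis's own scheme is based on l1-type randomization, which differs in the sampling distribution.

```python
import numpy as np

def two_point_grad(f, x, rng, delta=1e-3):
    """Two-point zeroth-order gradient estimate with a uniform direction u
    on the unit sphere: g = (d / (2 delta)) * (f(x + delta u) - f(x - delta u)) * u.
    Its expectation approximates grad f(x) (exactly so for quadratics)."""
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)
    return x.size / (2.0 * delta) * (f(x + delta * u) - f(x - delta * u)) * u

def zo_minimize(f, x0, steps=5000, lr=0.05, seed=0):
    """Gradient-free minimization of a smooth convex f using only function values."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * two_point_grad(f, x, rng)
    return x

# Strongly convex quadratic with minimizer at (1, -2); no gradients are queried.
f = lambda z: (z[0] - 1.0) ** 2 + (z[1] + 2.0) ** 2
x_star = zo_minimize(f, np.zeros(2))
```

Only two function evaluations per step are used, which is the defining constraint of the zero-order (bandit-feedback) setting.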
Ho, Vinh Thanh. "Techniques avancées d'apprentissage automatique basées sur la programmation DC et DCA." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0289/document.
In this dissertation, we develop advanced machine learning techniques in the framework of online learning and reinforcement learning (RL). The backbone of our approaches is DC (Difference of Convex functions) programming and DCA (DC Algorithm), together with their online versions, which are best known as powerful nonsmooth, nonconvex optimization tools. This dissertation is composed of two parts: the first part studies online machine learning techniques, and the second part concerns RL in both batch and online modes. The first part includes two chapters corresponding to online classification (Chapter 2) and prediction with expert advice (Chapter 3). These two chapters present a unified DC approximation approach to different online learning algorithms where the observed objective functions are 0-1 loss functions. We thoroughly study how to develop efficient online DCA algorithms from both theoretical and computational standpoints. The second part consists of four chapters (Chapters 4, 5, 6, 7). After a brief introduction to RL and related work in Chapter 4, Chapter 5 provides effective RL techniques in batch mode based on DC programming and DCA. In particular, we first consider four different DC optimization formulations for which corresponding attractive DCA-based algorithms are developed, then carefully address the key issues of DCA, and finally show the computational efficiency of these algorithms through various experiments. Continuing this study, in Chapter 6 we develop DCA-based RL techniques in online mode and propose their alternating versions. As an application, we tackle the stochastic shortest path (SSP) problem in Chapter 7. Notably, a particular class of SSP problems can be reformulated in two directions: as a cardinality minimization formulation and as an RL formulation. The cardinality formulation involves the zero-norm in the objective and binary variables.
Firstly, we propose a DCA-based algorithm that exploits a DC approximation of the zero-norm and an exact penalty technique for the binary variables. Secondly, we make use of the aforementioned DCA-based batch RL algorithm. All proposed algorithms are tested on artificial road networks.
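DC programming and DCA, the backbone of this dissertation, can be illustrated on a one-dimensional toy problem; the double-well function and its DC split below are my own illustrative choice, not an example taken from the thesis. DCA linearizes the concave part at the current iterate and solves the resulting convex subproblem.

```python
import numpy as np

def dca_double_well(x0, iters=100):
    """DCA for the nonconvex f(x) = x**4 - x**2, written as g(x) - h(x)
    with g(x) = x**4 and h(x) = x**2, both convex.
    Each iteration:
        y_k = h'(x_k) = 2 x_k                 (subgradient of h)
        x_{k+1} = argmin_x g(x) - y_k * x     (convex subproblem)
    which here has the closed form 4 x^3 = y_k, i.e. x = cbrt(y_k / 4)."""
    x = float(x0)
    for _ in range(iters):
        y = 2.0 * x                   # linearize the subtracted convex part
        x = float(np.cbrt(y / 4.0))   # solve the convex subproblem exactly
    return x

# Starting from x0 = 1, DCA decreases f monotonically and converges to the
# local minimizer x = 1/sqrt(2) of the double well.
x_star = dca_double_well(1.0)
```

The monotone-descent property shown here (each subproblem majorizes f at the current iterate) is the standard convergence guarantee of DCA.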
Weiss, Pierre. "Algorithmes rapides d'optimisation convexe. Applications à la reconstruction d'images et à la détection de changements." PhD thesis, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00349452.
Karimi, Belhal. "Non-Convex Optimization for Latent Data Models: Algorithms, Analysis and Applications." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX040/document.
Many problems in machine learning pertain to the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks, and sparse logistic regression. Optimization methods used to solve these problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements of non-convex optimization procedures for large datasets. This thesis is an attempt to address those two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and by improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective and illustrate our findings with numerous applications. In the second part, we focus on the maximization of a non-convex likelihood using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new variants aimed at speeding up the convergence of the estimated parameters.
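The latent-data setting and the EM algorithm discussed here can be sketched with a deliberately small example; the balanced two-component, unit-variance Gaussian mixture below is a simplifying assumption of mine, chosen so that the two means are the only parameters and the component labels are the latent data.

```python
import numpy as np

def em_two_gaussians(x, mu_init=(-1.0, 1.0), iters=50):
    """Minimal EM for a balanced mixture of two unit-variance Gaussians.
    E-step: posterior responsibility of each component for each sample.
    M-step: responsibility-weighted mean updates."""
    mu = np.array(mu_init, dtype=float)
    for _ in range(iters):
        # E-step: unnormalized densities, then responsibility of component 1
        d0 = np.exp(-0.5 * (x - mu[0]) ** 2)
        d1 = np.exp(-0.5 * (x - mu[1]) ** 2)
        r1 = d1 / (d0 + d1)
        # M-step: weighted means (closed form for this model)
        mu[0] = np.sum((1.0 - r1) * x) / np.sum(1.0 - r1)
        mu[1] = np.sum(r1 * x) / np.sum(r1)
    return mu

# Synthetic sample: two well-separated components with true means -2 and 2.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])
mu_hat = em_two_gaussians(x)
```

Each EM iteration increases the observed-data likelihood; the stochastic and incremental variants the thesis studies replace the full-data E-step with mini-batch or per-sample updates.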
Daniilidis, Aris. "Analyse convexe et quasi-convexe ; applications en optimisation." Habilitation à diriger des recherches, Université de Pau et des Pays de l'Adour, 2002. http://tel.archives-ouvertes.fr/tel-00001355.
Bahraoui, Mohamed-Amin. "Suites diagonalement stationnaires en optimisation convexe." Montpellier 2, 1994. http://www.theses.fr/1994MON20153.
Yagoubi, Mohamed. "Commande robuste structurée et optimisation convexe." Nantes, 2003. http://www.theses.fr/2003NANT2027.
Texte intégralLivres sur le sujet "Optimisation convexe en ligne"
Willem, Michel. Analyse convexe et optimisation. [S.l.]: CIACO, 1987.
Willem, Michel. Analyse convexe et optimisation. 3rd ed. Louvain-la-Neuve: CIACO, 1989.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. Optimisation convexe et inéquations variationnelles monotones. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5.
Hiriart-Urruty, Jean-Baptiste. Optimisation et analyse convexe : Exercices et problèmes corrigés, avec rappels de cours. Les Ulis: EDP Sciences, 2009.
Grötschel, Martin. Geometric Algorithms and Combinatorial Optimization. 2nd ed. Berlin: Springer-Verlag, 1993.
Grötschel, Martin. Geometric Algorithms and Combinatorial Optimization. Berlin: Springer-Verlag, 1988.
Bampis, Evripidis, Klaus Jansen, and Claire Kenyon, eds. Efficient Approximation and Online Algorithms: Recent Progress on Classical Combinatorial Optimization Problems and New Applications. New York: Springer, 2006.
Hiriart-Urruty, Jean-Baptiste. Optimisation et analyse convexe. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0.
Texte intégralOptimisation et analyse convexe. Presses Universitaires de France - PUF, 1998.
Book chapters on the topic "Optimisation convexe en ligne"
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Monotonie et maximale monotonie." In Optimisation convexe et inéquations variationnelles monotones, 117–44. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_4.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Ensembles et fonctions convexes." In Optimisation convexe et inéquations variationnelles monotones, 1–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_1.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Inéquations Variationnelles." In Optimisation convexe et inéquations variationnelles monotones, 145–62. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_5.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Dualité et Inéquations Variationnelles." In Optimisation convexe et inéquations variationnelles monotones, 163–80. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_6.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Dualité, Lagrangien, Points de Selle." In Optimisation convexe et inéquations variationnelles monotones, 65–116. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_3.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Dualité et Sous-Différentiabilité." In Optimisation convexe et inéquations variationnelles monotones, 37–63. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_2.
Texte intégral« Frontmatter ». Dans Optimisation et analyse convexe, i—ii. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-fm.
Texte intégral« V.2 Optimisation à données affines (Programmation linéaire) ». Dans Optimisation et analyse convexe, 168–71. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-016.
Texte intégral« IV.3. Premiers pas dans la théorie de la dualité ». Dans Optimisation et analyse convexe, 129–64. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-014.
Texte intégral« VII.1. La transformation de Legendre-Fenchel ». Dans Optimisation et analyse convexe, 271–73. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-021.
Conference papers on the topic "Optimisation convexe en ligne"
Lourenço, Pedro, Hugo Costa, João Branco, Pierre-Loïc Garoche, Arash Sadeghzadeh, Jonathan Frey, Gianluca Frison, Anthea Comellini, Massimo Barbero, and Valentin Preda. "Verification & validation of optimisation-based control systems: methods and outcomes of VV4RTOS." In ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques. ESA, 2023. http://dx.doi.org/10.5270/esa-gnc-icatt-2023-155.
Peschel, U., A. Shipulin, G. Onishukov, and F. Lederer. "Optimisation of a Raman Frequency Converter based on highly Ge-doped fibres." In The European Conference on Lasers and Electro-Optics. Washington, D.C.: Optica Publishing Group, 1996. http://dx.doi.org/10.1364/cleo_europe.1996.cwg3.
Field, David. "Sand Fill Clean-Out on Wireline Enables Access to Additional Perforation Zones in Gas Well Producer." In International Petroleum Technology Conference. IPTC, 2023. http://dx.doi.org/10.2523/iptc-23048-ea.
Murchie, Stuart William, Bård Martin Tinnen, Arne Motland, Bjarte Bore, and Peter Gaballa. "Highly Instrumented Electric Line Deployed Intervention Technology Platform Provides Precise, Controlled High Expansion Completion Manipulation Capabilities." In SPE/ICoTA Well Intervention Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/208987-ms.