Selected scientific literature on the topic "Optimisation convexe en ligne"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Optimisation convexe en ligne".
Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online if it is available in the metadata.
Journal articles on the topic "Optimisation convexe en ligne"
Hilout, Saïd. "Stabilité en optimisation convexe non différentiable". Comptes Rendus de l'Académie des Sciences - Series I - Mathematics 329, no. 11 (December 1999): 1027–32. http://dx.doi.org/10.1016/s0764-4442(00)88631-0.
Kouada, I. "Sur la dualité en optimisation vectorielle convexe". RAIRO - Operations Research 28, no. 3 (1994): 255–81. http://dx.doi.org/10.1051/ro/1994280302551.
Boyer, R. "Algorithmes de type F.A.C. en optimisation convexe". ESAIM: Mathematical Modelling and Numerical Analysis 28, no. 1 (1994): 95–119. http://dx.doi.org/10.1051/m2an/1994280100951.
Rodriguez, Pedro, and Didier Dumur. "Robustification d'une commande GPC par optimisation convexe du paramètre de Youla". Journal Européen des Systèmes Automatisés 37, no. 1 (January 30, 2003): 109–34. http://dx.doi.org/10.3166/jesa.37.109-134.
Abbas-Turki, Mohamed, Gilles Duc, and Benoît Clément. "Retouche de correcteurs par optimisation convexe. Application au pilotage d'un lanceur spatial". Journal Européen des Systèmes Automatisés 40, no. 9-10 (December 30, 2006): 997–1017. http://dx.doi.org/10.3166/jesa.40.997-1017.
Belkeziz, K., and A. Metrane. "Optimisation d’une fonction linéaire sur l’ensemble des solutions efficaces d’un problème multicritère quadratique convexe". Annales mathématiques Blaise Pascal 11, no. 1 (2004): 19–33. http://dx.doi.org/10.5802/ambp.182.
Kouada, A. Issoufou. "Sur la propriété de domination et l'existence de points Pareto-efficaces en optimisation vectorielle convexe". RAIRO - Operations Research 28, no. 1 (1994): 77–84. http://dx.doi.org/10.1051/ro/1994280100771.
Barbet, C., H. Longuet, P. Gatault, N. Rabot, and J. M. Halimi. "Ligne directe ville–hôpital en néphrologie : optimisation du parcours de soins". Néphrologie & Thérapeutique 12, no. 5 (September 2016): 402–3. http://dx.doi.org/10.1016/j.nephro.2016.07.120.
Clément, Benoît. "Analyse par intervalles et optimisation convexe pour résoudre un problème général de faisabilité d’une contrainte robuste". Journal Européen des Systèmes Automatisés 46, no. 4-5 (July 30, 2012): 381–95. http://dx.doi.org/10.3166/jesa.46.381-395.
F. Aziz, Rahma, and Maha S. Younis. "A New Hybrid Conjugate Gradient Method with Global Convergence Properties". Wasit Journal for Pure sciences 3, no. 3 (September 30, 2024): 58–68. http://dx.doi.org/10.31185/wjps.453.
Testo completoTesi sul tema "Optimisation convexe en ligne"
Fernandez, Camila. "Contributions and applications to survival analysis". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS230.
Survival analysis has attracted interest from a wide range of disciplines, spanning from medicine and predictive maintenance to various industrial applications. Its growing popularity can be attributed to significant advancements in computational power and the increased availability of data. Diverse approaches have been developed to address the challenge of censored data, from classical statistical tools to contemporary machine learning techniques. However, there is still considerable room for improvement. This thesis aims to introduce innovative approaches that provide deeper insights into survival distributions and to propose new methods with theoretical guarantees that enhance prediction accuracy. Notably, we note the lack of models able to treat sequential data, a setting that is relevant due to its ability to adapt quickly to new information and its efficiency in handling large data streams without requiring significant memory resources. The first contribution of this thesis is to propose a theoretical framework for modeling online survival data. We model the hazard function as a parametric exponential that depends on the covariates, and we use online convex optimization algorithms to minimize the negative log-likelihood of our model, an approach that is novel in this field. We propose a new adaptive second-order algorithm, SurvONS, which ensures robustness in hyperparameter selection while maintaining fast regret bounds. Additionally, we introduce a stochastic approach that enhances the convexity properties to achieve faster convergence rates. The second contribution of this thesis is to provide a detailed comparison of diverse survival models, including semi-parametric, parametric, and machine learning models. We study the dataset characteristics that influence the methods' performance, and we propose an aggregation procedure that enhances prediction accuracy and robustness. Finally, we apply the different approaches discussed throughout the thesis to an industrial case study: predicting employee attrition, a fundamental issue in modern business. Additionally, we study the impact of employee characteristics on attrition predictions using permutation feature importance and Shapley values.
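To make the modelling step described in this abstract concrete, the sketch below assumes an exponential hazard λ(x) = exp(θᵀx) and right-censored observations (x, T, δ), and runs plain online gradient descent on the per-round negative log-likelihood exp(θᵀx)·T − δ·θᵀx, which is convex in θ. It is only an illustration of the general online-survival setting; it is not the SurvONS update analysed in the thesis, and all function and variable names are invented for this example.

```python
import numpy as np

def online_survival_ogd(stream, dim, lr=0.1):
    """Online gradient descent on the negative log-likelihood of an
    exponential hazard model lambda(x) = exp(theta^T x) with right censoring.
    `stream` yields (x, T, delta): covariates, observed time, and event
    indicator (1 if the event was observed, 0 if censored). Illustrative
    stand-in for the second-order SurvONS update described in the thesis."""
    theta = np.zeros(dim)
    for t, (x, T, delta) in enumerate(stream, start=1):
        hazard = np.exp(theta @ x)         # lambda(x) under the current theta
        grad = hazard * T * x - delta * x  # gradient of exp(theta@x)*T - delta*theta@x
        theta -= (lr / np.sqrt(t)) * grad  # decreasing-step OGD update
    return theta

# Toy usage: exponential event times with hazard exp(theta_true @ x), random censoring
rng = np.random.default_rng(0)
theta_true = np.array([0.5, -0.3])

def simulate(n):
    for _ in range(n):
        x = rng.normal(size=2)
        event = rng.exponential(1.0 / np.exp(theta_true @ x))
        censor = rng.exponential(2.0)
        yield x, min(event, censor), float(event <= censor)

print(online_survival_ogd(simulate(5000), dim=2))  # approaches theta_true
```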
Reiffers-Masson, Alexandre. "Compétition sur la visibilité et la popularité dans les réseaux sociaux en ligne". Thesis, Avignon, 2016. http://www.theses.fr/2016AVIG0210/document.
This Ph.D. is dedicated to the application of game theory to the understanding of user behaviour in Online Social Networks. The three main questions of this Ph.D. are: "How to maximize content popularity?"; "How to model the distribution of messages across sources and topics in OSNs?"; "How to minimize gossip propagation and maximize content diversity?". After a survey of the research related to these questions in Chapter 1, we study a competition over visibility in Chapter 2. In Chapter 3, we model and provide insight into the posting behaviour of publishers in OSNs using the stochastic approximation framework. In Chapter 4, a popularity competition is described using a differential game formulation. Chapter 5 is dedicated to the formulation of two convex optimization problems in the context of Online Social Networks. Finally, conclusions and perspectives are given in Chapter 6.
Akhavanfoomani, Aria. "Derivative-free stochastic optimization, online learning and fairness". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG001.
In this thesis, we first study the problem of zero-order optimization in the active setting for smooth functions in three different classes: i) functions that satisfy the Polyak-Łojasiewicz condition, ii) strongly convex functions, and iii) the larger class of highly smooth non-convex functions. Furthermore, we propose a novel algorithm that is based on l1-type randomization, and we study its properties for Lipschitz convex functions in an online optimization setting. Our analysis relies on deriving a new Poincaré-type inequality for the uniform measure on the l1-sphere with explicit constants. Then, we study the zero-order optimization problem in the passive scheme. We propose a new method for estimating the minimizer and the minimum value of a smooth and strongly convex regression function f. We derive upper bounds for this algorithm and prove minimax lower bounds for such a setting. In the end, we study the linear contextual bandit problem under fairness constraints, where an agent has to select one candidate from a pool and each candidate belongs to a sensitive group. We propose a novel notion of fairness which is practical in the aforementioned example. We design a greedy policy that computes an estimate of the relative rank of each candidate using the empirical cumulative distribution function, and we prove its optimality.
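The l1-randomized, derivative-free idea mentioned in this abstract can be illustrated with a two-point gradient surrogate built from a direction drawn uniformly on the l1-sphere, plugged into an online gradient step. The sketch below is a generic reconstruction under those assumptions; the scaling of the estimator, the step sizes, and all names used here are illustrative and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_l1_sphere(d):
    """Draw a direction uniformly from the l1-sphere {z : ||z||_1 = 1}:
    a uniform point on the simplex (Dirichlet(1,...,1)) with random signs."""
    return rng.dirichlet(np.ones(d)) * rng.choice([-1.0, 1.0], size=d)

def two_point_grad(f, x, h=1e-3):
    """Zero-order gradient surrogate from two function evaluations along
    a random l1-sphere direction (illustrative scaling)."""
    d = x.size
    z = sample_l1_sphere(d)
    return d * (f(x + h * z) - f(x - h * z)) / (2.0 * h) * np.sign(z)

def zo_online_gradient_descent(losses, x0, lr=0.05, h=1e-3):
    """Play a sequence of convex losses observed only through function values."""
    x = x0.copy()
    for f in losses:
        x = x - lr * two_point_grad(f, x, h)
    return x

# Toy usage: a stream of shifted quadratic losses
losses = (lambda x, c=c: np.sum((x - c) ** 2) for c in rng.normal(size=(200, 3)))
print(zo_online_gradient_descent(losses, x0=np.zeros(3)))
```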
Ho, Vinh Thanh. "Techniques avancées d'apprentissage automatique basées sur la programmation DC et DCA". Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0289/document.
In this dissertation, we develop advanced machine learning techniques in the framework of online learning and reinforcement learning (RL). The backbones of our approaches are DC (Difference of Convex functions) programming and DCA (DC Algorithm), together with their online versions, which are best known as powerful nonsmooth, nonconvex optimization tools. This dissertation is composed of two parts: the first part studies online machine learning techniques, and the second part concerns RL in both batch and online modes. The first part includes two chapters corresponding to online classification (Chapter 2) and prediction with expert advice (Chapter 3). These two chapters present a unified DC approximation approach to different online learning algorithms in which the observed objective functions are 0-1 loss functions. We thoroughly study how to develop efficient online DCA algorithms from both theoretical and computational points of view. The second part consists of four chapters (Chapters 4, 5, 6, 7). After a brief introduction to RL and related work in Chapter 4, Chapter 5 aims to provide effective RL techniques in batch mode based on DC programming and DCA. In particular, we first consider four different DC optimization formulations for which corresponding attractive DCA-based algorithms are developed, then carefully address the key issues of DCA, and finally show the computational efficiency of these algorithms through various experiments. Continuing this study, in Chapter 6 we develop DCA-based RL techniques in online mode and propose their alternating versions. As an application, we tackle the stochastic shortest path (SSP) problem in Chapter 7. In particular, a special class of SSP problems can be reformulated in two directions: as a cardinality minimization formulation and as an RL formulation. First, the cardinality formulation involves the zero-norm in the objective and binary variables; we propose a DCA-based algorithm that exploits a DC approximation of the zero-norm and an exact penalty technique for the binary variables. Second, we make use of the aforementioned DCA-based batch RL algorithm. All proposed algorithms are tested on artificial road networks.
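As a reminder of the DC/DCA scheme this abstract builds on: a DC function f = g − h (with g and h convex) is minimised by repeatedly linearising h at the current iterate and solving the resulting convex subproblem. The toy sketch below applies that scheme to f(x) = x⁴ − 2x², where the subproblem has a closed form; it is only meant to show the generic DCA iteration, not the online DCA algorithms developed in the dissertation.

```python
import numpy as np

def dca_quartic(x0, iters=50):
    """DCA on f(x) = x**4 - 2*x**2, written as a difference of convex
    functions g(x) = x**4 and h(x) = 2*x**2. Each step linearizes h at the
    current iterate x_k and minimizes the convex surrogate
    g(x) - h'(x_k) * x, which here reduces to x = cbrt(x_k)."""
    x = x0
    for _ in range(iters):
        grad_h = 4.0 * x
        # argmin_x x**4 - grad_h * x  =>  4 x**3 = grad_h  =>  x = cbrt(x_k)
        x = np.cbrt(grad_h / 4.0)
    return x

print(dca_quartic(0.2))   # converges to the stationary point x = 1
print(dca_quartic(-3.0))  # converges to x = -1
```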
Weiss, Pierre. "Algorithmes rapides d'optimisation convexe. Applications à la reconstruction d'images et à la détection de changements". PhD thesis, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00349452.
Testo completoKarimi, Belhal. "Non-Convex Optimization for Latent Data Models : Algorithms, Analysis and Applications". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX040/document.
Many problems in machine learning pertain to tackling the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks and sparse logistic regression. Optimization methods used to solve those problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements of non-convex optimization procedures for large datasets. This thesis is an attempt to address those two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and by improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective, and we illustrate our findings with numerous applications. In the second part, we focus on the maximization of a non-convex likelihood using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new variants aimed at speeding up the convergence of the estimated parameters.
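The incremental/online flavour of EM referred to in this abstract can be sketched for the simplest latent-data model, a two-component Gaussian mixture with unit variances: running averages of the expected sufficient statistics are updated one observation at a time and the parameters are re-derived from them, so each step costs O(1) in the number of samples seen so far. This is a generic stochastic-approximation EM sketch under those assumptions, not one of the specific algorithms or variants analysed in the thesis.

```python
import numpy as np

def online_em_gmm(stream, lr=lambda t: 1.0 / (t + 1) ** 0.6):
    """Online EM for a 1-D mixture of two unit-variance Gaussians."""
    mu = np.array([-1.0, 1.0])   # component means (initial guess)
    w = np.array([0.5, 0.5])     # mixture weights
    s_resp = w.copy()            # running average of responsibilities
    s_prod = w * mu              # running average of responsibility * x
    for t, x in enumerate(stream, start=1):
        # E-step: posterior responsibility of each component for sample x
        logp = -0.5 * (x - mu) ** 2 + np.log(w)
        r = np.exp(logp - logp.max())
        r /= r.sum()
        # Stochastic-approximation update of the sufficient statistics
        g = lr(t)
        s_resp = (1 - g) * s_resp + g * r
        s_prod = (1 - g) * s_prod + g * r * x
        # M-step: parameters as a function of the running statistics
        w = s_resp / s_resp.sum()
        mu = s_prod / s_resp
    return w, mu

# Toy usage: 30/70 mixture of N(-2, 1) and N(3, 1)
rng = np.random.default_rng(2)
data = np.where(rng.random(20000) < 0.3,
                rng.normal(-2.0, 1.0, 20000),
                rng.normal(3.0, 1.0, 20000))
print(online_em_gmm(data))  # weights near (0.3, 0.7), means near (-2, 3)
```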
Daniilidis, Aris. "Analyse convexe et quasi-convexe ; applications en optimisation". Habilitation à diriger des recherches, Université de Pau et des Pays de l'Adour, 2002. http://tel.archives-ouvertes.fr/tel-00001355.
Bahraoui, Mohamed-Amin. "Suites diagonalement stationnaires en optimisation convexe". Montpellier 2, 1994. http://www.theses.fr/1994MON20153.
Yagoubi, Mohamed. "Commande robuste structurée et optimisation convexe". Nantes, 2003. http://www.theses.fr/2003NANT2027.
Testo completoLibri sul tema "Optimisation convexe en ligne"
Willem, Michel. Analyse convexe et optimisation. [S.l.]: CIACO, 1987.
Willem, Michel. Analyse convexe et optimisation. 3rd ed. Louvain-la-Neuve: CIACO, 1989.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. Optimisation convexe et inéquations variationnelles monotones. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5.
Hiriart-Urruty, Jean-Baptiste. Optimisation et analyse convexe: Exercices et problèmes corrigés, avec rappels de cours. Les Ulis: EDP sciences, 2009.
Grötschel, Martin. Geometric algorithms and combinatorial optimization. 2nd ed. Berlin: Springer-Verlag, 1993.
Grötschel, Martin. Geometric algorithms and combinatorial optimization. Berlin: Springer-Verlag, 1988.
Cerca il testo completoEvripidis, Bampis, Jansen Klaus e Kenyon Claire, a cura di. Efficient approximation and online algorithms: Recent progress on classical combinatorical optimization problems and new applications. New York: Springer, 2006.
Cerca il testo completoHiriart-Urruty, Jean-Baptiste. Optimisation et analyse convexe. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0.
Testo completoOptimisation et analyse convexe. Presses Universitaires de France - PUF, 1998.
Book chapters on the topic "Optimisation convexe en ligne"
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Monotonie et maximale monotonie". In Optimisation convexe et inéquations variationnelles monotones, 117–44. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_4.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Ensembles et fonctions convexes". In Optimisation convexe et inéquations variationnelles monotones, 1–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_1.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Inéquations Variationnelles". In Optimisation convexe et inéquations variationnelles monotones, 145–62. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_5.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Dualité et Inéquations Variationnelles". In Optimisation convexe et inéquations variationnelles monotones, 163–80. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_6.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Dualité, Lagrangien, Points de Selle". In Optimisation convexe et inéquations variationnelles monotones, 65–116. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_3.
Crouzeix, Jean-Pierre, Abdelhak Hassouni, and Eladio Ocaña-Anaya. "Dualité et Sous-Différentiabilité". In Optimisation convexe et inéquations variationnelles monotones, 37–63. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30681-5_2.
Testo completo"Frontmatter". In Optimisation et analyse convexe, i—ii. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-fm.
Testo completo"V.2 Optimisation à données affines (Programmation linéaire)". In Optimisation et analyse convexe, 168–71. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-016.
Testo completo"IV.3. Premiers pas dans la théorie de la dualité". In Optimisation et analyse convexe, 129–64. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-014.
Testo completo"VII.1. La transformation de Legendre-Fenchel". In Optimisation et analyse convexe, 271–73. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-0700-0-021.
Conference papers on the topic "Optimisation convexe en ligne"
Lourenço, Pedro, Hugo Costa, João Branco, Pierre-Loïc Garoche, Arash Sadeghzadeh, Jonathan Frey, Gianluca Frison, Anthea Comellini, Massimo Barbero, and Valentin Preda. "Verification & validation of optimisation-based control systems: methods and outcomes of VV4RTOS". In ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques. ESA, 2023. http://dx.doi.org/10.5270/esa-gnc-icatt-2023-155.
Peschel, U., A. Shipulin, G. Onishukov, and F. Lederer. "Optimisation of a Raman Frequency Converter based on highly Ge-doped fibres". In The European Conference on Lasers and Electro-Optics. Washington, D.C.: Optica Publishing Group, 1996. http://dx.doi.org/10.1364/cleo_europe.1996.cwg3.
Field, David. "Sand Fill Clean-Out on Wireline Enables Access to Additional Perforation Zones in Gas Well Producer". In International Petroleum Technology Conference. IPTC, 2023. http://dx.doi.org/10.2523/iptc-23048-ea.
Murchie, Stuart William, Bård Martin Tinnen, Arne Motland, Bjarte Bore, and Peter Gaballa. "Highly Instrumented Electric Line Deployed Intervention Technology Platform Provides Precise, Controlled High Expansion Completion Manipulation Capabilities". In SPE/ICoTA Well Intervention Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/208987-ms.