Theses on the topic « Optimisation convexe en ligne » (online convex optimization)
Consult the top 50 theses for your research on the topic « Optimisation convexe en ligne ».
Fernandez, Camila. « Contributions and applications to survival analysis ». Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS230.
Survival analysis has attracted interest from a wide range of disciplines, spanning from medicine and predictive maintenance to various industrial applications. Its growing popularity can be attributed to significant advancements in computational power and the increased availability of data. Diverse approaches have been developed to address the challenge of censored data, from classical statistical tools to contemporary machine learning techniques. However, there is still considerable room for improvement. This thesis aims to introduce innovative approaches that provide deeper insights into survival distributions and to propose new methods with theoretical guarantees that enhance prediction accuracy. Notably, we notice the lack of models able to treat sequential data, a setting that is relevant due to its ability to adapt quickly to new information and its efficiency in handling large data streams without requiring significant memory resources. The first contribution of this thesis is to propose a theoretical framework for modeling online survival data. We model the hazard function as a parametric exponential that depends on the covariates, and we use online convex optimization algorithms to minimize the negative log-likelihood of our model, an approach that is novel in this field. We propose a new adaptive second-order algorithm, SurvONS, which ensures robustness in hyperparameter selection while maintaining fast regret bounds. Additionally, we introduce a stochastic approach that enhances the convexity properties to achieve faster convergence rates. The second contribution of this thesis is to provide a detailed comparison of diverse survival models, including semi-parametric, parametric, and machine learning models. We study the dataset characteristics that influence the methods' performance, and we propose an aggregation procedure that enhances prediction accuracy and robustness. Finally, we apply the different approaches discussed throughout the thesis to an industrial case study: predicting employee attrition, a fundamental issue in modern business. Additionally, we study the impact of employee characteristics on attrition predictions using permutation feature importance and Shapley values.
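A minimal sketch of the kind of online update described in this abstract, assuming an exponential hazard exp(θᵀx) for covariates x and plain online gradient descent on the per-observation negative log-likelihood (the thesis's SurvONS is a second-order variant; the data layout and step size below are illustrative assumptions):

```python
import numpy as np

def online_exponential_hazard(stream, dim, lr=0.1):
    """One pass of online gradient descent on the negative log-likelihood
    of an exponential survival model with hazard lambda(x) = exp(theta @ x).

    `stream` yields (x, t, delta): covariates, observed time, event indicator
    (delta = 1 if the event occurred, 0 if the observation is censored).
    """
    theta = np.zeros(dim)
    for x, t, delta in stream:
        lam = np.exp(theta @ x)          # hazard for this individual
        # nll of one censored observation: lam * t - delta * log(lam)
        grad = x * (t * lam - delta)     # gradient of the nll w.r.t. theta
        theta -= lr * grad               # first-order online update
    return theta

# Toy usage with synthetic data (purely illustrative).
rng = np.random.default_rng(0)
data = [(rng.normal(size=3), rng.exponential(1.0), rng.integers(0, 2)) for _ in range(1000)]
theta_hat = online_exponential_hazard(data, dim=3)
```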
Reiffers-Masson, Alexandre. « Compétition sur la visibilité et la popularité dans les réseaux sociaux en ligne ». Thesis, Avignon, 2016. http://www.theses.fr/2016AVIG0210/document.
This Ph.D. is dedicated to the application of game theory to the understanding of users' behaviour in Online Social Networks. The three main questions of this Ph.D. are: "How to maximize content popularity?"; "How to model the distribution of messages across sources and topics in OSNs?"; "How to minimize gossip propagation and how to maximize content diversity?". After a survey of the research related to these questions in chapter 1, we study a competition over visibility in chapter 2. In chapter 3, we model and provide insight into the posting behaviour of publishers in OSNs using the stochastic approximation framework. In chapter 4, a popularity competition is described using a differential game formulation. Chapter 5 is dedicated to the formulation of two convex optimization problems in the context of Online Social Networks. Finally, conclusions and perspectives are given in chapter 6.
Akhavanfoomani, Aria. « Derivative-free stochastic optimization, online learning and fairness ». Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG001.
In this thesis, we first study the problem of zero-order optimization in the active setting for three different classes of smooth functions: i) functions that satisfy the Polyak-Łojasiewicz condition, ii) strongly convex functions, and iii) the larger class of highly smooth non-convex functions. Furthermore, we propose a novel algorithm based on ℓ1-type randomization, and we study its properties for Lipschitz convex functions in an online optimization setting. Our analysis relies on deriving a new Poincaré-type inequality for the uniform measure on the ℓ1-sphere with explicit constants. Then, we study the zero-order optimization problem in the passive scheme. We propose a new method for estimating the minimizer and the minimum value of a smooth and strongly convex regression function f. We derive upper bounds for this algorithm and prove minimax lower bounds for such a setting. In the end, we study the linear contextual bandit problem under fairness constraints, where an agent has to select one candidate from a pool and each candidate belongs to a sensitive group. We propose a novel notion of fairness which is practical in the aforementioned example. We design a greedy policy that computes an estimate of the relative rank of each candidate using the empirical cumulative distribution function, and we prove its optimality.
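For context on the ℓ1-type randomization mentioned above, a sketch of a two-point zeroth-order gradient estimator with ℓ1-sphere sampling; the d/(2h) scaling and the sign(ζ) direction follow one common form of such estimators and are assumptions here, not necessarily the thesis's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_l1_sphere(d):
    """Uniform sample on the l1-sphere: a uniform point on the simplex
    (Dirichlet(1,...,1)) with independent random signs per coordinate."""
    w = rng.dirichlet(np.ones(d))
    return rng.choice([-1.0, 1.0], size=d) * w

def zo_gradient_estimate(f, x, h=1e-2):
    """Two-point zeroth-order gradient estimate with l1 randomization.
    Assumed form: (d / 2h) * (f(x + h*zeta) - f(x - h*zeta)) * sign(zeta)."""
    d = x.size
    zeta = sample_l1_sphere(d)
    return d / (2 * h) * (f(x + h * zeta) - f(x - h * zeta)) * np.sign(zeta)

# Toy usage: gradient-free minimization of a smooth convex quadratic.
f = lambda x: 0.5 * np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for t in range(1, 2001):
    x -= (1.0 / t) * zo_gradient_estimate(f, x)
```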
Ho, Vinh Thanh. « Techniques avancées d'apprentissage automatique basées sur la programmation DC et DCA ». Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0289/document.
In this dissertation, we develop some advanced machine learning techniques in the framework of online learning and reinforcement learning (RL). The backbones of our approaches are DC (Difference of Convex functions) programming and DCA (DC Algorithm), and their online versions, which are best known as powerful nonsmooth, nonconvex optimization tools. This dissertation is composed of two parts: the first part studies some online machine learning techniques and the second part concerns RL in both batch and online modes. The first part includes two chapters corresponding to online classification (Chapter 2) and prediction with expert advice (Chapter 3). These two chapters present a unified DC approximation approach to different online learning algorithms where the observed objective functions are 0-1 loss functions. We thoroughly study how to develop efficient online DCA algorithms in terms of theoretical and computational aspects. The second part consists of four chapters (Chapters 4, 5, 6, 7). After a brief introduction of RL and its related works in Chapter 4, Chapter 5 aims to provide effective RL techniques in batch mode based on DC programming and DCA. In particular, we first consider four different DC optimization formulations for which corresponding attractive DCA-based algorithms are developed, then carefully address the key issues of DCA, and finally show the computational efficiency of these algorithms through various experiments. Continuing this study, in Chapter 6 we develop DCA-based RL techniques in online mode and propose their alternating versions. As an application, we tackle the stochastic shortest path (SSP) problem in Chapter 7. Notably, a particular class of SSP problems can be reformulated in two directions, as a cardinality minimization formulation and as an RL formulation. Firstly, the cardinality formulation involves the zero-norm in the objective and binary variables. We propose a DCA-based algorithm by exploiting a DC approximation approach for the zero-norm and an exact penalty technique for the binary variables. Secondly, we make use of the aforementioned DCA-based batch RL algorithm. All proposed algorithms are tested on some artificial road networks.
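As background for the DCA scheme underlying this work, a minimal sketch on a toy DC decomposition f = g − h with a closed-form convex subproblem (the decomposition and the toy problem are illustrative assumptions, not the thesis's formulations):

```python
import numpy as np

# Toy DC program: minimize f(x) = 0.5*||x - b||^2 - lam*||x||_1,
# written as g(x) - h(x) with g(x) = 0.5*||x - b||^2 (convex, smooth)
# and h(x) = lam*||x||_1 (convex, nonsmooth).
b, lam = np.array([0.3, -2.0, 0.05]), 0.5

def dca(x, iters=50):
    for _ in range(iters):
        y = lam * np.sign(x)   # a subgradient of h at the current point
        # DCA step: x_{k+1} = argmin_x g(x) - <y, x>  (closed form here)
        x = b + y
    return x

x_star = dca(np.zeros_like(b))
```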
Weiss, Pierre. « Algorithmes rapides d'optimisation convexe. Applications à la reconstruction d'images et à la détection de changements ». Phd thesis, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00349452.
Karimi, Belhal. « Non-Convex Optimization for Latent Data Models : Algorithms, Analysis and Applications ». Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX040/document.
Many problems in machine learning pertain to tackling the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks or sparse logistic regression. Optimization methods used to solve those problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements of non-convex optimization procedures for large datasets. This thesis is an attempt to address those two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective and illustrate our findings with numerous applications. In the second part, we focus on the maximization of non-convex likelihoods using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new variants aiming at speeding up the convergence of the estimated parameters.
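For readers unfamiliar with the EM algorithm cited above, a self-contained batch EM sketch on a two-component univariate Gaussian mixture (the model, initialization and iteration count are illustrative assumptions; the thesis focuses on stochastic and incremental variants of this scheme):

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """Plain (batch) EM for a two-component 1-D Gaussian mixture."""
    pi, mu, sigma = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point.
        p0 = (1 - pi) * np.exp(-0.5 * ((x - mu[0]) / sigma[0]) ** 2) / sigma[0]
        p1 = pi * np.exp(-0.5 * ((x - mu[1]) / sigma[1]) ** 2) / sigma[1]
        r = p1 / (p0 + p1)
        # M-step: closed-form updates of the mixture parameters.
        pi = r.mean()
        mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
        sigma = np.sqrt(np.array([np.average((x - mu[0]) ** 2, weights=1 - r),
                                  np.average((x - mu[1]) ** 2, weights=r)]))
    return pi, mu, sigma

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
print(em_gmm_1d(sample))
```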
Daniilidis, Aris. « Analyse convexe et quasi-convexe ; applications en optimisation ». Habilitation à diriger des recherches, Université de Pau et des Pays de l'Adour, 2002. http://tel.archives-ouvertes.fr/tel-00001355.
Bahraoui, Mohamed-Amin. « Suites diagonalement stationnaires en optimisation convexe ». Montpellier 2, 1994. http://www.theses.fr/1994MON20153.
Yagoubi, Mohamed. « Commande robuste structurée et optimisation convexe ». Nantes, 2003. http://www.theses.fr/2003NANT2027.
Henrion, Didier. « Polynômes et optimisation convexe en commande robuste ». Habilitation à diriger des recherches, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00246118.
Ostrovskii, Dmitrii. « Reconstruction adaptative des signaux par optimisation convexe ». Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM004/document.
We consider the problem of denoising a signal observed in Gaussian noise. In this problem, classical linear estimators are quasi-optimal provided that the set of possible signals is convex, compact, and known a priori. However, when the set is unspecified, designing an estimator which does not "know" the underlying structure of a signal yet has favorable theoretical guarantees of statistical performance remains a challenging problem. In this thesis, we study a new family of estimators for statistical recovery of signals satisfying certain time-invariance properties. Such signals are characterized by their harmonic structure, which is usually unknown in practice. We propose new estimators which are capable of exploiting the unknown harmonic structure of the signal to be reconstructed. We demonstrate that these estimators admit theoretical performance guarantees, in the form of oracle inequalities, in a variety of settings. We provide efficient algorithmic implementations of these estimators via first-order optimization algorithms with non-Euclidean geometry, and evaluate them on synthetic data, as well as some real-world signals and images.
Durante, Valentin. « Optimisation convexe pour les modèles graphiques discrets ». Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30323.
Graphical models define a family of formalisms and algorithms used for logical and probabilistic reasoning, in fields as varied as image analysis or natural language processing. They can be learned from data, giving probabilistic information which can later be combined with logical information. The objective of the thesis is to improve the efficiency of the reasoning algorithms on these models, in order to increase the power of the fundamental reasoning mechanism used in these tools (lower-bound computation) by exploiting the progress made in the field of convex optimisation. The aim is to be able to solve problems that are currently beyond the reach of our most efficient tools.
Melliani, Mohamed. « Analyse numérique d'algorithmes proximaux généralisés en optimisation convexe ». Rouen, 1997. http://www.theses.fr/1997ROUES030.
Prochazka, Hynek. « Synthèse de régulateurs numériques robustes multivariables par optimisation convexe ». Phd thesis, Grenoble INPG, 2004. http://tel.archives-ouvertes.fr/tel-00169982.
The dissertation is divided into five parts. The first part concerns single-variable (SISO) systems and the remainder focuses on design problems in the multivariable setting. The five parts address the following topics:
• The first part (Chapter 2) deals with the design of robust SISO controllers by pole placement, sensitivity shaping and convex optimization [LK98, LL99]. It presents a new approach to pole-placement design using second-order narrow-band filters, together with a software tool developed as part of this work. The software reflects the improvements made, which radically simplify the design procedure. The section on convex optimization recalls the principles and introduces the notation used later for the multivariable theory.
• The second part (Chapters 3 and 4) reviews the fundamental theory of multivariable control and develops the idea of designing robust multivariable controllers by pole placement and sensitivity shaping. The controller takes the form of an observer with estimated-state feedback and is subsequently used as the central controller for controller design by convex optimization.
• The third part (Chapter 5) is the core of the thesis and develops the method for designing robust multivariable controllers by sensitivity shaping and convex optimization. The method is based on the same principle as the one used in the SISO setting [Lan98, LL99] and is implemented as an interactive Matlab toolbox. Multivariable pole placement is also integrated into this application. The software developed is part of the thesis and is briefly described in this chapter.
• The fourth part (Chapter 6) deals with the design by convex optimization of the multivariable tracking precompensator. The precompensator makes it possible to separate the robustness specifications (disturbance rejection and handling of uncertainties) from the tracking specifications (rise time, maximum overshoot).
• The fifth part (Chapter 7) concerns the reduction of multivariable controllers by closed-loop identification. The method developed follows a similar approach developed for SISO controllers in [LKC01].
Álvarez, Daziano Felipe. « Systèmes dynamiques dissipatifs et méthodes d'approximation en optimisation convexe ». Montpellier 2, 1998. http://www.theses.fr/1998MON20241.
Lebret, Hervé. « Synthèse de diagrammes de réseaux d'antennes par optimisation convexe ». Rennes 1, 1994. http://www.theses.fr/1994REN10153.
Cadoux, Florent. « Optimisation et analyse convexe pour la dynamique non-régulière ». Phd thesis, Grenoble 1, 2009. http://www.theses.fr/2009GRE10231.
The aim of this work is to propose a new approach to the solution of 3D unilateral contact problems with Coulomb friction in solid mechanics. We consider dynamical systems composed of several bodies with a finite number of degrees of freedom: rigid bodies, or deformable bodies which are spatial approximations of continuous models. Friction between bodies is modelled using a classical formulation of Coulomb's law. After time discretization (or quasi-static approximation), we get at each time step a problem containing complementarity equations posed on a product of second order cones, plus other equations. Several methods have been proposed in the literature for different equivalent formulations of this problem, in particular by Moreau, Alart and Curnier, and De Saxcé. Considering the complementarity equations as optimality conditions (KKT) of an optimization problem, we propose a new equivalent reformulation as a parametric convex minimization problem coupled with a fixed point problem. Thanks to this viewpoint, we prove the existence of solutions under a quite mild assumption which can be checked in practice. Moreover, we can practically compute one of these solutions by solving numerically the fixed point equation. The performance of this approach is compared with existing methods.
Cadoux, Florent. « Optimisation et analyse convexe pour la dynamique non-régulière ». Phd thesis, Université Joseph Fourier (Grenoble), 2009. http://tel.archives-ouvertes.fr/tel-00440798.
Texte intégralEl, Gueddari Loubna. « Proximal structured sparsity regularization for online reconstruction in high-resolution accelerated Magnetic Resonance imaging ». Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS573.
Magnetic resonance imaging (MRI) is the reference medical imaging technique for probing in vivo and non-invasively soft tissues in the human body, notably the brain. MR image resolution improvement in a standard scanning time (e.g., 400µm isotropic in 15 min) would allow medical doctors to significantly improve both their diagnosis and patients' follow-up. However, the scanning time in MRI remains long, especially in the high-resolution context. To reduce this time, the recent Compressed Sensing (CS) theory has revolutionized the way of acquiring data in several fields including MRI by overcoming the Shannon-Nyquist theorem. Using CS, data can be massively under-sampled while ensuring conditions for optimal image recovery. In this context, previous Ph.D. theses in the laboratory were dedicated to the design and implementation of physically plausible acquisition scenarios to accelerate the scan. Those projects delivered a new optimization algorithm for the design of advanced non-Cartesian trajectories called SPARKLING: Spreading Projection Algorithm for Rapid K-space samplING. The generated SPARKLING trajectories led to acceleration factors up to 20 in 2D and 60 for 3D acquisitions on highly resolved T₂*-weighted images acquired at 7 Tesla. Those accelerations were only accessible thanks to the high input Signal-to-Noise Ratio delivered by the use of multi-channel reception coils. However, those results come at the price of a long and complex reconstruction. In this thesis, the objective is to propose an online approach for non-Cartesian multi-channel MR image reconstruction. To achieve this goal we rely on an online approach where the reconstruction starts from incomplete data. Hence acquisition and reconstruction are interleaved, and partial feedback is given during the scan. After exposing the Compressed Sensing theory, we present state-of-the-art methods dedicated to multi-channel coil reconstruction. In particular, we first focus on self-calibrating methods, which present the advantage of being adapted to non-Cartesian sampling, and we propose a simple yet efficient method to estimate the coil sensitivity profiles. However, owing to its dependence on user-defined parameters, this two-step approach (extraction of sensitivity maps and then image reconstruction) is not compatible with the timing constraints associated with online reconstruction. We then study calibration-less reconstruction methods and split them into two categories, the k-space-based and the domain-based. While the k-space calibration-less methods are sub-optimal for non-Cartesian reconstruction, due to the gridding procedure, we retain the domain-based calibration-less reconstruction and demonstrate its suitability for online purposes. Hence, in the second part, we first prove the advantage of mixed norms to improve the recovery guarantees in the pMRI setting. Then we study the impact of structured-sparsity-inducing norms on multi-channel reconstruction, and adapt different penalties based on structured sparsity to handle those highly correlated images. Finally, the retained method is applied to the online setting. The entire pipeline is compatible with an implementation through the Gadgetron framework to deliver the reconstruction at the scanner console.
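A minimal sketch of the proximal machinery behind such structured-sparsity reconstructions: the proximal operator of a group (ℓ2,1) penalty and a proximal-gradient step on a generic linear model, where the measurement operator, grouping and step size are illustrative assumptions rather than the thesis's MRI operators:

```python
import numpy as np

def prox_group_l21(x, groups, tau):
    """Group soft-thresholding: prox of tau * sum_g ||x_g||_2."""
    out = x.copy()
    for g in groups:
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm == 0 else x[g] * max(0.0, 1.0 - tau / norm)
    return out

def prox_gradient_step(x, A, y, groups, lam, step):
    """One proximal-gradient step on 0.5*||A x - y||^2 + lam * sum_g ||x_g||_2."""
    grad = A.T @ (A @ x - y)
    return prox_group_l21(x - step * grad, groups, step * lam)

# Toy usage on a random linear model with two groups of coefficients.
rng = np.random.default_rng(0)
A, y = rng.normal(size=(20, 6)), rng.normal(size=20)
x = np.zeros(6)
groups = [np.arange(0, 3), np.arange(3, 6)]
for _ in range(200):
    x = prox_gradient_step(x, A, y, groups, lam=0.5, step=1e-2)
```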
Royer, Martin. « Optimalité statistique du partitionnement par l'optimisation convexe ». Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS442/document.
This work focuses on the problem of point and variable clustering, that is, the grouping of either similar vectors or similar components of a vector in a metric space. This has applications in many relevant fields including pattern recognition in image analysis or gene expression data classification. Through adequate modeling of the similarity between points or variables within a cluster, we analyse the statistical properties of known clustering algorithms such as K-means. When considering homoscedastic elements for all groups, the K-means algorithm is equivalent to a maximum-likelihood procedure. Otherwise the algorithm shows bias, in the sense that it tends to separate groups with larger dispersion, regardless of actual group separation. By using a positive semidefinite reformulation of the estimator, we suggest a correction of the algorithm that leads to the construction of computational algorithms with quasi-minimax properties for hard clustering of points or variables. Those results can be studied under the classical mixture model or latent variable model, and can be extended to the more general and robust class of G-block models. The stochastic controls can be made adaptive to the unknown number of classes as well as to the effective dimension of the problem. They help understand the behavior of the class of spectral estimators that are also widely used for clustering problems. They are supported by extensive simulation studies as well as data analysis stemming from the biological field. When focus is brought to the computational aspect of those algorithms, we exploit ideas based on a strong connection with the domain of convex optimisation, and specifically the technique of low-rank relaxation, of importance when dealing with high-dimensional problems.
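As an illustration of the semidefinite reformulation idea, a sketch of one standard SDP relaxation of K-means (the Peng-Wei form); the thesis analyses a corrected variant, so this block is only context, and cvxpy with its default SDP-capable solver is an assumed tool:

```python
import numpy as np
import cvxpy as cp

def kmeans_sdp_relaxation(X, k):
    """Peng-Wei SDP relaxation of K-means: minimize <D, Z> over PSD matrices Z
    with nonnegative entries, unit row sums and trace k, where D holds squared
    pairwise distances. Clusters can be read off Z, e.g. by rounding."""
    n = X.shape[0]
    D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Z = cp.Variable((n, n), PSD=True)
    constraints = [Z >= 0, cp.sum(Z, axis=1) == 1, cp.trace(Z) == k]
    cp.Problem(cp.Minimize(cp.trace(D @ Z)), constraints).solve()
    return Z.value

# Toy usage: two well-separated groups of points in the plane.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))])
Z = kmeans_sdp_relaxation(X, k=2)
```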
Cornejo, Zuniga Oscar. « Conditionnement et algorithmes proximaux en localisation et optimisation non convexe ». Dijon, 2000. http://www.theses.fr/2000DIJOS015.
Hbaïeb, Slim. « Analyse de cahier des charges en automatique par optimisation convexe ». Paris 11, 2002. http://www.theses.fr/2002PA112137.
This PhD thesis deals with the analysis of specifications for linear continuous-time control design. Two complementary approaches are developed: the first uses the input-output trajectories as parameters and makes it possible to study the limits of performance reachable by a given system; the second uses closed-loop transfers as parameters and makes it possible to analyze the compatibility of several input-output requirements in the time and frequency domains. A physical interpretation of the Youla parameterization, used in this approach, is also introduced. A mathematical framework for finite-dimensional approximation problems is developed. Two kinds of approximations are formulated: the first has simple convergence properties and the second has uniform convergence properties. Both make it possible to solve the considered optimization problems effectively. Advantages and drawbacks of each approach are presented and compared. Finite-dimensional formulations in terms of linear, quadratic or LMI-constrained optimization problems are developed for numerical application. These techniques are applied to an industrial case: "the steam generator water level control of a pressurized water reactor". This study was carried out in collaboration with the "contrôle commande" department of EDF Chatou. Keywords: feasibility, specifications, convex optimization, linear system, Youla parameterization, finite dimensional approximation, steam generator.
Hadj-Saïd, Souad. « Optimisation énergétique Convexe pour véhicule Hybride électrique : vers une solution analytique ». Thesis, Orléans, 2018. http://www.theses.fr/2018ORLE2028/document.
This thesis focuses on the energy management of Hybrid Electric Vehicles. In this type of vehicle, energy optimization is a major challenge. It consists of calculating optimal controls that minimize the vehicle's energy consumption under a finite number of constraints. The optimization problem can be solved using a numerical method or an analytical method. This choice depends on the nature of the energy models that enter the optimization criterion: analytical models or maps of experimental measurements. However, the numerical method presents several disadvantages; for instance, its computation is extremely time-consuming. Therefore, the work presented in this thesis was directed towards developing an analytical solution whose computation is less time-consuming. The architecture of the vehicle is complex: it contains two electrical machines, a thermal engine and a step-up converter. These components all have a direct impact on the vehicle's energy consumption, so several optimization variables are defined. Consequently, working on an analytical solution was a natural choice. The proposed analytical methodology consists of three steps: convex modeling, the analytical computation of the control, and the validation of the analytical control on a vehicle simulator. This methodology was applied to three possible configurations of the studied vehicle: parallel, bi-parallel and series. Finally, the addition of the step-up converter to the energy management, as well as the study of its impact on the vehicle's energy consumption, are presented in the last chapter. The simulation results show that the analytical method considerably reduces the computing time and has an extremely low suboptimality.
Zaourar, Sofia. « Optimisation convexe non-différentiable et méthodes de décomposition en recherche opérationnelle ». Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM099.
Decomposition methods are an application of the divide and conquer principle to large-scale optimization. Their idea is to decompose a given optimization problem into a sequence of easier subproblems. Although successful for many applications, these methods still present challenges. In this thesis, we propose methodological and algorithmic improvements of decomposition methods and illustrate them on several operations research problems. Our approach heavily relies on convex analysis and nonsmooth optimization. In constraint decomposition (or Lagrangian relaxation) applied to short-term electricity generation management, even the subproblems are too difficult to solve exactly. When solved approximately though, the obtained prices show an unstable, noisy behaviour. We present a simple way to improve the structure of the prices by penalizing their noisy behaviour, in particular using a total variation regularization. We illustrate the consistency of our regularization on real-life problems from EDF. We then consider variable decomposition (or Benders decomposition), which can have very slow convergence. Taking a nonsmooth optimization point of view on this method, we address the instability of the Benders cutting-plane algorithm. We present an algorithmic stabilization inspired by bundle methods for convex optimization. The acceleration provided by this stabilization is illustrated on network design and hub location problems. We also study more general convex nonsmooth problems whose objective function is expensive to evaluate. This situation typically arises in decomposition methods. We show that there often exists extra information about the problem, cheap but of unknown accuracy, that is not used by the algorithms. We propose a way to incorporate this coarse information into classical nonsmooth optimization algorithms and apply it successfully to two-stage stochastic problems. Finally, we introduce a decomposition strategy for the machine reassignment problem. This decomposition leads to a new variant of vector bin packing problems, where the bins have variable sizes. We propose fast and efficient heuristics for this problem that improve on state-of-the-art results for vector bin packing problems. An adaptation of these heuristics is also able to generate feasible solutions for the Google instances of the machine reassignment problem.
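A minimal sketch of the Lagrangian-relaxation loop behind such price decompositions, on a toy economic-dispatch problem; the quadratic unit costs, single coupling demand constraint and step sizes are illustrative assumptions (the thesis additionally regularizes the resulting price signal, e.g. by total variation):

```python
import numpy as np

# Toy coupled problem: minimize sum_i a_i * x_i^2  s.t.  sum_i x_i = demand,
# 0 <= x_i <= cap_i.  Dualizing the demand constraint decomposes it per unit.
a = np.array([0.5, 1.0, 2.0])     # quadratic cost coefficients of each unit
cap = np.array([4.0, 3.0, 2.0])   # unit capacities
demand = 6.0

def subproblem(price):
    """Each unit independently solves min a_i x_i^2 - price * x_i on [0, cap_i]."""
    return np.clip(price / (2 * a), 0.0, cap)

price = 0.0
for t in range(1, 501):
    x = subproblem(price)
    price += (1.0 / t) * (demand - x.sum())   # dual subgradient (price) update
# `price` approximates the optimal marginal cost; `x` the optimal dispatch.
```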
Ghouali, Noureddine. « Optimisation en ligne des systèmes interconnectés en état statique ». Nantes, 1988. http://www.theses.fr/1988NANT2026.
Ghouali, Noureddine. « Optimisation en ligne des systèmes interconnectés en état statique ». Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb37613897q.
Lazare, Arnaud. « Global optimization of polynomial programs with mixed-integer variables ». Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLY011.
In this thesis, we are interested in the study of polynomial programs, that is, optimization problems for which the objective function and/or the constraints are expressed by multivariate polynomials. These problems have many practical applications and are currently actively studied. Different methods can be used to find either a global or a heuristic solution, using, for instance, positive semidefinite relaxations as in the "Moment/Sum of squares" method. But these problems remain very difficult and only small instances are addressed. In the quadratic case, an effective exact solution approach was initially proposed in the QCR method. It is based on a quadratic convex reformulation, which is optimal in terms of the continuous relaxation bound. One of the motivations of this thesis is to generalize this approach to the case of polynomial programs. In most of this manuscript, we study optimization problems with binary variables. We propose two families of convex reformulations for these problems: "direct" reformulations and quadratic ones. For direct reformulations, we first focus on linearizations. We introduce the concept of q-linearization, that is, a linearization using q additional variables, and we compare the bounds obtained by continuous relaxation for different values of q. Then, we apply convex reformulation to the polynomial problem, by adding additional terms to the objective function, but without adding additional variables or constraints. The second family of convex reformulations aims at extending quadratic convex reformulation to the polynomial case. We propose several new alternative reformulations that we compare to existing methods on instances from the literature. In particular, we present the algorithm PQCR to solve unconstrained binary polynomial problems. The PQCR method is able to solve several previously unsolved instances. In addition to numerical experiments, we also propose a theoretical study to compare the different quadratic reformulations of the literature and then apply a convex reformulation to them. Finally, we consider more general problems and we propose a method to compute convex relaxations for continuous problems.
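For context on the quadratic convex reformulation mentioned above, a sketch of the standard QCR construction for a binary quadratic objective (the SDP-based choice of the shift is the usual one in the literature; the thesis's PQCR extension to polynomials is not reproduced here):

```latex
% For x in {0,1}^n we have x_i^2 = x_i, hence for any vector alpha in R^n
%   f(x) = x^T Q x + c^T x  coincides on {0,1}^n with
%   f_alpha(x) = x^T (Q + Diag(alpha)) x + (c - alpha)^T x.
% Choosing alpha so that Q + Diag(alpha) is positive semidefinite
% (e.g. alpha_i >= -lambda_min(Q)) makes f_alpha convex, so its continuous
% relaxation over [0,1]^n gives a valid lower bound; QCR picks the alpha
% maximizing this bound via a semidefinite program.
\[
  f_\alpha(x) \;=\; x^\top \bigl(Q + \mathrm{Diag}(\alpha)\bigr) x + (c - \alpha)^\top x,
  \qquad
  f_\alpha(x) = f(x)\ \ \forall x \in \{0,1\}^n .
\]
```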
Sudhakara, Murthy Prasad. « Modèles Parcimonieux et Optimisation Convexe pour la Séparation Aveugle de Sources Convolutives ». Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00586610.
Al Sarray, Basad. « Estimation et choix de modèle pour les séries temporelles par optimisation convexe ». Besançon, 2016. http://www.theses.fr/2016BESA2084.
[…] this study presents some machine learning and convex methods for ARMA model selection and estimation, based on the conversion between ARMA-AR models and ARMA-state-space models. Also in this study, some convex methods are implemented and simulated for time series decomposition and time series component analysis. The results show the ability of convex methods to analyse and model a given series.
Plassart, Stéphan. « Optimisation en-ligne pour les systèmes dynamiques en temps-réel ». Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM017.
Energy consumption is a crucial issue for real-time systems, which is why optimizing it online, i.e. while the processor is running, has become essential and is the goal of this thesis. This optimization is done by adapting the processor speed during job execution. This thesis addresses several situations with different knowledge of past, active and future job characteristics. Firstly, we consider that all job characteristics are known (the offline case), and we propose a linear-time algorithm to determine the speed schedule to execute n jobs on a single processor. Secondly, using Markov decision processes, we solve the case where past and active job characteristics are entirely known, and for future jobs only the probability distribution of the job characteristics (arrival times, execution times and deadlines) is known. Thirdly, we study a more general case: the actual execution time is only discovered when the job is completed. In addition, we also consider the case where we have no statistical knowledge of the jobs, so we have to use learning methods to determine the optimal processor speeds online. Finally, we propose a feasibility analysis (the processor's ability to execute all jobs before their deadlines when it always runs at maximal speed) of several classical online policies, and we show that our dynamic programming algorithm is also the best in terms of feasibility.
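A minimal sketch of the speed-scheduling trade-off described above: a finite-horizon dynamic program choosing a processor speed per time slot so as to finish a given amount of work before the deadline at minimum energy, with energy modelled as speed cubed per slot; the discretization and cost model are illustrative assumptions, not the thesis's algorithms:

```python
from functools import lru_cache

SPEEDS = [0, 1, 2, 3]      # available (discrete) processor speeds per slot
ALPHA = 3                  # energy per slot ~ speed**ALPHA (convex in speed)

def min_energy(work, slots):
    """Minimum energy to complete `work` units within `slots` time slots."""
    @lru_cache(maxsize=None)
    def dp(remaining, t):
        if remaining <= 0:
            return 0.0
        if t == 0:
            return float("inf")        # deadline missed: infeasible
        return min(s ** ALPHA + dp(max(remaining - s, 0), t - 1) for s in SPEEDS)
    return dp(work, slots)

print(min_energy(work=6, slots=4))     # -> 18.0 (e.g. speeds 2, 2, 1, 1)
```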
Haddou, Mounir. « Contribution à l'étude des méthodes de décomposition et de barrières en optimisation convexe ». Clermont-Ferrand 2, 1995. http://www.theses.fr/1995CLF21729.
Chine, Abderrazek. « Algorithmes robustes en optimisation non convexe : codes et simulations numériques en grande dimension ». Phd thesis, Grenoble 1, 1991. http://tel.archives-ouvertes.fr/tel-00340403.
Chine, Abderrazek Pham Dinh Tao Laurent Pierre Jean. « Algorithmes robustes en optimisation non convexe codes et simulations numériques en grande dimension / ». S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00340403.
Tran, Duc Quynh. « Optimisation non convexe en finance et en gestion de production : modèles et méthodes ». Thesis, Metz, 2011. http://www.theses.fr/2011METZ019S/document.
This thesis deals with optimization techniques for solving some optimization problems in two domains: portfolio selection and production management. They are large-scale non-convex optimization problems due to integer variables and/or the non-convexity of the objective function. Our approach is based on DC programming and DCA, DC relaxation techniques and the Branch and Bound algorithm. This work is motivated by the robustness and the performance of DC programming and DCA compared to other methods. The thesis includes three parts. In the first part, we present the fundamental tools and essential techniques of DC programming and DCA, as well as the Branch and Bound method. The second part concerns some non-convex optimization problems in portfolio selection. The two following problems are considered: a continuous min-max problem with cardinality constraints in portfolio selection; a class of bilevel programming problems and its application in portfolio selection. The third part contains some non-convex optimization problems in production management. We study three problems: minimization of the maintenance cost involving the flow time and the tardiness penalty; minimization of the cost of multi-stage production/inventory systems with a bottleneck; determination of transfer prices and inventory policy in a supply chain of two enterprises.
Swaminathan, Bhargav Prasanna. « Gestion prévisionnelle des réseaux actifs de distribution - relaxation convexe sous incertitude ». Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT039/document.
Power systems are faced with rising shares of distributed renewable energy sources (DRES) and the deregulation of the electricity system. Distribution networks and their operators (DSOs) are particularly on the front line. The passive operational practices of many DSOs today have to evolve to overcome these challenges. Active Distribution Networks (ADN) and Active Network Management (ANM) have been touted as a potential solution. In this context, DSOs will streamline investment and operational decisions, creating a cost-effective framework of operations. They will evolve and take up new roles and optimally use flexibility to perform, for example, short-term operational planning of their networks. However, the development of such methods poses particular challenges. They are related to the presence of discrete elements (OLTCs and reconfiguration), the use of exogenous (external) flexibilities in these networks, the non-linear nature of optimal power flow (OPF) calculations, and uncertainties present in forecasts. The work leading to this thesis deals with and overcomes these challenges. First, a short-term economic analysis is done to ascertain the utilisation costs of flexibilities. This provides a common reference for different flexibilities. Then, exact linear flexibility models are developed using mathematical reformulation techniques. The OPF equations in operational planning are then convexified using reformulation techniques as well. The mixed-integer convex optimisation model thus developed, called the novel OP formulation, is exact and can guarantee globally optimal solutions. Simulations on two test networks allow us to evaluate the performance of this formulation. The uncertainty in DRES forecasts is then handled via three different formulations developed in this thesis. The best-performing formulations under uncertainty are determined via a comparison framework developed to test their performance.
Nguyen, Van Vinh. « Méthodes exactes pour l'optimisation DC polyédrale en variables mixtes 0-1 basées sur DCA et des nouvelles coupes ». INSA de Rouen, 2006. http://www.theses.fr/2006ISAM0003.
Rahmouni, Abdelouahed. « Etude de l'effet d'une perturbation variationnelle sur le comportement primal-dual en optimisation convexe ». Perpignan, 1993. http://www.theses.fr/1993PERP0169.
Chaarani, Jamal Pham Dinh Tao Laurent Pierre Jean. « Etude d'une classe d'algorithmes d'optimisation non convexe implémentation et applications / ». S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00333443.
Benacer, Rachid Pham Dinh Tao. « Contribution à l'étude des algorithmes de l'optimisation non convexe et non différentiable ». S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00320986.
Sudhakara, Murthy Prasad. « Sparse models and convex optimisation for convolutive blind source separation ». Rennes 1, 2011. https://tel.archives-ouvertes.fr/tel-00586610.
Blind source separation from under-determined mixtures is traditionally performed in two steps: estimation of the mixing filters, then estimation of the sources. The time-frequency sparsity assumption on the sources eases the separation, which nevertheless remains difficult for convolutive mixtures because of permutation and scaling ambiguities. Moreover, the temporal sparsity of the filters eases blind filter estimation techniques based on cross-correlations, which remain however limited to the case where a single source is active. In this thesis, the sparsity of the sources and of the mixing filters is exploited jointly for the blind estimation of sparse filters from stereophonic convolutive mixtures of several sources. First, we show how filter sparsity makes it possible to solve the permutation problem, in the absence of a scaling problem. Then, we propose a two-step estimation framework based on time-frequency versions of the cross-correlation and on ℓ1-norm minimization: a) a clustering step that groups the time-frequency points where only one source is active; b) the solution of a convex optimization problem to estimate the filters. The performance of the resulting algorithms is evaluated numerically on filter estimation and audio source separation problems.
Fontaine, Xavier. « Sequential learning and stochastic optimization of convex functions ». Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM024.
In this thesis we study several machine learning problems that are all linked with the minimization of a noisy function, which will often be convex. Inspired by real-life applications, we focus on sequential learning problems which consist in treating the data "on the fly", or in an online manner. The first part of this thesis is thus devoted to the study of three different sequential learning problems which all face the classical "exploration vs. exploitation" trade-off. Each of these problems consists in a situation where a decision maker has to take actions in order to maximize a reward or to evaluate a parameter under uncertainty, meaning that the rewards or the feedback of the possible actions are unknown and noisy. We demonstrate that all of these problems can be studied under the scope of stochastic convex optimization, and we propose and analyze algorithms to solve them. In the second part of this thesis we focus on the analysis of the Stochastic Gradient Descent algorithm, which is likely one of the most used stochastic optimization algorithms in machine learning. We provide an exhaustive analysis in the convex setting and in some non-convex situations by studying the associated continuous-time model, and obtain new optimal convergence results.
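A minimal sketch of the stochastic gradient descent scheme analysed in this line of work, with the usual 1/sqrt(t) step size for convex objectives and Polyak-Ruppert averaging (the objective, noise model and step-size constant are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(grad_oracle, x0, steps=10_000, c=0.5):
    """SGD with step size c / sqrt(t) and Polyak-Ruppert averaging."""
    x, x_avg = x0.copy(), np.zeros_like(x0)
    for t in range(1, steps + 1):
        x -= (c / np.sqrt(t)) * grad_oracle(x)
        x_avg += (x - x_avg) / t          # running average of the iterates
    return x_avg

# Noisy gradient of the convex quadratic f(x) = 0.5 * ||x - 1||^2.
noisy_grad = lambda x: (x - 1.0) + 0.1 * rng.normal(size=x.size)
print(sgd(noisy_grad, np.zeros(3)))
```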
Kulunchakov, Andrei. « Optimisation stochastique pour l'apprentissage machine à grande échelle : réduction de la variance et accélération ». Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM057.
A goal of this thesis is to explore several topics in optimization for high-dimensional stochastic problems. The first task is related to various incremental approaches which rely on exact gradient information, such as SVRG, SAGA, MISO, SDCA. While the minimization of large finite sums of functions has been thoroughly analyzed, we suggest in Chapter 2 a new technique which makes it possible to consider all these methods in a generic fashion and demonstrate their robustness to possible stochastic perturbations in the gradient information. Our technique is based on extending the concept of estimate sequence introduced originally by Yu. Nesterov in order to accelerate deterministic algorithms. Using the finite-sum structure of the problems, we are able to modify the aforementioned algorithms to take into account stochastic perturbations. At the same time, the framework allows us to derive naturally new algorithms with the same guarantees as existing incremental methods. Finally, we propose a new accelerated stochastic gradient descent algorithm and a new accelerated SVRG algorithm that is robust to stochastic noise. This acceleration essentially performs the typical deterministic acceleration in the sense of Nesterov, while preserving the optimal variance convergence. Next, we address the problem of generic acceleration in stochastic optimization. For this task, we generalize in Chapter 3 the multi-stage approach called Catalyst, which was originally aimed at accelerating deterministic methods. In order to apply it to stochastic problems, we improve its flexibility in the choice of surrogate functions minimized at each stage. Finally, given an optimization method with mild convergence guarantees for strongly convex problems, our multi-stage procedure accelerates convergence to a noise-dominated region, and then achieves the optimal (up to a logarithmic factor) worst-case convergence depending on the noise variance of the gradients. Thus, we successfully address the acceleration of various stochastic methods, including the variance-reduced approaches considered and generalized in Chapter 2. Again, the developed framework bears similarities with the acceleration performed by Yu. Nesterov using estimate sequences. In this sense, we try to fill the gap between deterministic and stochastic optimization in terms of Nesterov's acceleration. A side contribution of this chapter is a generic analysis that can handle inexact proximal operators, providing new insights about the robustness of stochastic algorithms when the proximal operator cannot be exactly computed. In Chapter 4, we study properties of non-Euclidean stochastic algorithms applied to the problem of sparse signal recovery. A sparse structure significantly reduces the effects of noise in gradient observations. We propose a new stochastic algorithm, called SMD-SR, that makes better use of this structure. This method is a multi-step procedure which uses the stochastic mirror descent algorithm as a building block over its stages. Essentially, SMD-SR has two phases of convergence, with linear convergence of the bias during the preliminary phase and the optimal asymptotic rate during the asymptotic phase. Compared to the most effective existing solutions to sparse stochastic optimization problems, we offer an improvement in several aspects.
First, we establish the linear convergence of the bias (similar to that of the deterministic gradient descent algorithm when the full gradient observation is available), while showing the optimal robustness to noise. Second, we achieve this rate for a large class of noise models, including sub-Gaussian, Rademacher, multivariate Student distributions and scale mixtures. Finally, these results are obtained under the optimal condition on the level of sparsity, which can approach the total number of iterations of the algorithm (up to a logarithmic factor).
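As background for the variance-reduced methods (SVRG, SAGA, MISO, SDCA) discussed in this abstract, a minimal sketch of the SVRG update on a finite sum of smooth losses (the loss, epoch length and step size are illustrative assumptions):

```python
import numpy as np

def svrg(grads, x0, step=0.02, epochs=30, m=None):
    """SVRG for min_x (1/n) sum_i f_i(x); `grads[i](x)` returns grad f_i(x)."""
    n, x = len(grads), x0.copy()
    m = m or 2 * n                         # inner-loop length
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = sum(g(snapshot) for g in grads) / n   # anchor gradient
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced stochastic gradient
            v = grads[i](x) - grads[i](snapshot) + full_grad
            x -= step * v
    return x

# Toy usage: least squares as a finite sum of per-sample losses.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
grads = [lambda x, a=A[i], y=b[i]: (a @ x - y) * a for i in range(50)]
print(svrg(grads, np.zeros(5)))
```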
Delyon, Alexandre. « Shape Optimisation Problems Around the Geometry of Branchiopod Eggs ». Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0123.
In this thesis we are interested in a problem of mathematics applied to biology. The aim is to explain the shape of the eggs of Eulimnadia, a small animal belonging to the class of branchiopods, and more precisely to the Limnadiidae. Indeed, according to the theory of evolution, it is reasonable to think that the shape of living beings, or of objects derived from living beings, is optimized to ensure the survival and expansion of the species in question. To do this we have opted for the inverse modeling method. The latter consists in proposing a biological explanation for the shape of the eggs, then modeling it in the form of a mathematical problem, more precisely a shape optimisation problem, which we try to solve, and finally comparing the shape obtained to the real one. We have studied two models, one leading to geometry and packing problems, the other to shape optimisation problems in linear elasticity. After the resolution of the first modeling problem, another mathematical question naturally arose, and we managed to solve it, resulting in the complete Blaschke-Santaló (A,D,r) diagram. In other words, we can answer the following question: given three positive numbers A, D and r, is it possible to find a convex set of the plane whose area is equal to A, diameter equal to D, and radius of the inscribed circle equal to r?
Pasche, Claude. « Optimisation convexe dans les réseaux avec applications au trafic routier et à l'énergie électrique / ». [S.l.] : [s.n.], 1987. http://library.epfl.ch/theses/?nr=669.
Bose, Gibin. « Approximation H infini, interpolation analytique et optimisation convexe : application à l'adaptation d'impédance large bande ». Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4007.
The thesis makes an in-depth study of one of the classical problems in RF circuit design, the problem of impedance matching. The matching problem addresses the issue of transmitting the maximum available power from a source to a load within a frequency band. Antennas are one of the classical devices in which impedance matching plays an important role. The design of a matching circuit for a given load primarily amounts to finding a lossless scattering matrix which, when chained to the load, minimizes the reflection of power in the total system. In this work, both the theoretical aspects of the broadband matching problem and the practical applicability of the developed approaches are given due importance. Part I of the thesis covers two different yet closely related approaches to the matching problem. These are based on the classical approaches developed by Helton and Fano-Youla to study broadband matching problems. The framework established in the first approach entails finding the best H-infinity approximation to an L-infinity function Φ via Nehari's theory. This amounts to reducing the problem to a generalized eigenvalue problem for an operator defined on H2, the Hankel operator HΦ. The realizability of a given gain is provided by the constraint that the operator norm of HΦ be less than or equal to one. The second approach formulates the matching problem as a convex optimisation problem wherein further flexibility is provided on the gain profiles compared to the previous approach. It is based on two rich theories, namely Fano-Youla matching theory and analytic interpolation. The realizability of a given gain is based on the Fano-Youla de-embedding conditions, which reduce to the positivity of a classical matrix in analytic interpolation theory, the Pick matrix. The concavity of the concerned Pick matrix allows the solution of the problem by means of a non-linear semi-definite programming problem. Most importantly, we estimate sharp lower bounds on the matching criterion for finite-degree matching circuits and furnish circuits attaining those bounds. Part II of the thesis aims at realizing the matching circuits as ladder networks consisting of inductors and capacitors and discusses some important realizability constraints as well. Matching circuits are designed for several mismatched antennas, testing the robustness of the developed approach. The theory developed in the first part of the thesis provides an efficient way of comparing the matching criterion obtained to the theoretical limits.
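For context on the Pick matrix mentioned above, a small numerical sketch of the classical Nevanlinna-Pick condition; the interpolation points and values are illustrative, and the thesis works with a matching-specific variant of this matrix rather than this textbook form:

```python
import numpy as np

def pick_matrix(z, w):
    """Classical Pick matrix P_jk = (1 - w_j * conj(w_k)) / (1 - z_j * conj(z_k))
    for interpolation data f(z_j) = w_j with |z_j| < 1. P is PSD iff there is an
    analytic f on the unit disk with |f| <= 1 matching the data."""
    z, w = np.asarray(z, dtype=complex), np.asarray(w, dtype=complex)
    return (1 - np.outer(w, w.conj())) / (1 - np.outer(z, z.conj()))

z = [0.0, 0.5j, -0.3]
w = [0.2, 0.1 + 0.2j, -0.4]          # candidate gain values at those points
P = pick_matrix(z, w)
feasible = np.all(np.linalg.eigvalsh((P + P.conj().T) / 2) >= -1e-10)
print(feasible)
```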
Fuentes, Marc. « Analyse et optimisation de problèmes sous contraintes d'autocorrélation ». Phd thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00195013.
Akoa, François Bertrand. « Approches de points intérieurs et de la programmation DC en optimisation non convexe. Codes et simulations numériques industrielles ». Rouen, INSA, 2005. http://www.theses.fr/2005ISARA001.
Chibani, Akram. « Optimisation dynamique des chaînes logistiques agiles : application au cas d'approvisionnement en ligne ». Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22633/document.
In a context of increased competition between enterprises, supply chains are struggling to respond to an increasingly volatile and complex environment. With technological advances, current practices for building efficient supply chains have changed. Indeed, companies' enthusiasm for the use of the internet has led researchers to look for adequate methods to cope with the dynamic nature of logistics networks. The purpose of this thesis is to address a dynamic procurement problem under asynchronous and repetitive variations over time. The supply chain considered is composed of two levels (buyer-suppliers) operating in a highly agile environment. The question facing the buyer is how many units of product should be purchased, and from which supplier, in response to variations in price and capacity. Because of this highly changing environment, characterized by frequent changes over short time scales, most classical optimization approaches seem inadequate for these problems. Recently, dynamic optimization has been used successfully to deal with such problems. However, we have no knowledge of its application in a supply chain context. We propose a dynamic genetic approach applied to an e-procurement context with the aim of optimizing the procurement process over time.
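A minimal sketch of the genetic loop such a dynamic approach builds on, for a toy single-period supplier-allocation problem; the encoding, fitness and operators are illustrative assumptions, and the thesis's dynamic variant re-optimizes as prices and capacities change over time:

```python
import numpy as np

rng = np.random.default_rng(0)
price = np.array([4.0, 5.0, 3.5])     # unit price quoted by each supplier
cap = np.array([40, 30, 50])          # current capacity of each supplier
demand = 80

def fitness(q):
    """Negative cost, heavily penalizing unmet demand and capacity violations."""
    return -(price @ q) - 100 * max(0, demand - q.sum()) - 100 * np.maximum(q - cap, 0).sum()

def genetic_step(pop):
    scores = np.array([fitness(q) for q in pop])
    parents = pop[np.argsort(scores)[-len(pop) // 2:]]           # selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, len(price))
        child = np.concatenate([a[:cut], b[cut:]])               # crossover
        child[rng.integers(len(price))] += rng.integers(-5, 6)   # mutation
        children.append(np.clip(child, 0, None))
    return np.vstack([parents, children])

pop = rng.integers(0, 50, size=(20, 3)).astype(float)
for _ in range(200):
    pop = genetic_step(pop)
best = max(pop, key=fitness)
```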