Academic literature on the topic 'Convex optimization'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Convex optimization.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Convex optimization"
Luethi, Hans-Jakob. "Convex Optimization." Journal of the American Statistical Association 100, no. 471 (September 2005): 1097. http://dx.doi.org/10.1198/jasa.2005.s41.
Ceria, Sebastián, and João Soares. "Convex programming for disjunctive convex optimization." Mathematical Programming 86, no. 3 (December 1, 1999): 595–614. http://dx.doi.org/10.1007/s101070050106.
Lasserre, Jean B. "On convex optimization without convex representation." Optimization Letters 5, no. 4 (April 13, 2011): 549–56. http://dx.doi.org/10.1007/s11590-011-0323-1.
Ben-Tal, A., and A. Nemirovski. "Robust Convex Optimization." Mathematics of Operations Research 23, no. 4 (November 1998): 769–805. http://dx.doi.org/10.1287/moor.23.4.769.
Tilahun, Surafel Luleseged. "Convex Grey Optimization." RAIRO - Operations Research 53, no. 1 (January 2019): 339–49. http://dx.doi.org/10.1051/ro/2018088.
Ubhaya, Vasant A. "Quasi-convex optimization." Journal of Mathematical Analysis and Applications 116, no. 2 (June 1986): 439–49. http://dx.doi.org/10.1016/s0022-247x(86)80008-7.
Onn, Shmuel. "Convex Matroid Optimization." SIAM Journal on Discrete Mathematics 17, no. 2 (January 2003): 249–53. http://dx.doi.org/10.1137/s0895480102408559.
Pardalos, Panos M. "Convex optimization theory." Optimization Methods and Software 25, no. 3 (June 2010): 487. http://dx.doi.org/10.1080/10556781003625177.
Onn, Shmuel, and Uriel G. Rothblum. "Convex Combinatorial Optimization." Discrete & Computational Geometry 32, no. 4 (August 19, 2004): 549–66. http://dx.doi.org/10.1007/s00454-004-1138-y.
Mayeli, Azita. "Non-convex Optimization via Strongly Convex Majorization-minimization." Canadian Mathematical Bulletin 63, no. 4 (December 10, 2019): 726–37. http://dx.doi.org/10.4153/s0008439519000730.
Full textDissertations / Theses on the topic "Convex optimization"
Joulin, Armand. "Convex optimization for cosegmentation." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00826236.
Rätsch, Gunnar. "Robust boosting via convex optimization." PhD thesis, Universität Potsdam, 2001. http://opus.kobv.de/ubp/volltexte/2005/39/.
Full textDie Arbeit behandelt folgende Sachverhalte:
o Die zur Analyse von Boosting-Methoden geeignete Statistische Lerntheorie. Wir studieren lerntheoretische Garantien zur Abschätzung der Vorhersagequalität auf ungesehenen Mustern. Kürzlich haben sich sogenannte Klassifikationstechniken mit großem Margin als ein praktisches Ergebnis dieser Theorie herausgestellt - insbesondere Boosting und Support-Vektor-Maschinen. Ein großer Margin impliziert eine hohe Vorhersagequalität der Entscheidungsregel. Deshalb wird analysiert, wie groß der Margin bei Boosting ist und ein verbesserter Algorithmus vorgeschlagen, der effizient Regeln mit maximalem Margin erzeugt.
o Was ist der Zusammenhang von Boosting und Techniken der konvexen Optimierung?
Um die Eigenschaften der entstehenden Klassifikations- oder Regressionsregeln zu analysieren, ist es sehr wichtig zu verstehen, ob und unter welchen Bedingungen iterative Algorithmen wie Boosting konvergieren. Wir zeigen, daß solche Algorithmen benutzt werden koennen, um sehr große Optimierungsprobleme mit Nebenbedingungen zu lösen, deren Lösung sich gut charakterisieren laesst. Dazu werden Verbindungen zum Wissenschaftsgebiet der konvexen Optimierung aufgezeigt und ausgenutzt, um Konvergenzgarantien für eine große Familie von Boosting-ähnlichen Algorithmen zu geben.
o Kann man Boosting robust gegenüber Meßfehlern und Ausreissern in den Daten machen?
Ein Problem bisheriger Boosting-Methoden ist die relativ hohe Sensitivität gegenüber Messungenauigkeiten und Meßfehlern in der Trainingsdatenmenge. Um dieses Problem zu beheben, wird die sogenannte 'Soft-Margin' Idee, die beim Support-Vector Lernen schon benutzt wird, auf Boosting übertragen. Das führt zu theoretisch gut motivierten, regularisierten Algorithmen, die ein hohes Maß an Robustheit aufweisen.
o Wie kann man die Anwendbarkeit von Boosting auf Regressionsprobleme erweitern?
Boosting-Methoden wurden ursprünglich für Klassifikationsprobleme entwickelt. Um die Anwendbarkeit auf Regressionsprobleme zu erweitern, werden die vorherigen Konvergenzresultate benutzt und neue Boosting-ähnliche Algorithmen zur Regression entwickelt. Wir zeigen, daß diese Algorithmen gute theoretische und praktische Eigenschaften haben.
o Ist Boosting praktisch anwendbar?
Die dargestellten theoretischen Ergebnisse werden begleitet von Simulationsergebnissen, entweder, um bestimmte Eigenschaften von Algorithmen zu illustrieren, oder um zu zeigen, daß sie in der Praxis tatsächlich gut funktionieren und direkt einsetzbar sind. Die praktische Relevanz der entwickelten Methoden wird in der Analyse chaotischer Zeitreihen und durch industrielle Anwendungen wie ein Stromverbrauch-Überwachungssystem und bei der Entwicklung neuer Medikamente illustriert.
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:
o The statistical learning theory framework for analyzing boosting methods.
We study learning-theoretic guarantees on the prediction performance on unseen examples. Recently, large-margin classification techniques, in particular boosting and support vector machines, have emerged as a practical result of the theory of generalization. A large margin implies good generalization performance. Hence, we analyze how large the margins in boosting are and propose an improved algorithm that is able to generate the maximum-margin solution.
o How can boosting methods be related to mathematical optimization techniques?
To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large-scale constrained optimization problems whose solutions are well characterized. To show this, we relate boosting methods to methods known from mathematical optimization and derive convergence guarantees for a quite general family of boosting algorithms.
o How to make Boosting noise robust?
One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft-margin idea from support vector learning to boosting (a sketch of the resulting linear program follows this abstract). We develop theoretically motivated regularized algorithms that exhibit high noise robustness.
o How to adapt boosting to regression problems?
Boosting methods were originally designed for classification problems. To extend the boosting idea to regression, we use the previous convergence results and relations to semi-infinite programming to design boosting-like algorithms for regression problems. We show that these leveraging algorithms have desirable theoretical and practical properties.
o Can boosting techniques be useful in practice?
The presented theoretical results are accompanied by simulation results, either to illustrate properties of the proposed algorithms or to show that they work well in practice. We report on successful applications in a non-intrusive power monitoring system, chaotic time series analysis, and a drug discovery process.
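The soft-margin linear program referenced above, in the LPBoost style that this line of work connects boosting to, can be written down compactly. The following is a minimal sketch in Python with CVXPY; the hypothesis-margin matrix H, the trade-off constant C, and all dimensions are illustrative assumptions, not values from the thesis.

```python
import cvxpy as cp
import numpy as np

# Hypothetical setup: H[i, j] = y_i * h_j(x_i), the margin that base
# hypothesis j achieves on training example i (values are illustrative).
rng = np.random.default_rng(0)
n_examples, n_hypotheses = 50, 10
H = rng.choice([-1.0, 1.0], size=(n_examples, n_hypotheses))

C = 1.0                                         # assumed soft-margin trade-off
alpha = cp.Variable(n_hypotheses, nonneg=True)  # weights of the base hypotheses
rho = cp.Variable()                             # margin to be maximized
xi = cp.Variable(n_examples, nonneg=True)       # slacks for noisy examples

constraints = [
    H @ alpha >= rho - xi,   # every example attains margin rho, up to its slack
    cp.sum(alpha) == 1,      # normalized hypothesis weights
]
problem = cp.Problem(cp.Maximize(rho - C * cp.sum(xi)), constraints)
problem.solve()
print("optimal soft margin:", rho.value)
```

Normalizing the hypothesis weights is what makes the margin rho meaningful, while the slack variables implement the soft margin that tolerates mislabeled or noisy examples.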
---
Note:
The author received the Michelson Prize, awarded by the Faculty of Mathematics and Natural Sciences of the University of Potsdam for the best dissertation of the year 2001/2002.
Nekooie, Batool. "Convex optimization involving matrix inequalities." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/13880.
Jangam, Ravindra Nath Vijay Kumar. "Beamforming Techniques Using Convex Optimization." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-33934.
Saunderson, James (James Francis). "Subspace identification via convex optimization." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66475.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (p. 88-92).
In this thesis we consider convex optimization-based approaches to the classical problem of identifying a subspace from noisy measurements of a random process taking values in the subspace. We focus on the case where the measurement noise is component-wise independent, known as the factor analysis model in statistics. We develop a new analysis of an existing convex optimization-based heuristic for this problem. Our analysis indicates that in high-dimensional settings, where both the ambient dimension and the dimension of the subspace to be identified are large, the convex heuristic, minimum trace factor analysis, is often very successful. We provide simple deterministic conditions on the underlying 'true' subspace under which the convex heuristic provably identifies the correct subspace.

We also consider the performance of minimum trace factor analysis on 'typical' subspace identification problems, that is, problems where the underlying subspace is chosen randomly from subspaces of a particular dimension. In this setting we establish conditions on the ambient dimension and the dimension of the underlying subspace under which the convex heuristic identifies the subspace correctly with high probability.

We then consider a refinement of the subspace identification problem where we aim to identify a class of structured subspaces arising from Gaussian latent tree models. More precisely, given the covariance at the finest scale of a Gaussian latent tree model, and the tree that indexes the model, we aim to learn the parameters of the model, including the state dimensions of each of the latent variables. We do so by extending the convex heuristic, and our analysis, from the factor analysis setting to the setting of Gaussian latent tree models. We again provide deterministic conditions on the underlying latent tree model that ensure our convex optimization-based heuristic successfully identifies the parameters and state dimensions of the model.
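As a concrete illustration of the convex heuristic discussed in this abstract, minimum trace factor analysis can be posed as a semidefinite program. The sketch below uses Python with CVXPY on a synthetic low-rank-plus-diagonal covariance; the dimensions, random data, and rank tolerance are assumptions for illustration, not taken from the thesis.

```python
import cvxpy as cp
import numpy as np

# Hypothetical covariance: a low-rank part (the signal subspace) plus
# diagonal, component-wise independent noise, as in the factor analysis model.
rng = np.random.default_rng(1)
n, r = 20, 3
U = rng.standard_normal((n, r))
Sigma = U @ U.T + np.diag(rng.uniform(0.1, 1.0, size=n))

L = cp.Variable((n, n), PSD=True)  # low-rank PSD part; its range estimates the subspace
d = cp.Variable(n, nonneg=True)    # diagonal noise variances

# Minimum trace factor analysis: split Sigma into PSD plus diagonal,
# using the trace as a convex surrogate for the rank of L.
problem = cp.Problem(cp.Minimize(cp.trace(L)), [L + cp.diag(d) == Sigma])
problem.solve()
print("recovered rank (numerical):", np.linalg.matrix_rank(L.value, tol=1e-6))
```

Minimizing the trace plays the role of a convex surrogate for rank here, which is why the heuristic can recover the low-dimensional subspace under the conditions the abstract describes.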
Shewchun, John Marc. "Constrained control using convex optimization." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/46471.
Boţ, Radu Ioan. "Conjugate duality in convex optimization." Berlin: Springer, 2010. http://dx.doi.org/10.1007/978-3-642-04900-2.
Aggarwal, Varun. "Analog circuit optimization using evolutionary algorithms and convex optimization." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40525.
Full textIncludes bibliographical references (p. 83-88).
In this thesis, we analyze state-of-the-art techniques for analog circuit sizing and compare them on various metrics. We ascertain that a methodology which improves the accuracy of sizing without increasing the run time or the designer effort would be a contribution. We argue that the accuracy of geometric programming can be improved without adversely influencing the run time or increasing the designer's effort. This is facilitated by decomposing geometric programming modeling into two steps, which decouples the accuracy of the models from the run time of geometric programming. We design a new algorithm for producing accurate posynomial models for MOS transistor parameters, which is the first step of the decomposition. The new algorithm can generate posynomial models with a variable number of terms and real-valued exponents. The algorithm is a hybrid of a genetic algorithm and a convex optimization technique. We study the performance of the algorithm on artificially created benchmark problems. We show that the accuracy of posynomial models of MOS parameters is improved by a considerable amount by using the new algorithm. The new posynomial modeling algorithm can be used in any application of geometric programming and is not limited to MOS parameter modeling. In the last chapter, we discuss various ideas to improve the state of the art in circuit sizing.
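For readers unfamiliar with the geometric programming machinery this abstract relies on, the following is a minimal sketch of a toy geometric program solved with CVXPY's log-log convex (gp=True) mode in Python. The posynomial "delay" objective, its coefficients, and the constraints are invented for illustration; they are not the thesis's transistor models.

```python
import cvxpy as cp

# Toy geometric program in the spirit of GP-based circuit sizing.
# The posynomial "delay" model and the bounds below are assumptions.
w = cp.Variable(pos=True)  # e.g., a device width
l = cp.Variable(pos=True)  # e.g., a device length

delay = 2.0 * l / w + 0.5 * w * l   # posynomial objective (illustrative coefficients)
constraints = [
    w * l <= 10.0,                  # monomial area constraint
    w >= 0.1, l >= 0.1,             # lower bounds keep the toy problem bounded
]

problem = cp.Problem(cp.Minimize(delay), constraints)
problem.solve(gp=True)              # solve as a (log-log convex) geometric program
print(f"w = {w.value:.3f}, l = {l.value:.3f}, delay = {problem.value:.3f}")
```

The decomposition the abstract describes fits this mold: once device behavior is fitted by posynomials (the thesis's first step), the sizing problem itself stays a geometric program and can be solved globally and quickly.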
van den Berg, Ewout. "Convex optimization for generalized sparse recovery." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/16646.
Lin, Chin-Yee. "Interior point methods for convex optimization." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/15044.
Full textBooks on the topic "Convex optimization"
Boyd, Stephen, and Lieven Vandenberghe. Convex Optimization. Cambridge, UK: Cambridge University Press, 2004.
Bertsekas, Dimitri P. Convex Optimization Theory. Belmont, Mass.: Athena Scientific, 2009.
Brinkhuis, Jan. Convex Analysis for Optimization. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41804-5.
Nesterov, Yurii. Lectures on Convex Optimization. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91578-4.
Bonnans, J. Frédéric. Convex and Stochastic Optimization. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14977-2.
Zaslavski, Alexander J. Convex Optimization with Computational Errors. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37822-6.
Pardalos, Panos M., Antanas Žilinskas, and Julius Žilinskas. Non-Convex Multi-Objective Optimization. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61007-8.
Borwein, Jonathan M., and Adrian S. Lewis. Convex Analysis and Nonlinear Optimization. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4757-9859-3.
Li, Li. Selected Applications of Convex Optimization. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-46356-7.
Peypouquet, Juan. Convex Optimization in Normed Spaces. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13710-0.
Full textBook chapters on the topic "Convex optimization"
Nesterov, Yurii. "Convex Optimization." In Encyclopedia of Operations Research and Management Science, 281–87. Boston, MA: Springer US, 2013. http://dx.doi.org/10.1007/978-1-4419-1153-7_1171.
Allgöwer, Frank, Jan Hasenauer, and Steffen Waldherr. "Convex Optimization." In Encyclopedia of Systems Biology, 501–2. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_1449.
Hult, Henrik, Filip Lindskog, Ola Hammarlid, and Carl Johan Rehn. "Convex Optimization." In Risk and Portfolio Analysis, 33–38. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-4103-8_2.
Zaslavski, Alexander J. "Convex Optimization." In SpringerBriefs in Optimization, 13–56. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12644-4_2.
Royset, Johannes O., and Roger J.-B. Wets. "Convex Optimization." In An Optimization Primer, 52–115. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-76275-9_2.
Çınlar, Erhan, and Robert J. Vanderbei. "Convex Optimization." In Undergraduate Texts in Mathematics, 101–13. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-5257-7_6.
Aragón, Francisco J., Miguel A. Goberna, Marco A. López, and Margarita M. L. Rodríguez. "Convex Optimization." In Springer Undergraduate Texts in Mathematics and Technology, 117–80. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11184-7_4.
Borkar, Vivek S., and K. S. Mallikarjuna Rao. "Convex Optimization." In Texts and Readings in Mathematics, 79–100. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1652-8_5.
Wheeler, Jeffrey Paul. "Convex Optimization." In An Introduction to Optimization with Applications in Machine Learning and Data Analytics, 251–64. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9780367425517-19.
Stefanov, Stefan M. "Preliminaries: Convex Analysis and Convex Programming." In Applied Optimization, 1–61. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3417-1_1.
Full textConference papers on the topic "Convex optimization"
Boyd, Stephen. "Convex optimization." In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2020408.2020410.
Szu, Harold H. "Non-Convex Optimization." In 30th Annual Technical Symposium, edited by William J. Miceli. SPIE, 1986. http://dx.doi.org/10.1117/12.976247.
Udell, Madeleine, Karanveer Mohan, David Zeng, Jenny Hong, Steven Diamond, and Stephen Boyd. "Convex Optimization in Julia." In 2014 First Workshop for High Performance Technical Computing in Dynamic Languages (HPTCDL). IEEE, 2014. http://dx.doi.org/10.1109/hptcdl.2014.5.
Tsianos, Konstantinos I., and Michael G. Rabbat. "Distributed strongly convex optimization." In 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2012. http://dx.doi.org/10.1109/allerton.2012.6483272.
Boyd, Stephen, Lieven Vandenberghe, and Michael Grant. "Advances in Convex Optimization." In 2006 Chinese Control Conference. IEEE, 2006. http://dx.doi.org/10.1109/chicc.2006.280567.
Ramirez, Lennin Mallma, Alexandre Belfort de Almeida Chiacchio, Nelson Maculan Filho, Rodrigo de Souza Couto, Adilson Xavier, and Vinicius Layter Xavier. "HALA in Convex Optimization." In Anais do Simpósio Brasileiro de Pesquisa Operacional. São José dos Campos - SP, BR: Galoa, 2023. http://dx.doi.org/10.59254/sbpo-2023-175132.
Liu, Xinfu, and Ping Lu. "Solving Non-Convex Optimal Control Problems by Convex Optimization." In AIAA Guidance, Navigation, and Control (GNC) Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2013. http://dx.doi.org/10.2514/6.2013-4725.
Tsitsiklis, John N., and Zhi-quan Luo. "Communication complexity of convex optimization." In 1986 25th IEEE Conference on Decision and Control. IEEE, 1986. http://dx.doi.org/10.1109/cdc.1986.267379.
Abdallah, Mohammed. "MIMO/OFDM convex optimization applications." In 2012 IEEE Long Island Systems, Applications and Technology Conference (LISAT). IEEE, 2012. http://dx.doi.org/10.1109/lisat.2012.6223201.
Chen, Niangjun, Anish Agarwal, Adam Wierman, Siddharth Barman, and Lachlan L. H. Andrew. "Online Convex Optimization Using Predictions." In SIGMETRICS '15: ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2745844.2745854.
Full textReports on the topic "Convex optimization"
Coffrin, Carleton James, and Line Alnaes Roald. Convex Relaxations in Power System Optimization: A Brief Introduction. Office of Scientific and Technical Information (OSTI), July 2018. http://dx.doi.org/10.2172/1461380.
Tran, Tuyen. Convex and Nonconvex Optimization Techniques for Multifacility Location and Clustering. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.7356.
Giles, Daniel. The Majorization Minimization Principle and Some Applications in Convex Optimization. Portland State University Library, January 2015. http://dx.doi.org/10.15760/honors.175.
Deits, Robin, and Russ Tedrake. Footstep Planning on Uneven Terrain with Mixed-Integer Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, August 2014. http://dx.doi.org/10.21236/ada609276.
Wen, Zaiwen, and Donald Goldfarb. A Line Search Multigrid Method for Large-Scale Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, July 2007. http://dx.doi.org/10.21236/ada478093.
Lawrence, Nathan. Convex and Nonconvex Optimization Techniques for the Constrained Fermat-Torricelli Problem. Portland State University Library, January 2016. http://dx.doi.org/10.15760/honors.319.
Chen, Yunmei, Guanghui Lan, Yuyuan Ouyang, and Wei Zhang. Fast Bundle-Level Type Methods for Unconstrained and Ball-Constrained Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, December 2014. http://dx.doi.org/10.21236/ada612792.
Knapp, Adam C., and Kevin J. Johnson. Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods. Fort Belvoir, VA: Defense Technical Information Center, November 2016. http://dx.doi.org/10.21236/ada640843.
Amelunxen, Dennis, Martin Lotz, Michael B. McCoy, and Joel A. Tropp. Living on the Edge: A Geometric Theory of Phase Transitions in Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada591124.
Filipiak, Katarzyna, Dietrich von Rosen, Martin Singull, and Wojciech Rejchel. Estimation under inequality constraints in univariate and multivariate linear models. Linköping University Electronic Press, March 2024. http://dx.doi.org/10.3384/lith-mat-r-2024-01.