Ready-made bibliography on the topic "Convex optimization"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of current articles, books, theses, conference abstracts, and other scholarly sources on the topic "Convex optimization".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read the work's abstract online, if the relevant details are provided in the source's metadata.
Journal articles on the topic "Convex optimization"
Luethi, Hans-Jakob. "Convex Optimization". Journal of the American Statistical Association 100, no. 471 (September 2005): 1097. http://dx.doi.org/10.1198/jasa.2005.s41.
Ceria, Sebastián, and João Soares. "Convex programming for disjunctive convex optimization". Mathematical Programming 86, no. 3 (December 1, 1999): 595–614. http://dx.doi.org/10.1007/s101070050106.
Lasserre, Jean B. "On convex optimization without convex representation". Optimization Letters 5, no. 4 (April 13, 2011): 549–56. http://dx.doi.org/10.1007/s11590-011-0323-1.
Ben-Tal, A., and A. Nemirovski. "Robust Convex Optimization". Mathematics of Operations Research 23, no. 4 (November 1998): 769–805. http://dx.doi.org/10.1287/moor.23.4.769.
Tilahun, Surafel Luleseged. "Convex Grey Optimization". RAIRO - Operations Research 53, no. 1 (January 2019): 339–49. http://dx.doi.org/10.1051/ro/2018088.
Ubhaya, Vasant A. "Quasi-convex optimization". Journal of Mathematical Analysis and Applications 116, no. 2 (June 1986): 439–49. http://dx.doi.org/10.1016/s0022-247x(86)80008-7.
Onn, Shmuel. "Convex Matroid Optimization". SIAM Journal on Discrete Mathematics 17, no. 2 (January 2003): 249–53. http://dx.doi.org/10.1137/s0895480102408559.
Pardalos, Panos M. "Convex optimization theory". Optimization Methods and Software 25, no. 3 (June 2010): 487. http://dx.doi.org/10.1080/10556781003625177.
Onn, Shmuel, and Uriel G. Rothblum. "Convex Combinatorial Optimization". Discrete & Computational Geometry 32, no. 4 (August 19, 2004): 549–66. http://dx.doi.org/10.1007/s00454-004-1138-y.
Mayeli, Azita. "Non-convex Optimization via Strongly Convex Majorization-minimization". Canadian Mathematical Bulletin 63, no. 4 (December 10, 2019): 726–37. http://dx.doi.org/10.4153/s0008439519000730.
Pełny tekst źródłaRozprawy doktorskie na temat "Convex optimization"
Joulin, Armand. "Convex optimization for cosegmentation". PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00826236.
Rätsch, Gunnar. "Robust boosting via convex optimization". PhD thesis, Universität Potsdam, 2001. http://opus.kobv.de/ubp/volltexte/2005/39/.
Pełny tekst źródłaDie Arbeit behandelt folgende Sachverhalte:
o Die zur Analyse von Boosting-Methoden geeignete Statistische Lerntheorie. Wir studieren lerntheoretische Garantien zur Abschätzung der Vorhersagequalität auf ungesehenen Mustern. Kürzlich haben sich sogenannte Klassifikationstechniken mit großem Margin als ein praktisches Ergebnis dieser Theorie herausgestellt - insbesondere Boosting und Support-Vektor-Maschinen. Ein großer Margin impliziert eine hohe Vorhersagequalität der Entscheidungsregel. Deshalb wird analysiert, wie groß der Margin bei Boosting ist und ein verbesserter Algorithmus vorgeschlagen, der effizient Regeln mit maximalem Margin erzeugt.
o Was ist der Zusammenhang von Boosting und Techniken der konvexen Optimierung?
Um die Eigenschaften der entstehenden Klassifikations- oder Regressionsregeln zu analysieren, ist es sehr wichtig zu verstehen, ob und unter welchen Bedingungen iterative Algorithmen wie Boosting konvergieren. Wir zeigen, daß solche Algorithmen benutzt werden koennen, um sehr große Optimierungsprobleme mit Nebenbedingungen zu lösen, deren Lösung sich gut charakterisieren laesst. Dazu werden Verbindungen zum Wissenschaftsgebiet der konvexen Optimierung aufgezeigt und ausgenutzt, um Konvergenzgarantien für eine große Familie von Boosting-ähnlichen Algorithmen zu geben.
o Kann man Boosting robust gegenüber Meßfehlern und Ausreissern in den Daten machen?
Ein Problem bisheriger Boosting-Methoden ist die relativ hohe Sensitivität gegenüber Messungenauigkeiten und Meßfehlern in der Trainingsdatenmenge. Um dieses Problem zu beheben, wird die sogenannte 'Soft-Margin' Idee, die beim Support-Vector Lernen schon benutzt wird, auf Boosting übertragen. Das führt zu theoretisch gut motivierten, regularisierten Algorithmen, die ein hohes Maß an Robustheit aufweisen.
o Wie kann man die Anwendbarkeit von Boosting auf Regressionsprobleme erweitern?
Boosting-Methoden wurden ursprünglich für Klassifikationsprobleme entwickelt. Um die Anwendbarkeit auf Regressionsprobleme zu erweitern, werden die vorherigen Konvergenzresultate benutzt und neue Boosting-ähnliche Algorithmen zur Regression entwickelt. Wir zeigen, daß diese Algorithmen gute theoretische und praktische Eigenschaften haben.
o Ist Boosting praktisch anwendbar?
Die dargestellten theoretischen Ergebnisse werden begleitet von Simulationsergebnissen, entweder, um bestimmte Eigenschaften von Algorithmen zu illustrieren, oder um zu zeigen, daß sie in der Praxis tatsächlich gut funktionieren und direkt einsetzbar sind. Die praktische Relevanz der entwickelten Methoden wird in der Analyse chaotischer Zeitreihen und durch industrielle Anwendungen wie ein Stromverbrauch-Überwachungssystem und bei der Entwicklung neuer Medikamente illustriert.
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:
o The statistical learning theory framework for analyzing boosting methods.
We study learning-theoretic guarantees on the prediction performance on unseen examples. Recently, large-margin classification techniques have emerged as a practical result of the theory of generalization, in particular boosting and support vector machines. A large margin implies good generalization performance. Hence, we analyze how large the margins achieved by boosting are, and propose an improved algorithm that efficiently generates maximum-margin solutions.
o How can boosting methods be related to mathematical optimization techniques?
To analyze the properties of the resulting classification or regression rule, it is important to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large-scale constrained optimization problems whose solutions are well characterized. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms.
o How to make boosting noise-robust?
One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft-margin idea from support vector learning to boosting. This leads to theoretically motivated regularized algorithms that exhibit high noise robustness.
o How to adapt boosting to regression problems?
Boosting methods were originally designed for classification problems. To extend the boosting idea to regression problems, we use the previous convergence results and relations to semi-infinite programming to design boosting-like algorithms for regression. We show that these leveraging algorithms have desirable theoretical and practical properties.
o Can boosting techniques be useful in practice?
The presented theoretical results are accompanied by simulation results that either illustrate properties of the proposed algorithms or show that they work well in practice. We report on successful applications in a non-intrusive power monitoring system, chaotic time series analysis, and a drug discovery process.
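The link between boosting and convex optimization described in this abstract can be made concrete with the soft-margin linear program over a fixed pool of base hypotheses: maximize the smallest margin of a convex combination of hypotheses, minus a penalty on per-example slacks. The following minimal sketch in Python (using cvxpy) is our illustration of that idea, not code from the thesis; the toy data, the hypothesis matrix H, and the penalty constant C are assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Toy setup (illustrative): n examples, m base hypotheses with outputs in {-1, +1}.
n, m = 200, 20
X = rng.normal(size=(n, 5))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=n))   # labels in {-1, +1}
H = np.sign(X @ rng.normal(size=(5, m)))          # H[i, j]: j-th hypothesis on example i

# Soft-margin LP: maximize the margin rho minus slack penalties,
# over convex combinations w of the base hypotheses.
w = cp.Variable(m, nonneg=True)    # hypothesis weights
xi = cp.Variable(n, nonneg=True)   # per-example slacks (soft margin)
rho = cp.Variable()                # margin
C = 0.1                            # slack penalty (assumed)

constraints = [cp.multiply(y, H @ w) >= rho - xi, cp.sum(w) == 1]
cp.Problem(cp.Maximize(rho - C * cp.sum(xi)), constraints).solve()

print("optimal soft margin:", float(rho.value))
```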
---
Note:
The author is a recipient of the Michelson Prize, awarded by the Faculty of Mathematics and Natural Sciences of the Universität Potsdam for the best dissertation of the year 2001/2002.
Nekooie, Batool. "Convex optimization involving matrix inequalities". Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/13880.
Jangam, Ravindra Nath Vijay Kumar. "Beamforming Techniques Using Convex Optimization". Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-33934.
Saunderson, James (James Francis). "Subspace identification via convex optimization". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66475.
Pełny tekst źródłaCataloged from PDF version of thesis.
Includes bibliographical references (p. 88-92).
In this thesis we consider convex optimization-based approaches to the classical problem of identifying a subspace from noisy measurements of a random process taking values in the subspace. We focus on the case where the measurement noise is component-wise independent, known as the factor analysis model in statistics. We develop a new analysis of an existing convex optimization-based heuristic for this problem. Our analysis indicates that in high-dimensional settings, where both the ambient dimension and the dimension of the subspace to be identified are large, the convex heuristic, minimum trace factor analysis, is often very successful. We provide simple deterministic conditions on the underlying 'true' subspace under which the convex heuristic provably identifies the correct subspace. We also consider the performance of minimum trace factor analysis on 'typical' subspace identification problems, that is, problems where the underlying subspace is chosen randomly from subspaces of a particular dimension. In this setting we establish conditions on the ambient dimension and the dimension of the underlying subspace under which the convex heuristic identifies the subspace correctly with high probability. We then consider a refinement of the subspace identification problem in which we aim to identify a class of structured subspaces arising from Gaussian latent tree models. More precisely, given the covariance at the finest scale of a Gaussian latent tree model, and the tree that indexes the model, we aim to learn the parameters of the model, including the state dimensions of each of the latent variables. We do so by extending the convex heuristic, and our analysis, from the factor analysis setting to the setting of Gaussian latent tree models. We again provide deterministic conditions on the underlying latent tree model that ensure our convex optimization-based heuristic successfully identifies the parameters and state dimensions of the model.
by James Saunderson.
S.M.
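The convex heuristic named in the abstract above, minimum trace factor analysis, admits a compact semidefinite-programming form: split the observed covariance into a positive semidefinite low-rank part plus a nonnegative diagonal noise part, while minimizing the trace of the low-rank part. The sketch below (Python with cvxpy) is an illustrative rendering of that formulation on a synthetic covariance; the dimensions and data are assumptions, not drawn from the thesis.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)

# Toy data (illustrative): covariance of a rank-2 signal plus diagonal noise.
n, r = 8, 2
U = rng.normal(size=(n, r))
Sigma = U @ U.T + np.diag(rng.uniform(0.1, 0.5, size=n))

# Minimum trace factor analysis: Sigma = L + D with L PSD and D diagonal >= 0;
# minimizing trace(L) promotes low rank in the subspace component L.
L = cp.Variable((n, n), PSD=True)   # low-rank (subspace) component
d = cp.Variable(n, nonneg=True)     # diagonal noise variances

problem = cp.Problem(cp.Minimize(cp.trace(L)),
                     [L + cp.diag(d) == Sigma])
problem.solve()

# The spectrum of the recovered L reveals the identified subspace dimension.
print("eigenvalues of L:", np.round(np.linalg.eigvalsh(L.value), 4))
```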
Shewchun, John Marc. "Constrained control using convex optimization". Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/46471.
Boţ, Radu Ioan. "Conjugate duality in convex optimization". Berlin: Springer, 2010. http://dx.doi.org/10.1007/978-3-642-04900-2.
Aggarwal, Varun. "Analog circuit optimization using evolutionary algorithms and convex optimization". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40525.
Pełny tekst źródłaIncludes bibliographical references (p. 83-88).
In this thesis, we analyze state-of-the-art techniques for analog circuit sizing and compare them on various metrics. We argue that a methodology which improves the accuracy of sizing without increasing the run time or the designer's effort is a genuine contribution, and that the accuracy of geometric programming can be improved without adversely affecting its run time or increasing the designer's effort. This is facilitated by decomposing geometric-programming modeling into two steps, which decouples the accuracy of the models from the run time of geometric programming. We design a new algorithm for producing accurate posynomial models for MOS transistor parameters, which constitutes the first step of the decomposition. The new algorithm can generate posynomial models with a variable number of terms and real-valued exponents. The algorithm is a hybrid of a genetic algorithm and a convex optimization technique. We study the performance of the algorithm on artificially created benchmark problems and show that it improves the accuracy of posynomial models of MOS parameters by a considerable amount. The new posynomial modeling algorithm can be used in any application of geometric programming and is not limited to MOS parameter modeling. In the last chapter, we discuss various ideas to improve the state of the art in circuit sizing.
by Varun Aggarwal.
S.M.
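The decomposition described in the abstract above rests on a convexity fact: once the real-valued exponents of a posynomial are fixed, say by a genetic algorithm, fitting its positive coefficients to sampled data is a convex problem. Below is a minimal sketch of that inner coefficient fit in Python with cvxpy; the sample data and the exponent matrix A (standing in for the genetic algorithm's output) are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)

# Toy data (illustrative): samples of a positive function of 3 variables.
n_samples, n_vars, n_terms = 100, 3, 4
X = rng.uniform(0.5, 2.0, size=(n_samples, n_vars))      # positive inputs
f = 2.0 * X[:, 0] ** 1.5 * X[:, 1] ** -0.5 + 0.7 * X[:, 2] ** 2

# Exponents A[k, i] for each monomial term; in the hybrid algorithm these
# would be proposed by the genetic algorithm. Here they are assumed.
A = rng.uniform(-1.0, 2.0, size=(n_terms, n_vars))

# Monomial basis matrix M[j, k] = prod_i X[j, i] ** A[k, i],
# computed in log-space: prod_i x_i**a_i = exp(sum_i a_i * log(x_i)).
M = np.exp(np.log(X) @ A.T)

# With exponents fixed, fitting the nonnegative coefficients c is convex.
c = cp.Variable(n_terms, nonneg=True)
cp.Problem(cp.Minimize(cp.sum_squares(M @ c - f))).solve()

print("fitted coefficients:", np.round(c.value, 4))
```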
van den Berg, Ewout. "Convex optimization for generalized sparse recovery". Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/16646.
Lin, Chin-Yee. "Interior point methods for convex optimization". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/15044.
Pełny tekst źródłaKsiążki na temat "Convex optimization"
Vandenberghe, Lieven, ed. Convex optimization. Cambridge, UK: Cambridge University Press, 2006.
Bertsekas, Dimitri P. Convex optimization theory. Belmont, Mass.: Athena Scientific, 2009.
Brinkhuis, Jan. Convex Analysis for Optimization. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41804-5.
Nesterov, Yurii. Lectures on Convex Optimization. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91578-4.
Bonnans, J. Frédéric. Convex and Stochastic Optimization. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14977-2.
Zaslavski, Alexander J. Convex Optimization with Computational Errors. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37822-6.
Pardalos, Panos M., Antanas Žilinskas, and Julius Žilinskas. Non-Convex Multi-Objective Optimization. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61007-8.
Borwein, Jonathan M., and Adrian S. Lewis. Convex Analysis and Nonlinear Optimization. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4757-9859-3.
Li, Li. Selected Applications of Convex Optimization. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-46356-7.
Peypouquet, Juan. Convex Optimization in Normed Spaces. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13710-0.
Pełny tekst źródłaCzęści książek na temat "Convex optimization"
Nesterov, Yurii. "Convex Optimization". In Encyclopedia of Operations Research and Management Science, 281–87. Boston, MA: Springer US, 2013. http://dx.doi.org/10.1007/978-1-4419-1153-7_1171.
Allgöwer, Frank, Jan Hasenauer, and Steffen Waldherr. "Convex Optimization". In Encyclopedia of Systems Biology, 501–2. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_1449.
Hult, Henrik, Filip Lindskog, Ola Hammarlid, and Carl Johan Rehn. "Convex Optimization". In Risk and Portfolio Analysis, 33–38. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-4103-8_2.
Zaslavski, Alexander J. "Convex Optimization". In SpringerBriefs in Optimization, 13–56. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12644-4_2.
Royset, Johannes O., and Roger J.-B. Wets. "Convex Optimization". In An Optimization Primer, 52–115. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-76275-9_2.
Çınlar, Erhan, and Robert J. Vanderbei. "Convex Optimization". In Undergraduate Texts in Mathematics, 101–13. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-5257-7_6.
Aragón, Francisco J., Miguel A. Goberna, Marco A. López, and Margarita M. L. Rodríguez. "Convex Optimization". In Springer Undergraduate Texts in Mathematics and Technology, 117–80. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11184-7_4.
Borkar, Vivek S., and K. S. Mallikarjuna Rao. "Convex Optimization". In Texts and Readings in Mathematics, 79–100. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1652-8_5.
Wheeler, Jeffrey Paul. "Convex Optimization". In An Introduction to Optimization with Applications in Machine Learning and Data Analytics, 251–64. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9780367425517-19.
Stefanov, Stefan M. "Preliminaries: Convex Analysis and Convex Programming". In Applied Optimization, 1–61. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3417-1_1.
Pełny tekst źródłaStreszczenia konferencji na temat "Convex optimization"
Boyd, Stephen. "Convex optimization". In the 17th ACM SIGKDD International Conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2020408.2020410.
Szu, Harold H. "Non-Convex Optimization". In 30th Annual Technical Symposium, edited by William J. Miceli. SPIE, 1986. http://dx.doi.org/10.1117/12.976247.
Udell, Madeleine, Karanveer Mohan, David Zeng, Jenny Hong, Steven Diamond, and Stephen Boyd. "Convex Optimization in Julia". In 2014 First Workshop for High Performance Technical Computing in Dynamic Languages (HPTCDL). IEEE, 2014. http://dx.doi.org/10.1109/hptcdl.2014.5.
Tsianos, Konstantinos I., and Michael G. Rabbat. "Distributed strongly convex optimization". In 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2012. http://dx.doi.org/10.1109/allerton.2012.6483272.
Boyd, Stephen, Lieven Vandenberghe, and Michael Grant. "Advances in Convex Optimization". In 2006 Chinese Control Conference. IEEE, 2006. http://dx.doi.org/10.1109/chicc.2006.280567.
Ramirez, Lennin Mallma, Alexandre Belfort de Almeida Chiacchio, Nelson Maculan Filho, Rodrigo de Souza Couto, Adilson Xavier, and Vinicius Layter Xavier. "HALA in Convex Optimization". In Anais do Simpósio Brasileiro de Pesquisa Operacional. São José dos Campos - SP, BR: Galoa, 2023. http://dx.doi.org/10.59254/sbpo-2023-175132.
Liu, Xinfu, and Ping Lu. "Solving Non-Convex Optimal Control Problems by Convex Optimization". In AIAA Guidance, Navigation, and Control (GNC) Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2013. http://dx.doi.org/10.2514/6.2013-4725.
Tsitsiklis, John N., and Zhi-quan Luo. "Communication complexity of convex optimization". In 1986 25th IEEE Conference on Decision and Control. IEEE, 1986. http://dx.doi.org/10.1109/cdc.1986.267379.
Abdallah, Mohammed. "MIMO/OFDM convex optimization applications". In 2012 IEEE Long Island Systems, Applications and Technology Conference (LISAT). IEEE, 2012. http://dx.doi.org/10.1109/lisat.2012.6223201.
Chen, Niangjun, Anish Agarwal, Adam Wierman, Siddharth Barman, and Lachlan L. H. Andrew. "Online Convex Optimization Using Predictions". In SIGMETRICS '15: ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2745844.2745854.
Pełny tekst źródłaRaporty organizacyjne na temat "Convex optimization"
Coffrin, Carleton James, and Line Alnaes Roald. Convex Relaxations in Power System Optimization, A Brief Introduction. Office of Scientific and Technical Information (OSTI), July 2018. http://dx.doi.org/10.2172/1461380.
Tran, Tuyen. Convex and Nonconvex Optimization Techniques for Multifacility Location and Clustering. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.7356.
Giles, Daniel. The Majorization Minimization Principle and Some Applications in Convex Optimization. Portland State University Library, January 2015. http://dx.doi.org/10.15760/honors.175.
Deits, Robin, and Russ Tedrake. Footstep Planning on Uneven Terrain with Mixed-Integer Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, August 2014. http://dx.doi.org/10.21236/ada609276.
Wen, Zaiwen, and Donald Goldfarb. A Line Search Multigrid Method for Large-Scale Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, July 2007. http://dx.doi.org/10.21236/ada478093.
Lawrence, Nathan. Convex and Nonconvex Optimization Techniques for the Constrained Fermat-Torricelli Problem. Portland State University Library, January 2016. http://dx.doi.org/10.15760/honors.319.
Chen, Yunmei, Guanghui Lan, Yuyuan Ouyang, and Wei Zhang. Fast Bundle-Level Type Methods for Unconstrained and Ball-Constrained Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, December 2014. http://dx.doi.org/10.21236/ada612792.
Knapp, Adam C., and Kevin J. Johnson. Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods. Fort Belvoir, VA: Defense Technical Information Center, November 2016. http://dx.doi.org/10.21236/ada640843.
Amelunxen, Dennis, Martin Lotz, Michael B. McCoy, and Joel A. Tropp. Living on the Edge: A Geometric Theory of Phase Transitions in Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada591124.
Filipiak, Katarzyna, Dietrich von Rosen, Martin Singull, and Wojciech Rejchel. Estimation under inequality constraints in univariate and multivariate linear models. Linköping University Electronic Press, March 2024. http://dx.doi.org/10.3384/lith-mat-r-2024-01.