Ready-made bibliography on the topic "Convex optimization"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles.

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Convex optimization".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever such details are available in the work's metadata.

Journal articles on the topic "Convex optimization"

1. Luethi, Hans-Jakob. "Convex Optimization". Journal of the American Statistical Association 100, no. 471 (September 2005): 1097. http://dx.doi.org/10.1198/jasa.2005.s41.

2. Ceria, Sebastián, and João Soares. "Convex programming for disjunctive convex optimization". Mathematical Programming 86, no. 3 (December 1, 1999): 595–614. http://dx.doi.org/10.1007/s101070050106.

3. Lasserre, Jean B. "On convex optimization without convex representation". Optimization Letters 5, no. 4 (April 13, 2011): 549–56. http://dx.doi.org/10.1007/s11590-011-0323-1.

4. Ben-Tal, A., and A. Nemirovski. "Robust Convex Optimization". Mathematics of Operations Research 23, no. 4 (November 1998): 769–805. http://dx.doi.org/10.1287/moor.23.4.769.

5. Tilahun, Surafel Luleseged. "Convex Grey Optimization". RAIRO - Operations Research 53, no. 1 (January 2019): 339–49. http://dx.doi.org/10.1051/ro/2018088.
Abstract:
Many optimization problems are formulated from real scenarios involving incomplete information due to uncertainty. The uncertainties can be expressed with appropriate probability distributions, or with fuzzy numbers and a membership function, provided enough information is available to construct either the probability density function or the membership function. In some cases, however, there is not enough information for that, and grey numbers need to be used. A grey number is an interval number representing the value of a quantity: its exact value or likelihood is unknown, but the maximum and/or minimum possible values are. Applications involving such scenarios arise in space exploration, robotics, and engineering. An optimization problem is called a grey optimization problem if it involves a grey number in the objective function and/or constraint set. Despite its wide applications, not much research has been done in the field. Hence, this paper discusses a convex grey optimization problem. It is shown that an optimal solution for a convex grey optimization problem is a grey number whose lower and upper limits are computed by solving the problem in an optimistic and a pessimistic way: the optimistic way treats the grey numbers as additional decision variables and optimizes the objective over all of them, whereas the pessimistic way solves a minimax or maximin problem over the decision variables and over the grey numbers.
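
To make the optimistic/pessimistic construction described in the abstract concrete, here is a minimal sketch in Python; the quadratic toy objective and all names are illustrative assumptions, not code from the paper. The lower limit treats the grey coefficient as an extra decision variable; the upper limit solves the minimax problem, where the inner maximum over the grey interval is attained at an endpoint because the objective is convex in the grey number.

```python
# Minimal sketch (assumed, not from the paper): bounds of a convex grey
# optimization problem  min_x f(x, g)  with grey coefficient g in [g_lo, g_hi].
import numpy as np
from scipy.optimize import minimize, minimize_scalar

g_lo, g_hi = 2.0, 5.0            # the grey number: only its interval is known

def f(x, g):
    return (x - g) ** 2          # illustrative convex objective

# Optimistic limit: treat g as one more decision variable, minimize jointly.
opt = minimize(lambda z: f(z[0], z[1]), x0=np.array([0.0, g_lo]),
               bounds=[(None, None), (g_lo, g_hi)])
lower = opt.fun

# Pessimistic limit: minimax over x; for an objective convex in g the inner
# maximum over [g_lo, g_hi] is attained at an interval endpoint.
upper = minimize_scalar(lambda x: max(f(x, g_lo), f(x, g_hi))).fun

print(f"grey optimal value ~ [{lower:.4f}, {upper:.4f}]")   # ~ [0.0000, 2.2500]
```
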
6. Ubhaya, Vasant A. "Quasi-convex optimization". Journal of Mathematical Analysis and Applications 116, no. 2 (June 1986): 439–49. http://dx.doi.org/10.1016/s0022-247x(86)80008-7.

7. Onn, Shmuel. "Convex Matroid Optimization". SIAM Journal on Discrete Mathematics 17, no. 2 (January 2003): 249–53. http://dx.doi.org/10.1137/s0895480102408559.

8. Pardalos, Panos M. "Convex optimization theory". Optimization Methods and Software 25, no. 3 (June 2010): 487. http://dx.doi.org/10.1080/10556781003625177.

9. Onn, Shmuel, and Uriel G. Rothblum. "Convex Combinatorial Optimization". Discrete & Computational Geometry 32, no. 4 (August 19, 2004): 549–66. http://dx.doi.org/10.1007/s00454-004-1138-y.

10. Mayeli, Azita. "Non-convex Optimization via Strongly Convex Majorization-minimization". Canadian Mathematical Bulletin 63, no. 4 (December 10, 2019): 726–37. http://dx.doi.org/10.4153/s0008439519000730.
Abstract:
In this paper, we introduce a class of nonsmooth nonconvex optimization problems, and we propose to use a local iterative majorization-minimization (MM) algorithm to find an optimal solution. The cost functions in our optimization problems are an extension of convex functions with MC separable penalty, previously introduced by Ivan Selesnick. These functions are not convex; therefore, convex optimization methods cannot be applied to prove the existence of an optimal minimum point. For our purpose, we use convex analysis tools to first construct a class of convex majorizers, which approximate the value of the non-convex cost function locally, and then use the MM algorithm to prove the existence of a local minimum. The convergence of the algorithm is guaranteed when the iterative points $x^{(k)}$ are obtained in a ball centred at $x^{(k-1)}$ with small radius. We prove that the algorithm converges to a stationary point (local minimum) of the cost function when the surrogates are strongly convex.
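
The abstract's MM scheme can be illustrated compactly. The sketch below is an assumption for illustration, not the paper's algorithm or cost function: a non-convex scalar cost is repeatedly replaced by a strongly convex quadratic majorizer that touches it at the current iterate, and each surrogate is minimized in closed form.

```python
# Minimal MM sketch (assumed): f(x) = x^2 + sin(3x) is non-convex, but since
# |d^2/dx^2 sin(3x)| <= 9, the quadratic
#   g(x | x_k) = x^2 + sin(3x_k) + 3cos(3x_k)(x - x_k) + (9/2)(x - x_k)^2
# is a strongly convex majorizer of f that touches f at x_k.
import numpy as np

def f(x):
    return x**2 + np.sin(3 * x)

def mm_step(xk):
    # Closed-form minimizer of g(. | xk): solve 2x + 3cos(3xk) + 9(x - xk) = 0.
    return (9 * xk - 3 * np.cos(3 * xk)) / 11.0

x = 1.0
for _ in range(100):
    x_new = mm_step(x)
    if abs(x_new - x) < 1e-12:
        break
    x = x_new

print(f"stationary point: x = {x:.6f}, f(x) = {f(x):.6f}")
```

Because the surrogate majorizes f and agrees with it at the current iterate, each step cannot increase f, which is the monotone-descent property MM arguments rely on.
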

Doctoral dissertations on the topic "Convex optimization"

1. Joulin, Armand. "Convex optimization for cosegmentation". PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00826236.
Abstract:
The apparent ease with which humans perceive their surroundings suggests that the process involved is partly mechanical and does not require a high degree of reflection, and hence that our visual perception of the world can be simulated on a computer. Computer vision is the research field devoted to creating a form of visual perception for computers. The computing power of the machines of the 1950s did not allow the processing and analysis of the visual data needed to build a virtual visual perception; only recently have computing power and storage capacity let the field truly emerge. In two decades, computer vision has answered practical and industrial problems such as detecting faces, spotting suspicious behaviour in a crowd, or finding manufacturing defects on production lines. By contrast, little progress has been made toward a virtual visual perception that is not specific to a given task, and the community still faces fundamental problems. One of these is segmenting an optical stimulus or an image into meaningful regions, objects, or actions. Scene segmentation is natural for humans and essential for fully understanding one's environment, yet it is extremely difficult to reproduce on a computer because there is no clear definition of a "meaningful" region: depending on the scene or the situation, a region can have different interpretations. In a street scene, distinguishing a pedestrian matters, while their clothing does not necessarily; at a fashion show, a garment becomes an important, hence meaningful, region. Here we focus on this segmentation problem and approach it from a particular angle to avoid this fundamental difficulty: we treat segmentation as a weakly supervised learning problem. Instead of segmenting images according to some predefined definition of "meaningful" regions, we develop methods that simultaneously segment a set of images into regions that appear regularly, thereby defining a "meaningful" region statistically: one that recurs across the given set of images. To this end we design models whose scope extends beyond vision applications. Our approach is rooted in statistical machine learning, whose objective is to design efficient methods for extracting and/or learning recurring patterns in data sets; this field has recently become very popular owing to the growth in the number and size of available databases. We focus on methods designed to discover the "hidden" information in a database from incomplete or missing annotations. Finally, our work draws on numerical optimization to build efficient algorithms adapted to our problems. In particular, we use and adapt recently developed tools for relaxing hard combinatorial problems into convex problems for which finding the optimal solution is guaranteed. We also illustrate the quality of our formulations and algorithms on problems drawn from fields other than computer vision, showing that our work can be used in text classification and in cell biology.

2. Rätsch, Gunnar. "Robust boosting via convex optimization". PhD thesis, Universität Potsdam, 2001. http://opus.kobv.de/ubp/volltexte/2005/39/.
Abstract:
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:

o The statistical learning theory framework for analyzing boosting methods.
We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution.

o How can boosting methods be related to mathematical optimization techniques?
To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms.

o How to make Boosting noise robust?
One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness.

o How to adapt boosting to regression problems?
Boosting methods are originally designed for classification problems. To extend the boosting idea to regression problems, we use the previous convergence results and relations to semi-infinite programming to design boosting-like algorithms for regression problems. We show that these leveraging algorithms have desirable theoretical and practical properties.

o Can boosting techniques be useful in practice?
The presented theoretical results are guided by simulation results either to illustrate properties of the proposed algorithms or to show that they work well in practice. We report on successful applications in a non-intrusive power monitoring system, chaotic time series analysis and a drug discovery process.
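
As a minimal illustration of the boosting idea summarized above (iteratively building a weighted linear combination of base hypotheses), here is a sketch of the classic AdaBoost loop with decision stumps; it is an assumed textbook example, not the thesis's algorithms or their soft-margin variants. The re-weighting step is the hook to convex optimization: it corresponds to coordinate-wise minimization of the convex exponential loss.

```python
# Classic AdaBoost sketch (assumed textbook example, not the thesis's method).
import numpy as np

def fit_stump(X, y, w):
    """Best weighted threshold stump; returns (feature, threshold, sign, error)."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= thr, 1, -1)
                err = w @ (pred != y)
                if err < best[3]:
                    best = (j, thr, s, err)
    return best

def adaboost(X, y, rounds=20):
    """y must hold +/-1 labels; returns a weighted list of stumps."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        j, thr, s, err = fit_stump(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)       # hypothesis weight
        pred = s * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)              # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, s))
    return ensemble

def predict(ensemble, X):
    F = sum(a * s * np.where(X[:, j] <= thr, 1, -1) for a, j, thr, s in ensemble)
    return np.sign(F)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1, 1, -1, -1])
print(predict(adaboost(X, y, rounds=5), X))         # -> [ 1.  1. -1. -1.]
```
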

---
Note:
The author received the Michelson Prize, awarded by the Faculty of Mathematics and Natural Sciences of the University of Potsdam for the best doctoral dissertation of 2001/2002.

3. Nekooie, Batool. "Convex optimization involving matrix inequalities". Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/13880.

4. Jangam, Ravindra nath vijay kumar. "BEAMFORMING TECHNIQUES USING CONVEX OPTIMIZATION". Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-33934.
Abstract:
The thesis analyses and validates beamforming methods using convex optimization. CVX, a Matlab-based tool for convex optimization, has been used to develop this concept. An algorithm is designed by which an appropriate system is identified by varying parameters such as the number of antennas and the passband and stopband widths of a beamformer. The beamformer is studied by minimizing the error under the least-squares and infinity norms; a graph of the optimal values obtained under the two norms shows the trade-off between them. Convex optimization is also applied to a double-passband beamformer, which demonstrates the flexibility of the approach. As an extension, a filter with an arbitrary stopband is designed: a constraint lets the stopband vary according to an upper bounding line measured along the y-axis (dB). Feasibility is then examined by varying parameters such as the number of antennas, the arbitrary upper boundaries, the stopbands, and the passband, showing that a beamformer can be designed as desired.
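
The minimax (infinity-norm) design described in the abstract can be posed directly as a convex program. The thesis uses CVX in Matlab; the sketch below is an assumed cvxpy/Python analogue with illustrative array geometry and band edges: minimize the worst-case stopband response of a uniform linear array subject to unit gain in the look direction.

```python
# Minimax beamformer sketch (assumed analogue of the thesis's CVX designs).
import numpy as np
import cvxpy as cp

n = 10                            # number of antennas
d = 0.5                           # element spacing in wavelengths
theta0 = 90.0                     # look direction (degrees)

def steering(theta_deg):
    k = 2 * np.pi * d * np.cos(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n))

# Stopband sampled on both sides of the passband around theta0.
stop_angles = np.concatenate([np.arange(0, 70, 1.0), np.arange(110, 181, 1.0)])
A_stop = np.array([steering(t) for t in stop_angles])

w = cp.Variable(n, complex=True)
objective = cp.Minimize(cp.norm(A_stop @ w, "inf"))   # worst-case sidelobe
constraints = [steering(theta0).conj() @ w == 1]      # distortionless at theta0
cp.Problem(objective, constraints).solve()

print("peak stopband level (dB):",
      20 * np.log10(np.max(np.abs(A_stop @ w.value))))
```

Swapping the objective for cp.norm(A_stop @ w, 2) gives the least-squares counterpart, which is the comparison the abstract's trade-off curve is built from.
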
5. Saunderson, James (James Francis). "Subspace identification via convex optimization". S.M. thesis, Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66475.
Abstract:
In this thesis we consider convex optimization-based approaches to the classical problem of identifying a subspace from noisy measurements of a random process taking values in the subspace. We focus on the case where the measurement noise is component-wise independent, known as the factor analysis model in statistics. We develop a new analysis of an existing convex optimization-based heuristic for this problem. Our analysis indicates that in high-dimensional settings, where both the ambient dimension and the dimension of the subspace to be identified are large, the convex heuristic, minimum trace factor analysis, is often very successful. We provide simple deterministic conditions on the underlying 'true' subspace under which the convex heuristic provably identifies the correct subspace. We also consider the performance of minimum trace factor analysis on 'typical' subspace identification problems, that is, problems where the underlying subspace is chosen randomly from subspaces of a particular dimension. In this setting we establish conditions on the ambient dimension and the dimension of the underlying subspace under which the convex heuristic identifies the subspace correctly with high probability. We then consider a refinement of the subspace identification problem where we aim to identify a class of structured subspaces arising from Gaussian latent tree models. More precisely, given the covariance at the finest scale of a Gaussian latent tree model, and the tree that indexes the model, we aim to learn the parameters of the model, including the state dimensions of each of the latent variables. We do so by extending the convex heuristic, and our analysis, from the factor analysis setting to the setting of Gaussian latent tree models. We again provide deterministic conditions on the underlying latent tree model that ensure our convex optimization-based heuristic successfully identifies the parameters and state dimensions of the model.
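
The convex heuristic named in this abstract, minimum trace factor analysis, has a compact semidefinite formulation. The sketch below is an assumed illustration (problem sizes and data are synthetic): split a covariance matrix into a PSD part plus a nonnegative diagonal, minimizing the trace of the PSD part; its column space then estimates the subspace.

```python
# Minimum trace factor analysis sketch (assumed illustration, synthetic data).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 8, 2
U = rng.standard_normal((n, r))
Sigma = U @ U.T + np.diag(rng.uniform(0.1, 1.0, n))   # low-rank + diagonal

L = cp.Variable((n, n), PSD=True)      # subspace (low-rank) component
d = cp.Variable(n, nonneg=True)        # component-wise noise variances
cp.Problem(cp.Minimize(cp.trace(L)), [L + cp.diag(d) == Sigma]).solve()

# Roughly r nonzero eigenvalues indicate the recovered subspace dimension.
print("eigenvalues of L:", np.round(np.linalg.eigvalsh(L.value), 3))
```
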
6. Shewchun, John Marc. "Constrained control using convex optimization". Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/46471.

7. Boţ, Radu Ioan. "Conjugate duality in convex optimization". Berlin [u.a.]: Springer, 2010. http://dx.doi.org/10.1007/978-3-642-04900-2.

8. Aggarwal, Varun. "Analog circuit optimization using evolutionary algorithms and convex optimization". S.M. thesis, Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40525.
Abstract:
In this thesis, we analyze state-of-the-art techniques for analog circuit sizing and compare them on various metrics. We ascertain that a methodology which improves the accuracy of sizing without increasing the run time or the designer effort is a contribution. We argue that the accuracy of geometric programming can be improved without adversely influencing the run time or increasing the designer's effort. This is facilitated by decomposing geometric programming modeling into two steps, which decouples the accuracy of the models from the run time of geometric programming. We design a new algorithm for producing accurate posynomial models for MOS transistor parameters, which is the first step of the decomposition. The new algorithm can generate posynomial models with a variable number of terms and real-valued exponents. The algorithm is a hybrid of a genetic algorithm and a convex optimization technique. We study the performance of the algorithm on artificially created benchmark problems and show that it improves the accuracy of posynomial models of MOS parameters by a considerable amount. The new posynomial modeling algorithm can be used in any application of geometric programming and is not limited to MOS parameter modeling. In the last chapter, we discuss various ideas to improve the state of the art in circuit sizing.
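
The geometric-programming side of this workflow is easy to sketch. The toy problem below is an assumption for illustration (it is not the thesis's posynomial models): a posynomial "delay" objective under monomial bounds, solved via cvxpy's log-log convex (gp=True) mode, the same mechanism that makes GP-based circuit sizing convex after a log transform.

```python
# Toy geometric program (assumed illustration of GP-based sizing).
import cvxpy as cp

x = cp.Variable(pos=True)   # e.g., a transistor width
y = cp.Variable(pos=True)   # e.g., a load scaling factor

delay = 2.0 / x + 3.0 * x * y                  # posynomial "delay" model
constraints = [x * y <= 10.0,                  # area-style budget
               x >= 0.5,                       # minimum size
               y >= 1.0]                       # minimum load
prob = cp.Problem(cp.Minimize(delay), constraints)
prob.solve(gp=True)                            # log-log convex transform
print(f"x = {x.value:.3f}, y = {y.value:.3f}, delay = {prob.value:.3f}")
```
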
9. van den Berg, Ewout. "Convex optimization for generalized sparse recovery". Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/16646.
Abstract:
The past decade has witnessed the emergence of compressed sensing as a way of acquiring sparsely representable signals in a compressed form. These developments have greatly motivated research in sparse signal recovery, which lies at the heart of compressed sensing, and which has recently found its use in altogether new applications. In the first part of this thesis we study the theoretical aspects of joint-sparse recovery by means of sum-of-norms minimization, and the ReMBo-l₁ algorithm, which combines boosting techniques with l₁-minimization. For the sum-of-norms approach we derive necessary and sufficient conditions for recovery, by extending existing results to the joint-sparse setting. We focus in particular on minimization of the sum of l₁ and l₂ norms, and give concrete examples where recovery succeeds with one formulation but not with the other. We base our analysis of ReMBo-l₁ on its geometrical interpretation, which leads to a study of orthant intersections with randomly oriented subspaces. This work establishes a clear picture of the mechanics behind the method, and explains the different aspects of its performance. The second part and main contribution of this thesis is the development of a framework for solving a wide class of convex optimization problems for sparse recovery. We provide a detailed account of the application of the framework on several problems, but also consider its limitations. The framework has been implemented in the SPGL1 algorithm, which is already well established as an effective solver. Numerical results show that our algorithm is state-of-the-art, and compares favorably even with solvers for the easier, but less natural, Lagrangian formulations. The last part of this thesis discusses two supporting software packages: Sparco, which provides a suite of test problems for sparse recovery, and Spot, a Matlab toolbox for the creation and manipulation of linear operators. Spot greatly facilitates rapid prototyping in sparse recovery and compressed sensing, where linear operators form the elementary building blocks. Following the practice of reproducible research, all code used for the experiments and generation of figures is available online at http://www.cs.ubc.ca/labs/scl/thesis/09vandenBerg/.
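
The core problem that solvers such as SPGL1 target, basis pursuit denoise, can be stated in a few lines. The sketch below poses it directly in cvxpy on synthetic data (an assumed illustration; SPGL1's own root-finding algorithm is not reproduced): minimize the l1 norm subject to a residual bound.

```python
# Basis pursuit denoise sketch (assumed illustration, synthetic data):
#   minimize ||x||_1  subject to  ||Ax - b||_2 <= sigma.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)

x = cp.Variable(n)
sigma = 0.1                                   # noise-level estimate
prob = cp.Problem(cp.Minimize(cp.norm1(x)),
                  [cp.norm2(A @ x - b) <= sigma])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```
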
10. Lin, Chin-Yee. "Interior point methods for convex optimization". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/15044.

Books on the topic "Convex optimization"

1. Vandenberghe, Lieven, ed. Convex optimization. Cambridge, UK: Cambridge University Press, 2006.

2. Bertsekas, Dimitri P. Convex optimization theory. Belmont, Mass.: Athena Scientific, 2009.

3. Brinkhuis, Jan. Convex Analysis for Optimization. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41804-5.

4. Nesterov, Yurii. Lectures on Convex Optimization. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91578-4.

5. Bonnans, J. Frédéric. Convex and Stochastic Optimization. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14977-2.

6. Zaslavski, Alexander J. Convex Optimization with Computational Errors. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37822-6.

7. Pardalos, Panos M., Antanas Žilinskas, and Julius Žilinskas. Non-Convex Multi-Objective Optimization. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61007-8.

8. Borwein, Jonathan M., and Adrian S. Lewis. Convex Analysis and Nonlinear Optimization. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4757-9859-3.

9. Li, Li. Selected Applications of Convex Optimization. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-46356-7.

10. Peypouquet, Juan. Convex Optimization in Normed Spaces. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13710-0.

Book chapters on the topic "Convex optimization"

1. Nesterov, Yurii. "Convex Optimization". In Encyclopedia of Operations Research and Management Science, 281–87. Boston, MA: Springer US, 2013. http://dx.doi.org/10.1007/978-1-4419-1153-7_1171.

2. Allgöwer, Frank, Jan Hasenauer, and Steffen Waldherr. "Convex Optimization". In Encyclopedia of Systems Biology, 501–2. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_1449.

3. Hult, Henrik, Filip Lindskog, Ola Hammarlid, and Carl Johan Rehn. "Convex Optimization". In Risk and Portfolio Analysis, 33–38. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-4103-8_2.

4. Zaslavski, Alexander J. "Convex Optimization". In SpringerBriefs in Optimization, 13–56. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12644-4_2.

5. Royset, Johannes O., and Roger J.-B. Wets. "CONVEX OPTIMIZATION". In An Optimization Primer, 52–115. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-76275-9_2.

6. Çınlar, Erhan, and Robert J. Vanderbei. "Convex Optimization". In Undergraduate Texts in Mathematics, 101–13. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-5257-7_6.

7. Aragón, Francisco J., Miguel A. Goberna, Marco A. López, and Margarita M. L. Rodríguez. "Convex Optimization". In Springer Undergraduate Texts in Mathematics and Technology, 117–80. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11184-7_4.

8. Borkar, Vivek S., and K. S. Mallikarjuna Rao. "Convex Optimization". In Texts and Readings in Mathematics, 79–100. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1652-8_5.

9. Wheeler, Jeffrey Paul. "Convex Optimization". In An Introduction to Optimization with Applications in Machine Learning and Data Analytics, 251–64. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9780367425517-19.

10. Stefanov, Stefan M. "Preliminaries: Convex Analysis and Convex Programming". In Applied Optimization, 1–61. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3417-1_1.

Conference papers on the topic "Convex optimization"

1. Boyd, Stephen. "Convex optimization". In the 17th ACM SIGKDD international conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2020408.2020410.

2. Szu, Harold H. "Non-Convex Optimization". In 30th Annual Technical Symposium, edited by William J. Miceli. SPIE, 1986. http://dx.doi.org/10.1117/12.976247.

3. Udell, Madeleine, Karanveer Mohan, David Zeng, Jenny Hong, Steven Diamond, and Stephen Boyd. "Convex Optimization in Julia". In 2014 First Workshop for High Performance Technical Computing in Dynamic Languages (HPTCDL). IEEE, 2014. http://dx.doi.org/10.1109/hptcdl.2014.5.

4. Tsianos, Konstantinos I., and Michael G. Rabbat. "Distributed strongly convex optimization". In 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2012. http://dx.doi.org/10.1109/allerton.2012.6483272.

5. Boyd, Stephen, Lieven Vandenberghe, and Michael Grant. "Advances in Convex Optimization". In 2006 Chinese Control Conference. IEEE, 2006. http://dx.doi.org/10.1109/chicc.2006.280567.

6. Ramirez, Lennin Mallma, Alexandre Belfort de Almeida Chiacchio, Nelson Maculan Filho, Rodrigo de Souza Couto, Adilson Xavier, and Vinicius Layter Xavier. "HALA in Convex Optimization". In ANAIS DO SIMPÓSIO BRASILEIRO DE PESQUISA OPERACIONAL. São José dos Campos - SP, BR: Galoa, 2023. http://dx.doi.org/10.59254/sbpo-2023-175132.

7. Liu, Xinfu, and Ping Lu. "Solving Non-Convex Optimal Control Problems by Convex Optimization". In AIAA Guidance, Navigation, and Control (GNC) Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2013. http://dx.doi.org/10.2514/6.2013-4725.

8. Tsitsiklis, John N., and Zhi-quan Luo. "Communication complexity of convex optimization". In 1986 25th IEEE Conference on Decision and Control. IEEE, 1986. http://dx.doi.org/10.1109/cdc.1986.267379.

9. Abdallah, Mohammed. "MIMO/OFDM convex optimization applications". In 2012 IEEE Long Island Systems, Applications and Technology Conference (LISAT). IEEE, 2012. http://dx.doi.org/10.1109/lisat.2012.6223201.

10. Chen, Niangjun, Anish Agarwal, Adam Wierman, Siddharth Barman, and Lachlan L. H. Andrew. "Online Convex Optimization Using Predictions". In SIGMETRICS '15: ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2745844.2745854.

Reports on the topic "Convex optimization"

1. Coffrin, Carleton James, and Line Alnaes Roald. Convex Relaxations in Power System Optimization, A Brief Introduction. Office of Scientific and Technical Information (OSTI), July 2018. http://dx.doi.org/10.2172/1461380.

2. Tran, Tuyen. Convex and Nonconvex Optimization Techniques for Multifacility Location and Clustering. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.7356.

3. Giles, Daniel. The Majorization Minimization Principle and Some Applications in Convex Optimization. Portland State University Library, January 2015. http://dx.doi.org/10.15760/honors.175.

4. Deits, Robin, and Russ Tedrake. Footstep Planning on Uneven Terrain with Mixed-Integer Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, August 2014. http://dx.doi.org/10.21236/ada609276.

5. Wen, Zaiwen, and Donald Goldfarb. A Line Search Multigrid Method for Large-Scale Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, July 2007. http://dx.doi.org/10.21236/ada478093.

6. Lawrence, Nathan. Convex and Nonconvex Optimization Techniques for the Constrained Fermat-Torricelli Problem. Portland State University Library, January 2016. http://dx.doi.org/10.15760/honors.319.

7. Chen, Yunmei, Guanghui Lan, Yuyuan Ouyang, and Wei Zhang. Fast Bundle-Level Type Methods for Unconstrained and Ball-Constrained Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, December 2014. http://dx.doi.org/10.21236/ada612792.

8. Knapp, Adam C., and Kevin J. Johnson. Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods. Fort Belvoir, VA: Defense Technical Information Center, November 2016. http://dx.doi.org/10.21236/ada640843.

9. Amelunxen, Dennis, Martin Lotz, Michael B. McCoy, and Joel A. Tropp. Living on the Edge: A Geometric Theory of Phase Transitions in Convex Optimization. Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada591124.

10. Filipiak, Katarzyna, Dietrich von Rosen, Martin Singull, and Wojciech Rejchel. Estimation under inequality constraints in univariate and multivariate linear models. Linköping University Electronic Press, March 2024. http://dx.doi.org/10.3384/lith-mat-r-2024-01.
Abstract:
In this paper, least squares and maximum likelihood estimates under univariate and multivariate linear models with a priori information related to maximum effects in the models are determined. Both the loss functions (least squares and negative log-likelihood) and the constraints are convex, so convex optimization theory can be used to obtain the estimates, here called Safety belt estimates. In particular, the complementary slackness condition, common in convex optimization, implies two alternative types of solutions, strongly dependent on the data and the restriction. It is shown experimentally that, despite the similarity to ridge regression estimation under the univariate linear model, the Safety belt estimates usually behave better than estimates obtained via ridge regression. Moreover, for the multivariate model, the proposed technique represents a completely novel approach.
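
The estimation problem described in this abstract is a convex program. The sketch below is an assumed illustration (synthetic data, with an infinity-norm bound standing in for the "maximum effect" restriction): least squares under a convex inequality constraint, whose complementary slackness decides whether the bound is active.

```python
# Constrained least squares sketch (assumed illustration, synthetic data).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, p = 50, 4
X = rng.standard_normal((n, p))
beta_true = np.array([0.8, -0.4, 0.2, 0.05])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta = cp.Variable(p)
c = 0.5                                    # a priori bound on the maximum effect
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - X @ beta)),
                  [cp.norm(beta, "inf") <= c])
prob.solve()
# The bound binds here (|beta_1| is clipped to c), illustrating the two
# solution types the abstract mentions.
print("constrained LS estimate:", np.round(beta.value, 3))
```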