
Journal articles on the topic "Convex optimization"


Check out the top 50 journal articles on the topic "Convex optimization".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a ".pdf" file and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Luethi, Hans-Jakob. "Convex Optimization". Journal of the American Statistical Association 100, no. 471 (September 2005): 1097. http://dx.doi.org/10.1198/jasa.2005.s41.

2

Ceria, Sebastián, and João Soares. "Convex programming for disjunctive convex optimization". Mathematical Programming 86, no. 3 (December 1, 1999): 595–614. http://dx.doi.org/10.1007/s101070050106.

3

Lasserre, Jean B. "On convex optimization without convex representation". Optimization Letters 5, no. 4 (April 13, 2011): 549–56. http://dx.doi.org/10.1007/s11590-011-0323-1.

4

Ben-Tal, A., and A. Nemirovski. "Robust Convex Optimization". Mathematics of Operations Research 23, no. 4 (November 1998): 769–805. http://dx.doi.org/10.1287/moor.23.4.769.

5

Tilahun, Surafel Luleseged. "Convex Grey Optimization". RAIRO - Operations Research 53, no. 1 (January 2019): 339–49. http://dx.doi.org/10.1051/ro/2018088.

Abstract:
Many optimization problems are formulated from real scenarios involving incomplete information due to uncertainty. The uncertainties can be expressed with appropriate probability distributions or with fuzzy numbers and a membership function, provided enough information is available to construct either the probability density function or the membership function of the fuzzy numbers. In some cases, however, there is not enough information for that, and grey numbers need to be used. A grey number is an interval number representing the value of a quantity: its exact value or likelihood is not known, but the maximum and/or minimum possible values are. Applications in space exploration, robotics, and engineering involve such scenarios. An optimization problem is called a grey optimization problem if it involves a grey number in the objective function and/or the constraint set. Despite its wide applicability, not much research has been done in the field. Hence, in this paper, a convex grey optimization problem is discussed. It is shown that an optimal solution of a convex grey optimization problem is a grey number whose lower and upper limits are computed by solving the problem in an optimistic and a pessimistic way. The optimistic way treats the grey numbers as additional decision variables and optimizes the objective function over all of them, whereas the pessimistic way solves a minimax or maximin problem over the decision variables and over the grey numbers.
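
The optimistic/pessimistic construction described above is easy to illustrate numerically. The following sketch is not from the paper: the toy objective f(x, a) = (x − a)² + 0.5x, the grey interval [1, 2], and the grid-based inner maximization are all illustrative assumptions.

```python
# Illustrative grey bounds for a toy convex objective whose parameter a
# is a grey number known only to lie in the interval [1.0, 2.0].
import numpy as np
from scipy.optimize import minimize, minimize_scalar

a_lo, a_hi = 1.0, 2.0          # grey interval for the uncertain parameter a

def f(x, a):
    return (x - a) ** 2 + 0.5 * x

# Optimistic limit: treat a as one more decision variable, minimize jointly.
opt = minimize(lambda z: f(z[0], z[1]), x0=[0.0, a_lo],
               bounds=[(-10.0, 10.0), (a_lo, a_hi)])
lower = opt.fun

# Pessimistic limit: minimize over x the worst case over a (a minimax
# problem); the inner maximum is approximated on a grid for simplicity.
a_grid = np.linspace(a_lo, a_hi, 201)
pes = minimize_scalar(lambda x: max(f(x, a) for a in a_grid),
                      bounds=(-10.0, 10.0), method="bounded")
upper = pes.fun

print(f"grey optimal value is the interval [{lower:.4f}, {upper:.4f}]")
```
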
6

Ubhaya, Vasant A. "Quasi-convex optimization". Journal of Mathematical Analysis and Applications 116, no. 2 (June 1986): 439–49. http://dx.doi.org/10.1016/s0022-247x(86)80008-7.

7

Onn, Shmuel. "Convex Matroid Optimization". SIAM Journal on Discrete Mathematics 17, no. 2 (January 2003): 249–53. http://dx.doi.org/10.1137/s0895480102408559.

8

Pardalos, Panos M. "Convex optimization theory". Optimization Methods and Software 25, no. 3 (June 2010): 487. http://dx.doi.org/10.1080/10556781003625177.

9

Onn, Shmuel, and Uriel G. Rothblum. "Convex Combinatorial Optimization". Discrete & Computational Geometry 32, no. 4 (August 19, 2004): 549–66. http://dx.doi.org/10.1007/s00454-004-1138-y.

10

Mayeli, Azita. "Non-convex Optimization via Strongly Convex Majorization-minimization". Canadian Mathematical Bulletin 63, no. 4 (December 10, 2019): 726–37. http://dx.doi.org/10.4153/s0008439519000730.

Abstract:
In this paper, we introduce a class of nonsmooth nonconvex optimization problems, and we propose to use a local iterative majorization-minimization (MM) algorithm to find an optimal solution. The cost functions in our optimization problems are an extension of convex functions with the MC separable penalty previously introduced by Ivan Selesnick. These functions are not convex; therefore, convex optimization methods cannot be applied here to prove the existence of an optimal minimum point. For our purpose, we use convex analysis tools to first construct a class of convex majorizers, which approximate the value of the non-convex cost function locally, and then use the MM algorithm to prove the existence of a local minimum. The convergence of the algorithm is guaranteed when the iterative points $x^{(k)}$ are obtained in a ball centred at $x^{(k-1)}$ with small radius. We prove that the algorithm converges to a stationary point (local minimum) of the cost function when the surrogates are strongly convex.
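
As a concrete illustration of the MM pattern used here (majorize the cost at the current iterate with a strongly convex surrogate, then minimize that surrogate), consider the following minimal sketch. The scalar cost, the logarithmic penalty, and the curvature constant L are illustrative assumptions, not the construction from the paper.

```python
# Minimal majorization-minimization loop for a toy nonsmooth, non-convex
# scalar cost g(x) = 0.5*(x - y)^2 + phi(x). The surrogate replaces phi by
# its linearization plus a strongly convex quadratic (curvature L), so each
# iteration minimizes a strongly convex upper model of g around x.
import numpy as np

y, lam, L = 3.0, 1.0, 2.0                       # data, penalty weight, curvature

phi = lambda x: lam * np.log1p(abs(x))          # non-convex separable penalty
dphi = lambda x: lam * np.sign(x) / (1 + abs(x))
g = lambda x: 0.5 * (x - y) ** 2 + phi(x)

x = 0.0
for _ in range(100):
    # Closed-form minimizer of the surrogate
    # 0.5*(z - y)^2 + phi(x) + dphi(x)*(z - x) + (L/2)*(z - x)^2 over z:
    x_new = (y + L * x - dphi(x)) / (1 + L)
    if abs(x_new - x) < 1e-12:
        break
    x = x_new

print(f"MM stationary point ~ {x:.6f}, cost g(x) = {g(x):.6f}")
```
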
11

Agrawal, Akshay, Shane Barratt, and Stephen Boyd. "Learning Convex Optimization Models". IEEE/CAA Journal of Automatica Sinica 8, no. 8 (August 2021): 1355–64. http://dx.doi.org/10.1109/jas.2021.1004075.

12

Wiesemann, Wolfram, Daniel Kuhn, and Melvyn Sim. "Distributionally Robust Convex Optimization". Operations Research 62, no. 6 (December 2014): 1358–76. http://dx.doi.org/10.1287/opre.2014.1314.

13

Kryazhimskii, Arkadii V. "Convex Optimization Via Feedbacks". SIAM Journal on Control and Optimization 37, no. 1 (January 1998): 278–302. http://dx.doi.org/10.1137/s036301299528030x.

14

Tatarenko, Tatiana, and Behrouz Touri. "Non-Convex Distributed Optimization". IEEE Transactions on Automatic Control 62, no. 8 (August 2017): 3744–57. http://dx.doi.org/10.1109/tac.2017.2648041.

15

Gershman, Alex, Nicholas Sidiropoulos, Shahram Shahbazpanahi, Mats Bengtsson, and Bjorn Ottersten. "Convex Optimization-Based Beamforming". IEEE Signal Processing Magazine 27, no. 3 (May 2010): 62–75. http://dx.doi.org/10.1109/msp.2010.936015.

16

Bard, Jonathan F. "Convex two-level optimization". Mathematical Programming 40, no. 1-3 (January 1988): 15–27. http://dx.doi.org/10.1007/bf01580720.

17

Lesage-Landry, Antoine, Iman Shames, and Joshua A. Taylor. "Predictive online convex optimization". Automatica 113 (March 2020): 108771. http://dx.doi.org/10.1016/j.automatica.2019.108771.

18

Heaton, Howard, Xiaohan Chen, Zhangyang Wang, and Wotao Yin. "Safeguarded Learned Convex Optimization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7848–55. http://dx.doi.org/10.1609/aaai.v37i6.25950.

Abstract:
Applications abound in which optimization problems must be repeatedly solved, each time with new (but similar) data. Analytic optimization algorithms can be hand-designed to provably solve these problems in an iterative fashion. On one hand, data-driven algorithms can "learn to optimize" (L2O) with far fewer iterations and a per-iteration cost similar to general-purpose optimization algorithms. On the other hand, many L2O algorithms unfortunately lack convergence guarantees. To fuse the advantages of these approaches, we present a Safe-L2O framework. Safe-L2O updates incorporate a safeguard to guarantee convergence for convex problems with proximal and/or gradient oracles. The safeguard is simple and computationally cheap to implement, and it is activated only when the data-driven L2O updates would perform poorly or appear to diverge. This yields the numerical benefits of employing machine learning to create rapid L2O algorithms while still guaranteeing convergence. Our numerical examples show convergence of Safe-L2O algorithms, even when the provided data is not from the distribution of training data.
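
The safeguard idea lends itself to a compact sketch: accept the learned update only when it makes measurable progress, otherwise fall back to a classical step with known guarantees. The quadratic objective, the progress test on the gradient norm, and the noisy stand-in for the learned optimizer below are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative safeguarded-update loop: a (deliberately unreliable) learned
# step is kept only if it shrinks the gradient norm; otherwise we fall back
# to plain gradient descent, which is provably convergent for this problem.
import numpy as np

np.random.seed(0)
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, -1.0])
grad = lambda x: A @ x - b               # gradient of 0.5 x'Ax - b'x

def learned_update(x):
    # Stand-in for an L2O network; the noise makes it occasionally diverge.
    return x - 1.5 * grad(x) + 0.1 * np.random.randn(2)

x = np.zeros(2)
alpha = 1.0 / np.linalg.norm(A, 2)       # safe step size 1/L
for _ in range(100):
    cand = learned_update(x)
    if np.linalg.norm(grad(cand)) <= 0.99 * np.linalg.norm(grad(x)):
        x = cand                          # learned step passed the safeguard
    else:
        x = x - alpha * grad(x)           # fallback: classical gradient step

print("safeguarded iterate:", x, " exact solution:", np.linalg.solve(A, b))
```
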
19

Lasserre, Jean B. "Erratum to: On convex optimization without convex representation". Optimization Letters 8, no. 5 (April 6, 2014): 1795–96. http://dx.doi.org/10.1007/s11590-014-0735-9.

20

Belotti, Pietro, Christian Kirches, Sven Leyffer, Jeff Linderoth, James Luedtke, and Ashutosh Mahajan. "Mixed-integer nonlinear optimization". Acta Numerica 22 (April 2, 2013): 1–131. http://dx.doi.org/10.1017/s0962492913000032.

Abstract:
Many optimal decision problems in scientific, engineering, and public sector applications involve both discrete decisions and nonlinear system dynamics that affect the quality of the final design or plan. These decision problems lead to mixed-integer nonlinear programming (MINLP) problems that combine the combinatorial difficulty of optimizing over discrete variable sets with the challenges of handling nonlinear functions. We review models and applications of MINLP, and survey the state of the art in methods for solving this challenging class of problems. Most solution methods for MINLP apply some form of tree search. We distinguish two broad classes of methods: single-tree and multitree methods. We discuss these two classes of methods first in the case where the underlying problem functions are convex. Classical single-tree methods include nonlinear branch-and-bound and branch-and-cut methods, while classical multitree methods include outer approximation and Benders decomposition. The most efficient class of methods for convex MINLP are hybrid methods that combine the strengths of both classes of classical techniques. Non-convex MINLPs pose additional challenges, because they contain non-convex functions in the objective function or the constraints; hence even when the integer variables are relaxed to be continuous, the feasible region is generally non-convex, resulting in many local minima. We discuss a range of approaches for tackling this challenging class of problems, including piecewise linear approximations, generic strategies for obtaining convex relaxations for non-convex functions, spatial branch-and-bound methods, and a small sample of techniques that exploit particular types of non-convex structures to obtain improved convex relaxations. We finish our survey with a brief discussion of three important aspects of MINLP. First, we review heuristic techniques that can obtain good feasible solutions in situations where the search tree has grown too large or we require real-time solutions. Second, we describe an emerging area of mixed-integer optimal control that adds systems of ordinary differential equations to MINLP. Third, we survey the state of the art in software for MINLP.
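
As a toy illustration of the single-tree (branch-and-bound) idea for convex MINLP mentioned in the abstract, the sketch below solves a continuous convex relaxation at each node, prunes by bound, and branches on a fractional variable. The two-variable objective, the box bounds, and the use of scipy's general NLP solver are illustrative assumptions.

```python
# Minimal branch-and-bound for a toy convex MINLP:
# minimize (x0 - 1.3)^2 + (x1 - 2.6)^2 subject to x0, x1 integer.
from scipy.optimize import minimize
import math

def solve_relaxation(bounds):
    # Convex NLP relaxation over the node's box constraints.
    res = minimize(lambda x: (x[0] - 1.3) ** 2 + (x[1] - 2.6) ** 2,
                   x0=[(lo + hi) / 2 for lo, hi in bounds], bounds=bounds)
    return res.x, res.fun

best_val, best_x = math.inf, None
stack = [[(-10.0, 10.0), (-10.0, 10.0)]]   # root node: box for (x0, x1)
while stack:
    bounds = stack.pop()
    x, val = solve_relaxation(bounds)
    if val >= best_val:                     # prune: bound beats this node
        continue
    frac = [i for i, xi in enumerate(x) if abs(xi - round(xi)) > 1e-6]
    if not frac:                            # integral point: new incumbent
        best_val, best_x = val, [round(xi) for xi in x]
        continue
    i = frac[0]                             # branch on a fractional variable
    lo, hi = bounds[i]
    left, right = list(bounds), list(bounds)
    left[i] = (lo, math.floor(x[i]))
    right[i] = (math.ceil(x[i]), hi)
    stack.extend([left, right])

print("incumbent:", best_x, "value:", best_val)   # expects (1, 3), 0.25
```
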
21

Lan, Yu, and Ji Li. "Mars Ascent Trajectory Optimization Based on Convex Optimization". Journal of Physics: Conference Series 2364, no. 1 (November 1, 2022): 012013. http://dx.doi.org/10.1088/1742-6596/2364/1/012013.

Abstract:
In this paper, a Mars ascent trajectory optimization algorithm based on convex optimization is designed for the Mars ascent trajectory optimization problem. The main contribution is the trajectory optimization of the second-stage ignition based on convex optimization. For the first-stage ignition, open-loop guidance is used: the ascender flies at maximum thrust to track the attitude profile, thus completing the ascent flight control. To account for errors in the first-stage ignition, the experiment generates multiple sets of first-stage shutdown-point data and runs simulation verification experiments based on them. The simulations verify that the convex-optimization-based second-stage trajectory optimization achieves high orbit-insertion accuracy despite large deviations in the starting point.
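
The kind of convexified ascent problem described above can be posed directly in a modeling tool such as cvxpy. The sketch below is not the paper's formulation: the double-integrator dynamics, horizon, thrust bound, and target state are all assumed for illustration.

```python
# Illustrative convex trajectory optimization: steer a discretized double
# integrator under Mars gravity to an assumed target state with minimal
# total commanded acceleration (a second-order-cone program).
import cvxpy as cp
import numpy as np

N, dt, g = 50, 1.0, 3.71                 # steps, step [s], Mars gravity [m/s^2]
grav = np.array([0.0, g])

r = cp.Variable((N + 1, 2))              # position (downrange, altitude) [m]
v = cp.Variable((N + 1, 2))              # velocity [m/s]
u = cp.Variable((N, 2))                  # commanded acceleration [m/s^2]

cons = [r[0] == np.zeros(2), v[0] == np.zeros(2),
        r[N] == np.array([2e4, 1e4]),    # assumed insertion position
        v[N] == np.array([800.0, 0.0])]  # assumed insertion velocity
for k in range(N):
    cons += [v[k + 1] == v[k] + dt * (u[k] - grav),   # Euler dynamics
             r[k + 1] == r[k] + dt * v[k],
             cp.norm(u[k]) <= 30.0]      # acceleration (thrust) bound

prob = cp.Problem(cp.Minimize(sum(cp.norm(u[k]) for k in range(N))), cons)
prob.solve()
print("status:", prob.status, " objective:", prob.value)
```
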
22

Popovici, Nicolae. "Convexité au sens direct ou inverse et applications dans l'optimisation vectorielle". Journal of Numerical Analysis and Approximation Theory 29, no. 1 (February 1, 2000): 75–82. http://dx.doi.org/10.33993/jnaat291-656.

Abstract:
The aim of this paper is to study vector optimization problems involving objective functions which are convex in some direct or inverse sense (i.e., a special class of cone-quasiconvex functions). In particular, sufficient conditions are given under which the image of a convex set under such objective functions is a cone-convex set, a property which plays an important role in most scalarization methods used in vector optimization.
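
For readers unfamiliar with the terminology, the cone-convexity of an image set mentioned in the abstract is standardly defined as follows (our paraphrase of the common definition, not a quotation from the paper):

```latex
% A set A in an ordered vector space Y with ordering cone C is C-convex
% (cone-convex) when A + C is a convex set; scalarization arguments in
% vector optimization typically require the image f(S) to have this property.
A \subseteq Y \text{ is } C\text{-convex}
\iff A + C := \{\, a + c : a \in A,\ c \in C \,\} \text{ is convex.}
```
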
23

Ni, Zhitong, Andrew Jian Zhang, Ren-Ping Liu, and Kai Yang. "Doubly Constrained Waveform Optimization for Integrated Sensing and Communications". Sensors 23, no. 13 (June 28, 2023): 5988. http://dx.doi.org/10.3390/s23135988.

Abstract:
This paper investigates threshold-constrained joint waveform optimization for an integrated sensing and communication (ISAC) system. Unlike existing studies, we employ mutual information (MI) and sum rate (SR) as the sensing and communication metrics, respectively, and optimize the waveform under constraints on both metrics simultaneously. This provides significant flexibility in meeting system performance requirements. We formulate three different optimization problems that constrain the radar performance only, the communication performance only, and the ISAC performance, respectively. New techniques are developed to solve the original problems, which are NP-hard and cannot be directly solved by conventional semi-definite programming (SDP) techniques. Novel gradient-descent methods are developed to solve the first two problems. For the third, non-convex, optimization problem, we transform it into a convex problem and solve it via convex toolboxes. We also disclose the connections between the three optimizations using numerical results. Finally, simulation results are provided that validate the proposed optimization solutions.
24

Chen, Po-Yu, and Ivan W. Selesnick. "Group-Sparse Signal Denoising: Non-Convex Regularization, Convex Optimization". IEEE Transactions on Signal Processing 62, no. 13 (July 2014): 3464–78. http://dx.doi.org/10.1109/tsp.2014.2329274.

25

Dutta, Joydeep. "Barrier method in nonsmooth convex optimization without convex representation". Optimization Letters 9, no. 6 (October 10, 2014): 1177–85. http://dx.doi.org/10.1007/s11590-014-0811-1.

26

Wang, Xuanxuan, Wujun Ji, and Yun Gao. "Optimization Strategy of the Electric Vehicle Power Battery Based on the Convex Optimization Algorithm". Processes 11, no. 5 (May 6, 2023): 1416. http://dx.doi.org/10.3390/pr11051416.

Abstract:
With the development of the electric vehicle industry, electric vehicles have given people more choices; however, their performance still needs improvement, which makes most consumers take a wait-and-see attitude. Finding a method that can effectively improve the performance of electric vehicles is therefore of great significance. To this end, a convex optimization algorithm is proposed to optimize the motor model and power battery parameters of electric vehicles, improving their overall performance. The performance of the proposed convex optimization algorithm, a dual-loop DP (DCDP) optimization algorithm, and a nonlinear optimization algorithm is compared. The results show that the hydrogen consumption of electric vehicles optimized by the convex optimization algorithm is 95.364 g. This is lower than the 98.165 g achieved by the DCDP optimization algorithm and the 105.236 g achieved by the nonlinear optimization algorithm, and significantly better than the 125.59 g of electric vehicles before optimization. The computation time of the convex optimization algorithm is 4.9 s, lower than that of the DCDP and nonlinear optimization algorithms. These results indicate that the convex optimization algorithm has better optimization performance. After the power battery is optimized with the convex optimization algorithm, the overall performance of electric vehicles is higher. This method can therefore effectively improve the performance of current electric vehicle power batteries, support the rapid development of new energy vehicles, and help alleviate the increasingly serious environmental pollution and energy crisis in China.
27

Sun, Xiang-Kai, and Hong-Yong Fu. "A Note on Optimality Conditions for DC Programs Involving Composite Functions". Abstract and Applied Analysis 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/203467.

Abstract:
By using the formula for the ε-subdifferential of the sum of a convex function with a composition of convex functions, some necessary and sufficient optimality conditions for a DC programming problem involving a composite function are obtained. As applications, a composed convex optimization problem, a DC optimization problem, and a convex optimization problem with a linear operator are examined at the end of this paper.
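
For reference, the ε-subdifferential that drives this analysis is the standard convex-analysis object below (the sum formula for composite functions is the paper's technical contribution and is not restated here):

```latex
% Standard definition: for a convex f, a point \bar{x} with f(\bar{x}) finite,
% and \varepsilon \ge 0, the \varepsilon-subdifferential collects the slopes
% of affine minorants that are tight at \bar{x} up to an \varepsilon error.
\partial_{\varepsilon} f(\bar{x}) = \left\{ x^{*} \;\middle|\;
  f(x) \ge f(\bar{x}) + \langle x^{*},\, x - \bar{x} \rangle - \varepsilon
  \quad \text{for all } x \right\}.
```
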
28

Salman, Abbas Musleh, Ahmed Alridha, and Ahmed Hadi Hussain. "Some Topics on Convex Optimization". Journal of Physics: Conference Series 1818, no. 1 (March 1, 2021): 012171. http://dx.doi.org/10.1088/1742-6596/1818/1/012171.

29

Ghavamzadeh, Mohammad, Shie Mannor, Joelle Pineau, and Aviv Tamar. "Bayesian Reinforcement Learning: A Survey". Foundations and Trends® in Machine Learning 8, no. 5-6 (2015): 359–483. http://dx.doi.org/10.1561/2200000049.

30

Bubeck, Sébastien. "Convex Optimization: Algorithms and Complexity". Foundations and Trends® in Machine Learning 8, no. 3-4 (2015): 231–357. http://dx.doi.org/10.1561/2200000050.

31

Hazan, Elad. "Introduction to Online Convex Optimization". Foundations and Trends® in Optimization 2, no. 3-4 (2016): 157–325. http://dx.doi.org/10.1561/2400000013.

32

Boyd, Stephen P. "Real-time Embedded Convex Optimization". IFAC Proceedings Volumes 42, no. 11 (2009): 9. http://dx.doi.org/10.3182/20090712-4-tr-2008.00004.

33

Drusvyatskiy, Dmitriy, and Adrian S. Lewis. "Generic nondegeneracy in convex optimization". Proceedings of the American Mathematical Society 139, no. 7 (December 21, 2010): 2519–27. http://dx.doi.org/10.1090/s0002-9939-2010-10692-5.

34

Selim, S. Z. "Optimization of linear-convex programs". Optimization 29, no. 4 (January 1994): 319–31. http://dx.doi.org/10.1080/02331939408843961.

35

Simpkins, Alex. "Convex Optimization [On the Shelf]". IEEE Robotics & Automation Magazine 20, no. 4 (December 2013): 164–65. http://dx.doi.org/10.1109/mra.2013.2283189.

36

Hu, T. C., Victor Klee, and David Larman. "Optimization of Globally Convex Functions". SIAM Journal on Control and Optimization 27, no. 5 (September 1989): 1026–47. http://dx.doi.org/10.1137/0327055.

37

Temlyakov, V. N. "Greedy expansions in convex optimization". Proceedings of the Steklov Institute of Mathematics 284, no. 1 (May 2014): 244–62. http://dx.doi.org/10.1134/s0081543814010180.

38

Adivar, Murat, and Shu-Cherng Fang. "Convex optimization on mixed domains". Journal of Industrial & Management Optimization 8, no. 1 (2012): 189–227. http://dx.doi.org/10.3934/jimo.2012.8.189.

39

Park, Poogyeon, and Thomas Kailath. "H∞ filtering via convex optimization". International Journal of Control 66, no. 1 (January 1997): 15–22. http://dx.doi.org/10.1080/002071797224793.

40

Chen, Niangjun, Anish Agarwal, Adam Wierman, Siddharth Barman, and Lachlan L. H. Andrew. "Online Convex Optimization Using Predictions". ACM SIGMETRICS Performance Evaluation Review 43, no. 1 (June 24, 2015): 191–204. http://dx.doi.org/10.1145/2796314.2745854.

41

Gawlitza, Thomas Martin, Helmut Seidl, Assalé Adjé, Stéphane Gaubert, and Éric Goubault. "Abstract interpretation meets convex optimization". Journal of Symbolic Computation 47, no. 12 (December 2012): 1416–46. http://dx.doi.org/10.1016/j.jsc.2011.12.048.

42

Joshi, S., and S. Boyd. "Sensor Selection via Convex Optimization". IEEE Transactions on Signal Processing 57, no. 2 (February 2009): 451–62. http://dx.doi.org/10.1109/tsp.2008.2007095.

43

Rantzer, Anders. "Dynamic programming via convex optimization". IFAC Proceedings Volumes 32, no. 2 (July 1999): 2059–64. http://dx.doi.org/10.1016/s1474-6670(17)56349-1.

44

Luan, Nguyen Ngoc, and Jen-Chih Yao. "Generalized polyhedral convex optimization problems". Journal of Global Optimization 75, no. 3 (March 11, 2019): 789–811. http://dx.doi.org/10.1007/s10898-019-00763-4.

45

Kryazhimskii, A. V., and R. A. Usachev. "Convex two-level optimization problem". Computational Mathematics and Modeling 19, no. 1 (January 2008): 73–101. http://dx.doi.org/10.1007/s10598-008-0007-6.

46

Pintér, J. "Global optimization on convex sets". Operations-Research-Spektrum 8, no. 4 (December 1986): 197–202. http://dx.doi.org/10.1007/bf01721128.

47

Tsitsiklis, John N., and Zhi-Quan Luo. "Communication complexity of convex optimization". Journal of Complexity 3, no. 3 (September 1987): 231–43. http://dx.doi.org/10.1016/0885-064x(87)90013-6.

48

Kulikov, A. N., and V. R. Fazylov. "Convex optimization with prescribed accuracy". USSR Computational Mathematics and Mathematical Physics 30, no. 3 (January 1990): 16–22. http://dx.doi.org/10.1016/0041-5553(90)90185-u.

49

Nguyen, Hao, and Guergana Petrova. "Greedy Strategies for Convex Optimization". Calcolo 54, no. 1 (March 30, 2016): 207–24. http://dx.doi.org/10.1007/s10092-016-0183-2.

50

DeVore, R. A., and V. N. Temlyakov. "Convex Optimization on Banach Spaces". Foundations of Computational Mathematics 16, no. 2 (February 14, 2015): 369–94. http://dx.doi.org/10.1007/s10208-015-9248-x.
