A selection of scholarly literature on the topic "Subgradient descent"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Subgradient descent".

Next to every entry in the list there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Subgradient descent"

1

Krutikov, Vladimir, Svetlana Gutova, Elena Tovbis, Lev Kazakovtsev, and Eugene Semenkin. "Relaxation Subgradient Algorithms with Machine Learning Procedures." Mathematics 10, no. 21 (October 25, 2022): 3959. http://dx.doi.org/10.3390/math10213959.

Abstract:
In the modern digital economy, optimal decision support systems, as well as machine learning systems, are becoming an integral part of production processes. Artificial neural network training, like other engineering problems, gives rise to problems of such high dimension that they are difficult to solve with traditional gradient or conjugate gradient methods. Relaxation subgradient minimization methods (RSMMs) construct a descent direction that forms an obtuse angle with all subgradients in the neighborhood of the current minimum, which reduces to the problem of solving systems of inequalities. Having formalized the model and taking into account the specific features of subgradient sets, we reduced the problem of solving a system of inequalities to an approximation problem and obtained an efficient rapidly converging iterative learning algorithm for finding the direction of descent, conceptually similar to the iterative least squares method. The new algorithm is theoretically substantiated, and an estimate of its convergence rate is obtained depending on the parameters of the subgradient set. On this basis, we have developed and substantiated a new RSMM, which has the properties of the conjugate gradient method on quadratic functions. We have developed a practically realizable version of the minimization algorithm that uses a rough one-dimensional search. A computational experiment on complex functions in a space of high dimension confirms the effectiveness of the proposed algorithm. In the problems of training neural network models, where it is required to remove insignificant variables or neurons using methods such as the Tibshirani LASSO, our new algorithm outperforms known methods.
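
For context, the baseline that relaxation subgradient methods improve on is the classical subgradient method, which steps along the negative of any single subgradient with a diminishing step size. Below is a minimal sketch of that baseline for an illustrative nonsmooth objective (an L1-regularized least-absolute-deviations fit); the objective, data, and step-size rule are assumptions made for the example, not taken from the paper.

```python
import numpy as np

def subgradient_descent(A, b, lam, x0, iters=500):
    """Minimal subgradient method for f(x) = ||Ax - b||_1 + lam * ||x||_1.

    Uses diminishing steps a_k = 1/sqrt(k+1) and keeps the best iterate seen,
    since the objective need not decrease monotonically along subgradient steps.
    """
    f = lambda z: np.sum(np.abs(A @ z - b)) + lam * np.sum(np.abs(z))
    x = x0.copy()
    best_x, best_f = x.copy(), f(x)
    for k in range(iters):
        # One subgradient of ||Az - b||_1 is A^T sign(Az - b); of lam*||z||_1 it is lam*sign(z).
        g = A.T @ np.sign(A @ x - b) + lam * np.sign(x)
        x = x - g / np.sqrt(k + 1.0)
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# Illustrative data: a sparse ground truth observed through a random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10) * (rng.random(10) > 0.5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat, f_hat = subgradient_descent(A, b, lam=0.1, x0=np.zeros(10))
print(f"best objective found: {f_hat:.4f}")
```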
2

Tovbis, Elena, Vladimir Krutikov, Predrag Stanimirović, Vladimir Meshechkin, Aleksey Popov, and Lev Kazakovtsev. "A Family of Multi-Step Subgradient Minimization Methods." Mathematics 11, no. 10 (May 11, 2023): 2264. http://dx.doi.org/10.3390/math11102264.

Abstract:
For solving non-smooth multidimensional optimization problems, we present a family of relaxation subgradient methods (RSMs) with a built-in algorithm for finding the descent direction that forms an acute angle with all subgradients in the neighborhood of the current minimum. Minimizing the function along the opposite direction (with a minus sign) enables the algorithm to go beyond the neighborhood of the current minimum. The family of algorithms for finding the descent direction is based on solving systems of inequalities. The finite convergence of the algorithms on separable bounded sets is proved. Algorithms for solving systems of inequalities are used to organize the RSM family. On quadratic functions, the methods of the RSM family are equivalent to the conjugate gradient method (CGM). The methods are intended for solving high-dimensional problems and are studied theoretically and numerically. Examples of solving convex and non-convex smooth and non-smooth problems of large dimensions are given.
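
The inner step described above, finding a vector that makes an acute angle with every subgradient collected near the current point, amounts to solving a system of strict linear inequalities <s, g_i> > 0. One generic way to solve such a system is the classical relaxation (perceptron-style) iteration sketched below; this is an illustrative stand-in for that inner step, not the specific algorithm proposed in the paper, and the toy subgradient data are assumed for the example.

```python
import numpy as np

def acute_direction(subgradients, max_passes=1000):
    """Find s with <s, g_i> > 0 for all rows g_i by perceptron-style relaxation.

    Whenever an inequality is violated, the violating subgradient is added to s.
    If the subgradient set is strictly separable from the origin, this terminates
    after finitely many corrections (perceptron convergence). Returns None if the
    pass budget is exhausted.
    """
    G = np.asarray(subgradients, dtype=float)
    s = G.mean(axis=0)            # warm start at the centroid of the subgradients
    for _ in range(max_passes):
        violated = False
        for g in G:
            if s @ g <= 0.0:
                s = s + g         # relaxation step toward satisfying <s, g> > 0
                violated = True
        if not violated:
            return s              # -s then makes an obtuse angle with every subgradient
    return None

# Toy usage: subgradients clustered around a common direction.
rng = np.random.default_rng(1)
base = np.array([1.0, 2.0, -1.0])
G = base + 0.3 * rng.standard_normal((20, 3))
s = acute_direction(G)
print("all angles acute:", s is not None and bool(np.all(G @ s > 0)))
```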
3

Li, Gang, Minghua Li, and Yaohua Hu. "Stochastic quasi-subgradient method for stochastic quasi-convex feasibility problems." Discrete & Continuous Dynamical Systems - S 15, no. 4 (2022): 713. http://dx.doi.org/10.3934/dcdss.2021127.

Abstract:
The feasibility problem is at the core of the modeling of many problems in various disciplines of mathematics and physical sciences, and quasi-convex functions are widely applied in many fields such as economics, finance, and management science. In this paper, we consider the stochastic quasi-convex feasibility problem (SQFP), which is to find a common point of infinitely many sublevel sets of quasi-convex functions. Inspired by the idea of a stochastic index scheme, we propose a stochastic quasi-subgradient method to solve the SQFP, in which the quasi-subgradients of a random (and finite) index set of component quasi-convex functions at the current iterate are used to construct the descent direction at each iteration. Moreover, we introduce a notion of Hölder-type error bound property relative to the random control sequence for the SQFP, and use it to establish the global convergence theorem and convergence rate theory of the stochastic quasi-subgradient method. It is revealed in this paper that the stochastic quasi-subgradient method enjoys both low computational cost and fast convergence.
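
A simplified sketch of the kind of step such a method takes is given below. To keep it self-contained, it uses affine (hence convex, hence quasi-convex) component functions and ordinary subgradients in place of quasi-subgradients, a fixed batch size, and a Polyak-type step length; all of these are illustrative assumptions rather than the exact scheme or step-size rule analyzed in the paper.

```python
import numpy as np

def stochastic_feasibility_step(x, funcs, subgrads, batch, rng):
    """One stochastic step toward the common sublevel set {x : f_i(x) <= 0 for all i}.

    A random batch of component indices is drawn; for each violated component,
    a Polyak-type projection step along its subgradient is taken, and the
    individual steps are averaged.
    """
    idx = rng.choice(len(funcs), size=batch, replace=False)
    step = np.zeros_like(x)
    hit = 0
    for i in idx:
        fi = funcs[i](x)
        if fi > 0.0:                    # component i is violated at x
            g = subgrads[i](x)
            step += (fi / (g @ g)) * g  # exact projection step for an affine f_i
            hit += 1
    return x - step / max(hit, 1)

# Toy feasibility problem: find a point in the intersection of half-spaces {a_i . x <= b_i}.
rng = np.random.default_rng(2)
d, m = 5, 40
A = rng.standard_normal((m, d))
b = A @ np.ones(d) + 0.5                # the all-ones vector is strictly feasible
funcs = [lambda x, a=a, bi=bi: a @ x - bi for a, bi in zip(A, b)]
subgrads = [lambda x, a=a: a for a in A]

x = 10.0 * rng.standard_normal(d)
for _ in range(2000):
    x = stochastic_feasibility_step(x, funcs, subgrads, batch=5, rng=rng)
print("max constraint value at the final iterate:", max(f(x) for f in funcs))
```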
4

Chu, Wenqing, Yao Hu, Chen Zhao, Haifeng Liu, and Deng Cai. "Atom Decomposition Based Subgradient Descent for matrix classification." Neurocomputing 205 (September 2016): 222–28. http://dx.doi.org/10.1016/j.neucom.2016.03.069.

5

Bedi, Amrit Singh, and Ketan Rajawat. "Network Resource Allocation via Stochastic Subgradient Descent: Convergence Rate." IEEE Transactions on Communications 66, no. 5 (May 2018): 2107–21. http://dx.doi.org/10.1109/tcomm.2018.2792430.

6

Nedić, Angelia, and Soomin Lee. "On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging." SIAM Journal on Optimization 24, no. 1 (January 2014): 84–107. http://dx.doi.org/10.1137/120894464.

7

Cui, Yun-Ling, Lu-Chuan Ceng, Fang-Fei Zhang, Cong-Shan Wang, Jian-Ye Li, Hui-Ying Hu, and Long He. "Modified Mann-Type Subgradient Extragradient Rules for Variational Inequalities and Common Fixed Points Implicating Countably Many Nonexpansive Operators." Mathematics 10, no. 11 (June 6, 2022): 1949. http://dx.doi.org/10.3390/math10111949.

Abstract:
In a real Hilbert space, let the CFPP, VIP, and HFPP denote the common fixed-point problem of countably many nonexpansive operators and an asymptotically nonexpansive operator, the variational inequality problem, and the hierarchical fixed-point problem, respectively. With the help of the Mann iteration method, a subgradient extragradient approach with a linear-search process, and a hybrid deepest-descent technique, we construct two modified Mann-type subgradient extragradient rules with a linear-search process for finding a common solution of the CFPP and VIP. Under suitable assumptions, we demonstrate the strong convergence of the suggested rules to a common solution of the CFPP and VIP, which is the unique solution of a certain HFPP.
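
For readers unfamiliar with the building block, the subgradient extragradient scheme of Censor, Gibali, and Reich replaces the second projection of the classical extragradient method with a cheap projection onto a half-space containing the feasible set C. The sketch below shows one such step for an affine monotone operator on a box; it omits the Mann averaging, the fixed-point operators, and the line search of the paper, and the operator, the set C, and the constant step size are assumptions made for the example.

```python
import numpy as np

def project_box(z, lo, hi):
    return np.clip(z, lo, hi)

def project_halfspace(v, w, y):
    """Project v onto the half-space {z : <w, z - y> <= 0}."""
    viol = w @ (v - y)
    return v if viol <= 0.0 else v - (viol / (w @ w)) * w

def subgradient_extragradient_step(x, F, lam, lo, hi):
    u = x - lam * F(x)
    y = project_box(u, lo, hi)   # projection onto the feasible box C
    w = u - y                    # normal of a half-space that contains C, with y on its boundary
    return project_halfspace(x - lam * F(y), w, y)

# Toy variational inequality: F(x) = M x + q with M monotone, C = [-1, 1]^d.
rng = np.random.default_rng(3)
d = 6
S = rng.standard_normal((d, d))
M = (S - S.T) + 0.5 * np.eye(d)          # skew part plus positive definite part, hence monotone
q = rng.standard_normal(d)
F = lambda x: M @ x + q
lam = 0.5 / np.linalg.norm(M, 2)         # step size below 1/L, with L the Lipschitz constant of F
lo, hi = -np.ones(d), np.ones(d)

x = np.zeros(d)
for _ in range(3000):
    x = subgradient_extragradient_step(x, F, lam, lo, hi)
# At a solution, x is a fixed point of the projected map P_C(x - lam * F(x)).
print("residual:", np.linalg.norm(x - project_box(x - lam * F(x), lo, hi)))
```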
8

Montonen, O., N. Karmitsa, and M. M. Mäkelä. "Multiple subgradient descent bundle method for convex nonsmooth multiobjective optimization." Optimization 67, no. 1 (October 12, 2017): 139–58. http://dx.doi.org/10.1080/02331934.2017.1387259.

9

Beck, Amir, and Marc Teboulle. "Mirror descent and nonlinear projected subgradient methods for convex optimization." Operations Research Letters 31, no. 3 (May 2003): 167–75. http://dx.doi.org/10.1016/s0167-6377(02)00231-6.

10

Ceng, Lu-Chuan, Li-Jun Zhu, and Tzu-Chien Yin. "Modified subgradient extragradient algorithms for systems of generalized equilibria with constraints." AIMS Mathematics 8, no. 2 (2023): 2961–94. http://dx.doi.org/10.3934/math.2023154.

Abstract:
In this paper, we introduce the modified Mann-like subgradient-like extragradient implicit rules with linear-search process for finding a common solution of a system of generalized equilibrium problems, a pseudomonotone variational inequality problem and a fixed-point problem of an asymptotically nonexpansive mapping in a real Hilbert space. The proposed algorithms are based on the subgradient extragradient rule with linear-search process, Mann implicit iteration approach, and hybrid deepest-descent technique. Under mild restrictions, we demonstrate the strong convergence of the proposed algorithms to a common solution of the investigated problems, which is a unique solution of a certain hierarchical variational inequality defined on their common solution set.

Dissertations on the topic "Subgradient descent"

1

Beltran, Royo César. "Generalized unit commitment by the radar multiplier method." Doctoral thesis, Universitat Politècnica de Catalunya, 2001. http://hdl.handle.net/10803/6501.

Abstract:
This operations research thesis is situated in the field of the power generation industry. The general objective of this work is to efficiently solve the Generalized Unit Commitment (GUC) problem by means of specialized software. The GUC problem generalizes the Unit Commitment (UC) problem by simultaneously solving the associated Optimal Power Flow (OPF) problem. There are many approaches to solve the UC and OPF problems separately, but approaches to solve them jointly, i.e. to solve the GUC problem, are quite scarce. One of these GUC solving approaches is due to professors Batut and Renaud, whose methodology has been taken as a starting point for the methodology presented herein.
This thesis report is structured as follows. Chapter 1 describes the state of the art of the UC and GUC problems. The formulations of the classical short-term power planning problems related to the GUC problem, namely the economic dispatching problem, the OPF problem, and the UC problem, are reviewed. Special attention is paid to the UC literature and to the traditional methods for solving the UC problem. In chapter 2 we extend the OPF model developed by professors Heredia and Nabona to obtain our GUC model. The variables used and the modelling of the thermal, hydraulic and transmission systems are introduced, as is the objective function. Chapter 3 deals with the Variable Duplication (VD) method, which is used to decompose the GUC problem as an alternative to the Classical Lagrangian Relaxation (CLR) method. Furthermore, in chapter 3 the dual bounds provided by the VD method and by the CLR method are theoretically compared.
Throughout chapters 4, 5, and 6 our solution methodology, the Radar Multiplier (RM) method, is designed and tested. Three independent matters are studied: first, the auxiliary problem principle method, used by Batut and Renaud to treat the inseparable augmented Lagrangian, is compared with the block coordinate descent method from both theoretical and practical points of view. Second, the Radar Subgradient (RS) method, a new Lagrange multiplier updating method, is proposed and computationally compared with the classical subgradient method. And third, we study the local character of the optimizers computed by the Augmented Lagrangian Relaxation (ALR) method when solving the GUC problem. A heuristic to improve the local ALR optimizers is designed and tested.
Chapter 7 is devoted to our computational implementation of the RM method, the MACH code. First, the design of MACH is reviewed briefly and then its performance is tested by solving real-life large-scale UC and GUC instances. Solutions computed using our VD formulation of the GUC problem are partially primal feasible since they do not necessarily fulfill the spinning reserve constraints. In chapter 8 we study how to modify this GUC formulation with the aim of obtaining fully primal feasible solutions. A successful test based on a simple UC problem is reported. The conclusions, contributions of the thesis, and proposed further research can be found in chapter 9.
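
For reference, the classical subgradient multiplier update that the Radar Subgradient method is compared against works by relaxing the coupling constraints into the objective, solving the now-separable inner problem, and moving the multipliers along the constraint violation with a diminishing step. The sketch below applies it to a small generic linear problem with box-constrained variables, not to the GUC model itself; the problem data and the 1/k step-size rule are illustrative assumptions.

```python
import numpy as np

def lagrangian_dual_subgradient(c, A, b, u, iters=500):
    """Classical subgradient update of Lagrange multipliers for
        min c.x  s.t.  A x <= b,  0 <= x <= u,
    where the coupling constraints A x <= b are relaxed. The inner problem
    then separates per coordinate and is solved in closed form.
    """
    m = A.shape[0]
    lam = np.zeros(m)
    best_dual = -np.inf
    for k in range(1, iters + 1):
        # Inner (separable) problem: minimize (c + A^T lam) . x over the box [0, u].
        reduced_cost = c + A.T @ lam
        x = np.where(reduced_cost < 0.0, u, 0.0)
        dual_value = reduced_cost @ x - lam @ b
        best_dual = max(best_dual, dual_value)
        # A subgradient of the (concave) dual function at lam is the constraint violation.
        g = A @ x - b
        lam = np.maximum(0.0, lam + g / k)   # projected ascent step, diminishing size 1/k
    return lam, best_dual

# Toy usage on illustrative data.
rng = np.random.default_rng(4)
A = rng.random((3, 8)); b = np.full(3, 2.0)
c = -rng.random(8); u = np.ones(8)
lam, dual_bound = lagrangian_dual_subgradient(c, A, b, u)
print("lower bound from the Lagrangian dual:", round(dual_bound, 4))
```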
2

Yaji, Vinayaka Ganapati. "Stochastic approximation with set-valued maps and Markov noise: Theoretical foundations and applications." Thesis, 2017. https://etd.iisc.ac.in/handle/2005/5461.

Abstract:
Stochastic approximation algorithms produce estimates of a desired solution using noisy real-world data. Introduced by Robbins and Monro in 1951, stochastic approximation techniques have been instrumental in the asymptotic analysis of several adaptive algorithms in learning, signal processing and control. A popular method for the analysis of stochastic approximation schemes is the dynamical systems approach, or o.d.e. method, introduced by Ljung and developed further extensively by Benaim and Hirsch. We build upon the works of Benaim et al. and Bhatnagar et al., and present the asymptotic analysis of stochastic approximation schemes with set-valued drift functions and nonadditive Markov noise. The frameworks studied here rest on the weakest set of assumptions and encompass a majority of the methods studied to date.
We first present the asymptotic analysis of stochastic approximation schemes with a set-valued drift function and non-additive iterate-dependent Markov noise. We show that a linearly interpolated trajectory of such a recursion is an asymptotic pseudotrajectory for the flow of a limiting differential inclusion obtained by averaging the set-valued drift function of the recursion with respect to the stationary distributions of the Markov noise. The limit set theorem of Benaim is then used to characterize the limit sets of the recursion in terms of the dynamics of the limiting differential inclusion. The analysis presented allows us to characterize the asymptotic behavior of controlled stochastic approximation, subgradient descent, the approximate drift problem, and discontinuous dynamics, all in the presence of non-additive iterate-dependent Markov noise.
Next we present the asymptotic analysis of a stochastic approximation scheme on two timescales with set-valued drift functions and in the presence of non-additive iterate-dependent Markov noise. It is shown that the recursion on each timescale tracks the flow of a differential inclusion obtained by averaging the set-valued drift function in the recursion with respect to a set of measures which take into account both the averaging with respect to the stationary distributions of the Markov noise terms and the interdependence between the two recursions on different timescales.
Finally, we present the analysis of stochastic approximation schemes with set-valued maps in the absence of a stability guarantee. We prove that, after a large number of iterations, if the stochastic approximation process enters the domain of attraction of an attracting set, it gets locked into that attracting set with high probability. We demonstrate that this result is an effective instrument for analyzing stochastic approximation schemes in the absence of a stability guarantee: we use it to obtain an alternate criterion for convergence in the presence of a locally attracting set for the mean field, and to show that a feedback mechanism which resets the iterates at regular time intervals stabilizes the scheme when the mean field possesses a globally attracting set, thereby guaranteeing convergence. Our results build on the works of Borkar, Andrieu et al. and Chen et al. by allowing for the presence of set-valued drift functions.
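
The recursion underlying all of this is the stochastic approximation scheme x_{n+1} = x_n + a_n (h(x_n) + M_{n+1}), whose interpolated trajectory tracks the ODE dx/dt = h(x) (a differential inclusion when h is set-valued). The sketch below is the simplest single-valued, martingale-noise special case, classical Robbins-Monro root finding; the target function, noise model, and step sizes are illustrative assumptions, and none of the set-valued or Markov-noise machinery of the thesis appears here.

```python
import numpy as np

def robbins_monro(h, x0, steps=20000, noise=1.0, seed=0):
    """Robbins-Monro iteration x_{n+1} = x_n + a_n * (h(x_n) + M_{n+1}),
    with step sizes a_n = 1/n satisfying sum a_n = inf and sum a_n^2 < inf.
    The iterates track the ODE dx/dt = h(x) and converge to a stable zero of h.
    """
    rng = np.random.default_rng(seed)
    x = float(x0)
    for n in range(1, steps + 1):
        m = noise * rng.standard_normal()    # zero-mean martingale-difference noise
        x = x + (h(x) + m) / n
    return x

# Toy usage: h(x) = -(x - 2) has its unique, globally stable zero at x = 2.
x_star = robbins_monro(lambda x: -(x - 2.0), x0=-5.0)
print("estimate of the root:", round(x_star, 3))
```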

Book chapters on the topic "Subgradient descent"

1

Kiwiel, Krzysztof C. "Aggregate subgradient methods for unconstrained convex minimization." In Methods of Descent for Nondifferentiable Optimization, 44–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 1985. http://dx.doi.org/10.1007/bfb0074502.

2

Kiwiel, Krzysztof C. "Methods with subgradient locality measures for minimizing nonconvex functions." In Methods of Descent for Nondifferentiable Optimization, 87–138. Berlin, Heidelberg: Springer Berlin Heidelberg, 1985. http://dx.doi.org/10.1007/bfb0074503.

3

Kiwiel, Krzysztof C. "Methods with subgradient deletion rules for unconstrained nonconvex minimization." In Methods of Descent for Nondifferentiable Optimization, 139–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 1985. http://dx.doi.org/10.1007/bfb0074504.


Conference papers on the topic "Subgradient descent"

1

Gez, Tamir L. S., and Kobi Cohen. "Subgradient Descent Learning with Over-the-Air Computation." In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10095134.

2

Zhang, Honghui, Jingdong Wang, Ping Tan, Jinglu Wang, and Long Quan. "Learning CRFs for Image Parsing with Adaptive Subgradient Descent." In 2013 IEEE International Conference on Computer Vision (ICCV). IEEE, 2013. http://dx.doi.org/10.1109/iccv.2013.382.

3

Lucchi, Aurelien, Yunpeng Li, and Pascal Fua. "Learning for Structured Prediction Using Approximate Subgradient Descent with Working Sets." In 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2013. http://dx.doi.org/10.1109/cvpr.2013.259.

4

Singhal, Manmohan, and Saurabh Khanna. "Proximal Subgradient Descent Method for Cancelling Cross-Interference in FMCW Radars." In 2023 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2023. http://dx.doi.org/10.1109/ssp53291.2023.10208039.

5

Wu, Songwei, Hang Yu, and Justin Dauwels. "Efficient Stochastic Subgradient Descent Algorithms for High-dimensional Semi-sparse Graphical Model Selection." In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683823.

6

Wan, Yuanyu, Nan Wei, and Lijun Zhang. "Efficient Adaptive Online Learning via Frequent Directions." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/381.

Abstract:
By employing time-varying proximal functions, adaptive subgradient methods (ADAGRAD) have improved the regret bound and been widely used in online learning and optimization. However, ADAGRAD with full matrix proximal functions (ADA-FULL) cannot deal with large-scale problems due to its impractical time and space complexities, though it has better performance when gradients are correlated. In this paper, we propose ADA-FD, an efficient variant of ADA-FULL based on a deterministic matrix sketching technique called frequent directions. Following ADA-FULL, we incorporate our ADA-FD into both the primal-dual subgradient method and the composite mirror descent method to develop two efficient methods. By maintaining and manipulating low-rank matrices, we reduce the space complexity at each iteration from $O(d^2)$ to $O(\tau d)$ and the time complexity from $O(d^3)$ to $O(\tau^2d)$, where $d$ is the dimensionality of the data and $\tau \ll d$ is the sketching size. Theoretical analysis reveals that the regret of our methods is close to that of ADA-FULL as long as the outer product matrix of gradients is approximately low-rank. Experimental results show that our ADA-FD is comparable to ADA-FULL and outperforms other state-of-the-art algorithms in online convex optimization as well as in training convolutional neural networks (CNNs).
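
The core primitive behind ADA-FD is the frequent-directions sketch: a small ell x d matrix B maintained over the stream of gradients so that B^T B approximates the full outer-product matrix G^T G that ADA-FULL would store, which is where the $O(\tau d)$ memory footprint comes from. A generic standalone version of that sketch is shown below; how it is wired into the primal-dual subgradient and composite mirror descent updates follows the paper and is not reproduced here. The sketch size and the synthetic gradient stream are illustrative assumptions.

```python
import numpy as np

def frequent_directions(gradients, ell):
    """Maintain an ell x d sketch B of a stream of gradient rows so that
    B^T B approximates the outer-product matrix G^T G of the full stream."""
    d = gradients.shape[1]
    B = np.zeros((ell, d))
    next_free = 0
    for g in gradients:
        if next_free == ell:                           # sketch is full: shrink it
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s**2 - s[-1]**2, 0.0))   # subtract the smallest squared singular value
            B = s[:, None] * Vt                        # at least the last row becomes zero
            next_free = int(np.count_nonzero(s))       # rows with zeroed singular values are free again
        B[next_free] = g
        next_free += 1
    return B

# Toy usage: an approximately rank-3 gradient stream in dimension 60.
rng = np.random.default_rng(5)
G = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 60)) + 0.01 * rng.standard_normal((500, 60))
B = frequent_directions(G, ell=8)
err = np.linalg.norm(G.T @ G - B.T @ B, 2) / np.linalg.norm(G.T @ G, 2)
print(f"relative error of the sketch: {err:.4f}")
```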