Journal articles on the topic 'Subgradient descent'

Consult the top 45 journal articles for your research on the topic 'Subgradient descent.'

1

Krutikov, Vladimir, Svetlana Gutova, Elena Tovbis, Lev Kazakovtsev, and Eugene Semenkin. "Relaxation Subgradient Algorithms with Machine Learning Procedures." Mathematics 10, no. 21 (October 25, 2022): 3959. http://dx.doi.org/10.3390/math10213959.

Abstract:
In the modern digital economy, optimal decision support systems, as well as machine learning systems, are becoming an integral part of production processes. Artificial neural network training as well as other engineering problems generate such problems of high dimension that are difficult to solve with traditional gradient or conjugate gradient methods. Relaxation subgradient minimization methods (RSMMs) construct a descent direction that forms an obtuse angle with all subgradients of the current minimum neighborhood, which reduces to the problem of solving systems of inequalities. Having formalized the model and taking into account the specific features of subgradient sets, we reduced the problem of solving a system of inequalities to an approximation problem and obtained an efficient rapidly converging iterative learning algorithm for finding the direction of descent, conceptually similar to the iterative least squares method. The new algorithm is theoretically substantiated, and an estimate of its convergence rate is obtained depending on the parameters of the subgradient set. On this basis, we have developed and substantiated a new RSMM, which has the properties of the conjugate gradient method on quadratic functions. We have developed a practically realizable version of the minimization algorithm that uses a rough one-dimensional search. A computational experiment on complex functions in a space of high dimension confirms the effectiveness of the proposed algorithm. In the problems of training neural network models, where it is required to remove insignificant variables or neurons using methods such as the Tibshirani LASSO, our new algorithm outperforms known methods.
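As a point of reference for the relaxation subgradient methods described above, the sketch below shows the classical subgradient step that such methods are designed to improve upon; the problem data, step-size rule, and iteration budget are illustrative assumptions, and the paper's direction-finding machinery is not reproduced.

```python
import numpy as np

# Minimal sketch (not the paper's RSMM): classical subgradient descent on the
# non-smooth convex function f(x) = ||Ax - b||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = A @ rng.standard_normal(10)

def f(x):
    return np.abs(A @ x - b).sum()

def subgrad(x):
    # A.T sign(Ax - b) is a valid element of the subdifferential of f at x
    return A.T @ np.sign(A @ x - b)

x = np.zeros(10)
best = f(x)
for k in range(1, 2001):
    g = subgrad(x)
    step = 0.1 / np.sqrt(k)                      # classical diminishing step size
    x = x - step * g / (np.linalg.norm(g) + 1e-12)
    best = min(best, f(x))
print(f"best objective value found: {best:.4f}")
```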
2

Tovbis, Elena, Vladimir Krutikov, Predrag Stanimirović, Vladimir Meshechkin, Aleksey Popov, and Lev Kazakovtsev. "A Family of Multi-Step Subgradient Minimization Methods." Mathematics 11, no. 10 (May 11, 2023): 2264. http://dx.doi.org/10.3390/math11102264.

Abstract:
For solving non-smooth multidimensional optimization problems, we present a family of relaxation subgradient methods (RSMs) with a built-in algorithm for finding the descent direction that forms an acute angle with all subgradients in the neighborhood of the current minimum. Minimizing the function along the opposite direction (with a minus sign) enables the algorithm to go beyond the neighborhood of the current minimum. The family of algorithms for finding the descent direction is based on solving systems of inequalities. The finite convergence of the algorithms on separable bounded sets is proved. Algorithms for solving systems of inequalities are used to organize the RSM family. On quadratic functions, the methods of the RSM family are equivalent to the conjugate gradient method (CGM). The methods are intended for solving high-dimensional problems and are studied theoretically and numerically. Examples of solving convex and non-convex smooth and non-smooth problems of large dimensions are given.
3

Li, Gang, Minghua Li, and Yaohua Hu. "Stochastic quasi-subgradient method for stochastic quasi-convex feasibility problems." Discrete & Continuous Dynamical Systems - S 15, no. 4 (2022): 713. http://dx.doi.org/10.3934/dcdss.2021127.

Abstract:
The feasibility problem is at the core of the modeling of many problems in various disciplines of mathematics and physical sciences, and the quasi-convex function is widely applied in many fields such as economics, finance, and management science. In this paper, we consider the stochastic quasi-convex feasibility problem (SQFP), which is to find a common point of infinitely many sublevel sets of quasi-convex functions. Inspired by the idea of a stochastic index scheme, we propose a stochastic quasi-subgradient method to solve the SQFP, in which the quasi-subgradients of a random (and finite) index set of component quasi-convex functions at the current iterate are used to construct the descent direction at each iteration. Moreover, we introduce a notion of Hölder-type error bound property relative to the random control sequence for the SQFP, and use it to establish the global convergence theorem and convergence rate theory of the stochastic quasi-subgradient method. It is revealed in this paper that the stochastic quasi-subgradient method enjoys both advantages of low computational cost requirement and fast convergence feature.
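The following is a minimal sketch of the random-index idea described in the abstract, written for convex component functions as a stand-in for the quasi-convex case; the components, step rule, and iteration budget are illustrative assumptions rather than the paper's method.

```python
import numpy as np

# Illustrative feasibility problem: find x with f_i(x) = c_i.x - d_i <= 0 for all i.
# At each iteration a random component is sampled and, if violated, a relaxed
# projection-type (sub)gradient step is taken toward its sublevel set.
rng = np.random.default_rng(1)
m, n = 20, 5
C = rng.standard_normal((m, n))
d = np.abs(rng.standard_normal(m)) + 0.5        # x = 0 is strictly feasible

def f_i(i, x):
    return C[i] @ x - d[i]

x = 5.0 * rng.standard_normal(n)
for _ in range(5000):
    i = rng.integers(m)                          # random component index
    val = f_i(i, x)
    if val > 0:                                  # constraint i is violated
        g = C[i]                                 # (quasi-)subgradient of f_i at x
        x = x - (val / (g @ g)) * g              # step onto the violated half-space
violation = max(max(f_i(i, x) for i in range(m)), 0.0)
print(f"maximum constraint violation: {violation:.2e}")
```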
4

Chu, Wenqing, Yao Hu, Chen Zhao, Haifeng Liu, and Deng Cai. "Atom Decomposition Based Subgradient Descent for matrix classification." Neurocomputing 205 (September 2016): 222–28. http://dx.doi.org/10.1016/j.neucom.2016.03.069.

5

Bedi, Amrit Singh, and Ketan Rajawat. "Network Resource Allocation via Stochastic Subgradient Descent: Convergence Rate." IEEE Transactions on Communications 66, no. 5 (May 2018): 2107–21. http://dx.doi.org/10.1109/tcomm.2018.2792430.

6

Nedić, Angelia, and Soomin Lee. "On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging." SIAM Journal on Optimization 24, no. 1 (January 2014): 84–107. http://dx.doi.org/10.1137/120894464.

7

Cui, Yun-Ling, Lu-Chuan Ceng, Fang-Fei Zhang, Cong-Shan Wang, Jian-Ye Li, Hui-Ying Hu, and Long He. "Modified Mann-Type Subgradient Extragradient Rules for Variational Inequalities and Common Fixed Points Implicating Countably Many Nonexpansive Operators." Mathematics 10, no. 11 (June 6, 2022): 1949. http://dx.doi.org/10.3390/math10111949.

Abstract:
In a real Hilbert space, let the CFPP, VIP, and HFPP denote the common fixed-point problem of countable nonexpansive operators and asymptotically nonexpansive operator, variational inequality problem, and hierarchical fixed point problem, respectively. With the help of the Mann iteration method, a subgradient extragradient approach with a linear-search process, and a hybrid deepest-descent technique, we construct two modified Mann-type subgradient extragradient rules with a linear-search process for finding a common solution of the CFPP and VIP. Under suitable assumptions, we demonstrate the strong convergence of the suggested rules to a common solution of the CFPP and VIP, which is only a solution of a certain HFPP.
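For readers unfamiliar with the underlying scheme, here is a minimal sketch of the plain subgradient extragradient step for a variational inequality VI(C, F); the Mann, viscosity, and fixed-point ingredients of the paper are omitted, and the operator, feasible set, and step size below are illustrative assumptions.

```python
import numpy as np

# One core ingredient named in the abstract: the subgradient extragradient step.
# y_k = P_C(x_k - tau F(x_k)); x_{k+1} = projection of x_k - tau F(y_k) onto the
# half-space {w : <(x_k - tau F(x_k)) - y_k, w - y_k> <= 0}, a cheap substitute
# for a second projection onto C.
rng = np.random.default_rng(2)
n = 4
S = rng.standard_normal((n, n))
M = S - S.T + 0.1 * np.eye(n)                   # monotone Lipschitz operator
q = rng.standard_normal(n)
F = lambda x: M @ x + q

def proj_ball(x, r=2.0):                        # feasible set C = ball of radius r
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_halfspace(w, a, y):                    # projection onto {w : a.(w - y) <= 0}
    viol = a @ (w - y)
    return w if viol <= 0 else w - (viol / (a @ a)) * a

tau = 0.05
x = rng.standard_normal(n)
for _ in range(500):
    z = x - tau * F(x)
    y = proj_ball(z)                            # extragradient trial point
    a = z - y                                   # normal of the cutting half-space
    w = x - tau * F(y)
    x = w if a @ a < 1e-16 else proj_halfspace(w, a, y)
print("approximate solution of VI(C, F):", np.round(x, 3))
```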
8

Montonen, O., N. Karmitsa, and M. M. Mäkelä. "Multiple subgradient descent bundle method for convex nonsmooth multiobjective optimization." Optimization 67, no. 1 (October 12, 2017): 139–58. http://dx.doi.org/10.1080/02331934.2017.1387259.

9

Beck, Amir, and Marc Teboulle. "Mirror descent and nonlinear projected subgradient methods for convex optimization." Operations Research Letters 31, no. 3 (May 2003): 167–75. http://dx.doi.org/10.1016/s0167-6377(02)00231-6.

10

Ceng, Lu-Chuan, Li-Jun Zhu, and Tzu-Chien Yin. "Modified subgradient extragradient algorithms for systems of generalized equilibria with constraints." AIMS Mathematics 8, no. 2 (2023): 2961–94. http://dx.doi.org/10.3934/math.2023154.

Abstract:
In this paper, we introduce the modified Mann-like subgradient-like extragradient implicit rules with linear-search process for finding a common solution of a system of generalized equilibrium problems, a pseudomonotone variational inequality problem and a fixed-point problem of an asymptotically nonexpansive mapping in a real Hilbert space. The proposed algorithms are based on the subgradient extragradient rule with linear-search process, Mann implicit iteration approach, and hybrid deepest-descent technique. Under mild restrictions, we demonstrate the strong convergence of the proposed algorithms to a common solution of the investigated problems, which is a unique solution of a certain hierarchical variational inequality defined on their common solution set.
11

Auslender, A., and M. Teboulle. "Interior Gradient and Epsilon-Subgradient Descent Methods for Constrained Convex Minimization." Mathematics of Operations Research 29, no. 1 (February 2004): 1–26. http://dx.doi.org/10.1287/moor.1030.0062.

12

Cui, Yun-Ling, Lu-Chuan Ceng, Fang-Fei Zhang, Liang He, Jie Yin, Cong-Shan Wang, and Hui-Ying Hu. "Mann Hybrid Deepest-Descent Extragradient Method with Line-Search Process for Hierarchical Variational Inequalities for Countable Nonexpansive Mappings." Journal of Mathematics 2023 (May 15, 2023): 1–18. http://dx.doi.org/10.1155/2023/6177912.

Abstract:
In real Hilbert spaces, let the CFPP indicate a common fixed-point problem of asymptotically nonexpansive operator and countably many nonexpansive operators, and suppose that the HVI and VIP represent a hierarchical variational inequality and a variational inequality problem, respectively. We put forward Mann hybrid deepest-descent extragradient approach for solving the HVI with the CFPP and VIP constraints. The proposed algorithms are on the basis of Mann’s iterative technique, viscosity approximation method, subgradient extragradient rule with linear-search process, and hybrid deepest-descent rule. Under suitable restrictions, it is shown that the sequences constructed by the algorithms converge strongly to a solution of the HVI with the CFPP and VIP constraints.
13

Panup, Wanida, and Rabian Wangkeeree. "Stochastic Subgradient for Large-Scale Support Vector Machine Using the Generalized Pinball Loss Function." Symmetry 13, no. 9 (September 8, 2021): 1652. http://dx.doi.org/10.3390/sym13091652.

Abstract:
In this paper, we propose a stochastic gradient descent algorithm, called stochastic gradient descent method-based generalized pinball support vector machine (SG-GPSVM), to solve data classification problems. This approach was developed by replacing the hinge loss function in the conventional support vector machine (SVM) with a generalized pinball loss function. We show that SG-GPSVM is convergent and that it approximates the conventional generalized pinball support vector machine (GPSVM). Further, the symmetric kernel method was adopted to evaluate the performance of SG-GPSVM as a nonlinear classifier. Our suggested algorithm surpasses existing methods in terms of noise insensitivity, resampling stability, and accuracy for large-scale data scenarios, according to the experimental results.
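For context, the sketch below implements the conventional stochastic subgradient (Pegasos-style) linear SVM that the abstract takes as its starting point; SG-GPSVM would replace the hinge term with a generalized pinball loss, whose exact parametric form is not reproduced here, and all data and hyperparameters are illustrative.

```python
import numpy as np

# Pegasos-style stochastic subgradient descent for a linear hinge-loss SVM.
rng = np.random.default_rng(3)
n, d, lam = 400, 10, 0.01
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))

w = np.zeros(d)
for t in range(1, 20001):
    i = rng.integers(n)                       # sample one training point
    eta = 1.0 / (lam * t)                     # Pegasos step size
    g = lam * w                               # subgradient of the l2 regularizer
    if y[i] * (X[i] @ w) < 1:                 # hinge loss is active
        g = g - y[i] * X[i]                   # subgradient of the hinge term
    w = w - eta * g
acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {acc:.3f}")
```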
14

Boţ, Radu Ioan, and Axel Böhm. "An incremental mirror descent subgradient algorithm with random sweeping and proximal step." Optimization 68, no. 1 (June 14, 2018): 33–50. http://dx.doi.org/10.1080/02331934.2018.1482491.

15

Gokbayrak, Kagan, and Omer Selvi. "A Subgradient Descent Algorithm for Optimization of Initially Controllable Flow Shop Systems." Discrete Event Dynamic Systems 19, no. 2 (February 4, 2009): 267–82. http://dx.doi.org/10.1007/s10626-009-0061-z.

16

Arachie, Chidubem, and Bert Huang. "Adversarial Label Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3183–90. http://dx.doi.org/10.1609/aaai.v33i01.33013183.

Abstract:
We consider the task of training classifiers without labels. We propose a weakly supervised method—adversarial label learning—that trains classifiers to perform well against an adversary that chooses labels for training data. The weak supervision constrains what labels the adversary can choose. The method therefore minimizes an upper bound of the classifier’s error rate using projected primal-dual subgradient descent. Minimizing this bound protects against bias and dependencies in the weak supervision. Experiments on real datasets show that our method can train without labels and outperforms other approaches for weakly supervised learning.
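The sketch below illustrates the projected primal-dual subgradient pattern mentioned in the abstract: descent steps on the classifier parameters alternate with ascent steps on adversarial soft labels projected back to [0, 1]. The weak-supervision constraint is folded in as a simple penalty here, which is an illustrative simplification, not the paper's exact formulation; the data are synthetic.

```python
import numpy as np

# Schematic projected descent-ascent: minimize over w, maximize over soft labels y.
rng = np.random.default_rng(4)
n, d = 200, 5
X = rng.standard_normal((n, d))
weak = (rng.random(n) < 0.5).astype(float)       # weak (noisy) label signal
tau, rho = 0.3, 5.0                              # tolerance and penalty weight (assumed)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w = np.zeros(d)
y = weak.copy()
for t in range(1, 3001):
    p = sigmoid(X @ w)
    # expected error of the classifier under the adversary's labels y
    grad_w = X.T @ ((1.0 - 2.0 * y) * p * (1.0 - p)) / n    # descent direction for w
    grad_y = (1.0 - 2.0 * p) / n                            # ascent direction for y
    grad_y -= rho * np.sign(y - weak) / n if np.abs(y - weak).mean() > tau else 0.0
    step = 1.0 / np.sqrt(t)
    w -= step * grad_w
    y = np.clip(y + step * grad_y, 0.0, 1.0)                # projection onto [0, 1]^n
p = sigmoid(X @ w)
print("adversarial error estimate:", round(float((y * (1 - p) + (1 - y) * p).mean()), 3))
```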
17

Gebken, Bennet, and Sebastian Peitz. "An Efficient Descent Method for Locally Lipschitz Multiobjective Optimization Problems." Journal of Optimization Theory and Applications 188, no. 3 (January 13, 2021): 696–723. http://dx.doi.org/10.1007/s10957-020-01803-w.

Abstract:
We present an efficient descent method for unconstrained, locally Lipschitz multiobjective optimization problems. The method is realized by combining a theoretical result regarding the computation of descent directions for nonsmooth multiobjective optimization problems with a practical method to approximate the subdifferentials of the objective functions. We show convergence to points which satisfy a necessary condition for Pareto optimality. Using a set of test problems, we compare our method with the multiobjective proximal bundle method by Mäkelä. The results indicate that our method is competitive while being easier to implement. Although the number of objective function evaluations is larger, the overall number of subgradient evaluations is smaller. Our method can be combined with a subdivision algorithm to compute entire Pareto sets of nonsmooth problems. Finally, we demonstrate how our method can be used for solving sparse optimization problems, which are present in many real-life applications.
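As a simplified illustration of the descent-direction subproblem discussed above, the snippet below computes, for two smooth objectives, the negative minimum-norm element of the convex hull of the two gradients in closed form; the paper's approximation of whole subdifferentials for nonsmooth objectives, and its line search, are not reproduced.

```python
import numpy as np

# Common descent direction for two objectives: the negative of the minimum-norm
# convex combination of the two gradients (closed form for two objectives).
def common_descent_direction(g1, g2):
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom == 0 else np.clip((g1 @ diff) / denom, 0.0, 1.0)
    return -((1.0 - t) * g1 + t * g2)

grad1 = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * x[1]])   # f1 = (x0-1)^2 + x1^2
grad2 = lambda x: np.array([2.0 * (x[0] + 1.0), 2.0 * x[1]])   # f2 = (x0+1)^2 + x1^2

x = np.array([0.5, 2.0])
for _ in range(100):
    d = common_descent_direction(grad1(x), grad2(x))
    if np.linalg.norm(d) < 1e-8:              # Pareto-critical point reached
        break
    x = x + 0.1 * d                           # fixed step for simplicity (no line search)
print("Pareto-critical point:", np.round(x, 3))
```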
18

Stonyakin, Fedor Sergeevich, Aleksej N. Stepanov, Alexander Vladimirovich Gasnikov, and Alexander A. Titov. "Mirror descent for constrained optimization problems with large subgradient values of functional constraints." Computer Research and Modeling 12, no. 2 (April 2020): 301–17. http://dx.doi.org/10.20537/2076-7633-2020-12-2-301-317.

19

Korablev, A. I., and V. V. Eisman. "A general method of finding the direction of descent in ε-subgradient methods." Journal of Mathematical Sciences 74, no. 6 (May 1995): 1327–31. http://dx.doi.org/10.1007/bf02367719.

20

Robinson, Stephen M. "Linear convergence of epsilon-subgradient descent methods for a class of convex functions." Mathematical Programming 86, no. 1 (September 1, 1999): 41–50. http://dx.doi.org/10.1007/s101070050078.

21

Wang, Ximing, Neng Fan, and Panos M. Pardalos. "Stochastic subgradient descent method for large-scale robust chance-constrained support vector machines." Optimization Letters 11, no. 5 (March 15, 2016): 1013–24. http://dx.doi.org/10.1007/s11590-016-1026-4.

22

Hand, Paul, Oscar Leong, and Vladislav Voroninski. "Optimal sample complexity of subgradient descent for amplitude flow via non-Lipschitz matrix concentration." Communications in Mathematical Sciences 19, no. 7 (2021): 2035–47. http://dx.doi.org/10.4310/cms.2021.v19.n7.a11.

23

Li, Liping, Wei Xu, Tianyi Chen, Georgios B. Giannakis, and Qing Ling. "RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1544–51. http://dx.doi.org/10.1609/aaai.v33i01.33011544.

Abstract:
In this paper, we propose a class of robust stochastic subgradient methods for distributed learning from heterogeneous datasets at presence of an unknown number of Byzantine workers. The Byzantine workers, during the learning process, may send arbitrary incorrect messages to the master due to data corruptions, communication failures or malicious attacks, and consequently bias the learned model. The key to the proposed methods is a regularization term incorporated with the objective function so as to robustify the learning task and mitigate the negative effects of Byzantine attacks. The resultant subgradient-based algorithms are termed Byzantine-Robust Stochastic Aggregation methods, justifying our acronym RSA used henceforth. In contrast to most of the existing algorithms, RSA does not rely on the assumption that the data are independent and identically distributed (i.i.d.) on the workers, and hence fits for a wider class of applications. Theoretically, we show that: i) RSA converges to a near-optimal solution with the learning error dependent on the number of Byzantine workers; ii) the convergence rate of RSA under Byzantine attacks is the same as that of the stochastic gradient descent method, which is free of Byzantine attacks. Numerically, experiments on real dataset corroborate the competitive performance of RSA and a complexity reduction compared to the state-of-the-art alternatives.
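A schematic sketch of the aggregation pattern described above follows, assuming an ℓ1-norm consensus penalty λ Σ_i ||x_i − x0||_1 (one of the variants discussed in the paper); the local objectives, the Byzantine behaviour, and all hyperparameters are invented for illustration, so this is not the paper's exact update rule.

```python
import numpy as np

# Schematic l1-regularized stochastic aggregation: the master only sees the sign
# of its disagreement with each received message, so arbitrary bad messages have
# bounded influence on the update.
rng = np.random.default_rng(5)
d, workers, byzantine = 3, 10, 3
targets = rng.standard_normal((workers, d)) * 0.1 + 1.0    # honest local optima near 1
lam, eta, T = 0.1, 0.05, 2000

x0 = np.zeros(d)                                 # master model
x = np.zeros((workers, d))                       # worker models
for t in range(T):
    msgs = []
    for i in range(workers):
        if i < byzantine:
            msgs.append(100.0 * rng.standard_normal(d))     # arbitrary bad message
        else:
            grad_i = x[i] - targets[i]                       # local stochastic gradient
            x[i] -= eta * (grad_i + lam * np.sign(x[i] - x0))
            msgs.append(x[i].copy())
    x0 -= eta * lam * sum(np.sign(x0 - m) for m in msgs)    # robust master update
print("master model (should land near the honest optimum ~1):", np.round(x0, 2))
```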
24

Ceng, Lu-Chuan, Ching-Feng Wen, Yeong-Cheng Liou, and Jen-Chih Yao. "On Strengthened Inertial-Type Subgradient Extragradient Rule with Adaptive Step Sizes for Variational Inequalities and Fixed Points of Asymptotically Nonexpansive Mappings." Mathematics 10, no. 6 (March 17, 2022): 958. http://dx.doi.org/10.3390/math10060958.

Abstract:
In a real Hilbert space, let the VIP denote a pseudomonotone variational inequality problem with Lipschitz continuity operator, and let the CFPP indicate a common fixed-point problem of finitely many nonexpansive mappings and an asymptotically nonexpansive mapping. On the basis of the Mann iteration method, the viscosity approximation method and the hybrid steepest-descent method, we propose and analyze two strengthened inertial-type subgradient extragradient rules with adaptive step sizes for solving the VIP and CFPP. With the help of suitable restrictions, we show the strong convergence of the suggested rules to a common solution of the VIP and CFPP, which is the unique solution of a hierarchical variational inequality (HVI).
25

Ceng, Lu-Chuan, Xiaolong Qin, Yekini Shehu, and Jen-Chih Yao. "Mildly Inertial Subgradient Extragradient Method for Variational Inequalities Involving an Asymptotically Nonexpansive and Finitely Many Nonexpansive Mappings." Mathematics 7, no. 10 (September 22, 2019): 881. http://dx.doi.org/10.3390/math7100881.

Abstract:
In a real Hilbert space, let the notation VIP indicate a variational inequality problem for a Lipschitzian, pseudomonotone operator, and let CFPP denote a common fixed-point problem of an asymptotically nonexpansive mapping and finitely many nonexpansive mappings. This paper introduces mildly inertial algorithms with linesearch process for finding a common solution of the VIP and the CFPP by using a subgradient approach. These fully absorb hybrid steepest-descent ideas, viscosity iteration ideas, and composite Mann-type iterative ideas. With suitable conditions on real parameters, it is shown that the sequences generated by our algorithms converge to a common solution in norm, which is a unique solution of a hierarchical variational inequality (HVI).
26

Luo, Songting, Shingyu Leung, and Jianliang Qian. "An Adjoint State Method for Numerical Approximation of Continuous Traffic Congestion Equilibria." Communications in Computational Physics 10, no. 5 (November 2011): 1113–31. http://dx.doi.org/10.4208/cicp.020210.311210a.

Abstract:
The equilibrium metric for minimizing a continuous congested traffic model is the solution of a variational problem involving geodesic distances. The continuous equilibrium metric and its associated variational problem are closely related to the classical discrete Wardrop’s equilibrium. We propose an adjoint state method to numerically approximate continuous traffic congestion equilibria through the continuous formulation. The method formally derives an adjoint state equation to compute the gradient descent direction so as to minimize a nonlinear functional involving the equilibrium metric and the resulting geodesic distances. The geodesic distance needed for the state equation is computed by solving a factored eikonal equation, and the adjoint state equation is solved by a fast sweeping method. Numerical examples demonstrate that the proposed adjoint state method produces desired equilibrium metrics and outperforms the subgradient marching method for computing such equilibrium metrics.
27

Xu, Hang, Song Li, and Junhong Lin. "Low rank matrix recovery with adversarial sparse noise*." Inverse Problems 38, no. 3 (January 18, 2022): 035001. http://dx.doi.org/10.1088/1361-6420/ac44dc.

Abstract:
Many problems in data science can be treated as recovering a low-rank matrix from a small number of random linear measurements, possibly corrupted with adversarial noise and dense noise. Recently, a bunch of theories on variants of models have been developed for different noises, but with fewer theories on the adversarial noise. In this paper, we study low-rank matrix recovery problem from linear measurements perturbed by ℓ1-bounded noise and sparse noise that can arbitrarily change an adversarially chosen ω-fraction of the measurement vector. For Gaussian measurements with nearly optimal number of measurements, we show that the nuclear-norm constrained least absolute deviation (LAD) can successfully estimate the ground-truth matrix for any ω < 0.239. Similar robust recovery results are also established for an iterative hard thresholding algorithm applied to the rank-constrained LAD considering geometrically decaying step-sizes, and the unconstrained LAD based on matrix factorization as well as its subgradient descent solver.
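The snippet below is a minimal sketch of the last item mentioned in the abstract, a subgradient descent solver for the factorized least absolute deviation objective min over U, V of Σ_k |⟨A_k, UVᵀ⟩ − y_k|; the Gaussian measurements, fixed rank, and geometrically decaying step sizes are illustrative assumptions.

```python
import numpy as np

# Subgradient descent on the factorized LAD objective with clean Gaussian data.
rng = np.random.default_rng(6)
n, r, m = 8, 2, 200
L = rng.standard_normal((n, r))
M_true = L @ L.T                                   # rank-r ground truth
A = rng.standard_normal((m, n, n))                 # Gaussian measurement matrices
y = np.einsum('kij,ij->k', A, M_true)

U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
step = 0.002
for t in range(3000):
    res = np.einsum('kij,ij->k', A, U @ V.T) - y
    s = np.sign(res)                               # subgradient of the l1 residual
    G = np.einsum('k,kij->ij', s, A)               # sum_k s_k A_k
    U, V = U - step * (G @ V), V - step * (G.T @ U)
    step *= 0.998                                  # geometrically decaying step size
err = np.linalg.norm(U @ V.T - M_true) / np.linalg.norm(M_true)
print(f"relative recovery error: {err:.3f}")
```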
28

Перевозчиков, Александр Геннадьевич, Валерий Юрьевич Решетов, and Александра Ильинична Лесик. "The 'attack-defense' model on networks with the initial residuals of the parties." Herald of Tver State University. Series: Applied Mathematics, no. 2 (July 21, 2021): 68–81. http://dx.doi.org/10.26456/vtpmk618.

Abstract:
The article generalizes the "attack-defense" game with the network structure, in terms of accounting for the initial residuals of the parties' resources and is based on the work by Hohzaki and Tanaka. In contrast to the latter, the defense on each of the possible movement directions between the network’s vertices, given by the oriented edges, can have nonzero initial residuals of the parties' resources, which generally leads to convex minimax problems that can be solved by the subgradient descent method. In particular, the model under study generalizes the "attack-defense" game with initial residuals, proposed by Ogaryshev, to the network case.
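As a generic illustration of the tool named in the abstract, the sketch below applies subgradient descent to a convex minimax value f(x) = max_i g_i(x), using the gradient of an active piece as a subgradient; the quadratic pieces are placeholders, not the game's actual payoff functions or resource constraints.

```python
import numpy as np

# Subgradient descent for f(x) = max_i g_i(x): a subgradient of f at x is the
# gradient of any component g_i attaining the maximum.
rng = np.random.default_rng(7)
n, m = 4, 6
centers = rng.standard_normal((m, n))
g = lambda i, x: 0.5 * np.sum((x - centers[i]) ** 2)        # convex pieces
grad_g = lambda i, x: x - centers[i]

x = np.zeros(n)
best_val = max(g(i, x) for i in range(m))
for k in range(1, 2001):
    vals = [g(i, x) for i in range(m)]
    i_max = int(np.argmax(vals))                 # an active piece of the max
    sg = grad_g(i_max, x)                        # valid subgradient of f at x
    x = x - (0.5 / np.sqrt(k)) * sg
    best_val = min(best_val, max(g(i, x) for i in range(m)))
print(f"best minimax value found: {best_val:.4f}")
```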
29

Chen, Lin, Ji-Ting Jia, Qiong Zhang, Wan-Yu Deng, and Wei Wei. "Online Sequential Projection Vector Machine with Adaptive Data Mean Update." Computational Intelligence and Neuroscience 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/5197932.

Abstract:
We propose a simple online learning algorithm especially for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM) which derives from projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, and this makes it easy for use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.
30

Baillon, J. B., and R. Cominetti. "A Convergence Result for Nonautonomous Subgradient Evolution Equations and Its Application to the Steepest Descent Exponential Penalty Trajectory in Linear Programming." Journal of Functional Analysis 187, no. 2 (December 2001): 263–73. http://dx.doi.org/10.1006/jfan.2001.3828.

31

Перевозчиков, Александр Геннадьевич, Валерий Юрьевич Решетов, and Александра Ильинична Лесик. "The 'attack-defense' game with restrictions on the intake capacity of points." Herald of Tver State University. Series: Applied Mathematics, no. 3 (November 30, 2020): 78–92. http://dx.doi.org/10.26456/vtpmk600.

Abstract:
The work generalizes Germeier’s "attack-defense" game in terms of accounting for the intake capacity of points and is based on his generalized equalization principle, which leads to convex minimax problems that can be solved by subgradient descent in the case of homogeneity of the parties' resources. The classical Germeier "attack-defense" model is a modification of the Gross model. The game model that generalizes Gross’ model and Germeier’s model was studied by Ogaryshev. Molodtsov studied the Gross model with nonantagonistic interests of the parties; Danilchenko, Masevich and Krutova studied the dynamic extensions of the model. In military models the points are usually interpreted as directions and characterize the spatial distribution of defense resources by width. However, there are also actual restrictions on the intake capacity of points (directions). This leads, in the case of homogeneous resources, to minimax problems for determining the best guaranteed defense result (BGDR). An accurate upper estimate for the best guaranteed defense result was obtained, which shows the potential defense capabilities taking into account the intake capacity of points (directions).
32

Xia, Zun-quan. "Finding subgradients or descent directions of convex functions by external polyhedral approximation of subdifferentials." Optimization Methods and Software 1, no. 3 (January 1992): 253–64. http://dx.doi.org/10.1080/10556789208805523.

33

Griewank, Andreas, and Andrea Walther. "Polyhedral DC Decomposition and DCA Optimization of Piecewise Linear Functions." Algorithms 13, no. 7 (July 11, 2020): 166. http://dx.doi.org/10.3390/a13070166.

Abstract:
For piecewise linear functions f : ℝⁿ → ℝ we show how their abs-linear representation can be extended to yield simultaneously their decomposition into a convex part f̌ and a concave part f̂, including a pair of generalized gradients ǧ, ĝ ∈ ℝⁿ. The latter satisfy strict chain rules and can be computed in the reverse mode of algorithmic differentiation, at a small multiple of the cost of evaluating f itself. It is shown how f̌ and f̂ can be expressed as a single maximum and a single minimum of affine functions, respectively. The two subgradients ǧ and −ĝ are then used to drive DCA algorithms, where the (convex) inner problem can be solved in finitely many steps, e.g., by a Simplex variant or the true steepest descent method. Using a reflection technique to update the gradients of the concave part, one can ensure finite convergence to a local minimizer of f, provided the Linear Independence Kink Qualification holds. For piecewise smooth objectives the approach can be used as an inner method for successive piecewise linearization.
34

Dick, Josef, Guoyin Li, and Dinh Duy Tran. "A new regularization for sparse optimization." ANZIAM Journal 62 (February 7, 2022): C176—C191. http://dx.doi.org/10.21914/anziamj.v62.16076.

Abstract:
Several numerical studies have shown that non-convex sparsity-induced regularization can outperform the convex ℓ1-penalty. In this article, we introduce a new non-convex and non-smooth regularization. This new regularization is a continuous and separable function which provides a tighter approximation to the cardinality function than any ℓq-penalty (0 < q < 1). We then apply the Proximal Gradient Method to solve a regularized optimization problem with the new regularization. The convergence analysis shows that the algorithm converges to a critical point and we also provide a pseudo-code for fast implementation. In addition, we conduct a simple numerical experiment with a regularized least square problem to illustrate the performance of the new regularization.
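For orientation, the sketch below shows a proximal gradient loop for a regularized least squares problem, using the familiar ℓ1 soft-thresholding prox as a stand-in; the paper's new non-convex regularizer would supply its own proximal operator in place of `prox_l1`, and the data are illustrative.

```python
import numpy as np

# Proximal gradient (ISTA-style) iteration for min_x 0.5*||Ax - b||^2 + lam*R(x),
# shown here with R = l1 norm, whose prox is soft thresholding.
rng = np.random.default_rng(8)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)
b = A @ x_true
lam = 0.1

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)             # gradient of the data-fidelity term
    x = prox_l1(x - grad / L, lam / L)   # forward (gradient) then backward (prox) step
err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error of the sparse estimate: {err:.3f}")
```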
35

Gu, Bin, Yingying Shan, Xin Quan, and Guansheng Zheng. "Accelerating Sequential Minimal Optimization via Stochastic Subgradient Descent." IEEE Transactions on Cybernetics, 2019, 1–9. http://dx.doi.org/10.1109/tcyb.2019.2893289.

36

Schechtman, S. "Stochastic proximal subgradient descent oscillates in the vicinity of its accumulation set." Optimization Letters, May 6, 2022. http://dx.doi.org/10.1007/s11590-022-01884-8.

37

Krygin, V., and R. Khomenko. "Self-Driven Algorithm for Solving Supermodular (max,+) Labeling Problems Based on Subgradient Descent*." Cybernetics and Systems Analysis, October 21, 2022. http://dx.doi.org/10.1007/s10559-022-00485-8.

38

Gez, Tamir L. S., and Kobi Cohen. "Subgradient Descent Learning over Fading Multiple Access Channels with Over-the-Air Computation." IEEE Access, 2023, 1. http://dx.doi.org/10.1109/access.2023.3291023.

39

Ceng, Lu-Chuan, and Qing Yuan. "Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems." Journal of Inequalities and Applications 2019, no. 1 (October 26, 2019). http://dx.doi.org/10.1186/s13660-019-2229-x.

Abstract:
In this paper, we introduce and investigate composite inertial gradient-based algorithms with a line-search process for solving a variational inequality problem (VIP) with a pseudomonotone and Lipschitz continuous mapping and a common fixed-point problem (CFPP) of finitely many nonexpansive mappings and a strictly pseudocontractive mapping in the framework of infinite-dimensional Hilbert spaces. The proposed algorithms are based on an inertial subgradient–extragradient method with the line-search process, hybrid steepest-descent methods, viscosity approximation methods and Mann iteration methods. Under weak conditions, we prove strong convergence of the proposed algorithms to the element in the common solution set of the VIP and CFPP, which solves a certain hierarchical VIP defined on this common solution set.
40

Boţ, Radu Ioan, Minh N. Dao, and Guoyin Li. "Extrapolated Proximal Subgradient Algorithms for Nonconvex and Nonsmooth Fractional Programs." Mathematics of Operations Research, December 7, 2021. http://dx.doi.org/10.1287/moor.2021.1214.

Abstract:
In this paper, we consider a broad class of nonsmooth and nonconvex fractional programs, which encompass many important modern optimization problems arising from diverse areas such as the recently proposed scale-invariant sparse signal reconstruction problem in signal processing. We propose a proximal subgradient algorithm with extrapolations for solving this optimization model and show that the iterated sequence generated by the algorithm is bounded and that any one of its limit points is a stationary point of the model problem. The choice of our extrapolation parameter is flexible and includes the popular extrapolation parameter adopted in the restarted fast iterative shrinking-threshold algorithm (FISTA). By providing a unified analysis framework of descent methods, we establish the convergence of the full sequence under the assumption that a suitable merit function satisfies the Kurdyka–Łojasiewicz property. Our algorithm exhibits linear convergence for the scale-invariant sparse signal reconstruction problem and the Rayleigh quotient problem over spherical constraint. When the denominator is the maximum of finitely many continuously differentiable weakly convex functions, we also propose another extrapolated proximal subgradient algorithm with guaranteed convergence to a stronger notion of stationary points of the model problem. Finally, we illustrate the proposed methods by both analytical and simulated numerical examples.
41

Chen, Jia, and Ioannis D. Schizas. "Multimodal correlations-based data clustering." Foundations of Data Science, 2022, 0. http://dx.doi.org/10.3934/fods.2022011.

Abstract:
This work proposes a novel technique for clustering multimodal data according to their information content. Statistical correlations present in data that contain similar information are exploited to perform the clustering task. Specifically, multiset canonical correlation analysis is equipped with norm-one regularization mechanisms to identify clusters within different types of data that share the same information content. A pertinent minimization formulation is put forth, while block coordinate descent is employed to derive a batch clustering algorithm which achieves better clustering performance than existing alternatives. Relying on subgradient descent, an online clustering approach is derived which substantially lowers computational complexity compared to the batch approach, while not compromising significantly the clustering performance. It is established that for an increasing number of data the novel regularized multiset framework is able to correctly cluster the multimodal data entries. Further, it is proved that the online clustering scheme converges with probability one to a stationary point of the ensemble regularized multiset correlations cost having the potential to recover the correct clusters. Extensive numerical tests demonstrate that the novel clustering scheme outperforms existing alternatives, while the online scheme achieves substantial computational savings.
42

Latz, Jonas. "Gradient flows and randomised thresholding: sparse inversion and classification." Inverse Problems, October 19, 2022. http://dx.doi.org/10.1088/1361-6420/ac9b84.

Abstract:
Sparse inversion and classification problems are ubiquitous in modern data science and imaging. They are often formulated as non-smooth minimisation problems. In sparse inversion, we minimise, e.g., the sum of a data fidelity term and an L1/LASSO regulariser. In classification, we consider, e.g., the sum of a data fidelity term and a non-smooth Ginzburg–Landau energy. Standard (sub)gradient descent methods have been shown to be inefficient when approaching such problems. Splitting techniques are much more useful: here, the target function is partitioned into a sum of two subtarget functions, each of which can be efficiently optimised. Splitting proceeds by performing optimisation steps alternately with respect to each of the two subtarget functions. In this work, we study splitting from a stochastic continuous-time perspective. Indeed, we define a differential inclusion that follows one of the two subtarget function's negative subdifferential at each point in time. The choice of the subtarget function is controlled by a binary continuous-time Markov process. The resulting dynamical system is a stochastic approximation of the underlying subgradient flow. We investigate this stochastic approximation for an L1-regularised sparse inversion flow and for a discrete Allen–Cahn equation minimising a Ginzburg–Landau energy. In both cases, we study the longtime behaviour of the stochastic dynamical system and its ability to approximate the underlying subgradient flow at any accuracy. We illustrate our theoretical findings in a simple sparse estimation problem and also in low- and high-dimensional classification problems.
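A discrete-time caricature of the randomised splitting idea is sketched below: a two-state Markov chain decides at each step whether to descend on the data-fidelity subtarget or on the L1 subtarget; the switching rate, step size, and problem data are illustrative assumptions, not the paper's continuous-time formulation.

```python
import numpy as np

# Randomised splitting: alternate (sub)gradient steps on the two subtargets
# 0.5*||Ax - b||^2 and lam*||x||_1, with the active subtarget chosen by a
# two-state Markov switching process.
rng = np.random.default_rng(9)
m, n = 30, 60
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:4] = [3.0, -2.0, 1.5, 2.5]
b = A @ x_true
lam, switch_prob, step = 0.2, 0.3, 1e-3

x = np.zeros(n)
state = 0                                   # 0: fidelity step, 1: regulariser step
for t in range(20000):
    if rng.random() < switch_prob:          # binary Markov switching
        state = 1 - state
    if state == 0:
        x -= step * (A.T @ (A @ x - b))     # gradient of the smooth subtarget
    else:
        x -= step * lam * np.sign(x)        # subgradient of the L1 subtarget
support = np.nonzero(np.abs(x) > 0.5)[0]
print("recovered large coefficients at indices:", support)
```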
43

Jianhong, Wang, and Ricardo A. Ramirez-Mendoza. "Synthesis cascade estimation for aircraft system identification." Aircraft Engineering and Aerospace Technology, June 13, 2022. http://dx.doi.org/10.1108/aeat-03-2022-0093.

Abstract:
Purpose: The purpose of this paper extends the authors’ previous contributions on aircraft system identification, such as open loop identification or closed loop identification, to cascade system identification. Because the cascade system is one special network system, existing in lots of practical engineers, more unknown systems are needed to identify simultaneously within the statistical environment with the probabilistic noises. Consider this problem of cascade system identification, prediction error method is proposed to identify three unknown systems, which are parameterized by three unknown parameter vectors. Then the cascade system identification is transferred as one parameter identification problem, being solved by the online subgradient descent algorithm. Furthermore, the nonparametric estimation is proposed to consider the general case without any parameterized process. To make up the identification mission, model validation process is given to show the asymptotic interval of the identified parameter. Finally, simulation example confirms the proposed theoretical results.
Design/methodology/approach: Firstly, aircraft system identification is reviewed through the understanding about system identification and advances in control theory, then cascade system identification is introduced to be one special network system. Secondly, for the problem of cascade system identification, prediction error method and online subgradient descent algorithm are combined together to identify the cascade system with the parameterized systems. Thirdly from the point of more general completeness, another way is proposed to identify the nonparametric estimation, then model validation process is added to complete the whole identification mission.
Findings: This cascade system corresponds to one network system, existing in lots of practice, such as aircraft, ship and robot, so it is necessary to identify this cascade system, paving a way for latter network system identification. Parametric and nonparametric estimations are all studied within the statistical environment. Then research on bounded noise is an ongoing work.
Originality/value: To the best of the authors’ knowledge, research on aircraft system identification only concern on open loop and closed loop system identification, no any identification results about network system identification. This paper considers cascade system identification, being one special case on network system identification, so this paper paves a basic way for latter more advanced system identification and control theory.
44

Dutta, Haimonti. "A Consensus Algorithm for Linear Support Vector Machines." Management Science, August 30, 2021. http://dx.doi.org/10.1287/mnsc.2021.4042.

Abstract:
In the era of big data, an important weapon in a machine learning researcher’s arsenal is a scalable support vector machine (SVM) algorithm. Traditional algorithms for learning SVMs scale superlinearly with the training set size, which becomes infeasible quickly for large data sets. In recent years, scalable algorithms have been designed which study the primal or dual formulations of the problem. These often suggest a way to decompose the problem and facilitate development of distributed algorithms. In this paper, we present a distributed algorithm for learning linear SVMs in the primal form for binary classification called the gossip-based subgradient (GADGET) SVM. The algorithm is designed such that it can be executed locally on sites of a distributed system. Each site processes its local homogeneously partitioned data and learns a primal SVM model; it then gossips with random neighbors about the classifier learnt and uses this information to update the model. To learn the model, the SVM optimization problem is solved using several techniques, including a gradient estimation procedure, stochastic gradient descent method, and several variants including minibatches of varying sizes. Our theoretical results indicate that the rate at which the GADGET SVM algorithm converges to the global optimum at each site is dominated by an [Formula: see text] term, where λ measures the degree of convexity of the function at the site. Empirical results suggest that this anytime algorithm—where the quality of results improve gradually as computation time increases—has performance comparable to its centralized, pseudodistributed, and other state-of-the-art gossip-based SVM solvers. It is at least 1.5 times (often several orders of magnitude) faster than other gossip-based SVM solvers known in literature and has a message complexity of O(d) per iteration, where d represents the number of features of the data set. Finally, a large-scale case study is presented wherein the consensus-based SVM algorithm is used to predict failures of advanced mechanical components in a chocolate manufacturing process using more than one million data points. This paper was accepted by J. George Shanthikumar, big data analytics.
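The toy sketch below mimics the gossip pattern described above: each site takes a local Pegasos-style subgradient step on its own data partition and then averages its model with one randomly chosen partner; the topology, pairing rule, and hyperparameters are illustrative, not the paper's exact protocol or convergence setup.

```python
import numpy as np

# Gossip-style distributed linear SVM: local stochastic subgradient steps
# followed by pairwise model averaging between randomly paired sites.
rng = np.random.default_rng(10)
sites, n_per_site, d, lam = 6, 100, 8, 0.01
w_true = rng.standard_normal(d)
X = rng.standard_normal((sites, n_per_site, d))
Y = np.sign(X @ w_true)

W = np.zeros((sites, d))                      # one linear SVM model per site
for t in range(1, 5001):
    eta = 1.0 / (lam * t)
    for s in range(sites):
        i = rng.integers(n_per_site)          # local stochastic sample
        g = lam * W[s]
        if Y[s, i] * (X[s, i] @ W[s]) < 1:    # hinge loss active
            g -= Y[s, i] * X[s, i]
        W[s] -= eta * g
    a, b = rng.choice(sites, size=2, replace=False)
    W[a] = W[b] = 0.5 * (W[a] + W[b])         # pairwise gossip averaging
acc = np.mean(np.sign(np.einsum('snd,sd->sn', X, W)) == Y)
print(f"average local training accuracy: {acc:.3f}")
```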
45

Wen, Jiajun, Wai Keung Wong, Xiao-Li Hu, Honglin Chu, and Zhihui Lai. "Restricted subgradient descend method for sparse signal learning." International Journal of Machine Learning and Cybernetics, April 26, 2022. http://dx.doi.org/10.1007/s13042-022-01551-5.
