
Journal articles on the topic "Inertial Bregman proximal gradient"

Consult the top 50 journal articles for your research on the topic "Inertial Bregman proximal gradient".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, where these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Mukkamala, Mahesh Chandra, Peter Ochs, Thomas Pock, and Shoham Sabach. "Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Nonconvex Optimization." SIAM Journal on Mathematics of Data Science 2, no. 3 (January 2020): 658–82. http://dx.doi.org/10.1137/19m1298007.

2

Kabbadj, S. "Inexact Version of Bregman Proximal Gradient Algorithm." Abstract and Applied Analysis 2020 (April 1, 2020): 1–11. http://dx.doi.org/10.1155/2020/1963980.

Abstract:
The Bregman Proximal Gradient (BPG) algorithm minimizes the sum of two convex functions, one of which is nonsmooth. Supercoercivity of the objective function is necessary for the convergence of this algorithm, precluding its use in many applications. In this paper, we give an inexact version of the BPG algorithm that circumvents the supercoercivity condition by replacing it with a simple condition on the parameters of the problem. Our study covers the existing results while providing new ones.
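
The closed-form Bregman proximal step behind this entry is easy to state in code for suitable kernels. Below is a minimal Python sketch, under our own illustrative assumptions (nonnegative least squares, Shannon-entropy kernel, fixed step size), of the exact BPG/NoLips iteration that the paper's inexact scheme relaxes; it is an illustration, not the paper's method.

```python
# Minimal sketch of the exact Bregman proximal gradient (BPG / NoLips) iteration
# for min_{x >= 0} f(x) with f(x) = 0.5*||Ax - b||^2, using the Shannon entropy
# kernel h(x) = sum_i x_i*log(x_i). For this kernel the Bregman proximal step
#   x_{k+1} = argmin_u { <grad f(x_k), u> + (1/step) * D_h(u, x_k) }
# has the closed multiplicative form below. Illustration only; the paper studies
# an inexact version of this scheme.
import numpy as np

def bpg_entropy(A, b, x0, step, iters=500):
    x = x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - b)      # gradient of the smooth part f
        x = x * np.exp(-step * grad)  # solves grad h(u) = grad h(x) - step*grad
    return x

# toy usage with synthetic data
rng = np.random.default_rng(0)
A = rng.random((30, 10))
b = A @ rng.random(10)
x_hat = bpg_entropy(A, b, x0=np.ones(10), step=1e-3)
```
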
3

Zhou, Yi, Yingbin Liang, and Lixin Shen. "A simple convergence analysis of Bregman proximal gradient algorithm." Computational Optimization and Applications 73, no. 3 (April 4, 2019): 903–12. http://dx.doi.org/10.1007/s10589-019-00092-y.

4

Hanzely, Filip, Peter Richtárik, and Lin Xiao. "Accelerated Bregman proximal gradient methods for relatively smooth convex optimization." Computational Optimization and Applications 79, no. 2 (April 7, 2021): 405–40. http://dx.doi.org/10.1007/s10589-021-00273-8.

5

Mahadevan, Sridhar, Stephen Giguere, and Nicholas Jacek. "Basis Adaptation for Sparse Nonlinear Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 654–60. http://dx.doi.org/10.1609/aaai.v27i1.8665.

Abstract:
This paper presents a new approach to representation discovery in reinforcement learning (RL) using basis adaptation. We introduce a general framework for basis adaptation as nonlinear separable least-squares value function approximation based on finding Fréchet gradients of an error function using variable projection functionals. We then present a scalable proximal gradient-based approach for basis adaptation using the recently proposed mirror-descent framework for RL. Unlike traditional temporal-difference (TD) methods for RL, mirror descent based RL methods undertake proximal gradient updates of weights in a dual space, which is linked together with the primal space using a Legendre transform involving the gradient of a strongly convex function. Mirror descent RL can be viewed as a proximal TD algorithm using Bregman divergence as the distance generating function. We present a new class of regularized proximal-gradient based TD methods, which combine feature selection through sparse L1 regularization and basis adaptation. Experimental results are provided to illustrate and validate the approach.
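
To make the dual-space update concrete, here is a hedged sketch of one mirror-descent TD(0) step with an l1 proximal (soft-thresholding) operation applied in the dual space. The function names, the TD(0) setting, and the Euclidean defaults for the mirror maps grad_psi/grad_psi_star are our illustrative assumptions, not the paper's exact algorithm.

```python
# One mirror-descent TD(0) update in the spirit of the abstract: map the weights
# to the dual space via grad_psi, take a TD gradient step and an l1 proximal
# (soft-threshold) step there, then map back to the primal space via the
# Legendre conjugate map grad_psi_star. Illustrative sketch only.
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def mirror_td0_step(w, phi_t, phi_next, reward, gamma, alpha, lam,
                    grad_psi=lambda u: u, grad_psi_star=lambda u: u):
    td_error = reward + gamma * phi_next @ w - phi_t @ w
    dual = grad_psi(w) + alpha * td_error * phi_t   # gradient step in the dual space
    dual = soft_threshold(dual, alpha * lam)        # l1 proximal (sparsification) step
    return grad_psi_star(dual)                      # Legendre transform back to the primal space
```
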
6

Yang, Lei, and Kim-Chuan Toh. "Bregman Proximal Point Algorithm Revisited: A New Inexact Version and Its Inertial Variant." SIAM Journal on Optimization 32, no. 3 (July 13, 2022): 1523–54. http://dx.doi.org/10.1137/20m1360748.

7

Li, Jing, Xiao Wei, Fengpin Wang, and Jinjia Wang. "IPGM: Inertial Proximal Gradient Method for Convolutional Dictionary Learning." Electronics 10, no. 23 (December 3, 2021): 3021. http://dx.doi.org/10.3390/electronics10233021.

Abstract:
Inspired by the recent success of the proximal gradient method (PGM) and recent efforts to develop an inertial algorithm, we propose an inertial PGM (IPGM) for convolutional dictionary learning (CDL) by jointly optimizing both an ℓ2-norm data fidelity term and a sparsity term that enforces an ℓ1 penalty. Contrary to other CDL methods, in the proposed approach, the dictionary and needles are updated with an inertial force by the PGM. We obtain a novel derivative formula for the needles and dictionary with respect to the data fidelity term. At the same time, a gradient descent step is designed to add an inertial term. The proximal operation uses the thresholding operation for needles and projects the dictionary to a unit-norm sphere. We prove the convergence property of the proposed IPGM algorithm in a backtracking case. Simulation results show that the proposed IPGM achieves better performance than the PGM and slice-based methods that possess the same structure and are optimized using the alternating-direction method of multipliers (ADMM).
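
The generic structure described above (inertial extrapolation, a gradient step on the data-fidelity term, then a proximal map) can be sketched in a few lines. The l1-regularized least-squares objective and the fixed momentum parameter below are our illustrative assumptions; the paper's convolutional dictionary and needle updates are not reproduced.

```python
# Minimal inertial proximal gradient iteration for
# min_x 0.5*||Ax - b||^2 + lam*||x||_1: extrapolate, take a gradient step on the
# smooth part, then apply the l1 proximal map (soft-thresholding).
import numpy as np

def inertial_ista(A, b, lam, step, beta=0.5, iters=300):
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(iters):
        y = x + beta * (x - x_prev)                 # inertial extrapolation
        grad = A.T @ (A @ y - b)                    # forward (gradient) step direction
        z = y - step * grad
        x_prev, x = x, np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```
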
8

Xiao, Xiantao. "A Unified Convergence Analysis of Stochastic Bregman Proximal Gradient and Extragradient Methods." Journal of Optimization Theory and Applications 188, no. 3 (January 8, 2021): 605–27. http://dx.doi.org/10.1007/s10957-020-01799-3.

9

Wang, Qingsong, Zehui Liu, Chunfeng Cui, and Deren Han. "A Bregman Proximal Stochastic Gradient Method with Extrapolation for Nonconvex Nonsmooth Problems." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15580–88. http://dx.doi.org/10.1609/aaai.v38i14.29485.

Abstract:
In this paper, we explore a specific optimization problem that involves the combination of a differentiable nonconvex function and a nondifferentiable function. The differentiable component lacks a global Lipschitz continuous gradient, posing challenges for optimization. To address this issue and accelerate the convergence, we propose a Bregman proximal stochastic gradient method with extrapolation (BPSGE), which only requires smooth adaptivity of the differentiable part. Under the variance reduction framework, we not only analyze the subsequential and global convergence of the proposed algorithm under certain conditions, but also analyze the sublinear convergence rate of the subsequence and the complexity of the algorithm, revealing that the BPSGE algorithm requires at most O(ε^(-2)) iterations in expectation to attain an ε-stationary point. To validate the effectiveness of our proposed algorithm, we conduct numerical experiments on three real-world applications: graph regularized nonnegative matrix factorization (NMF), matrix factorization with weakly-convex regularization, and NMF with nonconvex sparsity constraints. These experiments demonstrate that BPSGE is faster than the baselines without extrapolation. The code is available at: https://github.com/nothing2wang/BPSGE-Algorithm.
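
A hedged sketch of the generic pattern named in the abstract, namely extrapolation followed by a stochastic Bregman proximal step. The nonnegative least-squares objective, the Shannon-entropy kernel, and the choice to extrapolate in the mirror (dual) domain so that the iterates stay in the positive orthant are our assumptions rather than the paper's setup.

```python
# Stochastic Bregman proximal gradient step with extrapolation for
# min_{x >= 0} (1/n) * sum_i 0.5*(a_i^T x - b_i)^2, using the Shannon entropy
# kernel so the Bregman step is multiplicative. Mini-batch gradients replace the
# full gradient; beta controls the extrapolation (inertial) term.
import numpy as np

def bpsg_extrapolation(A, b, x0, step, beta=0.3, batch=8, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x_prev = x = x0.copy()
    for _ in range(iters):
        y = x * (x / x_prev) ** beta                       # extrapolation in the mirror domain: log y = log x + beta*(log x - log x_prev)
        idx = rng.choice(n, size=batch, replace=False)
        grad = A[idx].T @ (A[idx] @ y - b[idx]) / batch    # mini-batch stochastic gradient at the extrapolated point
        x_prev, x = x, y * np.exp(-step * grad)            # Bregman (entropy) proximal step
    return x
```
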
10

He, Lulu, Jimin Ye, and Jianwei E. "Nonconvex optimization with inertial proximal stochastic variance reduction gradient." Information Sciences 648 (November 2023): 119546. http://dx.doi.org/10.1016/j.ins.2023.119546.

11

Zhu, Daoli, Sien Deng, Minghua Li, and Lei Zhao. "Level-Set Subdifferential Error Bounds and Linear Convergence of Bregman Proximal Gradient Method." Journal of Optimization Theory and Applications 189, no. 3 (May 31, 2021): 889–918. http://dx.doi.org/10.1007/s10957-021-01865-4.

12

Hua, Xiaoqin, and Nobuo Yamashita. "Block coordinate proximal gradient methods with variable Bregman functions for nonsmooth separable optimization." Mathematical Programming 160, no. 1-2 (January 27, 2016): 1–32. http://dx.doi.org/10.1007/s10107-015-0969-z.

13

Zhang, Xiaoya, Roberto Barrio, M. Angeles Martinez, Hao Jiang, and Lizhi Cheng. "Bregman Proximal Gradient Algorithm With Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems." IEEE Access 7 (2019): 126515–29. http://dx.doi.org/10.1109/access.2019.2937005.

14

Kesornprom, Suparat, and Prasit Cholamjiak. "A modified inertial proximal gradient method for minimization problems and applications." AIMS Mathematics 7, no. 5 (2022): 8147–61. http://dx.doi.org/10.3934/math.2022453.

Abstract:
In this paper, the aim is to design a new proximal gradient algorithm by using the inertial technique with adaptive stepsize for solving convex minimization problems and prove convergence of the iterates under some suitable assumptions. Some numerical implementations of image deblurring are performed to show the efficiency of the proposed methods.
15

Boţ, Radu Ioan, Ernö Robert Csetnek, and Nimit Nimana. "An Inertial Proximal-Gradient Penalization Scheme for Constrained Convex Optimization Problems." Vietnam Journal of Mathematics 46, no. 1 (September 1, 2017): 53–71. http://dx.doi.org/10.1007/s10013-017-0256-9.

16

Kankam, Kunrada, and Prasit Cholamjiak. "Double Inertial Proximal Gradient Algorithms for Convex Optimization Problems and Applications." Acta Mathematica Scientia 43, no. 3 (April 29, 2023): 1462–76. http://dx.doi.org/10.1007/s10473-023-0326-x.

17

Ahookhosh, Masoud, Le Thi Khanh Hien, Nicolas Gillis, and Panagiotis Patrinos. "A Block Inertial Bregman Proximal Algorithm for Nonsmooth Nonconvex Problems with Application to Symmetric Nonnegative Matrix Tri-Factorization." Journal of Optimization Theory and Applications 190, no. 1 (June 15, 2021): 234–58. http://dx.doi.org/10.1007/s10957-021-01880-5.

18

Bao, Chenglong, Chang Chen, and Kai Jiang. "An Adaptive Block Bregman Proximal Gradient Method for Computing Stationary States of Multicomponent Phase-Field Crystal Model." CSIAM Transactions on Applied Mathematics 3, no. 1 (June 2022): 133–71. http://dx.doi.org/10.4208/csiam-am.so-2021-0002.

19

Wu, Zhongming, and Min Li. "General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems." Computational Optimization and Applications 73, no. 1 (February 18, 2019): 129–58. http://dx.doi.org/10.1007/s10589-019-00073-1.

20

Kesornprom, Suparat, Papatsara Inkrong, Uamporn Witthayarat, and Prasit Cholamjiak. "A recent proximal gradient algorithm for convex minimization problem using double inertial extrapolations." AIMS Mathematics 9, no. 7 (2024): 18841–59. http://dx.doi.org/10.3934/math.2024917.

Abstract:
In this study, we suggest a new class of forward-backward (FB) algorithms designed to solve convex minimization problems. Our method incorporates a linesearch technique, eliminating the need to choose Lipschitz assumptions explicitly. Additionally, we apply double inertial extrapolations to enhance the algorithm's convergence rate. We establish a weak convergence theorem under some mild conditions. Furthermore, we perform numerical tests, and apply the algorithm to image restoration and data classification as a practical application. The experimental results show our approach's superior performance and effectiveness, surpassing some existing methods in the literature.
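
A hedged sketch of the two ingredients the abstract names: a double inertial extrapolation built from the two previous iterates, and a backtracking linesearch that avoids any explicit Lipschitz constant. The l1-regularized least-squares objective, the sufficient-decrease test, and the parameter values are illustrative assumptions.

```python
# Forward-backward step with double inertial extrapolation and a simple
# backtracking linesearch, illustrated on min_x 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fb_double_inertial(A, b, lam, iters=200, theta1=0.3, theta2=0.1, t0=1.0, shrink=0.5):
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    x2 = x1 = x = np.zeros(A.shape[1])                   # current and two previous iterates
    for _ in range(iters):
        y = x + theta1 * (x - x1) + theta2 * (x1 - x2)   # double inertial extrapolation
        grad = A.T @ (A @ y - b)
        t = t0
        while True:                                      # backtracking: no Lipschitz constant needed
            z = soft(y - t * grad, t * lam)
            if f(z) <= f(y) + grad @ (z - y) + np.sum((z - y) ** 2) / (2 * t):
                break
            t *= shrink
        x2, x1, x = x1, x, z
    return x
```
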
21

Kesornprom, Suparat, and Prasit Cholamjiak. "A double proximal gradient method with new linesearch for solving convex minimization problem with application to data classification." Results in Nonlinear Analysis 5, no. 4 (December 30, 2022): 412–22. http://dx.doi.org/10.53006/rna.1143531.

Abstract:
In this paper, we propose a new proximal gradient method for a convex minimization problem in real Hilbert spaces. We suggest a new linesearch that does not require the Lipschitz constant condition, and we improve the conditions on the inertial term, which speeds up the convergence. Moreover, we prove the weak convergence of the proposed method under some suitable conditions. The numerical implementations in data classification are reported to show its efficiency.
22

Adly, Samir, and Hedy Attouch. "Finite Convergence of Proximal-Gradient Inertial Algorithms Combining Dry Friction with Hessian-Driven Damping." SIAM Journal on Optimization 30, no. 3 (January 2020): 2134–62. http://dx.doi.org/10.1137/19m1307779.

23

Pakkaranang, Nuttapol, Poom Kumam, Vasile Berinde, and Yusuf I. Suleiman. "Superiorization methodology and perturbation resilience of inertial proximal gradient algorithm with application to signal recovery." Journal of Supercomputing 76, no. 12 (February 27, 2020): 9456–77. http://dx.doi.org/10.1007/s11227-020-03215-z.

24

Jolaoso, L. O., H. A. Abass, and O. T. Mewomo. "A viscosity-proximal gradient method with inertial extrapolation for solving certain minimization problems in Hilbert space." Archivum Mathematicum, no. 3 (2019): 167–94. http://dx.doi.org/10.5817/am2019-3-167.

25

Bussaban, Limpapat, Attapol Kaewkhao, and Suthep Suantai. "Inertial s-iteration forward-backward algorithm for a family of nonexpansive operators with applications to image restoration problems." Filomat 35, no. 3 (2021): 771–82. http://dx.doi.org/10.2298/fil2103771b.

Abstract:
Image restoration is an important branch of image processing which has been studied extensively; many methods have been proposed to solve this problem, with computational speed and accuracy of the algorithms being the main challenges. In this paper, we present two methods, called "Inertial S-iteration forward-backward algorithm (ISFBA)" and "A fast iterative shrinkage-thresholding algorithm S-iteration (FISTA-S)", for finding an approximate solution of the least absolute shrinkage and selection operator problem by using a special technique in fixed point theory, and we prove weak convergence of the proposed methods under some suitable conditions. Moreover, we apply our main results to solve image restoration problems. It is shown by some numerical examples that our algorithms have a good behavior compared with the forward-backward algorithm (FBA), a new accelerated proximal gradient algorithm (nAGA) and a fast iterative shrinkage-thresholding algorithm (FISTA).
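
For reference, this is the standard FISTA iteration that the abstract compares against, in a minimal Python form on the LASSO objective; the stepsize 1/L with L equal to the largest eigenvalue of A^T A is the usual textbook choice, not a detail taken from this paper.

```python
# Standard FISTA for the LASSO problem min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# shown only as a point of reference for the accelerated schemes discussed above.
import numpy as np

def fista(A, b, lam, iters=300):
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # proximal (soft-threshold) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)                 # Nesterov momentum
        x, t = x_new, t_new
    return x
```
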
26

Wang, Xiaofan, Zhiyuan Deng, Changle Wang, and Jinjia Wang. "Inertial Algorithm with Dry Fraction and Convolutional Sparse Coding for 3D Localization with Light Field Microscopy." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 20830–37. http://dx.doi.org/10.1609/aaai.v38i18.30072.

Abstract:
Light field microscopy is a high-speed 3D imaging technique that records the light field from multiple angles using a microlens array (MLA), thus allowing us to obtain information about the light source from a single image only. For the fundamental problem of neuron localization, we improve the method of combining a depth-dependent dictionary with sparse coding in this paper. In order to obtain higher localization accuracy and good noise immunity, we propose an inertial proximal gradient acceleration algorithm with dry friction, Fast-IPGDF. By preventing falling into a local minimum, our algorithm achieves better convergence and converges quite fast, which improves the speed and accuracy of obtaining the localization of the light source based on the matching depth of epipolar plane images (EPI). We demonstrate the effectiveness of the algorithm for localizing non-scattered fluorescent beads in both noisy and non-noisy environments. The experimental results show that our method can achieve simultaneous localization of multiple point sources and effective localization in noisy environments. Compared to existing studies, our method shows significant improvements in both localization accuracy and speed.
27

Loría-Calderón, Tyrone M., Carlos D. Gómez-Carmona, Keven G. Santamaría-Guzmán, Mynor Rodríguez-Hernández, and José Pino-Ortega. "Quantifying the External Joint Workload and Safety of Latin Dance in Older Adults: Potential Benefits for Musculoskeletal Health." Applied Sciences 14, no. 7 (March 22, 2024): 2689. http://dx.doi.org/10.3390/app14072689.

Abstract:
As global aging rises, identifying strategies to mitigate age-related physical decline has become an urgent priority. Dance represents a promising exercise modality for older adults, yet few studies have quantified the external loads older dancers experience. This study aimed to characterize the impacts accumulated across lower limb and spinal locations in older adults during Latin dance. Thirty older Latin dancers (age = 66.56 ± 6.38 years; female = 93.3%) wore inertial sensors on the scapulae, lumbar spine, knees, and ankles during a 1 h class. A distal-to-proximal gradient emerged in the total impacts (F = 429.29; p < 0.01; ωp2 = 0.43) and per intensities (F = 103.94-to-665.55; p < 0.01; ωp2 = 0.07-to-0.54), with the highest impacts sustained in the ankles (≈9000 total impacts) from 2 g to >10 g (p < 0.01; d = 1.03-to-4.95; ankles > knees > lower back > scapulae) and knees (≈12,000 total impacts) when <2 g (p < 0.01, d = 2.73-to-3.25; knees > ankles > lower back > scapulae). The majority of the impacts remained below 6 g across all anatomical locations (>94%). The impacts also increased in lower limb locations with faster tempos (r = 0.10-to-0.52; p < 0.01), while subtly accumulating over successive songs rather than indicating fatigue (r = 0.11-to-0.35; p < 0.01). The mild ankle and knee loads could strengthen the dancers’ lower extremity bones and muscles in a population vulnerable to sarcopenia, osteoporosis, and falls. Quantifying the workload via accelerometry enables creating personalized dance programs to empower healthy aging. With global aging rising, this work addresses a timely public health need regarding sustainable lifelong exercise for older people. Ranging from low to moderate, the measured impact magnitudes suggest that dance lessons may provide enough osteogenic stimulus without overloading structures.
28

Wu, Zhongming, Chongshou Li, Min Li, and Andrew Lim. "Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems." Journal of Global Optimization, August 19, 2020. http://dx.doi.org/10.1007/s10898-020-00943-7.

29

Gao, Xue, Xingju Cai, Xiangfeng Wang, and Deren Han. "An alternating structure-adapted Bregman proximal gradient descent algorithm for constrained nonconvex nonsmooth optimization problems and its inertial variant." Journal of Global Optimization, June 24, 2023. http://dx.doi.org/10.1007/s10898-023-01300-0.

30

Zhang, Hui, Yu-Hong Dai, Lei Guo, and Wei Peng. "Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions." Mathematics of Operations Research, June 25, 2019. http://dx.doi.org/10.1287/moor.2019.1047.

Abstract:
We introduce a unified algorithmic framework, called the proximal-like incremental aggregated gradient (PLIAG) method, for minimizing the sum of a convex function that consists of additive relatively smooth convex components and a proper lower semicontinuous convex regularization function over an abstract feasible set whose geometry can be captured by using the domain of a Legendre function. The PLIAG method includes many existing algorithms in the literature as special cases, such as the proximal gradient method, the Bregman proximal gradient method (also called the NoLips algorithm), the incremental aggregated gradient method, the incremental aggregated proximal method, and the proximal incremental aggregated gradient method. It also includes some novel interesting iteration schemes. First, we show that the PLIAG method is globally sublinearly convergent without requiring a growth condition, which extends the sublinear convergence result for the proximal gradient algorithm to incremental aggregated-type first-order methods. Then, by embedding a so-called Bregman distance growth condition into a descent-type lemma to construct a special Lyapunov function, we show that the PLIAG method is globally linearly convergent in terms of both function values and Bregman distances to the optimal solution set, provided that the step size is not greater than some positive constant. The convergence results derived in this paper are all established beyond the standard assumptions in the literature (i.e., without requiring the strong convexity and the Lipschitz gradient continuity of the smooth part of the objective). When specialized to many existing algorithms, our results recover or supplement their convergence results under strictly weaker conditions.
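
To illustrate the incremental aggregated gradient idea inside the PLIAG framework, here is a hedged sketch of its Euclidean special case (kernel h = 0.5*||.||^2, i.e., the proximal incremental aggregated gradient method) on an l1-regularized least-squares sum; the component functions, update order, and step size are illustrative assumptions.

```python
# Proximal incremental aggregated gradient (a Euclidean special case of the PLIAG
# framework) for min_x (1/n)*sum_i f_i(x) + lam*||x||_1 with
# f_i(x) = 0.5*(a_i^T x - b_i)^2. One component gradient is refreshed per step and
# the aggregated (possibly outdated) sum drives the proximal gradient update.
import numpy as np

def prox_iag(A, b, lam, step, epochs=50):
    n, d = A.shape
    x = np.zeros(d)
    comp_grads = A * (A @ x - b)[:, None]         # stored per-component gradients, shape (n, d)
    agg = comp_grads.sum(axis=0)                  # running aggregate of the stored gradients
    for _ in range(epochs):
        for i in range(n):
            new_g = A[i] * (A[i] @ x - b[i])      # refresh only component i at the current point
            agg += new_g - comp_grads[i]
            comp_grads[i] = new_g
            z = x - step * agg / n
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```
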
31

Takahashi, Shota, Mituhiro Fukuda, and Mirai Tanaka. "New Bregman proximal type algorithms for solving DC optimization problems." Computational Optimization and Applications, September 23, 2022. http://dx.doi.org/10.1007/s10589-022-00411-w.

Abstract:
Difference of Convex (DC) optimization problems have objective functions that are differences between two convex functions. Representative ways of solving these problems are the proximal DC algorithms, which require that the convex part of the objective function have L-smoothness. In this article, we propose the Bregman Proximal DC Algorithm (BPDCA) for solving large-scale DC optimization problems that do not possess L-smoothness. Instead, it requires that the convex part of the objective function has the L-smooth adaptable property that is exploited in Bregman proximal gradient algorithms. In addition, we propose an accelerated version, the Bregman Proximal DC Algorithm with extrapolation (BPDCAe), with a new restart scheme. We show the global convergence of the iterates generated by BPDCA(e) to a limiting critical point under the assumption of the Kurdyka-Łojasiewicz property or subanalyticity of the objective function and other weaker conditions than those of the existing methods. We applied our algorithms to phase retrieval, which can be described both as a nonconvex optimization problem and as a DC optimization problem. Numerical experiments showed that BPDCAe outperformed existing Bregman proximal-type algorithms because the DC formulation allows for larger admissible step sizes.
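
A hedged sketch of one proximal DC iteration of the kind BPDCA generalizes, written in its Euclidean special case (kernel 0.5*||.||^2) for an l1-minus-l2 regularized least-squares problem; the DC decomposition and parameters are our illustrative choices, and the extrapolation and restart machinery of BPDCAe is omitted.

```python
# One proximal DC iteration for min_x 0.5*||Ax - b||^2 + mu*(||x||_1 - ||x||_2):
# the concave part -mu*||x||_2 is linearized at the current point via a
# subgradient, then a forward step on the smooth part and the l1 proximal map are
# applied. Euclidean special case of the Bregman scheme; illustrative only.
import numpy as np

def prox_dc_l1_minus_l2(A, b, mu, step, iters=300):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        nx = np.linalg.norm(x)
        xi = x / nx if nx > 0 else np.zeros_like(x)        # subgradient of ||.||_2 at x
        grad = A.T @ (A @ x - b) - mu * xi                 # smooth part plus linearized concave part
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * mu, 0.0)   # prox of mu*||.||_1
    return x
```
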
32

Liu, Jin-Zan, and Xin-Wei Liu. "A dual Bregman proximal gradient method for relatively-strongly convex optimization." Numerical Algebra, Control & Optimization, 2021, 0. http://dx.doi.org/10.3934/naco.2021028.

Abstract:
We consider a convex composite minimization problem, whose objective is the sum of a relatively-strongly convex function and a closed proper convex function. A dual Bregman proximal gradient method is proposed for solving this problem, and it is shown that the convergence rate of the primal sequence is O(1/k). Moreover, based on the acceleration scheme, we prove that the convergence rate of the primal sequence is O(1/k^γ), where γ ∈ [1, 2] is determined by the triangle scaling property of the Bregman distance.
33

Sun, Tao, Linbo Qiao, and Dongsheng Li. "Nonergodic Complexity of Proximal Inertial Gradient Descents." IEEE Transactions on Neural Networks and Learning Systems, 2020, 1–14. http://dx.doi.org/10.1109/tnnls.2020.3025157.

34

Chen, Kangming, Ellen H. Fukuda, and Nobuo Yamashita. "A proximal gradient method with Bregman distance in multi-objective optimization." Pacific Journal of Optimization, 2024. http://dx.doi.org/10.61208/pjo-2024-012.

35

Mukkamala, Mahesh Chandra, Jalal Fadili, and Peter Ochs. "Global convergence of model function based Bregman proximal minimization algorithms." Journal of Global Optimization, December 1, 2021. http://dx.doi.org/10.1007/s10898-021-01114-y.

Abstract:
Lipschitz continuity of the gradient mapping of a continuously differentiable function plays a crucial role in designing various optimization algorithms. However, many functions arising in practical applications such as low rank matrix factorization or deep neural network problems do not have a Lipschitz continuous gradient. This led to the development of a generalized notion known as the L-smad property, which is based on generalized proximity measures called Bregman distances. However, the L-smad property cannot handle nonsmooth functions, for example, simple nonsmooth functions like |x^4 - 1|, and also many practical composite problems are out of scope. We fix this issue by proposing the MAP property, which generalizes the L-smad property and is also valid for a large class of structured nonconvex nonsmooth composite problems. Based on the proposed MAP property, we propose a globally convergent algorithm called Model BPG, that unifies several existing algorithms. The convergence analysis is based on a new Lyapunov function. We also numerically illustrate the superior performance of Model BPG on standard phase retrieval problems and Poisson linear inverse problems, when compared to a state of the art optimization method that is valid for generic nonconvex nonsmooth optimization problems.
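
For readers of this and the preceding Bregman-type entries, the two standard definitions involved are restated below, paraphrased from the relative-smoothness literature (a hedged summary, not a quotation from the paper): the Bregman distance generated by a Legendre function h and the L-smad property.

```latex
% Bregman distance generated by a Legendre function h:
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle .

% f is L-smad (L-smooth adaptable) with respect to h iff
% L h - f and L h + f are convex on the domain of h; equivalently,
\lvert f(x) - f(y) - \langle \nabla f(y),\, x - y \rangle \rvert \;\le\; L\, D_h(x, y).
```
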
36

Xiao, Xiantao. "A Unified Convergence Analysis of Stochastic Bregman Proximal Gradient and Extragradient Methods." Journal of Optimization Theory and Applications, January 8, 2021. http://dx.doi.org/10.1007/s10957-020-01799-3.

37

Bonettini, S., M. Prato, and S. Rebegoldi. "A new proximal heavy ball inexact line-search algorithm." Computational Optimization and Applications, March 10, 2024. http://dx.doi.org/10.1007/s10589-024-00565-9.

Abstract:
We study a novel inertial proximal-gradient method for composite optimization. The proposed method alternates between a variable metric proximal-gradient iteration with momentum and an Armijo-like linesearch based on the sufficient decrease of a suitable merit function. The linesearch procedure allows for a major flexibility on the choice of the algorithm parameters. We prove the convergence of the iterates sequence towards a stationary point of the problem, in a Kurdyka–Łojasiewicz framework. Numerical experiments on a variety of convex and nonconvex problems highlight the superiority of our proposal with respect to several standard methods, especially when the inertial parameter is selected by mimicking the Conjugate Gradient updating rule.
38

Duan, Peichao, Yiqun Zhang, and Qinxiong Bu. "New inertial proximal gradient methods for unconstrained convex optimization problems." Journal of Inequalities and Applications 2020, no. 1 (December 2020). http://dx.doi.org/10.1186/s13660-020-02522-6.

Abstract:
The proximal gradient method is a highly powerful tool for solving the composite convex optimization problem. In this paper, firstly, we propose inexact inertial acceleration methods based on the viscosity approximation and proximal scaled gradient algorithm to accelerate the convergence of the algorithm. Under reasonable parameters, we prove that our algorithms strongly converge to some solution of the problem, which is the unique solution of a variational inequality problem. Secondly, we propose an inexact alternated inertial proximal point algorithm. Under suitable conditions, the weak convergence theorem is proved. Finally, numerical results illustrate the performances of our algorithms and present a comparison with related algorithms. Our results improve and extend the corresponding results reported by many authors recently.
39

Hertrich, Johannes, and Gabriele Steidl. "Inertial stochastic PALM and applications in machine learning." Sampling Theory, Signal Processing, and Data Analysis 20, no. 1 (April 22, 2022). http://dx.doi.org/10.1007/s43670-022-00021-x.

Abstract:
Inertial algorithms for minimizing nonsmooth and nonconvex functions, such as the inertial proximal alternating linearized minimization algorithm (iPALM), have demonstrated their superiority with respect to computation time over their non-inertial variants. In many problems in imaging and machine learning, the objective functions have a special form involving huge data which encourages the application of stochastic algorithms. While algorithms based on stochastic gradient descent are still used in the majority of applications, recently also stochastic algorithms for minimizing nonsmooth and nonconvex functions were proposed. In this paper, we derive an inertial variant of a stochastic PALM algorithm with a variance-reduced gradient estimator, called iSPALM, and prove linear convergence of the algorithm under certain assumptions. Our inertial approach can be seen as a generalization of momentum methods widely used to speed up and stabilize optimization algorithms, in particular in machine learning, to nonsmooth problems. Numerical experiments for learning the weights of a so-called proximal neural network and the parameters of Student-t mixture models show that our new algorithm outperforms both stochastic PALM and its deterministic counterparts.
40

Zhang, Xiaoya, Wei Peng, and Hui Zhang. "Inertial proximal incremental aggregated gradient method with linear convergence guarantees." Mathematical Methods of Operations Research, June 25, 2022. http://dx.doi.org/10.1007/s00186-022-00790-0.

41

Guo, Chenzheng, Jing Zhao, and Qiao-Li Dong. "A stochastic two-step inertial Bregman proximal alternating linearized minimization algorithm for nonconvex and nonsmooth problems." Numerical Algorithms, November 9, 2023. http://dx.doi.org/10.1007/s11075-023-01693-9.

42

Jia, Zehui, Jieru Huang, and Xingju Cai. "Proximal-like incremental aggregated gradient method with Bregman distance in weakly convex optimization problems." Journal of Global Optimization, May 29, 2021. http://dx.doi.org/10.1007/s10898-021-01044-9.

43

Kankam, Kunrada, Watcharaporn Cholamjiak, and Prasit Cholamjiak. "New inertial forward–backward algorithm for convex minimization with applications." Demonstratio Mathematica 56, no. 1 (January 1, 2023). http://dx.doi.org/10.1515/dema-2022-0188.

Abstract:
In this work, we present a new proximal gradient algorithm based on Tseng's extragradient method and an inertial technique to solve the convex minimization problem in real Hilbert spaces. Using the stepsize rules, the selection of the Lipschitz constant of the gradient of functions is avoided. We then prove the weak convergence theorem and present the numerical experiments for image recovery. The comparative results show that the proposed algorithm has better efficiency than other methods.
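
To make the reference to Tseng's extragradient method concrete, here is a hedged sketch of the basic forward-backward-forward iteration applied to l1-regularized least squares with F = grad f and a fixed stepsize; the inertial term and the adaptive stepsize rule described in the abstract are deliberately omitted.

```python
# Basic Tseng forward-backward-forward (FBF) iteration for min_x f(x) + g(x) with
# f(x) = 0.5*||Ax - b||^2 (so F = grad f) and g = lam*||.||_1. A fixed stepsize is
# used for brevity; illustrative sketch only.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tseng_fbf(A, b, lam, step, iters=300):
    x = np.zeros(A.shape[1])
    F = lambda u: A.T @ (A @ u - b)
    for _ in range(iters):
        Fx = F(x)
        y = soft(x - step * Fx, step * lam)       # forward-backward step
        x = y - step * (F(y) - Fx)                # correction (second forward) step
    return x
```
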
44

Sun, Shuya, and Lulu He. "General inertial proximal stochastic variance reduction gradient for nonconvex nonsmooth optimization." Journal of Inequalities and Applications 2023, no. 1 (February 17, 2023). http://dx.doi.org/10.1186/s13660-023-02922-4.

Abstract:
In this paper, motivated by the competitive performance of the proximal stochastic variance reduction gradient (Prox-SVRG) method, a novel general inertial Prox-SVRG (GIProx-SVRG) algorithm is proposed for solving a class of nonconvex finite sum problems. More precisely, Nesterov's momentum trick-based extrapolation accelerated step is incorporated into the framework of the Prox-SVRG method. The GIProx-SVRG algorithm possesses a more general accelerated expression and thus can potentially achieve accelerated convergence speed. Moreover, based on the supermartingale convergence theory and the error bound condition, we establish a linear convergence rate for the iterate sequence generated by the GIProx-SVRG algorithm. We observe that there is no theory in which the general extrapolation technique is incorporated into the Prox-SVRG method, whereas we establish such a theory in this paper. Experimental results demonstrate the superiority of our method over state-of-the-art methods.
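
A hedged sketch of a Prox-SVRG loop with a momentum (inertial) extrapolation in the spirit of the abstract; the finite-sum least-squares objective, the epoch length, and the choice to evaluate the variance-reduced gradient at the extrapolated point are our own illustrative assumptions.

```python
# Prox-SVRG with a simple momentum/extrapolation step, illustrated on
# min_x (1/n)*sum_i 0.5*(a_i^T x - b_i)^2 + lam*||x||_1. Each outer epoch stores a
# snapshot and its full gradient; inner steps use the variance-reduced gradient
#   v = grad f_i(y) - grad f_i(snapshot) + full_grad
# evaluated at the extrapolated point y.
import numpy as np

def giprox_svrg(A, b, lam, step, beta=0.3, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x_prev = x = np.zeros(d)
    for _ in range(epochs):
        snap = x.copy()
        full_grad = A.T @ (A @ snap - b) / n
        for _ in range(n):
            y = x + beta * (x - x_prev)                    # inertial extrapolation
            i = rng.integers(n)
            v = (A[i] @ y - b[i]) * A[i] - (A[i] @ snap - b[i]) * A[i] + full_grad
            z = y - step * v
            x_prev, x = x, np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```
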
45

Inkrong, Papatsara, and Prasit Cholamjiak. "Modified proximal gradient methods involving double inertial extrapolations for monotone inclusion." Mathematical Methods in the Applied Sciences, April 30, 2024. http://dx.doi.org/10.1002/mma.10159.

Abstract:
In this work, we propose a novel class of forward-backward-forward algorithms for solving monotone inclusion problems. Our approach incorporates a self-adaptive technique to eliminate the need for explicitly selecting Lipschitz assumptions and utilizes two-step inertial extrapolations to enhance the convergence rate of the algorithm. We establish a weak convergence theorem under mild assumptions. Furthermore, we conduct numerical tests on image deblurring and data classification as practical applications. The experimental results demonstrate that our algorithm surpasses some existing methods in the literature, which shows its superior performance and effectiveness.
46

Valkonen, Tuomo. "Proximal methods for point source localisation." Journal of Nonsmooth Analysis and Optimization Volume 4, Original research articles (September 21, 2023). http://dx.doi.org/10.46298/jnsao-2023-10433.

Abstract:
Point source localisation is generally modelled as a Lasso-type problem on measures. However, optimisation methods in non-Hilbert spaces, such as the space of Radon measures, are much less developed than in Hilbert spaces. Most numerical algorithms for point source localisation are based on the Frank-Wolfe conditional gradient method, for which ad hoc convergence theory is developed. We develop extensions of proximal-type methods to spaces of measures. This includes forward-backward splitting, its inertial version, and primal-dual proximal splitting. Their convergence proofs follow standard patterns. We demonstrate their numerical efficacy.
47

Mouktonglang, Thanasak, Wipawinee Chaiwino, and Raweerote Suparatulatorn. "A proximal gradient method with double inertial steps for minimization problems involving demicontractive mappings." Journal of Inequalities and Applications 2024, no. 1 (May 15, 2024). http://dx.doi.org/10.1186/s13660-024-03145-x.

Abstract:
In this article, we present a novel proximal gradient method based on double inertial steps for solving fixed points of demicontractive mapping and minimization problems. We also establish a weak convergence theorem by applying this method. Additionally, we provide a numerical example related to a signal recovery problem.
48

"Convergence of proximal gradient method with alternated inertial step for minimization problem." Advances in Fixed Point Theory, 2024. http://dx.doi.org/10.28919/afpt/8625.

49

Silveti-Falls, Antonio, Cesare Molinari, and Jalal Fadili. "Inexact and Stochastic Generalized Conditional Gradient with Augmented Lagrangian and Proximal Step." Journal of Nonsmooth Analysis and Optimization Volume 2, Original research articles (September 1, 2021). http://dx.doi.org/10.46298/jnsao-2021-6480.

Abstract:
In this paper we propose and analyze inexact and stochastic versions of the CGALP algorithm developed in [25], which we denote ICGALP , that allow for errors in the computation of several important quantities. In particular this allows one to compute some gradients, proximal terms, and/or linear minimization oracles in an inexact fashion that facilitates the practical application of the algorithm to computationally intensive settings, e.g., in high (or possibly infinite) dimensional Hilbert spaces commonly found in machine learning problems. The algorithm is able to solve composite minimization problems involving the sum of three convex proper lower-semicontinuous functions subject to an affine constraint of the form Ax = b for some bounded linear operator A. Only one of the functions in the objective is assumed to be differentiable, the other two are assumed to have an accessible proximal operator and a linear minimization oracle. As main results, we show convergence of the Lagrangian values (so-called convergence in the Bregman sense) and asymptotic feasibility of the affine constraint as well as strong convergence of the sequence of dual variables to a solution of the dual problem, in an almost sure sense. Almost sure convergence rates are given for the Lagrangian values and the feasibility gap for the ergodic primal variables. Rates in expectation are given for the Lagrangian values and the feasibility gap subsequentially in the pointwise sense. Numerical experiments verifying the predicted rates of convergence are shown as well.
50

Cohen, Eyal, and Marc Teboulle. "Alternating and Parallel Proximal Gradient Methods for Nonsmooth, Nonconvex Minimax: A Unified Convergence Analysis." Mathematics of Operations Research, February 8, 2024. http://dx.doi.org/10.1287/moor.2022.0294.

Abstract:
There is growing interest in nonconvex minimax problems that is driven by an abundance of applications. Our focus is on nonsmooth, nonconvex-strongly concave minimax, thus departing from the more common weakly convex and smooth models assumed in the recent literature. We present proximal gradient schemes with either parallel or alternating steps. We show that both methods can be analyzed through a single scheme within a unified analysis that relies on expanding a general convergence mechanism used for analyzing nonconvex, nonsmooth optimization problems. In contrast to the current literature, which focuses on the complexity of obtaining nearly approximate stationary solutions, we prove subsequence convergence to a critical point of the primal objective and global convergence when the latter is semialgebraic. Furthermore, the complexity results we provide are with respect to approximate stationary solutions. Lastly, we expand the scope of problems that can be addressed by generalizing one of the steps with a Bregman proximal gradient update, and together with a few adjustments to the analysis, this allows us to extend the convergence and complexity results to this broader setting. Funding: The research of E. Cohen was partially supported by a doctoral fellowship from the Israel Science Foundation [Grant 2619-20] and Deutsche Forschungsgemeinschaft [Grant 800240]. The research of M. Teboulle was partially supported by the Israel Science Foundation [Grant 2619-20] and Deutsche Forschungsgemeinschaft [Grant 800240].
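
A hedged sketch of alternating proximal gradient steps on a toy minimax problem that is strongly concave in the dual variable; the saddle function, the step sizes, and the use of a plain gradient ascent step for y (its proximal term being trivial here) are illustrative assumptions, not the paper's general nonsmooth setting.

```python
# Alternating proximal gradient steps for the toy minimax problem
#   min_x max_y  0.5*||Ax - b||^2 + y^T (C x) - 0.5*rho*||y||^2 + lam*||x||_1,
# which is strongly concave in y. The x-update is a proximal gradient (descent)
# step with the l1 prox; the y-update is a gradient ascent step. Illustrative only.
import numpy as np

def alt_prox_grad_minimax(A, b, C, rho, lam, step_x, step_y, iters=300):
    x = np.zeros(A.shape[1])
    y = np.zeros(C.shape[0])
    for _ in range(iters):
        grad_x = A.T @ (A @ x - b) + C.T @ y
        z = x - step_x * grad_x
        x = np.sign(z) * np.maximum(np.abs(z) - step_x * lam, 0.0)   # proximal descent in x
        grad_y = C @ x - rho * y
        y = y + step_y * grad_y                                      # gradient ascent in y, using the fresh x
    return x, y
```
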