Academic literature on the topic 'Inertial Bregman proximal gradient'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Inertial Bregman proximal gradient.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Inertial Bregman proximal gradient"

1. Mukkamala, Mahesh Chandra, Peter Ochs, Thomas Pock, and Shoham Sabach. "Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Nonconvex Optimization." SIAM Journal on Mathematics of Data Science 2, no. 3 (January 2020): 658–82. http://dx.doi.org/10.1137/19m1298007.

2. Kabbadj, S. "Inexact Version of Bregman Proximal Gradient Algorithm." Abstract and Applied Analysis 2020 (April 1, 2020): 1–11. http://dx.doi.org/10.1155/2020/1963980.

Abstract:
The Bregman Proximal Gradient (BPG) algorithm minimizes the sum of two convex functions, one of which is nonsmooth. Supercoercivity of the objective function is necessary for the convergence of this algorithm, which precludes its use in many applications. In this paper, we give an inexact version of the BPG algorithm that circumvents the supercoercivity condition by replacing it with a simple condition on the parameters of the problem. Our study covers the existing results while giving new ones.
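For orientation, the BPG step replaces the Euclidean proximal step with a Bregman one: x_{k+1} minimizes g(x) + <grad f(x_k), x> + (1/t) D_h(x, x_k), where D_h is the Bregman divergence of a kernel h. Below is a minimal Python sketch under simplifying assumptions (g = 0 and the entropy kernel on the positive orthant, which gives a closed multiplicative form); all names are ours, not the paper's.

    import numpy as np

    def bpg_entropy(grad_f, x0, step=0.1, iters=200):
        """Bregman proximal gradient with kernel h(x) = sum(x*log(x) - x) and g = 0.
        Since grad_h(x) = log(x), solving grad_h(x_next) = grad_h(x) - step*grad_f(x)
        gives the closed-form multiplicative update below."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = x * np.exp(-step * grad_f(x))  # iterates stay in the positive orthant
        return x

    # Toy usage: minimize f(x) = 0.5*||x - a||^2 over x > 0.
    a = np.array([1.0, 2.0, 0.5])
    x_min = bpg_entropy(lambda x: x - a, x0=np.ones(3))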
3. Zhou, Yi, Yingbin Liang, and Lixin Shen. "A simple convergence analysis of Bregman proximal gradient algorithm." Computational Optimization and Applications 73, no. 3 (April 4, 2019): 903–12. http://dx.doi.org/10.1007/s10589-019-00092-y.

4. Hanzely, Filip, Peter Richtárik, and Lin Xiao. "Accelerated Bregman proximal gradient methods for relatively smooth convex optimization." Computational Optimization and Applications 79, no. 2 (April 7, 2021): 405–40. http://dx.doi.org/10.1007/s10589-021-00273-8.

5. Mahadevan, Sridhar, Stephen Giguere, and Nicholas Jacek. "Basis Adaptation for Sparse Nonlinear Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 654–60. http://dx.doi.org/10.1609/aaai.v27i1.8665.

Abstract:
This paper presents a new approach to representation discovery in reinforcement learning (RL) using basis adaptation. We introduce a general framework for basis adaptation as nonlinear separable least-squares value function approximation based on finding Fréchet gradients of an error function using variable projection functionals. We then present a scalable proximal gradient-based approach for basis adaptation using the recently proposed mirror-descent framework for RL. Unlike traditional temporal-difference (TD) methods for RL, mirror descent based RL methods undertake proximal gradient updates of weights in a dual space, which is linked with the primal space using a Legendre transform involving the gradient of a strongly convex function. Mirror descent RL can be viewed as a proximal TD algorithm using a Bregman divergence as the distance-generating function. We present a new class of regularized proximal-gradient based TD methods, which combine feature selection through sparse L1 regularization and basis adaptation. Experimental results are provided to illustrate and validate the approach.
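To make the mirror-descent mechanics concrete, here is a schematic TD(0) step with a p-norm link function, a common choice in mirror-descent RL. This is an illustrative sketch under our own naming and simplifications, not the authors' basis-adaptation algorithm.

    import numpy as np

    def grad_h(w, r):
        """Gradient of h(w) = 0.5*||w||_r^2; maps between primal and dual spaces.
        For conjugate exponents p and q (1/p + 1/q = 1), grad_h(., q) and
        grad_h(., p) are mutually inverse Legendre maps."""
        n = np.linalg.norm(w, r)
        if n == 0.0:
            return np.zeros_like(w)
        return np.sign(w) * np.abs(w) ** (r - 1) / n ** (r - 2)

    def soft(u, tau):
        """Soft-thresholding: where the sparse L1 regularization enters."""
        return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

    def mirror_td0_step(w, phi, phi_next, reward, gamma, alpha, lam, p=2.5):
        q = p / (p - 1.0)                                  # dual exponent
        delta = reward + gamma * phi_next @ w - phi @ w    # TD error
        dual = grad_h(w, q) + alpha * delta * phi          # update in dual space
        return grad_h(soft(dual, alpha * lam), p)          # shrink, map back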
6. Yang, Lei, and Kim-Chuan Toh. "Bregman Proximal Point Algorithm Revisited: A New Inexact Version and Its Inertial Variant." SIAM Journal on Optimization 32, no. 3 (July 13, 2022): 1523–54. http://dx.doi.org/10.1137/20m1360748.

7. Li, Jing, Xiao Wei, Fengpin Wang, and Jinjia Wang. "IPGM: Inertial Proximal Gradient Method for Convolutional Dictionary Learning." Electronics 10, no. 23 (December 3, 2021): 3021. http://dx.doi.org/10.3390/electronics10233021.

Abstract:
Inspired by the recent success of the proximal gradient method (PGM) and recent efforts to develop an inertial algorithm, we propose an inertial PGM (IPGM) for convolutional dictionary learning (CDL) by jointly optimizing both an ℓ2-norm data fidelity term and a sparsity term that enforces an ℓ1 penalty. Contrary to other CDL methods, in the proposed approach, the dictionary and needles are updated with an inertial force by the PGM. We obtain a novel derivative formula for the needles and dictionary with respect to the data fidelity term. At the same time, a gradient descent step is designed to add an inertial term. The proximal operation uses the thresholding operation for needles and projects the dictionary to a unit-norm sphere. We prove the convergence property of the proposed IPGM algorithm in a backtracking case. Simulation results show that the proposed IPGM achieves better performance than the PGM and slice-based methods that possess the same structure and are optimized using the alternating-direction method of multipliers (ADMM).
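The inertial mechanism described here fits in a few lines. Below is a generic sketch for min_x f(x) + lam*||x||_1 with a fixed step size and a fixed inertial weight (our simplification; the paper uses backtracking and CDL-specific proximal operations).

    import numpy as np

    def soft(u, tau):
        return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

    def inertial_pgm(grad_f, x0, step, lam, beta=0.5, iters=300):
        """Inertial proximal gradient: extrapolate, gradient step, then prox."""
        x_prev = x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            y = x + beta * (x - x_prev)        # inertial extrapolation
            x_prev, x = x, soft(y - step * grad_f(y), step * lam)
        return x

    # Toy usage: sparse recovery with f(x) = 0.5*||Ax - b||^2.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for this choice of f
    x_hat = inertial_pgm(lambda x: A.T @ (A @ x - b), np.zeros(50), step, lam=0.1)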
8. Xiao, Xiantao. "A Unified Convergence Analysis of Stochastic Bregman Proximal Gradient and Extragradient Methods." Journal of Optimization Theory and Applications 188, no. 3 (January 8, 2021): 605–27. http://dx.doi.org/10.1007/s10957-020-01799-3.

9. Wang, Qingsong, Zehui Liu, Chunfeng Cui, and Deren Han. "A Bregman Proximal Stochastic Gradient Method with Extrapolation for Nonconvex Nonsmooth Problems." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15580–88. http://dx.doi.org/10.1609/aaai.v38i14.29485.

Abstract:
In this paper, we explore a specific optimization problem that involves the combination of a differentiable nonconvex function and a nondifferentiable function. The differentiable component lacks a global Lipschitz continuous gradient, posing challenges for optimization. To address this issue and accelerate the convergence, we propose a Bregman proximal stochastic gradient method with extrapolation (BPSGE), which only requires smooth adaptivity of the differentiable part. Under the variance reduction framework, we not only analyze the subsequential and global convergence of the proposed algorithm under certain conditions, but also analyze the sublinear convergence rate of the subsequence and the complexity of the algorithm, revealing that the BPSGE algorithm requires at most O(ε^(-2)) iterations in expectation to attain an ε-stationary point. To validate the effectiveness of our proposed algorithm, we conduct numerical experiments on three real-world applications: graph regularized nonnegative matrix factorization (NMF), matrix factorization with weakly-convex regularization, and NMF with nonconvex sparsity constraints. These experiments demonstrate that BPSGE is faster than the baselines without extrapolation. The code is available at: https://github.com/nothing2wang/BPSGE-Algorithm.
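A compact sketch of the extrapolated Bregman step the abstract refers to (our illustration: entropy kernel, a plain minibatch gradient in place of the paper's variance-reduced estimator, and clipping to keep the extrapolated point positive):

    import numpy as np

    def bpsge_step(x, x_prev, stoch_grad, step, beta, eps=1e-12):
        """One Bregman proximal stochastic gradient step with extrapolation.
        With kernel h(x) = sum(x*log(x) - x), grad_h = log, so the Bregman step
        solves log(x_next) = log(z) - step*g at the extrapolated point z."""
        z = np.maximum(x + beta * (x - x_prev), eps)  # extrapolate, keep z > 0
        g = stoch_grad(z)                             # minibatch gradient at z
        return z * np.exp(-step * g), x               # returns (x_next, new x_prev)

    # Usage inside an optimization loop:
    #     x, x_prev = bpsge_step(x, x_prev, minibatch_grad, step=0.05, beta=0.3)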
10. He, Lulu, Jimin Ye, and Jianwei E. "Nonconvex optimization with inertial proximal stochastic variance reduction gradient." Information Sciences 648 (November 2023): 119546. http://dx.doi.org/10.1016/j.ins.2023.119546.


Dissertations / Theses on the topic "Inertial Bregman proximal gradient"

1. Godeme, Jean-Jacques. "Phase retrieval with non-Euclidean Bregman based geometry." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC214.

Abstract:
In this work, we investigate the phase retrieval problem for real-valued signals in finite dimension, a challenge encountered across various scientific and engineering disciplines. It explores two complementary approaches: retrieval with and without regularization. In both settings, our work focuses on relaxing the Lipschitz-smoothness assumption generally required by first-order splitting algorithms, which is not valid for phase retrieval cast as a minimization problem. The key idea is to replace the Euclidean geometry by a non-Euclidean Bregman divergence associated with an appropriate kernel. We use a Bregman gradient/mirror descent algorithm with this divergence to solve the phase retrieval problem without regularization, and we show exact recovery (up to a global sign) both in a deterministic setting and with high probability for a sufficient number of random measurements (Gaussian and Coded Diffraction Patterns). Furthermore, we establish the robustness of this approach against small additive noise. Shifting to regularized phase retrieval, we first develop and analyze an Inertial Bregman Proximal Gradient algorithm for minimizing the sum of two functions in finite dimension, one of which is convex and possibly nonsmooth and the second of which is relatively smooth in the Bregman geometry. We provide both global and local convergence guarantees for this algorithm. Finally, we study noiseless and stable recovery of low-complexity regularized phase retrieval. For this, we formulate the problem as the minimization of an objective functional involving a nonconvex smooth data fidelity term and a convex regularizer promoting solutions conforming to some notion of low complexity related to their nonsmoothness points. We establish conditions for exact and stable recovery and provide sample complexity bounds for random measurements ensuring that these conditions hold. These sample bounds depend on the low complexity of the signals to be recovered. Our new results allow us to go far beyond the case of sparse phase retrieval.
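To connect the thesis topic to a concrete update, here is a schematic inertial Bregman proximal gradient step with the quartic kernel h(x) = 0.25*||x||^4 + 0.5*||x||^2, a standard kernel under which the phase-retrieval data-fidelity term is relatively smooth. This is our own sketch with g = 0, not the thesis's algorithm; the mirror-map inversion reduces to one real cubic root.

    import numpy as np

    def grad_h(x):
        """Kernel h(x) = 0.25*||x||^4 + 0.5*||x||^2, so grad_h(x) = (||x||^2 + 1)*x."""
        return (x @ x + 1.0) * x

    def inv_grad_h(v):
        """Solve grad_h(x) = v. Writing x = c*v gives ||v||^2*c^3 + c - 1 = 0,
        which has a unique real root since the cubic is strictly increasing."""
        nv2 = v @ v
        if nv2 == 0.0:
            return v
        roots = np.roots([nv2, 0.0, 1.0, -1.0])
        c = roots[np.argmin(np.abs(roots.imag))].real  # pick the real root
        return c * v

    def inertial_bpg_step(x, x_prev, grad_f, step, beta):
        y = x + beta * (x - x_prev)                    # inertial extrapolation
        return inv_grad_h(grad_h(y) - step * grad_f(y)), x

    # Phase-retrieval data fit f(x) = (1/(4m))*sum(((A@x)**2 - b)**2) has
    #     grad_f = lambda x: A.T @ (((A @ x)**2 - b) * (A @ x)) / len(b)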

Book chapters on the topic "Inertial Bregman proximal gradient"

1. Mukkamala, Mahesh Chandra, Felix Westerkamp, Emanuel Laude, Daniel Cremers, and Peter Ochs. "Bregman Proximal Gradient Algorithms for Deep Matrix Factorization." In Lecture Notes in Computer Science, 204–15. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75549-2_17.


Conference papers on the topic "Inertial Bregman proximal gradient"

1. Li, Huan, Wenjuan Zhang, Shujian Huang, and Feng Xiao. "Poisson Noise Image Restoration Based on Bregman Proximal Gradient." In 2023 6th International Conference on Computer Network, Electronic and Automation (ICCNEA). IEEE, 2023. http://dx.doi.org/10.1109/iccnea60107.2023.00058.

2. Pu, Wenqiang, Jiawei Zhang, Rui Zhou, Xiao Fu, and Mingyi Hong. "A Smoothed Bregman Proximal Gradient Algorithm for Decentralized Nonconvex Optimization." In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10448285.

