Selection of scientific literature on the topic "Inertial Bregman proximal gradient"


Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Inertial Bregman proximal gradient".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read its online abstract whenever the relevant parameters are available in the metadata.

Journal articles on the topic "Inertial Bregman proximal gradient"

1. Mukkamala, Mahesh Chandra, Peter Ochs, Thomas Pock, and Shoham Sabach. "Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Nonconvex Optimization". SIAM Journal on Mathematics of Data Science 2, no. 3 (January 2020): 658–82. http://dx.doi.org/10.1137/19m1298007.

2. Kabbadj, S. "Inexact Version of Bregman Proximal Gradient Algorithm". Abstract and Applied Analysis 2020 (April 1, 2020): 1–11. http://dx.doi.org/10.1155/2020/1963980.

Abstract:
The Bregman Proximal Gradient (BPG) algorithm minimizes the sum of two convex functions, one of which is nonsmooth. Supercoercivity of the objective function is necessary for the convergence of this algorithm, precluding its use in many applications. In this paper, we give an inexact version of the BPG algorithm that circumvents the supercoercivity condition by replacing it with a simple condition on the parameters of the problem. Our study covers the existing results while giving others.
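For orientation, the exact BPG update that this inexact version relaxes can be written as follows; this is the standard formulation (our notation, not the paper's), with smooth part f, nonsmooth part g, step size λ > 0, and kernel h:

    x^{k+1} \in \operatorname*{argmin}_{x} \Big\{ g(x) + \langle \nabla f(x^k),\, x - x^k \rangle + \tfrac{1}{\lambda} D_h(x, x^k) \Big\},
    \qquad D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle .

Choosing h = ½‖·‖² makes D_h the squared Euclidean distance and recovers the classical proximal gradient step.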
3. Zhou, Yi, Yingbin Liang, and Lixin Shen. "A simple convergence analysis of Bregman proximal gradient algorithm". Computational Optimization and Applications 73, no. 3 (April 4, 2019): 903–12. http://dx.doi.org/10.1007/s10589-019-00092-y.

4. Hanzely, Filip, Peter Richtárik, and Lin Xiao. "Accelerated Bregman proximal gradient methods for relatively smooth convex optimization". Computational Optimization and Applications 79, no. 2 (April 7, 2021): 405–40. http://dx.doi.org/10.1007/s10589-021-00273-8.

5. Mahadevan, Sridhar, Stephen Giguere, and Nicholas Jacek. "Basis Adaptation for Sparse Nonlinear Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 654–60. http://dx.doi.org/10.1609/aaai.v27i1.8665.

Abstract:
This paper presents a new approach to representation discovery in reinforcement learning (RL) using basis adaptation. We introduce a general framework for basis adaptation as nonlinear separable least-squares value function approximation based on finding Fréchet gradients of an error function using variable projection functionals. We then present a scalable proximal gradient-based approach for basis adaptation using the recently proposed mirror-descent framework for RL. Unlike traditional temporal-difference (TD) methods for RL, mirror descent based RL methods undertake proximal gradient updates of weights in a dual space, which is linked to the primal space using a Legendre transform involving the gradient of a strongly convex function. Mirror descent RL can be viewed as a proximal TD algorithm using Bregman divergence as the distance generating function. We present a new class of regularized proximal-gradient based TD methods, which combine feature selection through sparse L1 regularization and basis adaptation. Experimental results are provided to illustrate and validate the approach.
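The mirror-descent mechanics this abstract refers to (weight updates performed in a dual space and mapped back through the Legendre transform of a strongly convex function) are easiest to see in a minimal sketch. The following is generic entropic mirror descent on the probability simplex, not the paper's sparse TD method; all names are ours:

import numpy as np

def entropic_mirror_descent(grad, x0, step=0.1, iters=100):
    # Mirror map h(x) = sum_i x_i log x_i, so grad h(x) = 1 + log x.
    # The gradient step happens in dual (log) coordinates; mapping back and
    # normalizing yields a multiplicative update that stays on the simplex.
    x = x0 / x0.sum()
    for _ in range(iters):
        theta = np.log(x) - step * grad(x)   # dual-space gradient step
        x = np.exp(theta)
        x /= x.sum()                         # Bregman projection onto the simplex
    return x

# Example: minimize the linear function <c, x> over the simplex;
# the iterates concentrate on the coordinate with the smallest c_i.
c = np.array([3.0, 1.0, 2.0])
x = entropic_mirror_descent(lambda x: c, np.ones(3))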
6. Yang, Lei, and Kim-Chuan Toh. "Bregman Proximal Point Algorithm Revisited: A New Inexact Version and Its Inertial Variant". SIAM Journal on Optimization 32, no. 3 (July 13, 2022): 1523–54. http://dx.doi.org/10.1137/20m1360748.

7. Li, Jing, Xiao Wei, Fengpin Wang, and Jinjia Wang. "IPGM: Inertial Proximal Gradient Method for Convolutional Dictionary Learning". Electronics 10, no. 23 (December 3, 2021): 3021. http://dx.doi.org/10.3390/electronics10233021.

Abstract:
Inspired by the recent success of the proximal gradient method (PGM) and recent efforts to develop an inertial algorithm, we propose an inertial PGM (IPGM) for convolutional dictionary learning (CDL) by jointly optimizing both an ℓ2-norm data fidelity term and a sparsity term that enforces an ℓ1 penalty. Contrary to other CDL methods, in the proposed approach, the dictionary and needles are updated with an inertial force by the PGM. We obtain a novel derivative formula for the needles and dictionary with respect to the data fidelity term. At the same time, a gradient descent step is designed to add an inertial term. The proximal operation uses the thresholding operation for needles and projects the dictionary to a unit-norm sphere. We prove the convergence property of the proposed IPGM algorithm in a backtracking case. Simulation results show that the proposed IPGM achieves better performance than the PGM and slice-based methods that possess the same structure and are optimized using the alternating-direction method of multipliers (ADMM).
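The inertial proximal gradient template that IPGM instantiates is compact enough to sketch. The version below solves a generic ℓ1-regularized least-squares problem rather than convolutional dictionary learning, with a fixed inertial weight instead of the paper's backtracking; function and parameter names are ours:

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (the thresholding operation for the sparse part).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inertial_proximal_gradient(grad_f, x0, step, lam, beta=0.5, iters=300):
    # Minimize f(x) + lam * ||x||_1: extrapolate with an inertial term,
    # take a gradient step at the extrapolated point, then apply the prox.
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)          # inertial extrapolation
        x_prev, x = x, soft_threshold(y - step * grad_f(y), step * lam)
    return x

# Example: sparse least squares, f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of grad f
x = inertial_proximal_gradient(lambda x: A.T @ (A @ x - b),
                               np.zeros(50), step=1.0 / L, lam=0.1)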
8. Xiao, Xiantao. "A Unified Convergence Analysis of Stochastic Bregman Proximal Gradient and Extragradient Methods". Journal of Optimization Theory and Applications 188, no. 3 (January 8, 2021): 605–27. http://dx.doi.org/10.1007/s10957-020-01799-3.

9. Wang, Qingsong, Zehui Liu, Chunfeng Cui, and Deren Han. "A Bregman Proximal Stochastic Gradient Method with Extrapolation for Nonconvex Nonsmooth Problems". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15580–88. http://dx.doi.org/10.1609/aaai.v38i14.29485.

Abstract:
In this paper, we explore a specific optimization problem that involves the combination of a differentiable nonconvex function and a nondifferentiable function. The differentiable component lacks a global Lipschitz continuous gradient, posing challenges for optimization. To address this issue and accelerate the convergence, we propose a Bregman proximal stochastic gradient method with extrapolation (BPSGE), which only requires smooth adaptivity of the differentiable part. Under a variance reduction framework, we not only analyze the subsequential and global convergence of the proposed algorithm under certain conditions, but also analyze the sublinear convergence rate of the subsequence and the complexity of the algorithm, revealing that the BPSGE algorithm requires at most O(ε^(-2)) iterations in expectation to attain an ε-stationary point. To validate the effectiveness of our proposed algorithm, we conduct numerical experiments on three real-world applications: graph regularized nonnegative matrix factorization (NMF), matrix factorization with weakly-convex regularization, and NMF with nonconvex sparsity constraints. These experiments demonstrate that BPSGE is faster than the baselines without extrapolation. The code is available at: https://github.com/nothing2wang/BPSGE-Algorithm.
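To give the shape of the method, here is a bare skeleton of an extrapolated stochastic Bregman step under simplifying assumptions of ours: the entropy kernel on the positive orthant (for which the Bregman step is a closed-form multiplicative update), a plain stochastic gradient rather than the paper's variance-reduced estimator, and a crude clip to keep the extrapolated point positive:

import numpy as np

def bpsge_sketch(stoch_grad, x0, step=0.1, lam=0.01, beta=0.3, iters=500, eps=1e-12):
    # Kernel h(x) = sum_i (x_i log x_i - x_i), so grad h(x) = log x. The Bregman
    # proximal step for f(x) + lam * sum(x) (the l1 norm on x > 0) is then
    # x+ = z * exp(-step * (g + lam)), where z is the extrapolated point.
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        z = np.maximum(x + beta * (x - x_prev), eps)  # extrapolation, clipped to stay positive
        g = stoch_grad(z)                             # stochastic gradient at the extrapolated point
        x_prev, x = x, z * np.exp(-step * (g + lam))  # closed-form Bregman step
    return x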
10. He, Lulu, Jimin Ye, and Jianwei E. "Nonconvex optimization with inertial proximal stochastic variance reduction gradient". Information Sciences 648 (November 2023): 119546. http://dx.doi.org/10.1016/j.ins.2023.119546.


Dissertations on the topic "Inertial Bregman proximal gradient"

1. Godeme, Jean-Jacques. "Phase retrieval with non-Euclidean Bregman based geometry". Electronic thesis or dissertation, Normandie, 2024. http://www.theses.fr/2024NORMC214.

Abstract:
In this work, we investigate the phase retrieval problem for real-valued signals in finite dimension, a challenge encountered across various scientific and engineering disciplines, and we explore two complementary approaches: retrieval with and without regularization. In both settings, our work focuses on relaxing the Lipschitz-smoothness assumption generally required by first-order splitting algorithms, which does not hold for phase retrieval cast as a minimization problem. The key idea is to replace the Euclidean geometry by a non-Euclidean Bregman divergence associated to an appropriate kernel. We use a Bregman gradient/mirror descent algorithm with this divergence to solve the phase retrieval problem without regularization, and we show exact (up to a global sign) recovery both in a deterministic setting and with high probability for a sufficient number of random measurements (Gaussian and Coded Diffraction Patterns). Furthermore, we establish the robustness of this approach against small additive noise. Shifting to regularized phase retrieval, we first develop and analyze an Inertial Bregman Proximal Gradient algorithm for minimizing the sum of two functions in finite dimension, one of which is convex and possibly nonsmooth and the second relatively smooth in the Bregman geometry. We provide both global and local convergence guarantees for this algorithm. Finally, we study noiseless and stable recovery of low-complexity regularized phase retrieval. For this, we formulate the problem as the minimization of an objective functional involving a nonconvex smooth data fidelity term and a convex regularizer promoting solutions conforming to some notion of low complexity related to their nonsmoothness points. We establish conditions for exact and stable recovery and provide sample complexity bounds for random measurements to ensure that these conditions hold. These sample bounds depend on the low complexity of the signals to be recovered. Our new results allow us to go far beyond the case of sparse phase retrieval.
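The central device of the thesis, trading the Euclidean geometry for a Bregman divergence whose kernel matches the quartic growth of the phase retrieval objective, can be sketched concretely. We use the kernel h(x) = ¼‖x‖⁴ + ½‖x‖², the usual choice in the Bregman phase retrieval literature (whether it is the thesis's exact kernel is our assumption), and a bare Bregman gradient step rather than the thesis's inertial proximal algorithm; the step-size rule is a rough heuristic of ours:

import numpy as np

def grad_h(x):
    # h(x) = 0.25 * ||x||^4 + 0.5 * ||x||^2  =>  grad h(x) = (||x||^2 + 1) * x
    return (x @ x + 1.0) * x

def grad_h_inverse(v):
    # Invert (||x||^2 + 1) x = v: x is parallel to v, with radius r solving r^3 + r = ||v||.
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(v)
    roots = np.roots([1.0, 0.0, 1.0, -nv])    # r^3 + r - ||v|| = 0 (unique real root)
    r = roots[np.argmin(np.abs(roots.imag))].real
    return (r / nv) * v

def bregman_gradient_step(grad_f, x, step):
    # Mirror step: x+ = (grad h)^{-1}( grad h(x) - step * grad f(x) )
    return grad_h_inverse(grad_h(x) - step * grad_f(x))

# Phase retrieval objective f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 16))
y = (A @ rng.standard_normal(16)) ** 2

def grad_f(x):
    Ax = A @ x
    return A.T @ ((Ax ** 2 - y) * Ax) / len(y)

n2 = np.sum(A ** 2, axis=1)                   # ||a_i||^2
step = 1.0 / (3 * n2 ** 2 + n2 * y).mean()    # crude relative-smoothness bound
x = rng.standard_normal(16) * 0.1
for _ in range(500):
    x = bregman_gradient_step(grad_f, x, step)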

Book chapters on the topic "Inertial Bregman proximal gradient"

1. Mukkamala, Mahesh Chandra, Felix Westerkamp, Emanuel Laude, Daniel Cremers, and Peter Ochs. "Bregman Proximal Gradient Algorithms for Deep Matrix Factorization". In Lecture Notes in Computer Science, 204–15. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75549-2_17.


Conference papers on the topic "Inertial Bregman proximal gradient"

1. Li, Huan, Wenjuan Zhang, Shujian Huang, and Feng Xiao. "Poisson Noise Image Restoration Based on Bregman Proximal Gradient". In 2023 6th International Conference on Computer Network, Electronic and Automation (ICCNEA). IEEE, 2023. http://dx.doi.org/10.1109/iccnea60107.2023.00058.

2. Pu, Wenqiang, Jiawei Zhang, Rui Zhou, Xiao Fu, and Mingyi Hong. "A Smoothed Bregman Proximal Gradient Algorithm for Decentralized Nonconvex Optimization". In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10448285.
