Academic literature on the topic "Kurdyka-Lojasiewicz"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Kurdyka-Lojasiewicz".

Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Kurdyka-Lojasiewicz"

1

Tran, Phuong Minh, and Nhan Thanh Nguyen. "On the Convergence of Bounded Solutions of Non Homogeneous Gradient-like Systems". Journal of Advanced Engineering and Computation 1, no. 1 (June 8, 2017): 61. http://dx.doi.org/10.25073/jaec.201711.50.

Full text
Abstract
We study the long time behavior of the bounded solutions of a non homogeneous gradient-like system which admits a strict Lyapunov function. More precisely, we show that any bounded solution of the gradient-like system converges to an accumulation point as time goes to infinity under some mild hypotheses. As in the homogeneous case, the key assumptions for this system are the angle condition and the Kurdyka-Lojasiewicz inequality. The convergence result is proved under an L1-condition on the perturbation term. Moreover, if the Lyapunov function satisfies a Lojasiewicz inequality, then a rate of convergence is also obtained.
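For context, the Kurdyka-Lojasiewicz inequality invoked in this and several of the following works is usually stated as follows (a standard formulation, not quoted from the article above): a proper lower semicontinuous function f satisfies the KL property at a critical point x̄ if there exist η > 0, a neighborhood U of x̄, and a concave desingularizing function φ: [0, η) → [0, ∞) with φ(0) = 0, φ continuously differentiable and φ' > 0 on (0, η), such that

\[
\varphi'\bigl(f(x) - f(\bar{x})\bigr)\,\operatorname{dist}\bigl(0, \partial f(x)\bigr) \ge 1
\qquad \text{for all } x \in U \text{ with } f(\bar{x}) < f(x) < f(\bar{x}) + \eta .
\]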
2

Papa Quiroz, Erik A., and Jose L. Huaman Ñaupa. "Método del Punto Proximal Inexacto Usando Cuasi-Distancias para Optimización de Funciones KL." Pesquimat 25, no. 1 (June 30, 2022): 22–35. http://dx.doi.org/10.15381/pesquimat.v25i1.23144.

Full text
Abstract
We introduce an inexact proximal point algorithm using quasi-distances to solve a minimization problem in Euclidean space. The algorithm is motivated by the proximal method introduced by Attouch et al. [1], but here we consider quasi-distances instead of the Euclidean distance, functions that satisfy the Kurdyka-Lojasiewicz inequality, and vector errors in the residual of the critical point of the regularized proximal subproblems. Under some additional assumptions, we obtain global convergence of the sequence generated by the algorithm to a critical point of the problem.
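As a rough sketch of the type of iteration described above (this is not the authors' algorithm; the quasi-distance, the inner solver, and the test function below are hypothetical illustrations), each step minimizes the objective plus a quasi-distance regularization around the current iterate, with the subproblem solved only inexactly:

```python
import numpy as np
from scipy.optimize import minimize

def quasi_distance(x, y, w_fwd=1.0, w_bwd=2.0):
    """Toy asymmetric quasi-distance: q(x, y) != q(y, x) in general."""
    d = y - x
    return np.sqrt(np.sum(np.where(d >= 0, w_fwd, w_bwd) * d**2))

def proximal_point_quasi(f, x0, lam=1.0, tol=1e-6, max_iter=200):
    """Proximal point iteration x_{k+1} ~ argmin f(x) + lam * q(x_k, x)^2.
    Each subproblem is solved only approximately by a generic local solver,
    which plays the role of the inexactness (error term) in the residual."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        sub = lambda z: f(z) + lam * quasi_distance(x, z) ** 2
        x_new = minimize(sub, x, method="Nelder-Mead").x  # inexact inner solve
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical nonconvex test function of KL type
f = lambda z: np.sum(np.abs(z) ** 1.5) + 0.1 * np.sin(z[0])
print(proximal_point_quasi(f, x0=[2.0, -1.0]))
```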
3

Luo, Zhijun, Zhibin Zhu, and Benxin Zhang. "A LogTVSCAD Nonconvex Regularization Model for Image Deblurring in the Presence of Impulse Noise". Discrete Dynamics in Nature and Society 2021 (October 26, 2021): 1–19. http://dx.doi.org/10.1155/2021/3289477.

Full text
Abstract
This paper proposes a nonconvex model (called LogTVSCAD) for deblurring images corrupted by impulse noise, using the log-function penalty as the regularizer and adopting the smoothly clipped absolute deviation (SCAD) function as the data-fitting term. The proposed nonconvex model can effectively overcome the poor performance of the classical TVL1 model under high levels of impulse noise. A difference-of-convex-functions algorithm (DCA) is proposed to solve the nonconvex model, and the model subproblem is solved with the alternating direction method of multipliers (ADMM). Global convergence is discussed based on the Kurdyka-Lojasiewicz property. Experimental results show the advantages of the proposed nonconvex model over existing models.
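The DCA outer loop mentioned in the abstract can be illustrated on a much simpler, hypothetical log-penalized problem (this sketch is generic and unrelated to the LogTVSCAD model or its ADMM subproblem solver):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * |x| (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dca_log_penalty(b, lam=1.0, n_iter=50):
    """Generic DC algorithm (DCA) for the toy problem
        min_x 0.5 * (x - b)^2 + lam * log(1 + |x|),
    using the DC split g(x) = 0.5*(x-b)^2 + lam*|x| (convex) and
    h(x) = lam * (|x| - log(1 + |x|)) (convex), so the objective is g - h.
    Each step linearizes h and solves the convex subproblem in closed form."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        y = lam * np.sign(x) * np.abs(x) / (1.0 + np.abs(x))  # gradient of h at x
        # argmin_x 0.5*(x-b)^2 + lam*|x| - y*x  ==  soft_threshold(b + y, lam)
        x = soft_threshold(b + y, lam)
    return x

print(dca_log_penalty(np.array([3.0, 0.4, -2.0]), lam=1.0))
```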
4

Bento, G. C., and A. Soubeyran. "A Generalized Inexact Proximal Point Method for Nonsmooth Functions that Satisfies Kurdyka Lojasiewicz Inequality". Set-Valued and Variational Analysis 23, no. 3 (February 20, 2015): 501–17. http://dx.doi.org/10.1007/s11228-015-0319-6.

Full text
5

Bonettini, Silvia, Danilo Pezzi, Marco Prato, and Simone Rebegoldi. "On an iteratively reweighted linesearch based algorithm for nonconvex composite optimization". Inverse Problems, April 4, 2023. http://dx.doi.org/10.1088/1361-6420/acca43.

Full text
Abstract
In this paper we propose a new algorithm for solving a class of nonsmooth nonconvex problems, obtained by combining the iteratively reweighted scheme with a finite number of forward-backward iterations based on a linesearch procedure. The new method overcomes some limitations of linesearch forward-backward methods, since it can also be applied to minimize functions containing terms that are both nonsmooth and nonconvex. Moreover, the combined scheme can take advantage of acceleration techniques consisting of suitable selection rules for the algorithm parameters. We develop the convergence analysis of the new method within the framework of the Kurdyka-Lojasiewicz property. Finally, we present the results of numerical experiments on microscopy image super-resolution, showing that the performance of our method is comparable or superior to that of other algorithms designed for this specific application.
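For illustration, a bare-bones forward-backward (proximal gradient) iteration with a backtracking linesearch on the smooth term looks like the sketch below; it is a generic baseline, not the iteratively reweighted method proposed in the paper, and the LASSO-type test data are hypothetical:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_linesearch(grad_f, f, g_prox, x0, step=1.0,
                                beta=0.5, n_iter=100):
    """Generic forward-backward (proximal gradient) method with an
    Armijo-style backtracking linesearch on the smooth part f.
    g_prox(v, t) must return the proximal operator of t*g at v."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad_f(x)
        t = step
        while True:
            x_new = g_prox(x - t * g, t)          # backward (proximal) step
            d = x_new - x
            # sufficient-decrease condition on the smooth term
            if f(x_new) <= f(x) + g @ d + (0.5 / t) * d @ d:
                break
            t *= beta                              # shrink the step and retry
        x = x_new
    return x

# Toy LASSO-type instance (hypothetical data): min 0.5*||Ax-b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
print(forward_backward_linesearch(grad_f, f, lambda v, t: prox_l1(v, lam * t),
                                  np.zeros(5)))
```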
6

Yuan, Ganzhao, and Bernard Ghanem. "A Proximal Alternating Direction Method for Semi-Definite Rank Minimization". Proceedings of the AAAI Conference on Artificial Intelligence 30, no. 1 (March 2, 2016). http://dx.doi.org/10.1609/aaai.v30i1.10228.

Full text
Abstract
Semi-definite rank minimization problems model a wide range of applications in both signal processing and machine learning fields. This class of problem is NP-hard in general. In this paper, we propose a proximal Alternating Direction Method (ADM) for the well-known semi-definite rank regularized minimization problem. Specifically, we first reformulate this NP-hard problem as an equivalent biconvex MPEC (Mathematical Program with Equilibrium Constraints), and then solve it using proximal ADM, which involves solving a sequence of structured convex semi-definite subproblems to find a desirable solution to the original rank regularized optimization problem. Moreover, based on the Kurdyka-Lojasiewicz inequality, we prove that the proposed method always converges to a KKT stationary point under mild conditions. We apply the proposed method to the widely studied and popular sensor network localization problem. Our extensive experiments demonstrate that the proposed algorithm outperforms state-of-the-art low-rank semi-definite minimization algorithms in terms of solution quality.
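In generic notation, the problem class referred to in this abstract is the rank-regularized semi-definite program (written here only to fix notation; the loss term f and the regularization weight λ > 0 are placeholders):

\[
\min_{X \succeq 0} \; f(X) + \lambda \, \operatorname{rank}(X).
\]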

Theses on the topic "Kurdyka-Lojasiewicz"

1

Nguyen, Trong Phong. "Inégalités de Kurdyka-Lojasiewicz et convexité : algorithmes et applications". Thesis, Toulouse 1, 2017. http://www.theses.fr/2017TOU10022/document.

Full text
Abstract
This thesis focuses on first-order descent methods for minimization problems. It has three parts. First, we give an overview of local and global error bounds and try to provide the first bricks of a unified theory by showing the centrality of the Lojasiewicz gradient inequality. In the second part, using the Kurdyka-Lojasiewicz (KL) inequality, we provide new tools to compute the complexity of first-order descent methods in convex minimization. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence. This result inaugurates a simple methodology: derive an error bound, compute the KL desingularizing function whenever possible, identify the essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Lastly, we extend the extragradient method to minimize the sum of two functions, the first one being smooth and the second convex. Under the Kurdyka-Lojasiewicz assumption, we prove that the sequence produced by the extragradient method converges to a critical point of this problem and has finite length. When both functions are convex, we provide an O(1/k) convergence rate, which is classical for the gradient method. Furthermore, we show that our complexity result from the second part can be applied to this method. Considering the extragradient method is also the occasion to describe exact line search for proximal decomposition methods. We provide details for the implementation of this scheme for the ℓ1-regularized least squares problem and give numerical results which suggest that combining non-accelerated methods with exact line search can be a competitive choice.
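For orientation, a generic extragradient step for minimizing a smooth function f plus a convex nonsmooth function g can be written as follows (a standard Korpelevich-type scheme with step size γ; not necessarily the exact parameterization analyzed in the thesis):

\[
y_k = \operatorname{prox}_{\gamma g}\!\bigl(x_k - \gamma \nabla f(x_k)\bigr), \qquad
x_{k+1} = \operatorname{prox}_{\gamma g}\!\bigl(x_k - \gamma \nabla f(y_k)\bigr).
\]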
2

Assunção Filho, Pedro Bonfim de. "Um algoritmo proximal com quase-distância". Universidade Federal de Goiás, 2015. http://repositorio.bc.ufg.br/tede/handle/tede/4521.

Full text
Abstract
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
In this work, based on [1, 18], we study the convergence of the proximal point method (PPM) regularized by a quasi-distance, applied to an optimization problem. The objective function considered is not necessarily convex and satisfies the Kurdyka-Lojasiewicz property around its generalized critical points. More precisely, we show that any bounded sequence generated by the PPM converges to a generalized critical point.
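For readers unfamiliar with the regularizing term, a quasi-distance is commonly defined as a map q: X × X → [0, ∞) that behaves like a distance except that symmetry is not required (a standard definition, not specific to this dissertation):

\[
q(x, y) \ge 0, \qquad q(x, y) = q(y, x) = 0 \iff x = y, \qquad q(x, z) \le q(x, y) + q(y, z),
\]

while q(x, y) = q(y, x) need not hold.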
3

Sousa Júnior, Valdinês Leite de. "Sobre a convergência de métodos de descida em otimização não-suave: aplicações à ciência comportamental". Universidade Federal de Goiás, 2017. http://repositorio.bc.ufg.br/tede/handle/tede/6864.

Full text
Abstract
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico - CNPq
In this work, we investigate four different types of descent methods: a dual descent method in the scalar setting and three versions of the proximal point method (one exact and two inexact) in multiobjective optimization. The first is restricted to functions that satisfy the Kurdyka-Lojasiewicz property and uses a quasi-distance as the regularization function. In the other three, the objective is to study the convergence of exact and inexact multiobjective proximal point methods for a particular class of multiobjective functions that are not necessarily differentiable. For the inexact methods, we choose a proximal distance as the regularization term; this well-known distance allows us to analyze the convergence of the method under various settings. Applications in behavioral sciences are analyzed in the sense of the variational rationality approach.
4

Bensaid, Bilel. "Analyse et développement de nouveaux optimiseurs en Machine Learning". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0218.

Full text
Abstract
Over the last few years, developing an explainable and frugal artificial intelligence (AI) has become a fundamental challenge, especially when AI is used in safety-critical or embedded systems and demands ever more energy. The issue is all the more serious given the huge number of hyperparameters to tune to make the models work. Among these parameters, the optimizer as well as its associated tunings appear as the most important levers to improve these models [196]. This thesis focuses on the analysis of the learning process/optimizer for neural networks, by identifying mathematical properties closely related to these two challenges. First, undesirable behaviors preventing the design of explainable and frugal networks are identified. Then, these behaviors are explained using two tools: Lyapunov stability and geometric integrators. Through numerical experiments, stabilizing the learning process improves the overall performance and allows the design of shallow networks. Theoretically, the suggested point of view makes it possible to derive convergence guarantees for classical deep learning optimizers. The same approach is valuable for mini-batch optimization, where unwelcome phenomena proliferate: the concept of a balanced splitting scheme becomes essential to enhance the understanding of the learning process and improve its robustness. This study paves the way for the design of new adaptive optimizers, exploiting the deep relation between robust optimization and invariant-preserving schemes for dynamical systems.
5

Xu, Yangyang. "Block Coordinate Descent for Regularized Multi-convex Optimization". Thesis, 2013. http://hdl.handle.net/1911/72066.

Full text
Abstract
This thesis considers regularized block multi-convex optimization, where the feasible set and objective function are generally non-convex but convex in each block of variables. I review some of its interesting examples and propose a generalized block coordinate descent (BCD) method. The generalized BCD uses three different block-update schemes. Based on the property of one block subproblem, one can freely choose one of the three schemes to update the corresponding block of variables. Appropriate choices of block-update schemes can often speed up the algorithm and greatly save computing time. Under certain conditions, I show that any limit point satisfies the Nash equilibrium conditions. Furthermore, I establish its global convergence and estimate its asymptotic convergence rate by assuming a property based on the Kurdyka-Lojasiewicz inequality. As a consequence, this thesis gives a global linear convergence result of cyclic block coordinate descent for strongly convex optimization. The proposed algorithms are adapted for factorizing nonnegative matrices and tensors, as well as completing them from their incomplete observations. The algorithms were tested on synthetic data, hyperspectral data, as well as image sets from the CBCL, ORL and Swimmer databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality.
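A minimal sketch of cyclic block coordinate descent on a multi-convex toy problem (nonnegative matrix factorization) is given below; it illustrates the block-update idea with projected prox-linear steps and is not the thesis' implementation:

```python
import numpy as np

def bcd_nmf(M, r, n_iter=500, seed=0):
    """Cyclic block coordinate descent for the multi-convex toy problem
        min_{W>=0, H>=0} 0.5 * ||M - W H||_F^2,
    updating each block with a projected (prox-linear) gradient step.
    This is a generic sketch of the block-update idea, not the thesis code."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Block W: gradient of 0.5*||M - WH||^2 w.r.t. W is (WH - M) H^T
        L_w = np.linalg.norm(H @ H.T, 2) + 1e-12   # Lipschitz constant of the W-block
        W = np.maximum(W - ((W @ H - M) @ H.T) / L_w, 0.0)
        # Block H: gradient w.r.t. H is W^T (WH - M)
        L_h = np.linalg.norm(W.T @ W, 2) + 1e-12   # Lipschitz constant of the H-block
        H = np.maximum(H - (W.T @ (W @ H - M)) / L_h, 0.0)
    return W, H

# Hypothetical low-rank nonnegative data
rng = np.random.default_rng(1)
M = rng.random((30, 4)) @ rng.random((4, 20))
W, H = bcd_nmf(M, r=4)
print("relative error:", np.linalg.norm(M - W @ H) / np.linalg.norm(M))
```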
