Contents
Academic literature on the topic "Kurdyka-Lojasiewicz"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Kurdyka-Lojasiewicz".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Kurdyka-Lojasiewicz"
Tran, Phuong Minh, and Nhan Thanh Nguyen. "On the Convergence of Bounded Solutions of Non Homogeneous Gradient-like Systems". Journal of Advanced Engineering and Computation 1, no. 1 (June 8, 2017): 61. http://dx.doi.org/10.25073/jaec.201711.50.
Papa Quiroz, Erik A., and Jose L. Huaman Ñaupa. "Método del Punto Proximal Inexacto Usando Cuasi-Distancias para Optimización de Funciones KL." Pesquimat 25, no. 1 (June 30, 2022): 22–35. http://dx.doi.org/10.15381/pesquimat.v25i1.23144.
Luo, Zhijun, Zhibin Zhu, and Benxin Zhang. "A LogTVSCAD Nonconvex Regularization Model for Image Deblurring in the Presence of Impulse Noise". Discrete Dynamics in Nature and Society 2021 (October 26, 2021): 1–19. http://dx.doi.org/10.1155/2021/3289477.
Bento, G. C., and A. Soubeyran. "A Generalized Inexact Proximal Point Method for Nonsmooth Functions that Satisfies Kurdyka Lojasiewicz Inequality". Set-Valued and Variational Analysis 23, no. 3 (February 20, 2015): 501–17. http://dx.doi.org/10.1007/s11228-015-0319-6.
Bonettini, Silvia, Danilo Pezzi, Marco Prato, and Simone Rebegoldi. "On an iteratively reweighted linesearch based algorithm for nonconvex composite optimization". Inverse Problems, April 4, 2023. http://dx.doi.org/10.1088/1361-6420/acca43.
Yuan, Ganzhao, and Bernard Ghanem. "A Proximal Alternating Direction Method for Semi-Definite Rank Minimization". Proceedings of the AAAI Conference on Artificial Intelligence 30, no. 1 (March 2, 2016). http://dx.doi.org/10.1609/aaai.v30i1.10228.
Theses on the topic "Kurdyka-Lojasiewicz"
Nguyen, Trong Phong. "Inégalités de Kurdyka-Lojasiewicz et convexité : algorithmes et applications". Thesis, Toulouse 1, 2017. http://www.theses.fr/2017TOU10022/document.
This thesis focuses on first-order descent methods for minimization problems and consists of three parts. First, we give an overview of local and global error bounds, attempting to lay the first bricks of a unified theory by showing the centrality of the Lojasiewicz gradient inequality. In the second part, using the Kurdyka-Lojasiewicz (KL) inequality, we provide new tools to compute the complexity of first-order descent methods in convex minimization. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence. This result inaugurates a simple methodology: derive an error bound, compute the KL desingularizing function whenever possible, identify the essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Lastly, we extend the extragradient method to minimize the sum of two functions, the first smooth and the second convex. Under the Kurdyka-Lojasiewicz assumption, we prove that the sequence produced by the extragradient method has finite length and converges to a critical point of this problem. When both functions are convex, we provide an O(1/k) convergence rate. Furthermore, we show that the complexity result of the second part applies to this method. Studying the extragradient method is also the occasion to describe exact line search for proximal decomposition methods. We detail the implementation of this scheme for the ℓ1 regularized least squares problem and give numerical results which suggest that combining nonaccelerated methods with exact line search can be a competitive choice.
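The ℓ1 regularized least squares problem mentioned in this abstract is commonly minimized by a proximal gradient (ISTA) iteration. The sketch below is a generic illustration under our own choices (constant step size 1/L, a tiny identity test problem), not the thesis's exact-line-search scheme:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, iters=200):
    # Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # The objective is semialgebraic, hence a KL function.
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part's gradient
    t = 1.0 / L                     # constant step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)    # gradient of the smooth term
        x = soft_threshold(x - t * grad, t * lam)
    return x

# With A = I the minimizer is soft_threshold(b, lam), which makes the result easy to check.
A = np.eye(2)
b = np.array([3.0, 0.5])
x = ista(A, b, lam=1.0)
```

With `A = I` the iteration reaches the closed-form minimizer `[2.0, 0.0]` after one step, which is a convenient sanity check for any implementation.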
Assunção, Filho Pedro Bonfim de. "Um algoritmo proximal com quase-distância". Universidade Federal de Goiás, 2015. http://repositorio.bc.ufg.br/tede/handle/tede/4521.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
In this work, based on [1, 18], we study the convergence of the proximal point method (MPP) regularized by a quasi-distance, applied to an optimization problem. The objective function considered is not necessarily convex and satisfies the Kurdyka-Lojasiewicz property around its generalized critical points. More specifically, we show that any bounded sequence generated by the MPP converges to a generalized critical point.
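The proximal point iteration with a quasi-distance described in this abstract can be illustrated on a one-dimensional nonconvex polynomial (semialgebraic, hence a KL function). The asymmetric quasi-distance `q` and the brute-force grid-search subproblem solver below are our own illustrative choices, not the dissertation's algorithm:

```python
def f(x):
    # Nonconvex polynomial objective with critical points at -1, 0, and 1.
    return x**4 - 2.0 * x**2

def q(x, y):
    # An asymmetric quasi-distance: moving right costs twice as much as moving left.
    return 2.0 * (y - x) if y >= x else (x - y)

def proximal_point(x0, lam=1.0, iters=60):
    # x_{k+1} minimizes f(y) + lam * q(x_k, y)^2; the subproblem is solved
    # by brute-force grid search purely for illustration.
    grid = [i * 0.001 - 3.0 for i in range(6001)]
    x = x0
    for _ in range(iters):
        x = min(grid, key=lambda y: f(y) + lam * q(x, y) ** 2)
    return x

x_star = proximal_point(2.0)   # iterates descend from 2.0 toward the critical point 1.0
```

Starting from `x0 = 2.0`, the iterates decrease the objective monotonically and settle at the nearby critical point `x = 1.0` (up to the grid resolution), matching the kind of convergence to a generalized critical point that the abstract establishes for bounded MPP sequences.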
Sousa, Júnior Valdinês Leite de. "Sobre a convergência de métodos de descida em otimização não-suave: aplicações à ciência comportamental". Universidade Federal de Goiás, 2017. http://repositorio.bc.ufg.br/tede/handle/tede/6864.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico - CNPq
In this work, we investigate four descent methods: a dual descent method in the scalar setting and multiobjective proximal point methods (one exact and two inexact versions). The first is restricted to functions that satisfy the Kurdyka-Lojasiewicz property, with a quasi-distance used as the regularization function. In the other three, the objective is to study the convergence of exact and inexact multiobjective proximal point methods for a particular class of multiobjective functions that are not necessarily differentiable. For the inexact methods, we choose a proximal distance as the regularization term; this well-known distance allows us to analyze the convergence of the method under various settings. Applications in behavioral sciences are analyzed in the sense of the variational rationality approach.
Bensaid, Bilel. "Analyse et développement de nouveaux optimiseurs en Machine Learning". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0218.
Over the last few years, developing explainable and frugal artificial intelligence (AI) has become a fundamental challenge, especially when AI is used in safety-critical systems and demands ever more energy. The issue is all the more serious given the huge number of hyperparameters that must be tuned to make the models work. Among these parameters, the optimizer and its associated tunings appear to be the most important levers for improving these models [196]. This thesis focuses on the analysis of the learning process and optimizer for neural networks, identifying mathematical properties closely related to these two challenges. First, undesirable behaviors that prevent the design of explainable and frugal networks are identified. These behaviors are then explained using two tools: Lyapunov stability and geometric integrators. Numerical experiments show that stabilizing the learning process improves overall performance and allows the design of shallow networks. Theoretically, the suggested point of view yields convergence guarantees for classical deep learning optimizers. The same approach is valuable for mini-batch optimization, where unwelcome phenomena proliferate: the concept of a balanced splitting scheme becomes essential to better understand the learning process and improve its robustness. This study paves the way for the design of new adaptive optimizers that exploit the deep relation between robust optimization and invariant-preserving schemes for dynamical systems.
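The stability viewpoint in this abstract can be seen in miniature by treating gradient descent as an explicit Euler discretization of the gradient flow: on an L-smooth quadratic, the iteration map is a contraction only for step sizes below 2/L. The quadratic and the thresholds below are a textbook illustration, not an example taken from the thesis:

```python
def gradient_descent(step, x0=1.0, iters=100, L=4.0):
    # Gradient descent on f(x) = (L/2) * x^2, i.e. explicit Euler for x' = -L*x.
    # The update map x -> (1 - step*L) * x is a contraction iff 0 < step < 2/L.
    x = x0
    for _ in range(iters):
        x -= step * (L * x)   # gradient of f at x is L*x
    return x

stable = gradient_descent(step=0.4)    # 0.4 < 2/L = 0.5: |1 - 1.6| < 1, iterates shrink to 0
unstable = gradient_descent(step=0.6)  # 0.6 > 2/L: |1 - 2.4| > 1, iterates blow up
```

The same Lyapunov-style reasoning (does the update map contract a suitable energy?) is what underlies the stability analyses of more elaborate deep learning optimizers.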
Xu, Yangyang. "Block Coordinate Descent for Regularized Multi-convex Optimization". Thesis, 2013. http://hdl.handle.net/1911/72066.