Academic literature on the topic "DC (Difference of Convex functions) programming and DCA (DC Algorithms)"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Contents
Consult the lists of relevant articles, books, theses, conference papers, and other academic sources on the topic "DC (Difference of Convex functions) programming and DCA (DC Algorithms)".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "DC (Difference of Convex functions) programming and DCA (DC Algorithms)"
Le, Hoai Minh, Hoai An Le Thi, Tao Pham Dinh, and Van Ngai Huynh. "Block Clustering Based on Difference of Convex Functions (DC) Programming and DC Algorithms". Neural Computation 25, no. 10 (October 2013): 2776–807. http://dx.doi.org/10.1162/neco_a_00490.
Le Thi, Hoai An, and Vinh Thanh Ho. "Online Learning Based on Online DCA and Application to Online Classification". Neural Computation 32, no. 4 (April 2020): 759–93. http://dx.doi.org/10.1162/neco_a_01266.
Le Thi, Hoai An, Xuan Thanh Vo, and Tao Pham Dinh. "Efficient Nonnegative Matrix Factorization by DC Programming and DCA". Neural Computation 28, no. 6 (June 2016): 1163–216. http://dx.doi.org/10.1162/neco_a_00836.
Kebaili, Zahira, and Mohamed Achache. "Solving nonmonotone affine variational inequalities problem by DC programming and DCA". Asian-European Journal of Mathematics 13, no. 03 (December 17, 2018): 2050067. http://dx.doi.org/10.1142/s1793557120500679.
Phan, Duy Nhat, Hoai An Le Thi, and Tao Pham Dinh. "Sparse Covariance Matrix Estimation by DCA-Based Algorithms". Neural Computation 29, no. 11 (November 2017): 3040–77. http://dx.doi.org/10.1162/neco_a_01012.
Wang, Meihua, Fengmin Xu, and Chengxian Xu. "A Branch-and-Bound Algorithm Embedded with DCA for DC Programming". Mathematical Problems in Engineering 2012 (2012): 1–16. http://dx.doi.org/10.1155/2012/364607.
Li, Jieya, and Liming Yang. "Robust sparse principal component analysis by DC programming algorithm". Journal of Intelligent & Fuzzy Systems 39, no. 3 (October 7, 2020): 3183–93. http://dx.doi.org/10.3233/jifs-191617.
Le Thi, Hoai An, Manh Cuong Nguyen, and Tao Pham Dinh. "A DC Programming Approach for Finding Communities in Networks". Neural Computation 26, no. 12 (December 2014): 2827–54. http://dx.doi.org/10.1162/neco_a_00673.
An, Le Thi Hoai, and Pham Dinh Tao. "The DC (Difference of Convex Functions) Programming and DCA Revisited with DC Models of Real World Nonconvex Optimization Problems". Annals of Operations Research 133, no. 1-4 (January 2005): 23–46. http://dx.doi.org/10.1007/s10479-004-5022-1.
Ji, Ying, and Shaojian Qu. "Proximal Point Algorithms for Vector DC Programming with Applications to Probabilistic Lot Sizing with Service Levels". Discrete Dynamics in Nature and Society 2017 (2017): 1–8. http://dx.doi.org/10.1155/2017/5675183.
Texto completoTesis sobre el tema "DC (Difference of Convex functions) programming and DCA (DC Algorithms)"
Luu, Hoang Phuc Hau. "Techniques avancées d'apprentissage automatique basées sur DCA et applications à la maintenance prédictive". Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0139.
Stochastic optimization is of major importance in the age of big data and artificial intelligence. This is attributed to the prevalence of randomness/uncertainty as well as the ever-growing availability of data, both of which render the deterministic approach infeasible. This thesis studies nonconvex stochastic optimization and aims at resolving real-world challenges, including scalability, high variance, endogenous uncertainty, and correlated noise. The main theme of the thesis is to design and analyze novel stochastic algorithms based on DC (difference-of-convex functions) programming and DCA (DC algorithm) to meet new issues emerging in machine learning, particularly deep learning. As an industrial application, we apply the proposed methods to predictive maintenance, where the core problem is essentially a time series forecasting problem. The thesis consists of six chapters. Preliminaries on DC programming and DCA are presented in Chapter 1. Chapter 2 studies a class of DC programs whose objective functions contain a large-sum structure. We propose two new stochastic DCA schemes, DCA-SVRG and DCA-SAGA, that combine variance reduction techniques, and we investigate two sampling strategies (with and without replacement). The proposed algorithms' almost sure convergence to DC critical points is established, and the methods' complexity is examined. Chapter 3 studies general stochastic DC programs (the distribution of the associated random variable is arbitrary) where a stream of i.i.d. (independent and identically distributed) samples from the distribution of interest is available. We design stochastic DCA schemes in the online setting to directly solve this theoretical learning problem. Chapter 4 considers a class of stochastic DC programs where endogenous uncertainty is in play and i.i.d. samples are unavailable. Instead, we assume that only Markov chains that converge quickly enough to the target distributions can be accessed.
We then design a stochastic algorithm termed Markov chain stochastic DCA (MCSDCA) and provide the convergence analysis in both asymptotic and nonasymptotic senses. The proposed method is then applied to deep learning via PDE (partial differential equation) regularization, yielding two MCSDCA realizations, MCSDCA-odLD and MCSDCA-udLD, based on overdamped and underdamped Langevin dynamics, respectively. Predictive maintenance applications are discussed in Chapter 5. Remaining useful life (RUL) prediction and capacity estimation are the two central problems investigated, both of which may be framed as time series prediction problems using the data-driven approach. The MCSDCA-odLD and MCSDCA-udLD schemes established in Chapter 4 are used to train appropriate deep neural networks for these models. In comparison to various baseline optimizers in deep learning, numerical studies show that the two techniques are superior, and the prediction results nearly match the true RUL/capacity values. Finally, Chapter 6 concludes the thesis.
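The DCA iteration that underlies the works listed here admits a compact illustration. The sketch below is not code from any of the cited theses; the objective, its DC decomposition, and the closed-form subproblem are chosen purely for illustration. It minimizes the double-well function f(x) = x⁴/4 − x²/2 by writing f = g − h with convex g(x) = x⁴/4 and h(x) = x²/2; each DCA step takes a (sub)gradient of h at the current iterate and minimizes the resulting convex surrogate.

```python
# Minimal DCA sketch on a toy problem (illustrative only; not the
# algorithms proposed in the cited theses).
#
# Minimize f(x) = x**4/4 - x**2/2, written as f = g - h with
#   g(x) = x**4/4   (convex)
#   h(x) = x**2/2   (convex)
# DCA step: pick y_k in the subdifferential of h at x_k (here y_k = x_k),
# then minimize the convex surrogate g(x) - y_k * x.  Setting the
# derivative to zero gives x**3 = y_k, i.e. x_{k+1} = cbrt(y_k).

def dca(x0: float, iters: int = 50) -> float:
    x = x0
    for _ in range(iters):
        y = x  # y_k = h'(x_k) = x_k
        # closed-form solution of the convex subproblem: sign-preserving cube root
        x = abs(y) ** (1.0 / 3.0) * (1.0 if y >= 0 else -1.0)
    return x
```

Because the subproblem here has a closed form, the iterates simply follow x_{k+1} = y_k^{1/3} and converge to the stationary point x = 1 (or x = −1 from a negative start), the two minimizers of the double well; in the general case each DCA step requires solving a convex program.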
Phan, Duy Nhat. "Algorithmes basés sur la programmation DC et DCA pour l’apprentissage avec la parcimonie et l’apprentissage stochastique en grande dimension". Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0235/document.
With the increasing abundance of high-dimensional data these days, high-dimensional classification problems have been highlighted as a challenge in the machine learning community and have attracted a great deal of attention from researchers in the field. In recent years, sparse and stochastic learning techniques have proven to be useful for this kind of problem. In this thesis, we focus on developing optimization approaches for solving some classes of optimization problems in these two topics. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as among the most powerful tools in optimization. The thesis is composed of three parts. The first part tackles the issue of variable selection. The second part studies the problem of group variable selection. The final part of the thesis concerns stochastic learning. In the first part, we start with variable selection in Fisher's discriminant problem (Chapter 2) and the optimal scoring problem (Chapter 3), two different approaches to supervised classification in the high-dimensional setting, in which the number of features is much larger than the number of observations. Continuing this line of work, we study the structure of the sparse covariance matrix estimation problem and propose four appropriate DCA-based algorithms (Chapter 4). Two applications, in finance and classification, are conducted to illustrate the efficiency of our methods. The second part studies L_{p,0} regularization for group variable selection (Chapter 5). Using a DC approximation of the L_{p,0} norm, we show that the approximate problem is equivalent to the original problem for suitable parameters. Considering two equivalent reformulations of the approximate problem, we develop DCA-based algorithms to solve them.
Regarding applications, we implement the proposed algorithms for group feature selection in the optimal scoring problem and the estimation of multiple covariance matrices. In the third part of the thesis, we introduce a stochastic DCA for large-scale parameter estimation problems (Chapter 6) in which the objective function is a large sum of nonconvex components. As an application, we propose a special stochastic DCA for the log-linear model incorporating latent variables.
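The DC-approximation idea mentioned in this abstract can be made concrete with a small example. The capped-L1 surrogate below is a standard construction from the sparse-optimization literature and is only illustrative; it is not necessarily the exact approximation of the L_{p,0} norm used in the thesis.

```python
# Illustrative capped-L1 surrogate of the L0 "norm" and its DC split
# (a common construction in sparse optimization; not necessarily the
# exact approximation used in the cited thesis).
#
# ||x||_0 is approximated by sum_i phi(|x_i|), where for theta > 0
#   phi(t) = min(t / theta, 1),
# which is a difference of two convex functions of t >= 0:
#   phi(t) = t/theta - max(t/theta - 1, 0).

def phi(t: float, theta: float) -> float:
    """Capped-L1 penalty: linear up to theta, then saturated at 1."""
    return min(t / theta, 1.0)

def phi_dc(t: float, theta: float) -> float:
    """Same penalty written explicitly as a difference of convex parts."""
    g = t / theta                  # convex part g(t)
    h = max(t / theta - 1.0, 0.0)  # convex part h(t), subtracted
    return g - h

def l0_surrogate(x, theta: float = 0.1) -> float:
    """Smooth-ish DC approximation of the number of nonzeros of x."""
    return sum(phi(abs(xi), theta) for xi in x)
```

Since the penalty is the difference g − h of two convex functions, DCA applies directly: each step linearizes h at the current iterate and solves the remaining convex (weighted-L1-like) subproblem, which is the mechanism these approximation-based algorithms exploit.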