A selection of scientific literature on the topic "DC (Difference of Convex functions) programming and DCA (DC Algorithms)"

Format the source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, theses, conference papers, and other scholarly sources on the topic "DC (Difference of Convex functions) programming and DCA (DC Algorithms)".

Next to every source in the list you will find an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the source metadata.

Journal articles on the topic "DC (Difference of Convex functions) programming and DCA (DC Algorithms)":

1

Le, Hoai Minh, Hoai An Le Thi, Tao Pham Dinh, and Van Ngai Huynh. "Block Clustering Based on Difference of Convex Functions (DC) Programming and DC Algorithms." Neural Computation 25, no. 10 (October 2013): 2776–807. http://dx.doi.org/10.1162/neco_a_00490.

Abstract:
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
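For readers skimming these abstracts, it may help to recall the generic DCA template they all build on: write the objective as f = g - h with g and h convex, then at each iteration linearize h at the current point and minimize the resulting convex majorant. The sketch below is a minimal illustration of that template on a toy problem; the function names and the example are our own assumptions, not the specific scheme of the paper above.

```python
import numpy as np

def dca(subgrad_h, solve_convex_subproblem, x0, max_iter=100, tol=1e-6):
    """Generic DCA template for minimizing f(x) = g(x) - h(x) with g, h convex.

    At iteration k: pick y_k in the subdifferential of h at x_k, then solve the
    convex subproblem min_x g(x) - <y_k, x>.  `solve_convex_subproblem(y)` must
    return a minimizer of that subproblem.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = subgrad_h(x)                        # linearization of the concave part -h
        x_new = solve_convex_subproblem(y)      # convex subproblem
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Toy usage (hypothetical): minimize f(x) = 0.5*||x||^2 - ||x - a||_1,
# i.e. g(x) = 0.5*||x||^2 and h(x) = ||x - a||_1.
a = np.array([1.0, -2.0, 0.5])
subgrad_h = lambda x: np.sign(x - a)            # a subgradient of ||x - a||_1
solve_sub = lambda y: y                         # argmin_x 0.5*||x||^2 - <y, x> = y
x_star = dca(subgrad_h, solve_sub, x0=np.zeros(3))
```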
2

Le Thi, Hoai An, and Vinh Thanh Ho. "Online Learning Based on Online DCA and Application to Online Classification." Neural Computation 32, no. 4 (April 2020): 759–93. http://dx.doi.org/10.1162/neco_a_01266.

Abstract:
We investigate an approach based on DC (Difference of Convex functions) programming and DCA (DC Algorithm) for online learning techniques. The prediction problem of an online learner can be formulated as a DC program to which online DCA is applied. We propose two versions of the online DCA scheme, a complete and an approximate one, and prove their logarithmic and sublinear regrets, respectively. Six online DCA-based algorithms are developed for online binary linear classification. Numerical experiments on a variety of benchmark classification data sets show the efficiency of our proposed algorithms in comparison with state-of-the-art online classification algorithms.
3

Le Thi, Hoai An, Xuan Thanh Vo, and Tao Pham Dinh. "Efficient Nonnegative Matrix Factorization by DC Programming and DCA." Neural Computation 28, no. 6 (June 2016): 1163–216. http://dx.doi.org/10.1162/neco_a_00836.

Abstract:
In this letter, we consider the nonnegative matrix factorization (NMF) problem and several NMF variants. Two approaches based on DC (difference of convex functions) programming and DCA (DC algorithm) are developed. The first approach follows the alternating framework that requires solving, at each iteration, two nonnegativity-constrained least squares subproblems for which DCA-based schemes are investigated. The convergence property of the proposed algorithm is carefully studied. We show that with suitable DC decompositions, our algorithm generates most of the standard methods for the NMF problem. The second approach directly applies DCA on the whole NMF problem. Two algorithms—one computing all variables and one deploying a variable selection strategy—are proposed. The proposed methods are then adapted to solve various NMF variants, including the nonnegative factorization, the smooth regularization NMF, the sparse regularization NMF, the multilayer NMF, the convex/convex-hull NMF, and the symmetric NMF. We also show that our algorithms include several existing methods for these NMF variants as special versions. The efficiency of the proposed approaches is empirically demonstrated on both real-world and synthetic data sets. It turns out that our algorithms compete favorably with five state-of-the-art alternating nonnegative least squares algorithms.
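As a concrete illustration of how a DCA-based scheme for the nonnegativity-constrained least squares (NNLS) subproblems mentioned above can look, the sketch below uses one classical DC decomposition under which the DCA iteration reduces to a projected gradient step. This decomposition and the helper names are assumptions chosen for illustration; the paper investigates its own (possibly different) decompositions.

```python
import numpy as np

def dca_nnls(A, B, max_iter=500, tol=1e-8):
    """DCA for min_{X >= 0} 0.5*||A X - B||_F^2 with the DC decomposition
        g(X) = 0.5*L*||X||_F^2 + indicator(X >= 0),
        h(X) = 0.5*L*||X||_F^2 - 0.5*||A X - B||_F^2,   L >= ||A^T A||_2,
    so each convex subproblem has a closed form (a projected gradient step)."""
    L = np.linalg.norm(A.T @ A, 2)              # spectral norm, makes h convex
    X = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(max_iter):
        grad_h = L * X - A.T @ (A @ X - B)      # gradient of the convex part h
        X_new = np.maximum(0.0, grad_h / L)     # argmin_{X>=0} g(X) - <grad_h, X>
        if np.linalg.norm(X_new - X) <= tol * (1.0 + np.linalg.norm(X)):
            return X_new
        X = X_new
    return X

# Hypothetical NMF-style usage: alternate DCA-based NNLS updates of H and W.
rng = np.random.default_rng(0)
V = np.abs(rng.standard_normal((20, 15)))
W = np.abs(rng.standard_normal((20, 4)))
H = dca_nnls(W, V)            # update H with W fixed
W = dca_nnls(H.T, V.T).T      # update W with H fixed
```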
4

Kebaili, Zahira, and Mohamed Achache. "Solving nonmonotone affine variational inequalities problem by DC programming and DCA." Asian-European Journal of Mathematics 13, no. 03 (December 17, 2018): 2050067. http://dx.doi.org/10.1142/s1793557120500679.

Abstract:
In this paper, we consider an optimization model for solving the nonmonotone affine variational inequality problem (AVI). It is formulated as a DC (Difference of Convex functions) program to which DCA (DC Algorithms) is applied. The resulting DCA is simple: it consists of solving a sequence of convex quadratic programs. Numerical experiments on several test problems illustrate the efficiency of the proposed approach in terms of the quality of the obtained solutions and the speed of convergence.
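The phrase "a sequence of convex quadratic programs" can be illustrated with a generic DCA for an indefinite quadratic objective over a box, where a spectral-shift DC decomposition makes every subproblem a convex quadratic program (solvable here by componentwise clipping). The decomposition and the toy instance below are our own illustration of that structure, not the paper's specific reformulation of the affine variational inequality.

```python
import numpy as np

def dca_box_qp(Q, c, lower, upper, max_iter=500, tol=1e-8):
    """Illustrative DCA for min 0.5*x^T Q x + c^T x s.t. lower <= x <= upper,
    with Q symmetric and possibly indefinite.

    DC decomposition: g(x) = 0.5*rho*||x||^2 + c^T x + indicator_box(x),
                      h(x) = 0.5*x^T (rho*I - Q) x,    rho >= lambda_max(Q),
    so every DCA iteration is a convex quadratic program over the box,
    which here reduces to a projection (componentwise clipping)."""
    n = Q.shape[0]
    rho = max(np.linalg.eigvalsh(Q).max(), 1e-8)
    x = np.clip(np.zeros(n), lower, upper)
    for _ in range(max_iter):
        y = (rho * np.eye(n) - Q) @ x                    # gradient of h at x
        x_new = np.clip((y - c) / rho, lower, upper)     # convex QP subproblem
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Hypothetical usage with an indefinite Q (nonconvex quadratic over a box).
Q = np.array([[2.0, 0.0], [0.0, -1.0]])
c = np.array([1.0, 0.5])
x_star = dca_box_qp(Q, c, lower=-np.ones(2), upper=np.ones(2))
```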
5

Phan, Duy Nhat, Hoai An Le Thi, and Tao Pham Dinh. "Sparse Covariance Matrix Estimation by DCA-Based Algorithms." Neural Computation 29, no. 11 (November 2017): 3040–77. http://dx.doi.org/10.1162/neco_a_01012.

Abstract:
This letter proposes a novel approach using ℓ0-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the ℓ0 term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the ℓ0-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithm), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and the corresponding DCA schemes developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical study on simulated and real data sets is performed to assess the performance of the proposed algorithms. Numerical results show their efficiency and their superiority compared with seven state-of-the-art methods.
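The abstract speaks of DC approximations of the ℓ0 term without naming one; a frequently used candidate in this line of work is the capped-ℓ1 function, sketched below together with its DC decomposition and the subgradient of its convex part that a DCA scheme would linearize at each iteration. Which approximation the paper actually adopts should be checked against the full text; the code is only a hedged illustration.

```python
import numpy as np

def capped_l1(t, alpha=5.0):
    """Capped-l1 surrogate of the l0 step function: phi(t) = min(1, alpha*|t|).
    DC decomposition: phi(t) = alpha*|t| - max(alpha*|t| - 1, 0) = g1(t) - h1(t)."""
    return np.minimum(1.0, alpha * np.abs(t))

def capped_l1_h_subgrad(t, alpha=5.0):
    """A subgradient of the convex part h1(t) = max(alpha*|t| - 1, 0).
    A DCA scheme linearizes h1 at the current iterate with this value, so the
    convex subproblem keeps the g1(t) = alpha*|t| term (a weighted l1 penalty)
    minus the linear term built from this subgradient."""
    return np.where(np.abs(t) > 1.0 / alpha, alpha * np.sign(t), 0.0)

# The surrogate sparsity penalty of a vector x is sum(capped_l1(x)),
# which approaches the exact l0 count of nonzeros as alpha grows.
x = np.array([0.0, 0.05, -2.0, 0.4])
print(capped_l1(x), capped_l1(x).sum())
```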
6

Wang, Meihua, Fengmin Xu, and Chengxian Xu. "A Branch-and-Bound Algorithm Embedded with DCA for DC Programming." Mathematical Problems in Engineering 2012 (2012): 1–16. http://dx.doi.org/10.1155/2012/364607.

Abstract:
The special importance of Difference of Convex (DC) functions programming has been recognized in recent studies on nonconvex optimization problems. In this work, a class of DC programs derived from portfolio selection problems is studied. The most popular method applied to solve such problems is the Branch-and-Bound (B&B) algorithm; however, "the curse of dimensionality" affects its performance. The DC Algorithm (DCA) is an efficient method for obtaining a local optimal solution and has been applied to many practical problems, especially large-scale ones. A B&B-DCA algorithm is proposed by embedding DCA into the B&B framework; the new algorithm improves the computational performance and obtains a global optimal solution. Computational results show that the proposed B&B-DCA algorithm outperforms the general B&B in terms of the number of branches and the computational time. The nice features of DCA (inexpensiveness, reliability, robustness, globality of computed solutions, etc.) provide crucial support to the combined B&B-DCA for accelerating the convergence of B&B.
7

Li, Jieya, and Liming Yang. "Robust sparse principal component analysis by DC programming algorithm." Journal of Intelligent & Fuzzy Systems 39, no. 3 (October 7, 2020): 3183–93. http://dx.doi.org/10.3233/jifs-191617.

Abstract:
The classical principal component analysis (PCA) is not sparse enough, since it is based on the L2-norm, which is also prone to being adversely affected by outliers and noise. To address this problem, a sparse robust PCA framework is proposed, based on minimizing a zero-norm regularization term while maximizing the Lp-norm (0 < p ≤ 2) PCA criterion. Furthermore, we develop a continuous optimization method, a DC (difference of convex functions) programming algorithm (DCA), to solve the proposed problem. The resulting algorithm (called DC-LpZSPCA) converges linearly. In addition, when choosing different p values, the model remains robust and is applicable to different data types. Numerical experiments are conducted on artificial data sets and the Yale face data sets. Experimental results show that the proposed method maintains good sparsity and robustness to outliers.
8

Le Thi, Hoai An, Manh Cuong Nguyen, and Tao Pham Dinh. "A DC Programming Approach for Finding Communities in Networks." Neural Computation 26, no. 12 (December 2014): 2827–54. http://dx.doi.org/10.1162/neco_a_00673.

Abstract:
Automatic discovery of community structures in complex networks is a fundamental task in many disciplines, including physics, biology, and the social sciences. The most widely used criterion for characterizing the existence of a community structure in a network is modularity, a quantitative measure proposed by Newman and Girvan (2004). Community discovery can be formulated as the so-called modularity maximization problem, which consists of finding a partition of the nodes of a network with the highest modularity. In this letter, we propose a fast and scalable algorithm called DCAM, based on DC (difference of convex functions) programming and DCA (DC algorithms), an innovative approach in the nonconvex programming framework, for solving the modularity maximization problem. The special structure of the problem considered here has been well exploited to obtain an inexpensive DCA scheme that requires only a matrix-vector product at each iteration. Starting with a very large number of communities, DCAM furnishes, as output, an optimal partition together with the optimal number of communities; that is, the number of communities is discovered automatically during DCAM's iterations. Numerical experiments are performed on a variety of real-world network data sets with up to 4,194,304 nodes and 30,359,198 edges. The comparative results with eight reference algorithms show that the proposed approach outperforms them not only in quality and speed but also in scalability. Moreover, it realizes a very good trade-off between the quality of solutions and the run time.
9

An, Le Thi Hoai, and Pham Dinh Tao. "The DC (Difference of Convex Functions) Programming and DCA Revisited with DC Models of Real World Nonconvex Optimization Problems." Annals of Operations Research 133, no. 1-4 (January 2005): 23–46. http://dx.doi.org/10.1007/s10479-004-5022-1.

10

Ji, Ying, and Shaojian Qu. "Proximal Point Algorithms for Vector DC Programming with Applications to Probabilistic Lot Sizing with Service Levels." Discrete Dynamics in Nature and Society 2017 (2017): 1–8. http://dx.doi.org/10.1155/2017/5675183.

Abstract:
We present a new algorithm for solving vector DC programming, where the vector function is a function of the difference of C-convex functions. Because of the nonconvexity of the objective function, this class of problems is difficult to solve. We propose several proximal point algorithms to address it; the algorithms make use of the special structure of the problems (i.e., the DC structure). The well-posedness and the global convergence of the proposed algorithms are established. The efficiency of the proposed algorithms is illustrated by an application to a multicriteria model stemming from lot sizing problems.

Dissertations on the topic "DC (Difference of Convex functions) programming and DCA (DC Algorithms)":

1

Luu, Hoang Phuc Hau. "Techniques avancées d'apprentissage automatique basées sur DCA et applications à la maintenance prédictive." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0139.

Abstract:
Stochastic optimization is of major importance in the age of big data and artificial intelligence. This is attributed to the prevalence of randomness/uncertainty as well as the ever-growing availability of data, both of which render the deterministic approach infeasible. This thesis studies nonconvex stochastic optimization and aims at resolving real-world challenges, including scalability, high variance, endogenous uncertainty, and correlated noise. The main theme of the thesis is to design and analyze novel stochastic algorithms based on DC (difference of convex functions) programming and DCA (DC algorithm) to meet new issues emerging in machine learning, particularly deep learning. As an industrial application, we apply the proposed methods to predictive maintenance, where the core problem is essentially a time series forecasting problem. The thesis consists of six chapters. Preliminaries on DC programming and DCA are presented in Chapter 1. Chapter 2 studies a class of DC programs whose objective functions contain a large-sum structure. We propose two new stochastic DCA schemes, DCA-SVRG and DCA-SAGA, that combine variance reduction techniques, and we investigate two sampling strategies (with and without replacement). The proposed algorithms' almost sure convergence to DC critical points is established, and the methods' complexity is examined. Chapter 3 studies general stochastic DC programs (the distribution of the associated random variable is arbitrary) where a stream of i.i.d. (independent and identically distributed) samples from the distribution of interest is available. We design stochastic DCA schemes in the online setting to directly solve this theoretical learning problem. Chapter 4 considers a class of stochastic DC programs where endogenous uncertainty is in play and i.i.d. samples are unavailable. Instead, we assume that only Markov chains that are ergodic fast enough to the target distributions can be accessed. We then design a stochastic algorithm termed Markov chain stochastic DCA (MCSDCA) and provide the convergence analysis in both asymptotic and nonasymptotic senses. The proposed method is then applied to deep learning via PDE (partial differential equation) regularization, yielding two MCSDCA realizations, MCSDCA-odLD and MCSDCA-udLD, based on overdamped and underdamped Langevin dynamics, respectively. Predictive maintenance applications are discussed in Chapter 5. Remaining useful life (RUL) prediction and capacity estimation are the two central problems being investigated, both of which may be framed as time series prediction problems using the data-driven approach. The MCSDCA-odLD and MCSDCA-udLD established in Chapter 4 are used to train these models with appropriate deep neural networks. In comparison to various baseline optimizers in deep learning, numerical studies show that the two techniques are superior, and the prediction results nearly match the true RUL/capacity values. Finally, Chapter 6 concludes the thesis.
2

Phan, Duy Nhat. "Algorithmes basés sur la programmation DC et DCA pour l’apprentissage avec la parcimonie et l’apprentissage stochastique en grande dimension." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0235/document.

Abstract:
These days, with the increasing abundance of high-dimensional data, high-dimensional classification problems have been highlighted as a challenge in the machine learning community and have attracted a great deal of attention from researchers in the field. In recent years, sparse and stochastic learning techniques have proven to be useful for this kind of problem. In this thesis, we focus on developing optimization approaches for solving some classes of optimization problems in these two topics. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as among the most powerful tools in optimization. The thesis is composed of three parts. The first part tackles the issue of variable selection. The second part studies the problem of group variable selection. The final part of the thesis concerns stochastic learning. In the first part, we start with variable selection in Fisher's discriminant problem (Chapter 2) and the optimal scoring problem (Chapter 3), two different approaches for supervised classification in the high-dimensional setting, in which the number of features is much larger than the number of observations. Continuing this study, we study the structure of the sparse covariance matrix estimation problem and propose four appropriate DCA-based algorithms (Chapter 4). Two applications in finance and classification are conducted to illustrate the efficiency of our methods. The second part studies the L_p,0 regularization for group variable selection (Chapter 5). Using a DC approximation of the L_p,0 norm, we show that the approximate problem is equivalent to the original problem with suitable parameters. Considering two equivalent reformulations of the approximate problem, we develop DCA-based algorithms to solve them. Regarding applications, we implement the proposed algorithms for group feature selection in the optimal scoring problem and in the estimation of multiple covariance matrices. In the third part of the thesis, we introduce a stochastic DCA for large-scale parameter estimation problems (Chapter 6) in which the objective function is a large sum of nonconvex components. As an application, we propose a special stochastic DCA for the log-linear model incorporating latent variables.
3

Phan, Duy Nhat. "Algorithmes basés sur la programmation DC et DCA pour l’apprentissage avec la parcimonie et l’apprentissage stochastique en grande dimension." Electronic Thesis or Diss., Université de Lorraine, 2016. http://www.theses.fr/2016LORR0235.

Abstract:
These days, with the increasing abundance of high-dimensional data, high-dimensional classification problems have been highlighted as a challenge in the machine learning community and have attracted a great deal of attention from researchers in the field. In recent years, sparse and stochastic learning techniques have proven to be useful for this kind of problem. In this thesis, we focus on developing optimization approaches for solving some classes of optimization problems in these two topics. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as among the most powerful tools in optimization. The thesis is composed of three parts. The first part tackles the issue of variable selection. The second part studies the problem of group variable selection. The final part of the thesis concerns stochastic learning. In the first part, we start with variable selection in Fisher's discriminant problem (Chapter 2) and the optimal scoring problem (Chapter 3), two different approaches for supervised classification in the high-dimensional setting, in which the number of features is much larger than the number of observations. Continuing this study, we study the structure of the sparse covariance matrix estimation problem and propose four appropriate DCA-based algorithms (Chapter 4). Two applications in finance and classification are conducted to illustrate the efficiency of our methods. The second part studies the L_p,0 regularization for group variable selection (Chapter 5). Using a DC approximation of the L_p,0 norm, we show that the approximate problem is equivalent to the original problem with suitable parameters. Considering two equivalent reformulations of the approximate problem, we develop DCA-based algorithms to solve them. Regarding applications, we implement the proposed algorithms for group feature selection in the optimal scoring problem and in the estimation of multiple covariance matrices. In the third part of the thesis, we introduce a stochastic DCA for large-scale parameter estimation problems (Chapter 6) in which the objective function is a large sum of nonconvex components. As an application, we propose a special stochastic DCA for the log-linear model incorporating latent variables.
