Journal articles on the topic "DC (Difference of Convex functions) programming and DCA (DC Algorithms)"

Follow this link to see other types of publications on the topic: DC (Difference of Convex functions) programming and DCA (DC Algorithms).

Cite a source in APA, MLA, Chicago, Harvard, and many other styles.


Below are the top 17 journal articles for research on the topic "DC (Difference of Convex functions) programming and DCA (DC Algorithms)".

Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf file and read the abstract of the work online, if it is present in the metadata.

Browse journal articles from many scientific fields and compile a correct bibliography.

1

Le, Hoai Minh, Hoai An Le Thi, Tao Pham Dinh, and Van Ngai Huynh. "Block Clustering Based on Difference of Convex Functions (DC) Programming and DC Algorithms". Neural Computation 25, no. 10 (October 2013): 2776–807. http://dx.doi.org/10.1162/neco_a_00490.

Abstract:
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
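For readers new to the framework, the generic DC program and the standard DCA iteration referred to throughout this list can be stated as follows (this is the textbook scheme, not anything specific to the paper above):

```latex
\min_{x \in \mathbb{R}^n} \; f(x) = g(x) - h(x), \qquad g,\, h \ \text{convex},
```

and DCA alternates the two steps

```latex
y^k \in \partial h(x^k), \qquad x^{k+1} \in \operatorname*{argmin}_{x} \bigl\{\, g(x) - \langle y^k, x \rangle \,\bigr\},
```

so each iteration linearizes the concave part and solves a convex subproblem.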
2

Le Thi, Hoai An, and Vinh Thanh Ho. "Online Learning Based on Online DCA and Application to Online Classification". Neural Computation 32, no. 4 (April 2020): 759–93. http://dx.doi.org/10.1162/neco_a_01266.

Abstract:
We investigate an approach based on DC (Difference of Convex functions) programming and DCA (DC Algorithm) for online learning techniques. The prediction problem of an online learner can be formulated as a DC program, to which online DCA is applied. We propose two versions of the online DCA scheme, called the complete and approximate versions, and prove that they achieve logarithmic and sublinear regret, respectively. Six online DCA-based algorithms are developed for online binary linear classification. Numerical experiments on a variety of benchmark classification data sets show the efficiency of our proposed algorithms in comparison with state-of-the-art online classification algorithms.
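The regret notions mentioned in this abstract are the standard online-learning ones: if f_t is the loss revealed at round t and x_t the learner's prediction, then

```latex
R_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \Omega} \sum_{t=1}^{T} f_t(x),
```

and a logarithmic (O(log T)) or sublinear (o(T)) bound on R_T means the average excess loss vanishes as T grows.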
3

Le Thi, Hoai An, Xuan Thanh Vo, and Tao Pham Dinh. "Efficient Nonnegative Matrix Factorization by DC Programming and DCA". Neural Computation 28, no. 6 (June 2016): 1163–216. http://dx.doi.org/10.1162/neco_a_00836.

Abstract:
In this letter, we consider the nonnegative matrix factorization (NMF) problem and several NMF variants. Two approaches based on DC (difference of convex functions) programming and DCA (DC algorithm) are developed. The first approach follows the alternating framework that requires solving, at each iteration, two nonnegativity-constrained least squares subproblems for which DCA-based schemes are investigated. The convergence property of the proposed algorithm is carefully studied. We show that with suitable DC decompositions, our algorithm generates most of the standard methods for the NMF problem. The second approach directly applies DCA on the whole NMF problem. Two algorithms—one computing all variables and one deploying a variable selection strategy—are proposed. The proposed methods are then adapted to solve various NMF variants, including the nonnegative factorization, the smooth regularization NMF, the sparse regularization NMF, the multilayer NMF, the convex/convex-hull NMF, and the symmetric NMF. We also show that our algorithms include several existing methods for these NMF variants as special versions. The efficiency of the proposed approaches is empirically demonstrated on both real-world and synthetic data sets. It turns out that our algorithms compete favorably with five state-of-the-art alternating nonnegative least squares algorithms.
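For context, the basic NMF problem that the alternating framework above addresses is the nonconvex program

```latex
\min_{W \ge 0,\; H \ge 0} \; \tfrac{1}{2}\,\| X - W H \|_F^2,
```

which is convex in W for fixed H and in H for fixed W; the alternating scheme therefore solves two nonnegativity-constrained least squares subproblems per iteration, exactly the subproblems to which the DCA-based schemes are applied.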
4

Kebaili, Zahira, and Mohamed Achache. "Solving nonmonotone affine variational inequalities problem by DC programming and DCA". Asian-European Journal of Mathematics 13, no. 03 (December 17, 2018): 2050067. http://dx.doi.org/10.1142/s1793557120500679.

Abstract:
In this paper, we consider an optimization model for solving the nonmonotone affine variational inequality problem (AVI). It is formulated as a DC (Difference of Convex functions) program to which DCA (DC Algorithms) is applied. The resulting DCA is simple: it consists of solving a sequence of convex quadratic programs. Numerical experiments on several test problems illustrate the efficiency of the proposed approach in terms of the quality of the obtained solutions and the speed of convergence.
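As an illustration of the "successive convex quadratic programs" pattern described above, here is a minimal DCA sketch for a generic indefinite quadratic objective over a box, using a spectral DC split; this is our own illustrative construction, not the paper's AVI reformulation:

```python
import numpy as np
from scipy.optimize import minimize

def dc_split(Q):
    """Split a symmetric indefinite Q as Q_plus - Q_minus, both PSD."""
    w, V = np.linalg.eigh(Q)
    Q_plus = V @ np.diag(np.maximum(w, 0.0)) @ V.T
    Q_minus = V @ np.diag(np.maximum(-w, 0.0)) @ V.T
    return Q_plus, Q_minus

def dca_quadratic(Q, q, bounds, x0, iters=100, tol=1e-8):
    """DCA for min 0.5 x'Qx + q'x over a box: g(x) = 0.5 x'Q+x + q'x,
    h(x) = 0.5 x'Q-x, so each step solves one convex QP."""
    Q_plus, Q_minus = dc_split(Q)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = Q_minus @ x  # gradient of the subtracted convex part h
        res = minimize(lambda z: 0.5 * z @ Q_plus @ z + (q - y) @ z,
                       x, jac=lambda z: Q_plus @ z + q - y, bounds=bounds)
        if np.linalg.norm(res.x - x) < tol:
            return res.x
        x = res.x
    return x
```

Each subproblem is convex because Q_plus is positive semidefinite, matching the structure the abstract describes.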
5

Phan, Duy Nhat, Hoai An Le Thi, and Tao Pham Dinh. "Sparse Covariance Matrix Estimation by DCA-Based Algorithms". Neural Computation 29, no. 11 (November 2017): 3040–77. http://dx.doi.org/10.1162/neco_a_01012.

Abstract:
This letter proposes a novel approach using l0-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the l0 term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the l0-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithm), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and corresponding DCA schemes are developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical study on simulated and real data sets evaluates the performance of the proposed algorithms. Numerical results show their efficiency and their superiority over seven state-of-the-art methods.
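A concrete instance of the "appropriate DC approximations" of the l0-norm mentioned above is the capped-l1 surrogate (one common choice; the paper develops its own family of approximations): for a threshold θ > 0,

```latex
\|x\|_0 \;\approx\; \sum_{i} \min\!\Bigl(1, \tfrac{|x_i|}{\theta}\Bigr)
\;=\; \sum_i \frac{|x_i|}{\theta} \;-\; \sum_i \max\!\Bigl(\frac{|x_i|}{\theta} - 1,\, 0\Bigr),
```

where both sums on the right are convex, so the surrogate is an explicit DC function amenable to DCA.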
6

Wang, Meihua, Fengmin Xu, and Chengxian Xu. "A Branch-and-Bound Algorithm Embedded with DCA for DC Programming". Mathematical Problems in Engineering 2012 (2012): 1–16. http://dx.doi.org/10.1155/2012/364607.

Abstract:
The special importance of Difference of Convex (DC) functions programming has been recognized in recent studies on nonconvex optimization problems. In this work, a class of DC programs derived from portfolio selection problems is studied. The most popular method for solving such problems is the Branch-and-Bound (B&B) algorithm; however, the curse of dimensionality degrades its performance. The DC Algorithm (DCA) is an efficient method for finding a local optimal solution and has been applied to many practical problems, especially large-scale ones. A B&B-DCA algorithm is proposed by embedding DCA into the B&B algorithm; the new algorithm improves computational performance and obtains a global optimal solution. Computational results show that the proposed B&B-DCA algorithm is superior to the general B&B in terms of the number of branches and computational time. The nice features of DCA (inexpensiveness, reliability, robustness, globality of computed solutions, etc.) provide crucial support to the combined B&B-DCA for accelerating the convergence of B&B.
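The embedding described above can be pictured with a short skeleton (a sketch under our own naming; lower_bound, dca_local_solve, and branch are hypothetical callables standing in for the paper's bounding, DCA, and branching routines):

```python
import heapq
import itertools

def branch_and_bound_dca(lower_bound, dca_local_solve, branch, root, tol=1e-6):
    """B&B skeleton where DCA supplies incumbents (upper bounds) at each
    node, so subtrees whose lower bound cannot beat the incumbent are pruned."""
    tie = itertools.count()  # tiebreaker so the heap never compares nodes
    best_x, best_val = dca_local_solve(root)
    heap = [(lower_bound(root), next(tie), root)]
    while heap:
        lb, _, node = heapq.heappop(heap)
        if lb >= best_val - tol:
            continue  # prune: this subtree cannot improve the incumbent
        for child in branch(node):
            x, val = dca_local_solve(child)   # cheap local solve via DCA
            if val < best_val:
                best_x, best_val = x, val     # tighter incumbent
            clb = lower_bound(child)
            if clb < best_val - tol:
                heapq.heappush(heap, (clb, next(tie), child))
    return best_x, best_val
```

The better the DCA incumbent, the more aggressively subtrees are pruned, which is the source of the reduction in branch count reported in the abstract.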
7

Li, Jieya, and Liming Yang. "Robust sparse principal component analysis by DC programming algorithm". Journal of Intelligent & Fuzzy Systems 39, no. 3 (October 7, 2020): 3183–93. http://dx.doi.org/10.3233/jifs-191617.

Abstract:
Classical principal component analysis (PCA) is not sparse, and its reliance on the L2-norm makes it prone to being adversely affected by outliers and noise. To address these problems, a sparse robust PCA framework is proposed based on minimizing a zero-norm regularization term and maximizing the Lp-norm (0 < p ≤ 2) PCA criterion. Furthermore, we develop a continuous optimization method, the DC (difference of convex functions) programming algorithm (DCA), to solve the proposed problem. The resulting algorithm (called DC-LpZSPCA) converges linearly. In addition, by choosing different p values, the model remains robust and is applicable to different data types. Numerical experiments are carried out on artificial data sets and the Yale face data set. The results show that the proposed method maintains good sparsity and robustness to outliers.
8

Le Thi, Hoai An, Manh Cuong Nguyen, and Tao Pham Dinh. "A DC Programming Approach for Finding Communities in Networks". Neural Computation 26, no. 12 (December 2014): 2827–54. http://dx.doi.org/10.1162/neco_a_00673.

Abstract:
Automatic discovery of community structures in complex networks is a fundamental task in many disciplines, including physics, biology, and the social sciences. The most widely used criterion for characterizing the existence of a community structure in a network is modularity, a quantitative measure proposed by Newman and Girvan (2004). Community discovery can then be formulated as the so-called modularity maximization problem, which consists of finding a partition of the nodes of a network with the highest modularity. In this letter, we propose a fast and scalable algorithm called DCAM, based on DC (difference of convex functions) programming and DCA (DC algorithms), an innovative approach in the nonconvex programming framework, for solving the modularity maximization problem. The special structure of the problem is well exploited to obtain an inexpensive DCA scheme that requires only a matrix-vector product at each iteration. Starting with a very large number of communities, DCAM furnishes, as output, an optimal partition together with the optimal number of communities; that is, the number of communities is discovered automatically during DCAM's iterations. Numerical experiments are performed on a variety of real-world network data sets with up to 4,194,304 nodes and 30,359,198 edges. Comparative results with eight reference algorithms show that the proposed approach outperforms them not only in quality and speed but also in scalability. Moreover, it achieves a very good trade-off between solution quality and run time.
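For reference, the Newman–Girvan modularity maximized by DCAM is

```latex
Q \;=\; \frac{1}{2m} \sum_{i,j} \Bigl( A_{ij} - \frac{k_i k_j}{2m} \Bigr)\, \delta(c_i, c_j),
```

where A is the adjacency matrix, k_i the degree of node i, m the number of edges, and δ(c_i, c_j) = 1 when nodes i and j are assigned to the same community. The per-iteration matrix-vector product mentioned above is naturally taken with the modularity matrix B = A − kkᵀ/(2m) (our reading, not a detail given in the abstract).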
9

An, Le Thi Hoai, and Pham Dinh Tao. "The DC (Difference of Convex Functions) Programming and DCA Revisited with DC Models of Real World Nonconvex Optimization Problems". Annals of Operations Research 133, no. 1-4 (January 2005): 23–46. http://dx.doi.org/10.1007/s10479-004-5022-1.

10

Ji, Ying, and Shaojian Qu. "Proximal Point Algorithms for Vector DC Programming with Applications to Probabilistic Lot Sizing with Service Levels". Discrete Dynamics in Nature and Society 2017 (2017): 1–8. http://dx.doi.org/10.1155/2017/5675183.

Abstract:
We present a new algorithm for solving vector DC programs, in which the vector objective is a difference of C-convex functions. Because of the nonconvexity of the objective function, this class of problems is difficult to solve. We propose several proximal point algorithms that make use of the special structure of the problems (i.e., the DC structure). The well-posedness and global convergence of the proposed algorithms are established. The efficiency of the proposed algorithms is shown by an application to a multicriteria model stemming from lot-sizing problems.
11

Yang, Liming, Zhuo Ren, Yidan Wang, and Hongwei Dong. "A Robust Regression Framework with Laplace Kernel-Induced Loss". Neural Computation 29, no. 11 (November 2017): 3014–39. http://dx.doi.org/10.1162/neco_a_01002.

Abstract:
This work proposes a robust regression framework with a nonconvex loss function. Two regression formulations are presented based on the Laplace kernel-induced loss (LK-loss). Moreover, we illustrate that the LK-loss function is a good approximation of the zero-norm. However, the nonconvexity of the LK-loss makes it difficult to optimize. A continuous optimization method is developed to solve the proposed framework: the problems are formulated as DC (difference of convex functions) programs, and the corresponding DC algorithms (DCAs) converge linearly. Furthermore, the proposed algorithms are applied to determine the hardness of licorice seeds from near-infrared spectral data with noisy input. Experiments in eight spectral regions show that the proposed methods improve generalization compared with traditional support vector regression (SVR), especially in high-frequency regions. Experiments on several benchmark data sets demonstrate that the proposed methods achieve better results than traditional regression methods on most of the data sets considered.
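The abstract does not reproduce the LK-loss itself; one natural Laplace kernel-induced candidate with the stated properties (an assumption on our part, not the paper's verbatim definition) is

```latex
\ell_\sigma(r) \;=\; 1 - e^{-|r|/\sigma}
\;=\; \frac{|r|}{\sigma} \;-\; \Bigl( \frac{|r|}{\sigma} - 1 + e^{-|r|/\sigma} \Bigr),
```

which tends to the zero-norm indicator of r ≠ 0 as σ → 0 and, as written, is an explicit DC decomposition: both |r|/σ and the bracketed term are convex (the latter is a convex nondecreasing function of |r|/σ).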
12

Ling, Jiajing, Tarun Gupta, and Akshat Kumar. "Reinforcement Learning for Zone Based Multiagent Pathfinding under Uncertainty". Proceedings of the International Conference on Automated Planning and Scheduling 30 (June 1, 2020): 551–59. http://dx.doi.org/10.1609/icaps.v30i1.6751.

Abstract:
We address the problem of multiple agents finding their paths from respective source to destination nodes in a graph (also called MAPF). Most existing approaches assume that all agents move at a fixed speed and that a single node accommodates only a single agent. Motivated by emerging applications of autonomous vehicles such as drone traffic management, we present zone-based path finding (ZBPF), in which agents move among zones and their movements incur uncertain travel times. Furthermore, each zone can accommodate multiple agents, up to its capacity. We also develop a simulator for ZBPF, which provides a clean interface from the simulation environment to learning algorithms. We develop a novel formulation of the ZBPF problem using difference-of-convex functions (DC) programming; the resulting approach can be used for policy learning with samples from the simulator. We also present a multiagent credit assignment scheme that helps our learning approach converge faster. Empirical results on a number of 2D and 3D instances show that our approach effectively minimizes congestion in zones while ensuring that agents reach their final destinations.
13

Xu, Hai-Ming, Hui Xue, Xiao-Hong Chen, and Yun-Yun Wang. "Solving Indefinite Kernel Support Vector Machine with Difference of Convex Functions Programming". Proceedings of the AAAI Conference on Artificial Intelligence 31, no. 1 (February 13, 2017). http://dx.doi.org/10.1609/aaai.v31i1.10889.

Abstract:
Indefinite kernel support vector machines (IKSVM) have recently attracted increasing attention in machine learning. Different from traditional SVMs, IKSVM is essentially a nonconvex optimization problem. Some algorithms directly change the spectrum of the indefinite kernel matrix, at the cost of losing valuable information contained in the kernel, so as to transform the nonconvex problem into a convex one. Other algorithms aim to solve the dual form of IKSVM but suffer from the duality gap between the primal and dual problems in the case of indefinite kernels. In this paper, we focus directly on the nonconvex primal form of IKSVM and propose a novel algorithm termed IKSVM-DC. Based on the characteristics of the spectrum of the indefinite kernel matrix, IKSVM-DC decomposes the objective function into the subtraction of two convex functions and thus reformulates the primal problem as a difference of convex functions (DC) program, which can be optimized by the DC algorithm (DCA). To accelerate the convergence rate, IKSVM-DC further combines the classical DCA with a line search step along the descent direction at each iteration. A theoretical analysis is presented to validate that IKSVM-DC converges to a local minimum. Systematic experiments on real-world datasets demonstrate the superiority of IKSVM-DC over state-of-the-art IKSVM-related algorithms.
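The spectral decomposition underlying IKSVM-DC can be made concrete: if the indefinite kernel matrix has eigendecomposition K = VΛVᵀ, set

```latex
K_+ = V \max(\Lambda, 0)\, V^{\top}, \qquad K_- = V \max(-\Lambda, 0)\, V^{\top}, \qquad K = K_+ - K_-,
```

so any kernel-dependent quadratic term αᵀKα splits as αᵀK₊α − αᵀK₋α, a difference of two convex quadratics. This illustrates the mechanism; the paper states its decomposition for the primal objective.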
14

Aragón-Artacho, F. J., R. Campoy, and P. T. Vuong. "The Boosted DC Algorithm for Linearly Constrained DC Programming". Set-Valued and Variational Analysis, December 21, 2022. http://dx.doi.org/10.1007/s11228-022-00656-x.

Abstract:
The Boosted Difference of Convex functions Algorithm (BDCA) was recently introduced to accelerate the performance of the classical Difference of Convex functions Algorithm (DCA). This acceleration is achieved thanks to an extrapolation step, via a line search, from the point computed by DCA. In this work, we propose an extension of BDCA that can be applied to difference of convex functions programs with linear constraints, and prove that every cluster point of the sequence generated by this algorithm is a Karush–Kuhn–Tucker point of the problem if the feasible set has a Slater point. When the objective function is quadratic, we prove that any sequence generated by the algorithm is bounded and R-linearly (geometrically) convergent. Finally, we present numerical experiments comparing the performance of DCA and BDCA on some challenging problems: testing the copositivity of a given matrix, solving one-norm and infinity-norm trust-region subproblems, and solving piecewise quadratic problems with box constraints. Our numerical results demonstrate that this new extension of BDCA outperforms DCA.
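The extrapolation step can be sketched in a few lines (our own minimal rendering of the boosting idea; f is the DC objective and dca_step the map producing the next DCA iterate, both hypothetical parameters):

```python
import numpy as np

def bdca_step(f, dca_step, x, alpha=0.5, beta=0.5, lam0=1.0):
    """One boosted iteration: take the DCA point y, then backtrack along
    d = y - x until an Armijo-type decrease test is met."""
    y = dca_step(x)
    d = y - x  # candidate descent direction at y (under BDCA's smoothness assumptions)
    lam = lam0
    while f(y + lam * d) > f(y) - alpha * lam**2 * np.dot(d, d):
        lam *= beta                # shrink the step
        if lam < 1e-12:
            lam = 0.0              # give up boosting; keep the plain DCA point
            break
    return y + lam * d
```

When the line search accepts a positive λ, the iterate moves strictly beyond the DCA point, which is what produces the acceleration reported in the experiments.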
15

Awasthi, Pranjal, Anqi Mao, Mehryar Mohri, and Yutao Zhong. "DC-programming for neural network optimizations". Journal of Global Optimization, January 2, 2024. http://dx.doi.org/10.1007/s10898-023-01344-2.

Abstract:
We discuss two key problems related to the learning and optimization of neural networks: the computation of adversarial attacks for adversarial robustness, and the approximate optimization of complex functions. We show that both problems can be cast as instances of DC-programming. We give an explicit decomposition of the corresponding functions as differences of convex functions (DC) and report the results of experiments demonstrating the effectiveness of the DCA algorithm applied to these problems.
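A standard way to obtain such explicit decompositions for smooth objectives (a generic construction; the paper derives decompositions tailored to its specific functions) is to add and subtract a quadratic:

```latex
f(x) \;=\; \Bigl( f(x) + \tfrac{\rho}{2}\|x\|^2 \Bigr) \;-\; \tfrac{\rho}{2}\|x\|^2,
```

valid whenever ∇f is L-Lipschitz and ρ ≥ L, since adding (ρ/2)‖x‖² then convexifies the first term while the subtracted term is convex by construction.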
16

Moudafi, Abdellatif. "Difference of two norms-regularizations for Q-Lasso". Applied Computing and Informatics ahead-of-print, ahead-of-print (August 5, 2020). http://dx.doi.org/10.1016/j.aci.2018.07.002.

Abstract:
The focus of this paper is the Q-Lasso introduced in Alghamdi et al. (2013), which extended the Lasso of Tibshirani (1996). The closed convex subset Q of a Euclidean m-space, for m ∈ N, is the set of errors when linear measurements are taken to recover a signal/image via the Lasso. Based on recent work by Wang (2013), we are interested in two new penalty methods for Q-Lasso relying on two types of difference of convex functions (DC) programming, in which the DC objective functions are the difference of the l1 and lσq norms and the difference of the l1 and lr norms with r > 1. By means of a generalized q-term shrinkage operator exploiting the special structure of the lσq norm, we design a proximal gradient algorithm for handling the DC l1−lσq model. Then, based on a majorization scheme, we develop a majorized penalty algorithm for the DC l1−lr model. The convergence results of our new algorithms are presented as well. We emphasize that extensive simulation results in the case Q = {b} show that these two new algorithms offer improved signal recovery performance and require reduced computational effort relative to state-of-the-art l1 and lp (p ∈ (0,1)) models; see Wang (2013). We also devise two DC algorithms in the spirit of a paper in which an exact DC representation of the cardinality constraint is investigated and which also uses the largest-q norm lσq; the numerical results presented show the efficiency of our DC algorithm in comparison with methods using other penalty terms in the context of quadratic programming; see Jun-ya et al. (2017).
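The largest-q norm used above is the sum of the q largest absolute entries,

```latex
\|x\|_{\sigma q} \;=\; \max_{|I| \le q} \sum_{i \in I} |x_i|, \qquad \|x\|_1 - \|x\|_{\sigma q} \;\ge\; 0,
```

with equality exactly when x has at most q nonzero entries; this is why the l1 − lσq difference serves as an exact DC penalty for the cardinality constraint ‖x‖₀ ≤ q.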
17

D’Alessandro, Pietro, Manlio Gaudioso, Giovanni Giallombardo, and Giovanna Miglionico. "The Descent–Ascent Algorithm for DC Programming". INFORMS Journal on Computing, December 14, 2023. http://dx.doi.org/10.1287/ijoc.2023.0142.

Abstract:
We introduce a bundle method for the unconstrained minimization of nonsmooth difference-of-convex (DC) functions, based on the calculation of a special type of descent direction called a descent–ascent direction. The algorithm requires only evaluations of the minuend component function at each iterate, and it can be considered a parsimonious bundle method, as accumulation of information takes place only when the descent–ascent direction does not provide a sufficient decrease. No line search is performed, and proximity control is pursued independently of whether a decrease in the objective function is achieved. Termination of the algorithm at a point satisfying a weak criticality condition is proved, and numerical results on a set of benchmark DC problems are reported.
