To view other types of publications on this topic, follow the link: Approximate norm descent methods.

Journal articles on the topic "Approximate norm descent methods"


Consult the top 50 journal articles for your research on the topic "Approximate norm descent methods."

Next to every work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf file and read its abstract online, whenever the corresponding data are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1. Morini, Benedetta, Margherita Porcelli, and Philippe L. Toint. "Approximate norm descent methods for constrained nonlinear systems." Mathematics of Computation 87, no. 311 (May 11, 2017): 1327–51. http://dx.doi.org/10.1090/mcom/3251.

2. Jin, Yang, Li, and Liu. "Sparse Recovery Algorithm for Compressed Sensing Using Smoothed l0 Norm and Randomized Coordinate Descent." Mathematics 7, no. 9 (September 9, 2019): 834. http://dx.doi.org/10.3390/math7090834.

Abstract:
Compressed sensing theory is widely used in the fields of fault signal diagnosis and image processing. Sparse recovery is one of its core concepts. In this paper, we propose a sparse recovery algorithm using a smoothed l0 norm and randomized coordinate descent (RCD), and apply it to sparse signal recovery and image denoising. We adopt a new strategy to express the (P0) problem approximately and put forward a sparse recovery algorithm using RCD. In computer simulation experiments, we compare the performance of this algorithm with other typical methods. The results show that our algorithm achieves higher precision in sparse signal recovery, as well as a higher signal-to-noise ratio (SNR) and faster convergence in image denoising.
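
The abstract does not spell out the algorithm, but the two ingredients it names are standard. Below is a minimal Python sketch, assuming a Gaussian surrogate for the l0 norm and a penalized least-squares objective; the function names, step-size rule, and annealing schedule are our own illustrative choices, not the authors' code.

```python
import numpy as np

def smoothed_l0(x, sigma):
    """Gaussian surrogate that tends to ||x||_0 as sigma -> 0."""
    return np.sum(1.0 - np.exp(-x**2 / (2.0 * sigma**2)))

def rcd_sparse_recovery(A, y, lam=0.1, sigma0=1.0, decay=0.9, iters=5000, seed=0):
    """Randomized coordinate descent on 0.5*||Ax - y||^2 + lam * smoothed_l0(x)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = A.T @ y                        # rough initial guess
    col_norm2 = np.sum(A**2, axis=0)   # per-coordinate curvature of the data term
    sigma = sigma0
    for t in range(iters):
        j = rng.integers(n)            # pick one random coordinate
        r = A @ x - y
        # partial derivative of the smoothed objective w.r.t. x[j]
        g = A[:, j] @ r + lam * (x[j] / sigma**2) * np.exp(-x[j]**2 / (2.0 * sigma**2))
        x[j] -= g / (col_norm2[j] + lam / sigma**2)
        if (t + 1) % 500 == 0:
            sigma = max(sigma * decay, 1e-3)  # gradually sharpen the surrogate
    return x
```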

3. Xu, Kai, and Zhi Xiong. "Nonparametric Tensor Completion Based on Gradient Descent and Nonconvex Penalty." Symmetry 11, no. 12 (December 12, 2019): 1512. http://dx.doi.org/10.3390/sym11121512.

Abstract:
Existing tensor completion methods all require hyperparameters, which determine each method's performance and are difficult to tune. In this paper, we propose a novel nonparametric tensor completion method that formulates tensor completion as an unconstrained optimization problem and designs an efficient iterative method to solve it. In each iteration, we not only calculate the missing entries with the aid of data correlation, but also account for the low rank of the tensor and the convergence speed of the iteration. Our iteration is based on the gradient descent method and approximates the gradient descent direction with tensor matricization and singular value decomposition. Because of the symmetry of the dimensions of a tensor, the optimal unfolding direction may differ from iteration to iteration, so we select it in each iteration by the scaled latent nuclear norm. Moreover, we design a formula for the iteration step-size based on the nonconvex penalty. During the iterative process, we store the tensor in sparse form and adopt the power method to compute the maximum singular value quickly. Experiments on image inpainting and link prediction show that our method is competitive with six state-of-the-art methods.

4. Ko, Dongnam, and Enrique Zuazua. "Model predictive control with random batch methods for a guiding problem." Mathematical Models and Methods in Applied Sciences 31, no. 08 (July 2021): 1569–92. http://dx.doi.org/10.1142/s0218202521500329.

Abstract:
We model, simulate and control the guiding problem for a herd of evaders under the action of repulsive drivers. The problem is formulated in an optimal control framework, where the drivers (controls) aim to guide the evaders (states) to a desired region of the Euclidean space. The numerical simulation of such models quickly becomes unfeasible for a large number of interacting agents, as the number of interactions grows as O(N²) for N agents. To reduce the computational cost to O(N), we use the Random Batch Method (RBM), which provides a computationally feasible approximation of the dynamics. First, the considered time interval is divided into a number of subintervals. In each subinterval, the RBM randomly divides the set of particles into small subsets (batches), considering only the interactions inside each batch. Due to the averaging effect, the RBM approximation converges to the exact dynamics in the L²-expectation norm as the length of the subintervals goes to zero. For this approximated dynamics, the corresponding optimal control can be computed efficiently using classical gradient descent. The resulting control is not optimal for the original system, but for the reduced RBM model. We therefore adopt a Model Predictive Control (MPC) strategy to handle the error in the dynamics. This leads to a semi-feedback control strategy, where the control is applied to the original system only over a short time interval, and the optimal control for the next interval is then computed from the state of the (controlled) original dynamics. Through numerical experiments we show that the combination of RBM and MPC leads to a significant reduction of the computational cost while preserving the capacity to control the overall dynamics.
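
For illustration, one RBM time step can be sketched as follows. This is a minimal sketch under our own assumptions (a generic pairwise force `force`, batch size `p`, and the usual 1/(p−1) interaction scaling), not the authors' implementation; averaged over the random batching, the drift matches the full dynamics, which is the averaging effect behind the L²-expectation convergence mentioned above.

```python
import numpy as np

def rbm_step(x, force, dt, p, rng):
    """One Random Batch Method step: O(N*p) interactions instead of O(N^2).

    x     : (N, d) array of agent states
    force : callable(xi, xj) -> interaction force exerted on agent i by agent j
    p     : batch size
    """
    n = x.shape[0]
    idx = rng.permutation(n)             # random re-batching at every step
    x_new = x.copy()
    for start in range(0, n, p):
        batch = idx[start:start + p]     # interactions only inside this batch
        for i in batch:
            f = sum(force(x[i], x[j]) for j in batch if j != i)
            x_new[i] = x[i] + dt * f / max(len(batch) - 1, 1)
    return x_new
```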

5. Utomo, Rukmono Budi. "METODE NUMERIK STEPEST DESCENT DENGAN DIRECTION DAN NORM RERATA ARITMATIKA." AKSIOMA Journal of Mathematics Education 5, no. 2 (January 3, 2017): 128. http://dx.doi.org/10.24127/ajpm.v5i2.673.

Abstract:
This research investigates a steepest descent numerical method whose direction and norm are based on the arithmetic mean. It begins by reviewing the standard steepest descent method and its algorithm, and then constructs a new steepest descent method that uses an alternative direction and norm derived from the arithmetic mean. The paper also contains worked numerical examples for both methods and analyzes them.
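
The abstract does not define the arithmetic-mean direction and norm, so only the baseline it modifies can be sketched here: classical steepest descent with an Armijo backtracking line search (all names and constants below are illustrative).

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    """Classical steepest descent with Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        # shrink the step until the Armijo sufficient-decrease condition holds
        while f(x - t * g) > f(x) - 0.5 * t * (g @ g):
            t *= 0.5
        x = x - t * g
    return x
```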

6. Goh, B. S. "Approximate Greatest Descent Methods for Optimization with Equality Constraints." Journal of Optimization Theory and Applications 148, no. 3 (November 16, 2010): 505–27. http://dx.doi.org/10.1007/s10957-010-9765-3.

7. Xiao, Yunhai, Chunjie Wu, and Soon-Yi Wu. "Norm descent conjugate gradient methods for solving symmetric nonlinear equations." Journal of Global Optimization 62, no. 4 (July 11, 2014): 751–62. http://dx.doi.org/10.1007/s10898-014-0218-7.

8. Qiu, Yixuan, and Xiao Wang. "Stochastic Approximate Gradient Descent via the Langevin Algorithm." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5428–35. http://dx.doi.org/10.1609/aaai.v34i04.5992.

Abstract:
We introduce a novel and efficient algorithm called stochastic approximate gradient descent (SAGD), as an alternative to stochastic gradient descent for cases where unbiased stochastic gradients cannot be trivially obtained. Traditional methods for such problems rely on general-purpose sampling techniques such as Markov chain Monte Carlo, which typically require manual tuning and do not work efficiently in practice. Instead, SAGD makes use of the Langevin algorithm to construct stochastic gradients that are biased in finite steps but accurate asymptotically, enabling us to theoretically establish the convergence guarantee for SAGD. Inspired by our theoretical analysis, we also provide useful guidelines for its practical implementation. Finally, we show that SAGD performs well experimentally in popular statistical and machine learning problems such as the expectation-maximization algorithm and the variational autoencoder.
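
As a rough illustration of the Langevin ingredient: the unadjusted Langevin algorithm below produces samples whose bias vanishes as the step size shrinks, which is the "biased in finite steps but accurate asymptotically" property the abstract refers to. This is a sketch with our own naming, not the authors' SAGD code; in an SAGD-style scheme such samples would stand in for the intractable expectations inside the stochastic gradient.

```python
import numpy as np

def ula_chain(grad_log_p, x0, step, n_steps, rng):
    """Unadjusted Langevin algorithm:
    x_{t+1} = x_t + step * grad log p(x_t) + sqrt(2 * step) * N(0, I)."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
        samples.append(x.copy())
    return np.array(samples)
```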

9. Yang, Yin, and Yunqing Huang. "Spectral-Collocation Methods for Fractional Pantograph Delay-Integrodifferential Equations." Advances in Mathematical Physics 2013 (2013): 1–14. http://dx.doi.org/10.1155/2013/821327.

Abstract:
We propose and analyze a spectral Jacobi-collocation approximation for fractional order integrodifferential equations of Volterra type with pantograph delay. The fractional derivative is described in the Caputo sense. We provide a rigorous error analysis for the collocation method, which shows that the error of the approximate solution decays exponentially in the L∞-norm and the weighted L2-norm. The numerical examples are given to illustrate the theoretical results.

10. Poggio, Tomaso, Andrzej Banburski, and Qianli Liao. "Theoretical issues in deep networks." Proceedings of the National Academy of Sciences 117, no. 48 (June 9, 2020): 30039–45. http://dx.doi.org/10.1073/pnas.1907369117.

Abstract:
While deep learning is successful in a number of applications, it is not yet well understood theoretically. A theoretical characterization of deep learning should answer questions about approximation power, the dynamics of optimization, and good out-of-sample performance despite overparameterization and the absence of explicit regularization. We review our recent results toward this goal. In approximation theory, both shallow and deep networks are known to approximate any continuous function at an exponential cost. However, we proved that for certain types of compositional functions, deep networks of the convolutional type (even without weight sharing) can avoid the curse of dimensionality. In characterizing minimization of the empirical exponential loss we consider the gradient flow of the weight directions rather than the weights themselves, since the relevant function underlying classification corresponds to normalized networks. The dynamics of normalized weights turn out to be equivalent to those of the constrained problem of minimizing the loss subject to a unit norm constraint. In particular, the dynamics of typical gradient descent have the same critical points as the constrained problem. Thus there is implicit regularization in training deep networks under exponential-type loss functions during gradient flow. As a consequence, the critical points correspond to minimum norm infima of the loss. This result is especially relevant because it has been recently shown that, for overparameterized models, selection of a minimum norm solution optimizes cross-validation leave-one-out stability and thereby the expected error. Thus our results imply that gradient descent in deep networks minimizes the expected error.

11. Boykov, Ilya, Vladimir Roudnev, and Alla Boykova. "Approximate Methods for Solving Linear and Nonlinear Hypersingular Integral Equations." Axioms 9, no. 3 (July 1, 2020): 74. http://dx.doi.org/10.3390/axioms9030074.

Abstract:
We propose an iterative projection method for solving linear and nonlinear hypersingular integral equations with non-Riemann-integrable functions on the right-hand sides. We investigate hypersingular integral equations with second-order singularities. Today, hypersingular integral equations of this type are widely used in physics and technology. The convergence of the proposed method is based on the Lyapunov stability theory for solutions of systems of ordinary differential equations. For linear equations, the advantage of the method lies in the simplicity of verifying unique solvability of the approximate equation system in terms of the logarithmic norm of the operator, which makes it possible to estimate the norm of the inverse matrix of the approximating system. For nonlinear equations, the advantage is that neither the existence nor the invertibility of the derivative of the nonlinear operator is required. Examples are given illustrating the effectiveness of the proposed method.
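
For reference, the logarithmic norm invoked here is the standard one: for a matrix A and an induced matrix norm,

$$\mu(A) = \lim_{h \to 0^{+}} \frac{\lVert I + hA \rVert - 1}{h},$$

and for the Euclidean norm it equals the largest eigenvalue of the symmetric part, μ₂(A) = λ_max((A + Aᵀ)/2). A negative logarithmic norm of the approximating system yields the stability and inverse-norm bounds the abstract alludes to.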

12. Shi, Dongyang, and Zhiyun Yu. "Low-Order Nonconforming Mixed Finite Element Methods for Stationary Incompressible Magnetohydrodynamics Equations." Journal of Applied Mathematics 2012 (2012): 1–21. http://dx.doi.org/10.1155/2012/825609.

Abstract:
The nonconforming mixed finite element methods (NMFEMs) are introduced and analyzed for the numerical discretization of a nonlinear, fully coupled stationary incompressible magnetohydrodynamics (MHD) problem in 3D. A family of the low-order elements on tetrahedra or hexahedra are chosen to approximate the pressure, the velocity field, and the magnetic field. The existence and uniqueness of the approximate solutions are shown, and the optimal error estimates for the corresponding unknown variables in the L2-norm are established, as well as those in a broken H1-norm for the velocity and the magnetic fields. Furthermore, a new approach is adopted to prove the discrete Poincaré-Friedrichs inequality, which is easier than that of the previous literature.

13. Wei, Yunxia, and Yanping Chen. "Convergence Analysis of the Spectral Methods for Weakly Singular Volterra Integro-Differential Equations with Smooth Solutions." Advances in Applied Mathematics and Mechanics 4, no. 1 (February 2012): 1–20. http://dx.doi.org/10.4208/aamm.10-m1055.

Abstract:
The theory of a class of spectral methods is extended to Volterra integro-differential equations which contain a weakly singular kernel (t − s)^(−μ) with 0 < μ < 1. In this work, we consider the case when the underlying solutions of weakly singular Volterra integro-differential equations are sufficiently smooth. We provide a rigorous error analysis for the spectral methods, which shows that both the errors of the approximate solutions and the errors of the approximate derivatives of the solutions decay exponentially in the L∞-norm and the weighted L2-norm. The numerical examples are given to illustrate the theoretical results.

14. HENSHALL, JOHN M., and BRUCE TIER. "An algorithm for sampling descent graphs in large complex pedigrees efficiently." Genetical Research 81, no. 3 (June 2003): 205–12. http://dx.doi.org/10.1017/s0016672303006232.

Abstract:
No exact method for determining genotypic and identity-by-descent probabilities is available for large complex pedigrees. Approximate methods for such pedigrees cannot be guaranteed to be unbiased. A new method is proposed that uses the Metropolis–Hastings algorithm to sample a Markov chain of descent graphs which fit the pedigree and known genotypes. Unknown genotypes are determined from each descent graph. Genotypic probabilities are estimated as their means. The algorithm is shown to be unbiased for small complex pedigrees and feasible and consistent for moderately large complex pedigrees.

15. Yang, Long, Yu Zhang, Gang Zheng, Qian Zheng, Pengfei Li, Jianhang Huang, and Gang Pan. "Policy Optimization with Stochastic Mirror Descent." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8823–31. http://dx.doi.org/10.1609/aaai.v36i8.20863.

Abstract:
Improving sample efficiency has been a longstanding goal in reinforcement learning. This paper proposes the VRMPO algorithm: a sample-efficient policy gradient method with stochastic mirror descent. In VRMPO, a novel variance-reduced policy gradient estimator is presented to improve sample efficiency. We prove that the proposed VRMPO needs only O(ε⁻³) sample trajectories to achieve an ε-approximate first-order stationary point, which matches the best sample complexity for policy optimization. Extensive empirical results demonstrate that VRMPO outperforms state-of-the-art policy gradient methods in various settings.

16. He, Linjie, Yumin Chen, Caiming Zhong, and Keshou Wu. "Granular Elastic Network Regression with Stochastic Gradient Descent." Mathematics 10, no. 15 (July 27, 2022): 2628. http://dx.doi.org/10.3390/math10152628.

Abstract:
Linear regression uses linear functions to model the relationship between a dependent variable and one or more independent variables. Linear regression models have been widely used in fields such as finance, industry, and medicine. To address the problem that the traditional linear regression model has difficulty handling uncertain data, we propose a granule-based elastic network regression model. First, we construct granules and granular vectors by granulation methods. Then, we define multiple granular operation rules so that the model can effectively handle uncertain data. Further, the granular norm and the granular vector norm are defined to design the granular loss function and construct the granular elastic network regression model. After that, we derive the gradient of the granular loss function and design a granular elastic network gradient descent optimization algorithm. Finally, we performed experiments on UCI datasets to verify the validity of the granular elastic network. We found that it fits the data better than the traditional linear regression model.
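
For orientation, the classical (non-granular) elastic net that the paper generalizes combines an l1 and a squared l2 penalty on the weights. A minimal SGD sketch with illustrative names and constants follows; in the granular model, the scalars below are replaced by granules and the loss by the granular norm.

```python
import numpy as np

def elastic_net_sgd(X, y, lam1=0.01, lam2=0.01, lr=1e-3, epochs=100, seed=0):
    """SGD on 0.5*(x_i @ w - y_i)^2 + lam1*||w||_1 + lam2*||w||_2^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            err = X[i] @ w - y[i]
            # subgradient of the l1 term, gradient of the squared l2 term
            w -= lr * (err * X[i] + lam1 * np.sign(w) + 2.0 * lam2 * w)
    return w
```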

17. Yang, Yin, Xinfa Yang, Jindi Wang, and Jie Liu. "The numerical solution of the time-fractional non-linear Klein-Gordon equation via spectral collocation method." Thermal Science 23, no. 3 Part A (2019): 1529–37. http://dx.doi.org/10.2298/tsci180824220y.

Abstract:
In this paper, we consider the numerical solution of the time-fractional non-linear Klein-Gordon equation. We propose a spectral collocation method in both temporal and spatial discretizations with a spectral expansion of the Jacobi interpolation polynomial for this equation. A rigorous error analysis is provided for the spectral method, showing that both the errors of the approximate solutions and the errors of the approximate derivatives of the solutions decay exponentially in the infinity norm and the weighted L2-norm. Numerical tests are carried out to confirm the theoretical results.

18. Zeng, Xueying, Lixin Shen, Yuesheng Xu, and Jian Lu. "Matrix completion via minimizing an approximate rank." Analysis and Applications 17, no. 05 (September 2019): 689–713. http://dx.doi.org/10.1142/s0219530519400025.

Abstract:
The low rank matrix completion problem, which aims to recover a matrix having missing entries, has received much attention in many fields such as image processing and machine learning. The rank of a matrix may be measured by the ℓ0 norm of the vector of its singular values. Due to the nonconvexity and discontinuity of the ℓ0 norm, solving the low rank matrix completion problem, which is clearly NP-hard, suffers from computational challenges. In this paper, we propose a constrained matrix completion model in which a novel nonconvex continuous rank surrogate is used to approximate the rank function of a matrix, promote low rank of the recovered matrix and address the computational challenges. The proposed rank surrogate differs from the convex nuclear norm and other existing state-of-the-art nonconvex surrogates in that it alleviates the discontinuity and nonconvexity of the rank function through a local ε-relaxation of the ℓ0 norm, so that it possesses several desirable properties. These properties ensure that it accurately approximates the rank function by choosing an appropriate relaxation parameter. We moreover develop an efficient iterative algorithm to solve the resulting model. We also propose strategies for automatically updating the relaxation parameter to practically ensure global convergence and speed up the algorithm. We establish theoretical convergence results for the proposed algorithm. Experimental results are presented to demonstrate significant performance improvements of the proposed model and the associated algorithm as compared to state-of-the-art methods in both recoverability and computational efficiency.

19. Siahlooei, Esmaeil, and Seyed Abolfazl Shahzadeh Fazeli. "Two Iterative Methods for Solving Linear Interval Systems." Applied Computational Intelligence and Soft Computing 2018 (October 8, 2018): 1–13. http://dx.doi.org/10.1155/2018/2797038.

Abstract:
Conjugate gradient is an iterative method that solves a linear system Ax=b, where A is a positive definite matrix. We present a new iterative method for solving linear interval systems Ãx̃=b̃, where à is a diagonally dominant interval matrix, as defined in this paper. Our method is based on the conjugate gradient algorithm in the context of interval numbers. Numerical experiments show that the new interval modified conjugate gradient method reduces the norm of the difference between Ãx̃ and b̃ at every step until the norm is sufficiently small. In addition, we present another iterative method that solves Ãx̃=b̃, where à is a diagonally dominant interval matrix. This method, using the idea of steepest descent, finds an exact solution x̃ for linear interval systems Ãx̃=b̃; we present a proof that this iterative method is convergent. Our numerical experiments also illustrate the efficiency of the proposed methods.

20. Jacobsen, Andrew, Matthew Schlegel, Cameron Linke, Thomas Degris, Adam White, and Martha White. "Meta-Descent for Online, Continual Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3943–50. http://dx.doi.org/10.1609/aaai.v33i01.33013943.

Abstract:
This paper investigates different vector step-size adaptation approaches for non-stationary online, continual prediction problems. Vanilla stochastic gradient descent can be considerably improved by scaling the update with a vector of appropriately chosen step-sizes. Many methods, including AdaGrad, RMSProp, and AMSGrad, keep statistics about the learning process to approximate a second-order update, that is, a vector approximation of the inverse Hessian. Another family of approaches uses meta-gradient descent to adapt the step-size parameters to minimize prediction error. These meta-descent strategies are promising for non-stationary problems, but have not been as extensively explored as quasi-second-order methods. We first derive a general, incremental meta-descent algorithm, called AdaGain, designed to be applicable to a much broader range of algorithms, including those with semi-gradient updates or even accelerations, such as RMSProp. We provide an empirical comparison of methods from both families. We conclude that methods from both families can perform well, but in non-stationary prediction problems the meta-descent methods exhibit advantages. Our method is particularly robust across several prediction problems, and is competitive with the state-of-the-art method on a large-scale, time-series prediction problem on real data from a mobile robot.

21. Bonesky, Thomas, Kamil S. Kazimierski, Peter Maass, Frank Schöpfer, and Thomas Schuster. "Minimization of Tikhonov Functionals in Banach Spaces." Abstract and Applied Analysis 2008 (2008): 1–19. http://dx.doi.org/10.1155/2008/192679.

Abstract:
Tikhonov functionals are known to be well suited for obtaining regularized solutions of linear operator equations. We analyze two iterative methods for finding the minimizer of norm-based Tikhonov functionals in Banach spaces. One is the steepest descent method, whereby the iterations are directly carried out in the underlying space, and the other one performs iterations in the dual space. We prove strong convergence of both methods.

22. Sandilya, Ruchi, and Sarvesh Kumar. "Convergence Analysis of Discontinuous Finite Volume Methods for Elliptic Optimal Control Problems." International Journal of Computational Methods 13, no. 02 (March 2016): 1640012. http://dx.doi.org/10.1142/s0219876216400120.

Abstract:
In this paper, we discuss the convergence analysis of discontinuous finite volume methods applied to distributed optimal control problems governed by a class of second-order linear elliptic equations. In order to approximate the control, two different methodologies are adopted: one is the variational discretization method and the second is a piecewise constant discretization technique. For the variational discretization method, optimal order of convergence in the L2-norm for the state, adjoint state and control variables is derived. Moreover, optimal order of convergence in a discrete H1-norm is also derived for the state and adjoint state variables. For the piecewise constant approximation of the control, first-order convergence is derived for the control, state and adjoint state variables in the L2-norm. In addition, optimal order of convergence in a discrete H1-norm is derived for the state and adjoint state variables. Some numerical experiments are conducted to support the derived theoretical convergence rates.

23. Shi, Xiulian. "Spectral Collocation Methods for Fractional Integro-Differential Equations with Weakly Singular Kernels." Journal of Mathematics 2022 (October 25, 2022): 1–9. http://dx.doi.org/10.1155/2022/3767559.

Abstract:
In this paper, we propose and analyze a spectral approximation for the numerical solution of fractional integro-differential equations with weakly singular kernels. First, the original equations are transformed into an equivalent weakly singular Volterra integral equation, which possesses nonsmooth solutions. To eliminate the singularity of the solution, we introduce suitable smoothing transformations and then use the Jacobi spectral collocation method to approximate the resulting equation. The spectral accuracy of the proposed method is investigated in the infinity norm and the weighted L2 norm. Finally, some numerical examples are considered to verify the obtained theoretical results.

24. Carpentier, Jean, and Sebastien Blandin. "Approximate Gradient Descent Convergence Dynamics for Adaptive Control on Heterogeneous Networks." Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 68–76. http://dx.doi.org/10.1609/icaps.v29i1.3461.

Abstract:
Adaptive control is a classical control method for complex cyber-physical systems, including transportation networks. In this work, we analyze the convergence properties of such methods on exemplar graphs, both theoretically and numerically. We first illustrate a limitation of the standard backpressure algorithm for scheduling optimization, and prove that a re-scaling of the model state can lead to an improvement in the overall system optimality by a factor of at most O(k) depending on the network parameters, where k characterizes the network heterogeneity. We exhaustively describe the associated transient and steady-state regimes, and derive convergence properties within this generalized class of backpressure algorithms. Extensive simulations are conducted on both a synthetic network and on a more realistic large-scale network modeled on the Manhattan grid on which theoretical results are verified.

25. Innocenti, Luca, Leonardo Banchi, Sougato Bose, Alessandro Ferraro, and Mauro Paternostro. "Approximate supervised learning of quantum gates via ancillary qubits." International Journal of Quantum Information 16, no. 08 (December 2018): 1840004. http://dx.doi.org/10.1142/s021974991840004x.

Abstract:
We present strategies for the training of a qubit network aimed at the ancilla-assisted synthesis of multi-qubit gates based on a set of restricted resources. By assuming the availability of only time-independent single and two-qubit interactions, we introduce and describe a supervised learning strategy implemented through momentum-stochastic gradient descent with automatic differentiation methods. We demonstrate the effectiveness of the scheme by discussing the implementation of nontrivial three qubit operations, including a QFT and a half-adder gate.

26. Han, Xu, Jiasong Wu, Lu Wang, Yang Chen, Lotfi Senhadji, and Huazhong Shu. "Linear Total Variation Approximate Regularized Nuclear Norm Optimization for Matrix Completion." Abstract and Applied Analysis 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/765782.

Abstract:
Matrix completion, which estimates missing values in visual data, is an important topic in computer vision. Most recent studies focused on low rank matrix approximation via the nuclear norm. However, visual data such as images are rich in texture, which may not be well approximated by a low rank constraint. In this paper, we propose a novel matrix completion method which combines the nuclear norm with a local geometric regularizer to solve the problem of matrix completion for redundant texture images. We mainly consider one of the most commonly used graph regularizers: the total variation norm, a widely used measure for enforcing intensity continuity and recovering a piecewise smooth image. The experimental results show that encouraging results can be obtained by the proposed method on real texture images compared to state-of-the-art methods.

27. Buong, Nguyen, Nguyen Anh, and Khuat Binh. "Steepest-descent Ishikawa iterative methods for a class of variational inequalities in Banach spaces." Filomat 34, no. 5 (2020): 1557–69. http://dx.doi.org/10.2298/fil2005557b.

Abstract:
In this paper, for finding a fixed point of a nonexpansive mapping in either uniformly smooth or reflexive and strictly convex Banach spaces with a uniformly Gâteaux differentiable norm, we present a new explicit iterative method based on a combination of the steepest-descent method with the Ishikawa iterative one. We also present several of its particular cases, one of which is the composite Halpern iterative method from the literature. The explicit iterative method is also extended to the case of an infinite family of nonexpansive mappings. Numerical experiments are given for illustration.

28. Yu, Hong, Tongjun Sun, and Na Li. "The Time Discontinuous H1-Galerkin Mixed Finite Element Method for Linear Sobolev Equations." Discrete Dynamics in Nature and Society 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/618258.

Abstract:
We combine the H1-Galerkin mixed finite element method with the time discontinuous Galerkin method to approximate linear Sobolev equations. The advantages of these two methods are fully utilized. The approximate schemes are established to get the approximate solutions by a piecewise polynomial of degree at most q − 1 in the time variable. The existence and uniqueness of the solutions are proved, and the optimal H1-norm error estimates are derived. We get high accuracy for both the space and time variables.

29. Karakida, Ryo, and Kazuki Osawa. "Understanding approximate Fisher information for fast convergence of natural gradient descent in wide neural networks." Journal of Statistical Mechanics: Theory and Experiment 2021, no. 12 (December 1, 2021): 124010. http://dx.doi.org/10.1088/1742-5468/ac3ae3.

Abstract:
Natural gradient descent (NGD) helps to accelerate the convergence of gradient descent dynamics, but it requires approximations in large-scale deep neural networks because of its high computational cost. Empirical studies have confirmed that some NGD methods with approximate Fisher information converge sufficiently fast in practice. Nevertheless, it remains unclear from the theoretical perspective why and under what conditions such heuristic approximations work well. In this work, we reveal that, under specific conditions, NGD with approximate Fisher information achieves the same fast convergence to global minima as exact NGD. We consider deep neural networks in the infinite-width limit, and analyze the asymptotic training dynamics of NGD in function space via the neural tangent kernel. In the function space, the training dynamics with the approximate Fisher information are identical to those with the exact Fisher information, and they converge quickly. The fast convergence holds in layer-wise approximations; for instance, in block diagonal approximation where each block corresponds to a layer as well as in block tri-diagonal and K-FAC approximations. We also find that a unit-wise approximation achieves the same fast convergence under some assumptions. All of these different approximations have an isotropic gradient in the function space, and this plays a fundamental role in achieving the same convergence properties in training. Thus, the current study gives a novel and unified theoretical foundation with which to understand NGD methods in deep learning.
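
The update being approximated is the standard natural gradient step: writing F for the Fisher information,

$$\theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} \nabla_\theta \mathcal{L}(\theta_t), \qquad F(\theta) = \mathbb{E}\big[\nabla_\theta \log p_\theta(x)\, \nabla_\theta \log p_\theta(x)^{\top}\big],$$

and the approximations studied in the paper (block diagonal, block tri-diagonal, K-FAC, unit-wise) replace F by a structured surrogate that is cheaper to invert.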

30. KIM, MI-YOUNG. "DISCONTINUOUS GALERKIN METHODS FOR THE LOTKA–MCKENDRICK EQUATION WITH FINITE LIFE-SPAN." Mathematical Models and Methods in Applied Sciences 16, no. 02 (February 2006): 161–76. http://dx.doi.org/10.1142/s0218202506001108.

Abstract:
We consider a model of population dynamics whose mortality function is unbounded and whose solution is not regular near the maximum age. A continuous-time discontinuous Galerkin method to approximate the solution is described and analyzed. Our results show that the scheme is convergent, in the L∞(L2) norm, at the rate of r + 1/2 away from the maximum age, and that it is convergent at the rate of l − 1/(2q) + α/2 in the L2(L2) norm near the maximum age, if u ∈ L2(W^(l,2q)), where q ≥ 1, 1/2 ≤ l ≤ r + 1, r is the degree of the polynomial of the approximation space, and α is the growth rate of the mortality function; this estimate is super-convergent near the maximum age. Strong stability of the scheme is shown.

31. Bertrand, Fleurianne, Zhiqiang Cai, and Eun Young Park. "Least-Squares Methods for Elasticity and Stokes Equations with Weakly Imposed Symmetry." Computational Methods in Applied Mathematics 19, no. 3 (July 1, 2019): 415–30. http://dx.doi.org/10.1515/cmam-2018-0255.

Abstract:
This paper develops and analyzes two least-squares methods for the numerical solution of linear elasticity and Stokes equations in both two and three dimensions. Both approaches use the L2 norm to define least-squares functionals. One is based on the stress-displacement/velocity-rotation/vorticity-pressure (SDRP/SVVP) formulation, and the other is based on the stress-displacement/velocity-rotation/vorticity (SDR/SVV) formulation. The introduction of the rotation/vorticity variable enables us to weakly enforce the symmetry of the stress. It is shown that the homogeneous least-squares functionals are elliptic and continuous in the norm of H(div; Ω) for the stress, of H1(Ω) for the displacement/velocity, and of L2(Ω) for the rotation/vorticity and the pressure. This immediately implies optimal error estimates in the energy norm for conforming finite element approximations. It also admits optimal multigrid solution methods if Raviart–Thomas finite element spaces are used to approximate the stress tensor. Through a refined duality argument, optimal L2 norm error estimates for the displacement/velocity are also established. Finally, numerical results for a Cook's membrane problem of planar elasticity are included in order to illustrate the robustness of our method in the incompressible limit.

32. Zhu, Jun, Changwei Chen, Shoubao Su, and Zinan Chang. "Compressive Sensing of Multichannel EEG Signals via lq Norm and Schatten-p Norm Regularization." Mathematical Problems in Engineering 2016 (2016): 1–7. http://dx.doi.org/10.1155/2016/2189563.

Abstract:
In Wireless Body Area Networks (WBAN), energy consumption is dominated by sensing and communication. Recently, a simultaneous cosparsity and low-rank (SCLR) optimization model has shown state-of-the-art performance in compressive sensing (CS) recovery of multichannel EEG signals. How to solve the resulting regularization problem, which involves the l0 norm and the rank function and is known to be NP-hard, is critical to the recovery results. SCLR uses the l1 norm and the nuclear norm as convex surrogates for the l0 norm and the rank function. However, the l1 norm and the nuclear norm cannot well approximate the l0 norm and the rank because there exist irreparable gaps between them. In this paper, an optimization model with the lq norm and the Schatten-p norm is proposed to enforce cosparsity and the low-rank property in the reconstructed multichannel EEG signals. An efficient iterative scheme is used to solve the resulting nonconvex optimization problem. Experimental results have demonstrated that the proposed algorithm can significantly outperform existing state-of-the-art CS methods for compressive sensing of multichannel EEG signals.
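
For reference, the two nonconvex regularizers involved are, for 0 < q, p < 1,

$$\lVert x \rVert_q^q = \sum_i \lvert x_i \rvert^q, \qquad \lVert X \rVert_{S_p}^p = \sum_i \sigma_i(X)^p,$$

where σ_i(X) are the singular values; these tend to the l0 norm and the rank, respectively, as q, p → 0, which is the sense in which they close the gap left by the l1 and nuclear norms.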

33. Zhou, Jingcheng, Wei Wei, Ruizhi Zhang, and Zhiming Zheng. "Damped Newton Stochastic Gradient Descent Method for Neural Networks Training." Mathematics 9, no. 13 (June 29, 2021): 1533. http://dx.doi.org/10.3390/math9131533.

Abstract:
First-order methods such as stochastic gradient descent (SGD) have recently become popular optimization methods for training deep neural networks (DNNs) for good generalization; however, they need a long training time. Second-order methods, which can lower the training time, are scarcely used on account of the high cost of computing the second-order information. Thus, many works have approximated the Hessian matrix to cut the cost of computing, although the approximate Hessian matrix can deviate substantially from the true one. In this paper, we explore the convexity of the Hessian matrix of partial parameters and propose the damped Newton stochastic gradient descent (DN-SGD) method and the stochastic gradient descent damped Newton (SGD-DN) method to train DNNs for regression problems with mean square error (MSE) and classification problems with cross-entropy loss (CEL). In contrast to other second-order methods, which estimate the Hessian matrix of all parameters, our methods accurately compute only a small part of the parameters, which greatly reduces the computational cost and makes the convergence of the learning process much faster and more accurate than SGD and Adagrad. Several numerical experiments on real datasets were performed to verify the effectiveness of our methods for regression and classification problems.

34. Zhang, Bin, Liuliu Wang, Shuang Li, Futai Xie, and Lideng Wei. "Airborne Single-Pass Multi-Baseline InSAR Layover Separation Method Based on Multi-Look Compressive Sensing." Applied Sciences 12, no. 24 (December 9, 2022): 12658. http://dx.doi.org/10.3390/app122412658.

Abstract:
Due to the small number of baselines (2–3), the traditional L1-norm compressive sensing method for layover separation in InSAR has poor separation ability, unstable height estimation, and a long operation time. Based on the idea of multi-look processing, this paper adopts a multi-look compressive sensing method and a multi-look compressive sensing method based on separable approximate sparse reconstruction. The layover separation method based on multi-look compressive sensing uses the pixels surrounding the current point as independent observations together with that point to enlarge the observation vector in compressive sensing, and uses singular value decomposition to estimate the noise, which increases the dimension of the measured data, reduces the noise level, and improves the stability of the noise estimation. Meanwhile, the results of the multi-look L1-norm solution are closer to those of the L0-norm solution, and the sparse reconstruction ability of compressive sensing is improved; thus, the separation of scatterers in layover areas and the stability of the height estimation are stronger. In addition, the multi-look compressive sensing method based on separable approximate sparse reconstruction constructs a differential operation and soft functions, transforms the L1–L2 norm optimization into an iterative soft-threshold shrinkage scheme, and improves the processing speed by means of threshold iteration, which can effectively reduce the operation time while maintaining the resolution of scatterers in layover areas and the accuracy of the height estimation, making large-scale data processing possible. These two methods are verified on simulated and measured data. The simulation experiments are based on the airborne MEMPHIS system with four antennas; the heights of the layover scatterers solved by the two methods are more reliable, stable, and closer to the true values than those of the traditional compressive sensing method. The operation time of the separable approximate sparse reconstruction method is comparable to that of the traditional compressive sensing method and roughly one-quarter that of the multi-look compressive sensing method. The real-data experiments are based on an airborne millimeter-wave InSAR system with three antennas. Both methods show a certain height resolution in layover areas and good elevation continuity, while the traditional compressive sensing method cannot satisfy the sparsity condition and has poor scatterer separation and elevation continuity. Nevertheless, the multi-look compressive sensing method is slightly more stable than the separable approximate sparse reconstruction method, whose operation time is comparable to that of the traditional compressive sensing method and roughly one-fifth that of the multi-look compressive sensing method.

35. SHISHKIN, GRIGORII I., and PETR N. VABISHCHEVICH. "PARALLEL DOMAIN DECOMPOSITION METHODS WITH THE OVERLAPPING OF SUBDOMAINS FOR PARABOLIC PROBLEMS." Mathematical Models and Methods in Applied Sciences 06, no. 08 (December 1996): 1169–85. http://dx.doi.org/10.1142/s0218202596000493.

Abstract:
For a model two-dimensional boundary value problem for a second-order parabolic equation, finite difference schemes based on a domain decomposition method and oriented toward modern parallel computers are constructed. The finite difference schemes used do not require iterations at the time levels; some subdomains overlap. We study two classes of schemes characterized by synchronous and asynchronous implementations. It is shown that, as the grids are refined, the approximate solutions converge to the exact one in the uniform grid norm.

36. Yadav, Sangita, and Amiya K. Pani. "Superconvergent discontinuous Galerkin methods for nonlinear parabolic initial and boundary value problems." Journal of Numerical Mathematics 27, no. 3 (September 25, 2019): 183–202. http://dx.doi.org/10.1515/jnma-2018-0035.

Abstract:
In this article, we discuss error estimates for nonlinear parabolic problems using discontinuous Galerkin methods, which include the HDG method, in the spatial direction while keeping the time variable continuous. When piecewise polynomials of degree k ⩾ 1 are used to approximate both the potential as well as the flux, it is shown that the error estimate for the semi-discrete flux in the L∞(0, T; L2)-norm is of order k + 1. With the help of a suitable post-processing of the semi-discrete potential, it is proved that the resulting post-processed potential converges with order of convergence O(√(log(T/h²)) h^(k+2)) in the L∞(0, T; L2)-norm. These results extend the HDG analysis of Chabaud and Cockburn [Math. Comp. 81 (2012), 107–129] for the heat equation to non-linear parabolic problems.

37. Wang, Yi Yan, and Shi Yun Wu. "Adaptive Image Denoising Approach Based on Generalized Lp Norm Variational Model." Applied Mechanics and Materials 556-562 (May 2014): 4851–55. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.4851.

Abstract:
Well-known methods based on gradient-dependent regularizers, such as the total variation (TV) model, often suffer from the staircase effect and the loss of edge details. In order to overcome such drawbacks, an adaptive variational approach is proposed. First, we introduce a Gaussian-smoothed image as the variable of the Lp norm, and then employ the difference curvature instead of the gradient as a new edge indicator, which can effectively distinguish between ramps and edges. In the proposed model, the regularization term and the fidelity term are both adaptive. At object edges, the regularization term approximates the TV norm in order to preserve the edges; in flat and ramp regions, it approximates the L2 norm in order to avoid the staircase effect. Meanwhile, we add a spatially varying fidelity term that locally controls the extent of denoising over image regions according to their content. Local variance measures of the oscillatory part of the signal are used to compute the adaptive fidelity term. Comparative results on both natural and medical images demonstrate that the new method can avoid the staircase effect and better preserve fine details than the other variational models.

38. Cao, Yongcan, and Huixin Zhan. "Efficient Multi-objective Reinforcement Learning via Multiple-gradient Descent with Iteratively Discovered Weight-Vector Sets." Journal of Artificial Intelligence Research 70 (January 20, 2021): 319–49. http://dx.doi.org/10.1613/jair.1.12270.

Abstract:
Solving multi-objective optimization problems is important in various applications where users are interested in obtaining optimal policies subject to multiple (yet often conflicting) objectives. A typical approach to obtain the optimal policies is to first construct a loss function based on the scalarization of individual objectives and then derive optimal policies that minimize the scalarized loss function. Albeit simple and efficient, the typical approach provides no insights/mechanisms on the optimization of multiple objectives due to the lack of ability to quantify the inter-objective relationship. To address the issue, we propose to develop a new efficient gradient-based multi-objective reinforcement learning approach that seeks to iteratively uncover the quantitative inter-objective relationship via finding a minimum-norm point in the convex hull of the set of multiple policy gradients when the impact of one objective on others is unknown a priori. In particular, we first propose a new PAOLS algorithm that integrates pruning and approximate optimistic linear support algorithm to efficiently discover the weight-vector sets of multiple gradients that quantify the inter-objective relationship. Then we construct an actor and a multi-objective critic that can co-learn the policy and the multi-objective vector value function. Finally, the weight discovery process and the policy and vector value function learning process can be iteratively executed to yield stable weight-vector sets and policies. To validate the effectiveness of the proposed approach, we present a quantitative evaluation of the approach based on three case studies.

39. Jurisic Bellotti, Maja, and Mladen Vucic. "Sparse FIR Filter Design Based on Signomial Programming." Elektronika ir Elektrotechnika 26, no. 1 (February 16, 2020): 40–45. http://dx.doi.org/10.5755/j01.eie.26.1.23560.

Abstract:
The goal of sparse FIR filter design is to minimize the number of nonzero filter coefficients while keeping the frequency response within specified boundaries. Such a design can be formally expressed via minimization of the l0-norm of the filter's impulse response. Unfortunately, the corresponding minimization problem has combinatorial complexity. Therefore, many design methods have been developed which solve the problem approximately, or which solve an approximate problem exactly. In this paper, we propose an approach based on the approximation of the l0-norm by an lp-norm with 0 < p < 1. We minimize the lp-norm using a recently developed method for signomial programming (SGP). Our design starts by forming an SGP problem that describes the filter specifications. The optimum solution of the problem is then found using an iterative procedure which solves a geometric program in each iteration. Filters whose magnitude responses are constrained in the minimax sense are considered. Design examples illustrate that the proposed method, in most cases, results in filters with higher sparsity than those obtained by recently published methods.
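
Schematically, the exact design problem and its relaxation read

$$\min_h \; \lVert h \rVert_0 \;\; \text{s.t. } L(\omega) \le H(\omega) \le U(\omega) \;\; \forall \omega, \qquad \lVert h \rVert_0 \approx \lVert h \rVert_p^p = \sum_n \lvert h_n \rvert^p \;\; (0 < p < 1);$$

the bound notation L(ω), U(ω) is ours for the minimax magnitude-response constraints stated in the paper, and ‖h‖_p^p → ‖h‖_0 as p → 0.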

40. Ficarella, Elisa, Luciano Lamberti, and Sadik Ozgur Degertekin. "Mechanical Identification of Materials and Structures with Optical Methods and Metaheuristic Optimization." Materials 12, no. 13 (July 2, 2019): 2133. http://dx.doi.org/10.3390/ma12132133.

Abstract:
This study presents a hybrid framework for mechanical identification of materials and structures. The inverse problem is solved by combining experimental measurements performed by optical methods and non-linear optimization using metaheuristic algorithms. In particular, we develop three advanced formulations of Simulated Annealing (SA), Harmony Search (HS) and Big Bang-Big Crunch (BBBC) including enhanced approximate line search and computationally cheap gradient evaluation strategies. The rationale behind the new algorithms, denoted as Hybrid Fast Simulated Annealing (HFSA), Hybrid Fast Harmony Search (HFHS) and Hybrid Fast Big Bang-Big Crunch (HFBBBC), is to generate high quality trial designs lying on a properly selected set of descent directions. Besides hybridizing the SA/HS/BBBC metaheuristic search engines with gradient information and approximate line search, HS and BBBC are also hybridized with an enhanced 1-D probabilistic search derived from SA. The results obtained in three inverse problems regarding composite and transversely isotropic hyperelastic materials/structures with up to 17 unknown properties clearly demonstrate the validity of the proposed approach, which significantly reduces the number of structural analyses with respect to previous SA/HS/BBBC formulations and improves the robustness of the metaheuristic search engines.

41. Si, Weijian, Xinggen Qu, Yilin Jiang, and Tao Chen. "Multiple Sparse Measurement Gradient Reconstruction Algorithm for DOA Estimation in Compressed Sensing." Mathematical Problems in Engineering 2015 (2015): 1–6. http://dx.doi.org/10.1155/2015/152570.

Abstract:
A novel direction of arrival (DOA) estimation method in compressed sensing (CS) is proposed, in which the DOA estimation problem is cast as the joint sparse reconstruction from multiple measurement vectors (MMV). The proposed method is derived by transforming quadratically constrained linear programming (QCLP) into unconstrained convex optimization, which overcomes the drawback that the l1-norm is nondifferentiable when sparse sources are reconstructed by minimizing the l1-norm. The convergence rate and estimation performance of the proposed method can be significantly improved, since the steepest descent step and the Barzilai-Borwein step are alternately used as the search step in the unconstrained convex optimization. The proposed method can obtain satisfactory performance especially in scenarios with low signal to noise ratio (SNR), a small number of snapshots, or coherent sources. Simulation results show the superior performance of the proposed method as compared with existing methods.
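
The two step sizes alternated in the search are standard. The Barzilai-Borwein step, in its common first form, is

$$\alpha_k^{\mathrm{BB}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}}, \qquad s_{k-1} = x_k - x_{k-1}, \quad y_{k-1} = g_k - g_{k-1},$$

so that 1/α_k is a Rayleigh-quotient approximation of the Hessian along the last step; alternating it with a steepest descent step is a common way to combine cheap iterations with quasi-Newton behavior.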

42. Xiong, Huan. "DPCD: Discrete Principal Coordinate Descent for Binary Variable Problems." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10391–98. http://dx.doi.org/10.1609/aaai.v36i9.21281.

Abstract:
Binary optimization, a representative subclass of discrete optimization, plays an important role in mathematical optimization and has various applications in computer vision and machine learning. Generally speaking, binary optimization problems are NP-hard and difficult to solve due to the binary constraints, especially when the number of variables is very large. Existing methods often suffer from high computational costs or large accumulated quantization errors, or are only designed for specific tasks. In this paper, we propose an efficient algorithm, named Discrete Principal Coordinate Descent (DPCD), to find effective approximate solutions for general binary optimization problems. The proposed algorithm iteratively solves optimization problems related to the linear approximation of loss functions, which leads to updating the binary variables that most impact the value of the loss functions at each step. Our method supports a wide range of empirical objective functions with/without restrictions on the numbers of 1s and -1s in the binary variables. Furthermore, the theoretical convergence of our algorithm is proven, and the explicit convergence rates are derived for objective functions with Lipschitz continuous gradients, which are commonly adopted in practice. Extensive experiments on binary hashing tasks and large-scale datasets demonstrate the superiority of the proposed algorithm over several state-of-the-art methods in terms of both effectiveness and efficiency.

43. Wunderlich, Jonathan, and Michael Plum. "Computer-assisted Existence Proofs for One-dimensional Schrödinger-Poisson Systems." Acta Cybernetica 24, no. 3 (March 16, 2020): 373–91. http://dx.doi.org/10.14232/actacyb.24.3.2020.6.

Abstract:
Motivated by the three-dimensional time-dependent Schrödinger-Poisson system, we prove the existence of non-trivial solutions of the one-dimensional stationary Schrödinger-Poisson system using computer-assisted methods. Starting from a numerical approximate solution, we compute a bound for its defect and a norm bound for the inverse of the linearization at the approximate solution. For the latter, eigenvalue bounds play a crucial role, especially for the eigenvalues "close to" zero. Therefore, we use the Rayleigh-Ritz method and a corollary of the Temple-Lehmann Theorem to get enclosures of the crucial eigenvalues of the linearization below the essential spectrum. With these data in hand, we can use a fixed-point argument to obtain the desired existence of a non-trivial solution "nearby" the approximate one. In addition to the pure existence result, the methods used also provide an enclosure of the exact solution.

44. Sun, Min, Jing Liu, and Yaru Wang. "Two Improved Conjugate Gradient Methods with Application in Compressive Sensing and Motion Control." Mathematical Problems in Engineering 2020 (May 5, 2020): 1–11. http://dx.doi.org/10.1155/2020/9175496.

Abstract:
To solve monotone equations with convex constraints, a novel multiparameterized conjugate gradient method (MPCGM) is designed and analyzed. This conjugate gradient method is derivative-free and can be viewed as a modified version of the famous Fletcher–Reeves (FR) conjugate gradient method. Under appropriate conditions, we show that the proposed method has the global convergence property. Furthermore, we generalize the MPCGM to solve unconstrained optimization problems and offer another novel conjugate gradient method (NCGM), which satisfies the sufficient descent property without any line search. Global convergence of the NCGM is also proved. Finally, we report numerical results to show the efficiency of the two novel methods. Specifically, their practical applications in compressive sensing and motion control of a robot manipulator are also investigated.
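A minimal sketch of a derivative-free FR-type iteration with a hyperplane projection step for monotone equations F(x) = 0, in the spirit of the method described; the backtracking rule and parameters are illustrative assumptions, not the authors' exact MPCGM.

```python
import numpy as np

def fr_projection_method(F, x0, n_iter=500, sigma=1e-4, tol=1e-8):
    """Derivative-free FR-type conjugate gradient iteration with a
    projection step (monotonicity makes the projection valid)."""
    x = np.asarray(x0, dtype=float).copy()
    Fx = F(x)
    d = -Fx
    for _ in range(n_iter):
        if np.linalg.norm(Fx) < tol:
            break
        # Backtracking: find t with -F(x + t d)^T d >= sigma * t * ||d||^2.
        t = 1.0
        while -(F(x + t * d) @ d) < sigma * t * (d @ d) and t > 1e-10:
            t *= 0.5
        z = x + t * d
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:
            return z
        # Project x onto the hyperplane {y : F(z)^T (y - z) = 0}, which
        # separates x from the solution set of a monotone F.
        x_new = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz
        F_new = F(x_new)
        beta = (F_new @ F_new) / (Fx @ Fx)  # Fletcher-Reeves formula
        d = -F_new + beta * d               # derivative-free: uses F values only
        x, Fx = x_new, F_new
    return x
```

For convex constraints, the projection step is usually composed with a projection onto the feasible set; that is omitted here to keep the sketch short.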
45

Zuo, Ming, Shuguo Xie, Xian Zhang, and Meiling Yang. "DOA Estimation Based on Weighted l1-norm Sparse Representation for Low SNR Scenarios." Sensors 21, no. 13 (July 5, 2021): 4614. http://dx.doi.org/10.3390/s21134614.

Abstract:
In this paper, a weighted l1-norm is proposed within an l1-norm-based singular value decomposition (L1-SVD) algorithm, which can suppress spurious peaks and improve the accuracy of direction of arrival (DOA) estimation in low signal-to-noise ratio (SNR) scenarios. The weighting matrix is determined by optimizing the orthogonality of the subspace, and the weighted l1-norm is used as the minimization objective to increase the signal sparsity; the weighting matrix thereby makes the l1-norm approximate the original l0-norm. Simulated results for orthogonal frequency division multiplexing (OFDM) signals demonstrate that the proposed algorithm has a narrower main lobe and a lower side lobe, requires fewer snapshots, and is less sensitive to misestimated signals, which improves the resolution and accuracy of DOA estimation. In particular, the proposed method outperforms other works in low SNR scenarios. Outdoor experimental results with OFDM signals show that the proposed algorithm is superior to other methods, with a narrower main lobe and a lower side lobe, and can be used for DOA estimation of UAVs and pseudo base stations.
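A hedged sketch of the weighting idea: the weights come from the orthogonality between candidate steering vectors and the noise subspace, and are then used inside a weighted soft-threshold. The grid/steering names and the plain ISTA solver are illustrative assumptions, not the paper's exact L1-SVD pipeline.

```python
import numpy as np

def subspace_weights(Y, A_grid, n_sources):
    """Weights w_i = ||En^H a(theta_i)||: small where a grid steering
    vector lies close to the signal subspace (a likely DOA), so those
    grid points are penalized less."""
    U, _, _ = np.linalg.svd(Y)
    En = U[:, n_sources:]                       # noise subspace
    return np.linalg.norm(En.conj().T @ A_grid, axis=0)

def weighted_l1_ista(A, y, weights, lam=0.1, n_iter=500):
    """ISTA for min_x 0.5*||A x - y||^2 + lam * sum_i weights_i * |x_i|;
    the weighting pushes the l1 penalty toward l0-like behavior."""
    L = np.linalg.norm(A, 2) ** 2               # gradient Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        z = x - (A.conj().T @ (A @ x - y)) / L  # gradient step on data term
        mag = np.abs(z)
        shrink = np.maximum(mag - lam * weights / L, 0.0)
        x = shrink * z / np.maximum(mag, 1e-12)  # weighted soft-threshold
    return x
```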
46

He, Guodong, Maozhong Song, Shanshan Zhang, Huiping Qin, and Xiaojuan Xie. "GPS Sparse Multipath Signal Estimation Based on Compressive Sensing." Wireless Communications and Mobile Computing 2021 (May 11, 2021): 1–9. http://dx.doi.org/10.1155/2021/5583429.

Abstract:
A GPS sparse multipath signal estimation method based on compressive sensing is proposed. A new l0-norm approximation function is designed, and its parameter is gradually reduced so that the function approaches the true l0-norm. The sparse signal is then reconstructed by a modified Newton method. The reconstruction performance of the proposed algorithm is better than that of several commonly used reconstruction algorithms across different sparsity levels and noise intensities. A GPS sparse multipath signal model is established, and the sparse multipath signal is estimated by the proposed reconstruction algorithm. Compared with several commonly used estimation methods, the estimation error of the proposed method is lower.
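A minimal sketch of the shrinking-parameter idea: replace the l0 count with a smooth Gaussian surrogate, descend on it while staying on the measurement constraint, and let the width parameter shrink. The schedule and the plain gradient step (in place of the paper's modified Newton step) are illustrative assumptions.

```python
import numpy as np

def smoothed_l0(A, y, sigma_decay=0.7, n_sigma=8, inner=20, mu=2.0):
    """SL0-style reconstruction: approximate ||x||_0 by
    sum_i (1 - exp(-|x_i|^2 / (2 sigma^2))) and drive sigma -> 0."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    for _ in range(n_sigma):
        for _ in range(inner):
            # Gradient direction of the smooth l0 surrogate.
            delta = x * np.exp(-np.abs(x) ** 2 / (2 * sigma ** 2))
            x = x - mu * delta
            x = x - A_pinv @ (A @ x - y)  # project back onto A x = y
        sigma *= sigma_decay              # sharpen the l0 approximation
    return x
```

Replacing the inner gradient step by a (modified) Newton step, as the paper does, changes the per-iteration cost but not the overall shrink-and-project structure.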
47

Li, Ning, Qing-Wen Wang, and Jing Jiang. "An Efficient Algorithm for the Reflexive Solution of the Quaternion Matrix Equation AXB + CX^H D = F." Journal of Applied Mathematics 2013 (2013): 1–14. http://dx.doi.org/10.1155/2013/217540.

Abstract:
We propose an iterative algorithm for finding the reflexive solution of the quaternion matrix equation AXB + CX^H D = F. When the matrix equation is consistent over a reflexive matrix X, a reflexive solution can be obtained within finitely many iteration steps in the absence of roundoff errors. By the proposed iterative algorithm, the least Frobenius norm reflexive solution of the matrix equation can be derived when an appropriate initial iteration matrix is chosen. Furthermore, the optimal approximate reflexive solution to a given reflexive matrix X0 can be derived by finding the least Frobenius norm reflexive solution of a new corresponding quaternion matrix equation. Finally, two numerical examples are given to illustrate the efficiency of the proposed methods.
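To unpack the last step: X is reflexive with respect to a generalized reflection matrix P (with P^H = P and P^2 = I) if X = PXP, and the optimal approximation problem reduces to a least-norm problem by a change of variables (a standard argument, with notation assumed here):

```latex
\[
  \min_{X \ \text{reflexive}} \ \|X - X_0\|_F
  \quad \text{s.t.} \quad A X B + C X^{H} D = F
\]
is solved by writing $X = X_0 + \tilde X$ and taking $\tilde X$ to be the
least Frobenius norm reflexive solution of
\[
  A \tilde X B + C \tilde X^{H} D \;=\; F - A X_0 B - C X_0^{H} D .
\]
```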
48

Sun, Zhenzhen, and Yuanlong Yu. "Robust multi-class feature selection via l2,0-norm regularization minimization." Intelligent Data Analysis 26, no. 1 (January 14, 2022): 57–73. http://dx.doi.org/10.3233/ida-205724.

Abstract:
Feature selection is an important data preprocessing step in data mining and machine learning that can reduce the number of features without deteriorating the model's performance. Recently, sparse regression has received considerable attention in the feature selection task due to its good performance. However, because the l2,0-norm regularization term is non-convex, the problem is hard to solve, and most existing methods relax it to the l2,1-norm. Unlike the existing methods, this paper proposes a novel method that solves the l2,0-norm regularized least squares problem directly, based on iterative hard thresholding, which produces an exactly row-sparse solution for the weight matrix, so features can be selected more precisely. Furthermore, two homotopy strategies are derived to reduce the computational time of the optimization method, making it more practical for real-world applications. The proposed method is verified on eight biological datasets; experimental results show that our method achieves higher classification accuracy with fewer selected features than the approximate convex counterparts and other state-of-the-art feature selection methods.
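A hedged sketch of the iterative-hard-thresholding step for the l2,0-regularized least squares problem: a gradient step on the quadratic loss followed by keeping only the k rows of W with the largest l2 norms. The row budget k, step size, and loop structure are illustrative assumptions, not the paper's exact algorithm or its homotopy strategies.

```python
import numpy as np

def iht_row_sparse(X, Y, k, n_iter=200):
    """Sketch of min_W ||X W - Y||_F^2 with at most k nonzero rows of W,
    via projected gradient descent (iterative hard thresholding)."""
    n_feat = X.shape[1]
    W = np.zeros((n_feat, Y.shape[1]))
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant
    for _ in range(n_iter):
        W -= step * X.T @ (X @ W - Y)       # gradient step
        row_norms = np.linalg.norm(W, axis=1)
        drop = np.argsort(-row_norms)[k:]   # all but the top-k rows
        W[drop] = 0.0                       # exact row sparsity
    return W  # selected features = rows with nonzero norm
```

Because the projection zeroes whole rows, the same features are discarded for every class at once, which is what exact row-sparsity buys over the l2,1 relaxation.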
49

Jain, Nishant, Brian Coyle, Elham Kashefi, and Niraj Kumar. "Graph neural network initialisation of quantum approximate optimisation." Quantum 6 (November 17, 2022): 861. http://dx.doi.org/10.22331/q-2022-11-17-861.

Abstract:
Approximate combinatorial optimisation has emerged as one of the most promising application areas for quantum computers, particularly those in the near term. In this work, we focus on the quantum approximate optimisation algorithm (QAOA) for solving the MaxCut problem. Specifically, we address two problems in the QAOA: how to initialise the algorithm, and how to subsequently train the parameters to find an optimal solution. For the former, we propose graph neural networks (GNNs) as a warm-starting technique for QAOA. We demonstrate that merging GNNs with QAOA can outperform both approaches individually. Furthermore, we demonstrate how graph neural networks enable warm-start generalisation not only across graph instances, but also to increasing graph sizes, a feature not straightforwardly available to other warm-starting methods. For training the QAOA, we test several optimisers for the MaxCut problem up to 16 qubits and benchmark against vanilla gradient descent. These include quantum-aware/agnostic optimisers and machine-learning-based/neural optimisers; examples of the latter include reinforcement learning and meta-learning. With the incorporation of these initialisation and optimisation toolkits, we demonstrate how the optimisation problems can be solved using QAOA in an end-to-end differentiable pipeline.
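For context, the MaxCut cost and the QAOA ansatz that the initialisation and training operate on can be written schematically as follows (standard notation, not specific to this paper):

```latex
\[
  H_C \;=\; \sum_{(i,j) \in E} \tfrac{1}{2}\bigl(1 - Z_i Z_j\bigr),
  \qquad
  \lvert \gamma, \beta \rangle \;=\;
  \prod_{p=1}^{P} e^{-i \beta_p H_M}\, e^{-i \gamma_p H_C}\,
  \lvert + \rangle^{\otimes n},
  \qquad H_M = \sum_i X_i .
\]
```

The 2P parameters (gamma, beta) are trained to maximise the expectation of H_C in this state; a warm start replaces the random initial parameters, or the initial state, with ones informed by the GNN's candidate cut.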
50

Masegosa, Andrés R., Rafael Cabañas, Helge Langseth, Thomas D. Nielsen, and Antonio Salmerón. "Probabilistic Models with Deep Neural Networks." Entropy 23, no. 1 (January 18, 2021): 117. http://dx.doi.org/10.3390/e23010117.

Abstract:
Recent advances in statistical inference have significantly expanded the toolbox of probabilistic modeling. Historically, probabilistic modeling has been constrained to very restricted model classes, where exact or approximate probabilistic inference is feasible. However, developments in variational inference, a general form of approximate probabilistic inference that originated in statistical physics, have enabled probabilistic modeling to overcome these limitations: (i) Approximate probabilistic inference is now possible over a broad class of probabilistic models containing a large number of parameters, and (ii) scalable inference methods based on stochastic gradient descent and distributed computing engines allow probabilistic modeling to be applied to massive data sets. One important practical consequence of these advances is the possibility to include deep neural networks within probabilistic models, thereby capturing complex non-linear stochastic relationships between the random variables. These advances, in conjunction with the release of novel probabilistic modeling toolboxes, have greatly expanded the scope of applications of probabilistic models, and allowed the models to take advantage of the recent strides made by the deep learning community. In this paper, we provide an overview of the main concepts, methods, and tools needed to use deep neural networks within a probabilistic modeling framework.
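A minimal sketch of the stochastic-gradient variational idea the survey describes, using the Gaussian reparameterization trick on a toy conjugate model; the model, learning rate, and update rules are illustrative assumptions, not a specific toolbox API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y ~ N(theta, 1) with prior theta ~ N(0, 1); variational
# posterior q(theta) = N(mu, exp(log_std)^2). Maximize the ELBO by
# stochastic gradient ascent with theta = mu + exp(log_std) * eps.
y, mu, log_std, lr = 2.0, 0.0, 0.0, 0.05
for _ in range(2000):
    eps = rng.standard_normal()
    std = np.exp(log_std)
    theta = mu + std * eps       # reparameterized sample
    dlogp = (y - theta) - theta  # d/dtheta [log p(y|theta) + log p(theta)]
    mu += lr * dlogp                           # pathwise gradient w.r.t. mu
    log_std += lr * (dlogp * eps * std + 1.0)  # plus entropy-term gradient
# Exact posterior is N(y/2, 1/2); mu and exp(log_std) should approach
# 1.0 and about 0.707.
```

Swapping theta for the weights of a deep network and dlogp for an automatic-differentiation gradient gives the kind of scalable scheme the abstract refers to.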