
Journal articles on the topic 'Newton algorithms'


Consult the top 50 journal articles for your research on the topic 'Newton algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lu, Pei Xin. "Research on BP Neural Network Algorithm Based on Quasi-Newton Method." Applied Mechanics and Materials 686 (October 2014): 388–94. http://dx.doi.org/10.4028/www.scientific.net/amm.686.388.

Abstract:
As research on improving the BP algorithm has grown, many improvement methods have been proposed. This paper studies two improved algorithms based on the quasi-Newton method: the DFP algorithm and the L-BFGS algorithm. After analyzing the features of quasi-Newton methods, the paper improves the BP neural network algorithm and adjusts for problems that arise in the improvement process. An empirical analysis demonstrates the effectiveness of the BP neural network algorithm based on the quasi-Newton method. The improved algorithms are compared with the traditional BP algorithm, and the comparison indicates that the improved BP algorithm performs better.
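As a rough, generic illustration of the quasi-Newton idea the abstract builds on (not the paper's BP-network code), the sketch below implements the BFGS inverse-Hessian update with a simple backtracking line search; the Rosenbrock test objective and all parameter values are illustrative assumptions:

```python
import numpy as np

def bfgs(f, grad, x0, iters=200, tol=1e-8):
    """Minimise f with the BFGS quasi-Newton method: maintain an
    approximation H to the inverse Hessian from gradient differences,
    so no second derivatives are ever formed."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                    # initial inverse-Hessian guess
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                        # quasi-Newton search direction
        t = 1.0                           # backtracking (Armijo) line search
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p
        g_new = grad(x + s)
        y = g_new - g
        if s @ y > 1e-12:                 # curvature condition keeps H positive definite
            rho = 1.0 / (s @ y)
            I = np.eye(x.size)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x + s, g_new
    return x

# Rosenbrock function as a stand-in test objective
f = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
grad = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
                           200 * (v[1] - v[0]**2)])
x_star = bfgs(f, grad, [-1.2, 1.0])
```

The DFP and L-BFGS variants the abstract mentions differ mainly in how this inverse-Hessian approximation is updated and stored.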
2

KISETA, Jacques SABITI, and Roger LIENDI AKUMOSO. "A Review of Well-Known Robust Line Search and Trust Region Numerical Optimization Algorithms for Solving Nonlinear Least-Squares Problems." International Science Review 2, no. 3 (November 9, 2021): 1–17. http://dx.doi.org/10.47285/isr.v2i3.106.

Abstract:
The conditional, unconditional, or exact maximum likelihood estimation and the least-squares estimation involve minimizing either the conditional or the unconditional residual sum of squares. The maximum likelihood estimation (MLE) approach and the nonlinear least squares (NLS) procedure involve an iterative search technique for obtaining global rather than local optimal estimates. Several authors have presented brief overviews of algorithms for solving NLS problems. Snezana S. Djordjevic (2019) presented a review of some unconstrained optimization methods based on line search techniques. Mahaboob et al. (2017) proposed a different approach to estimating nonlinear regression models using numerical methods, also based on line search techniques. Mohammad, Waziri, and Santos (2019) briefly reviewed methods for solving NLS problems, paying special attention to the structured quasi-Newton methods, which belong to the family of line search techniques. Ya-Xiang Yuan (2011) reviewed some recent results on numerical methods for nonlinear equations and NLS problems based on line searches and trust-region techniques, particularly Levenberg-Marquardt type methods, quasi-Newton type methods, and trust-region algorithms. The purpose of this paper is to review some of the more well-known robust line search and trust-region numerical optimization algorithms that are most used in practice for the estimation of time series models and other nonlinear regression models. The line search algorithms considered are the Gradient algorithm, the Steepest Descent (SD) algorithm, the Newton-Raphson (NR) algorithm, Murray's algorithm, the Quasi-Newton (QN) algorithm, the Gauss-Newton (GN) algorithm, the Fletcher and Powell (FP) algorithm, and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, while the only trust-region algorithm considered is the Levenberg-Marquardt (LM) algorithm. We also give some of the main advantages and disadvantages of these different algorithms.
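One of the reviewed methods, Gauss-Newton, can be sketched compactly for a toy nonlinear least-squares fit; the exponential model, synthetic data, and starting point below are illustrative assumptions, with the start chosen near the solution, where Gauss-Newton converges quadratically:

```python
import numpy as np

def gauss_newton(x, y, theta, iters=20):
    """Fit y ~ a*exp(b*x) by Gauss-Newton: linearise the residual at
    the current parameters and solve the resulting linear
    least-squares problem for the step."""
    a, b = theta
    for _ in range(iters):
        e = np.exp(b * x)
        r = a * e - y                            # residual vector
        J = np.column_stack([e, a * x * e])      # Jacobian of r w.r.t. (a, b)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # solve J @ step = -r
        a, b = a + step[0], b + step[1]
    return a, b

x = np.linspace(0.0, 2.0, 25)
y = 2.0 * np.exp(0.5 * x)                        # noise-free synthetic data
a_hat, b_hat = gauss_newton(x, y, theta=(1.8, 0.4))
```

The Levenberg-Marquardt algorithm covered in the review adds a damping term to the normal equations of exactly this linearised subproblem, trading speed near the solution for robustness far from it.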
3

Xu, Xiang-Rong, Won-Jee Chung, Young-Hyu Choi, and Xiang-Feng Ma. "A new dynamic formulation for robot manipulators containing closed kinematic chains." Robotica 17, no. 3 (May 1999): 261–67. http://dx.doi.org/10.1017/s0263574799001320.

Abstract:
This paper presents a new recursive algorithm for robot dynamics based on Kane's dynamic equations and the Newton-Euler formulations. Differing from Kane's work, the algorithm is general-purpose and can be easily implemented on computers. It is suited not only for robots with all-rotary joints but also for robots with some prismatic joints. The formulations of the algorithm keep the recurrence characteristics of the Newton-Euler formulations but possess stronger physical significance. Unlike conventional algorithms, such as the Lagrange and Newton-Euler algorithms, this algorithm can handle the dynamics of robots containing closed chains without cutting the closed chains open. In addition, the paper compares the algorithm with those conventional algorithms in terms of the number of multiplications and additions.
4

Dussault, Jean-Pierre. "High-order Newton-penalty algorithms." Journal of Computational and Applied Mathematics 182, no. 1 (October 2005): 117–33. http://dx.doi.org/10.1016/j.cam.2004.11.043.

5

Cai, Xiao-Chuan, and David E. Keyes. "Nonlinearly Preconditioned Inexact Newton Algorithms." SIAM Journal on Scientific Computing 24, no. 1 (January 2002): 183–200. http://dx.doi.org/10.1137/s106482750037620x.

6

Gościniak, Ireneusz, and Krzysztof Gdawiec. "Visual Analysis of Dynamics Behaviour of an Iterative Method Depending on Selected Parameters and Modifications." Entropy 22, no. 7 (July 2, 2020): 734. http://dx.doi.org/10.3390/e22070734.

Abstract:
There is a huge group of algorithms described in the literature that iteratively find solutions of a given equation, and most of them require tuning. The article presents root-finding algorithms based on the Newton–Raphson method, which iteratively find the solutions and likewise require tuning. The modification of the algorithm implements the best position of a particle, similarly to particle swarm optimisation algorithms. The proposed approach allows visualising the impact of the algorithm's elements on its complex behaviour. Moreover, instead of the standard Picard iteration, various feedback iteration processes are used in this research. The presented examples and the accompanying discussion of the algorithm's operation make it possible to understand the influence of the proposed modifications on the algorithm's behaviour, which can be helpful when applying them in other algorithms. The obtained images also have potential artistic applications.
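A minimal sketch of the kind of experiment the article visualises, assuming the classic test function f(z) = z^3 - 1 and the standard Picard-iterated Newton–Raphson step (the article's feedback iterations and particle-swarm-style modifications are not reproduced):

```python
def newton_basin(z, iters=50, tol=1e-10):
    """Newton-Raphson iteration for f(z) = z**3 - 1 on the complex
    plane. Returns the index (0, 1, or 2) of the cube root of unity
    that the starting point z is attracted to."""
    roots = [complex(1, 0),
             complex(-0.5, 3**0.5 / 2),
             complex(-0.5, -(3**0.5) / 2)]
    for _ in range(iters):
        fz = z**3 - 1
        if abs(fz) < tol:
            break
        z = z - fz / (3 * z**2)      # Newton step: z_{k+1} = z_k - f(z)/f'(z)
    return min(range(3), key=lambda i: abs(z - roots[i]))
```

Colouring each starting point of a grid by the returned index (and shading by the number of iterations used) produces the familiar Newton fractal images such studies analyse; swapping the Picard update for a feedback iteration changes the basin shapes.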
7

Taher, Mardeen Sh, and Salah G. Shareef. "A Combined Conjugate Gradient Quasi-Newton Method with Modification BFGS Formula." International Journal of Analysis and Applications 21 (April 3, 2023): 31. http://dx.doi.org/10.28924/2291-8639-21-2023-31.

Abstract:
The conjugate gradient and quasi-Newton methods each have advantages and drawbacks: although quasi-Newton algorithms converge more rapidly than conjugate gradient algorithms, they require more storage. In 1976, Buckley designed a method that combines the CG method with QN updates, whose performance is better than that of conjugate gradient algorithms but not as good as the quasi-Newton approach. This type of method is called the preconditioned conjugate gradient (PCG) method. In this paper, we introduce two new preconditioned conjugate gradient (PCG) methods that combine conjugate gradient with a new quasi-Newton update. The new quasi-Newton update preserves positive definiteness, and the direction of the new preconditioned conjugate gradient is a descent direction. Numerical results show that the new preconditioned conjugate gradient method is more effective than standard preconditioning on several high-dimensional test problems.
8

Zhang, Liping. "A Newton-Type Algorithm for Solving Problems of Search Theory." Advances in Operations Research 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/513918.

Abstract:
In his survey of the continuous nonlinear resource allocation problem, Patriksson pointed out that Newton-type algorithms had not been proposed for solving the problem of search theory from a theoretical perspective. In this paper, we propose a Newton-type algorithm to solve the problem. We prove that the proposed algorithm has global and superlinear convergence. Some numerical results indicate that the proposed algorithm is promising.
9

Bo, Liefeng, Ling Wang, and Licheng Jiao. "Recursive Finite Newton Algorithm for Support Vector Regression in the Primal." Neural Computation 19, no. 4 (April 2007): 1082–96. http://dx.doi.org/10.1162/neco.2007.19.4.1082.

Abstract:
Some algorithms in the primal have been recently proposed for training support vector machines. This letter follows those studies and develops a recursive finite Newton algorithm (IHLF-SVR-RFN) for training nonlinear support vector regression. The insensitive Huber loss function and the computation of the Newton step are discussed in detail. Comparisons with LIBSVM 2.82 show that the proposed algorithm gives promising results.
10

Aghamiry, H. S., A. Gholami, and S. Operto. "Full waveform inversion by proximal Newton method using adaptive regularization." Geophysical Journal International 224, no. 1 (September 11, 2020): 169–80. http://dx.doi.org/10.1093/gji/ggaa434.

Abstract:
Regularization is necessary for solving non-linear ill-posed inverse problems arising in different fields of geosciences. The basis of a suitable regularization is the prior expressed by the regularizer, which can be non-adaptive or adaptive (data-driven), smooth or non-smooth, variational-based or not. Nevertheless, tailoring a suitable and easy-to-implement prior for describing geophysical models is a non-trivial task. In this paper, we propose two generic optimization algorithms to implement arbitrary regularization in non-linear inverse problems such as full-waveform inversion (FWI), where the regularization task is recast as a denoising problem. We assess these optimization algorithms with the plug-and-play block-matching (BM3D) regularization algorithm, which determines empirical priors adaptively without any optimization formulation. The non-linear inverse problem is solved with a proximal Newton method, which generalizes the traditional Newton step so as to involve the gradients/subgradients of a (possibly non-differentiable) regularization function through operator splitting and proximal mappings. Furthermore, it requires accounting for the Hessian matrix in the regularized least-squares optimization problem. We propose two different splitting algorithms for this task. In the first, we compute the Newton search direction with an iterative method based upon the first-order generalized iterative shrinkage-thresholding algorithm (ISTA), hence Newton-ISTA (NISTA). The iterations require only Hessian-vector products to compute the gradient step of the quadratic approximation of the non-linear objective function. The second relies on the alternating direction method of multipliers (ADMM), hence Newton-ADMM (NADMM), where the least-squares optimization subproblem and the regularization subproblem in the composite objective function are decoupled through an auxiliary variable and solved in an alternating mode. The least-squares subproblem can be solved with exact, inexact, or quasi-Newton methods. We compare NISTA and NADMM numerically by solving FWI with BM3D regularization. The tests show promising results obtained by both algorithms. However, NADMM shows a faster convergence rate than NISTA when using L-BFGS to solve the Newton system.
11

Saito, Kazumi, and Ryohei Nakano. "Partial BFGS Update and Efficient Step-Length Calculation for Three-Layer Neural Networks." Neural Computation 9, no. 1 (January 1, 1997): 123–41. http://dx.doi.org/10.1162/neco.1997.9.1.123.

Abstract:
Second-order learning algorithms based on quasi-Newton methods have two problems. First, standard quasi-Newton methods are impractical for large-scale problems because they require N^2 storage space to maintain an approximation to an inverse Hessian matrix (N is the number of weights). Second, a line search to calculate a reasonably accurate step length is indispensable for these algorithms. In order to provide desirable performance, an efficient and reasonably accurate line search is needed. To overcome these problems, we propose a new second-order learning algorithm. The descent direction is calculated on the basis of a partial Broyden-Fletcher-Goldfarb-Shanno (BFGS) update with 2Ns memory space (s ≪ N), and a reasonably accurate step length is efficiently calculated as the minimal point of a second-order approximation to the objective function with respect to the step length. Our experiments, which use a parity problem and a speech synthesis problem, have shown that the proposed algorithm outperformed major learning algorithms. Moreover, it turned out that an efficient and accurate step-length calculation plays an important role in the convergence of quasi-Newton algorithms, and a partial BFGS update greatly saves storage space without losing convergence performance.
12

Awotunde, A. A. A., and R. N. N. Horne. "An Improved Adjoint-Sensitivity Computation for Multiphase Flow Using Wavelets." SPE Journal 17, no. 02 (February 8, 2012): 402–17. http://dx.doi.org/10.2118/133866-pa.

Abstract:
In history matching, one of the challenges in the use of gradient-based Newton algorithms (e.g., Gauss-Newton and Levenberg-Marquardt) in solving the inverse problem is the huge cost associated with the computation of the sensitivity matrix. Although the Newton type of algorithm gives faster convergence than most other gradient-based inverse solution algorithms, its use is limited to small- and medium-scale problems in which the sensitivity coefficients are easily and quickly computed. Modelers often use less-efficient algorithms (e.g., conjugate-gradient and quasi-Newton) to model large-scale problems because these algorithms avoid the direct computation of sensitivity coefficients. To find a direction of descent, such algorithms often use less-precise curvature information that would be contained in the gradient of the objective function. Using a sensitivity matrix gives more-complete information about the curvature of the function; however, this comes with a significant computational cost for large-scale problems. An improved adjoint-sensitivity computation is presented for time-dependent partial-differential equations describing multiphase flow in hydrocarbon reservoirs. The method combines the wavelet parameterization of data space with the adjoint-sensitivity formulation to reduce the cost of computing sensitivities. This reduction in cost is achieved by reducing the size of the linear system of equations that is typically solved to obtain the sensitivities. This cost-saving technique makes solving an inverse problem with algorithms such as Levenberg-Marquardt and Gauss-Newton viable for large multiphase-flow history-matching problems. The effectiveness of this approach is demonstrated for two numerical examples involving multiphase flow in a reservoir with several production and injection wells.
13

Mavridis, P. P., and G. V. Moustakides. "Simplified Newton-type adaptive estimation algorithms." IEEE Transactions on Signal Processing 44, no. 8 (1996): 1932–40. http://dx.doi.org/10.1109/78.533714.

14

Liu, Lulu, and David E. Keyes. "Field-Split Preconditioned Inexact Newton Algorithms." SIAM Journal on Scientific Computing 37, no. 3 (January 2015): A1388–A1409. http://dx.doi.org/10.1137/140970379.

15

Bhotto, Md Zulfiquar Ali, and Andreas Antoniou. "Robust Quasi-Newton Adaptive Filtering Algorithms." IEEE Transactions on Circuits and Systems II: Express Briefs 58, no. 8 (August 2011): 537–41. http://dx.doi.org/10.1109/tcsii.2011.2158722.

16

Camilleri, Liberato. "Bias of Standard Errors in Latent Class Model Applications Using Newton-Raphson and EM Algorithms." Journal of Advanced Computational Intelligence and Intelligent Informatics 13, no. 5 (September 20, 2009): 537–41. http://dx.doi.org/10.20965/jaciii.2009.p0537.

Abstract:
The EM algorithm is a popular method for computing maximum likelihood estimates. It tends to be numerically stable, reduces execution time compared to other estimation procedures, and is easy to implement in latent class models. However, the EM algorithm fails to provide a consistent estimator of the standard errors of maximum likelihood estimates in incomplete-data applications. Correct standard errors can be obtained by numerical differentiation. The technique requires computation of a complete-data gradient vector and Hessian matrix, but not those associated with the incomplete-data likelihood. Obtaining first and second derivatives numerically is computationally very intensive, and execution time may become very expensive when fitting latent class models using a Newton-type algorithm. When the execution time is too high, one is motivated to use the EM algorithm solution to initialize the Newton-Raphson algorithm. We also investigate the effect on the execution time when a final Newton-Raphson step follows the EM algorithm after convergence. In this paper we compare the standard errors provided by the EM and Newton-Raphson algorithms for two models and analyze how this bias is affected by the number of parameters in the model fit.
17

Pošík, Petr, and Waltraud Huyer. "Restarted Local Search Algorithms for Continuous Black Box Optimization." Evolutionary Computation 20, no. 4 (December 2012): 575–607. http://dx.doi.org/10.1162/evco_a_00087.

Abstract:
Several local search algorithms for real-valued domains (axis parallel line search, Nelder-Mead simplex search, Rosenbrock's algorithm, quasi-Newton method, NEWUOA, and VXQR) are described and thoroughly compared in this article, embedding them in a multi-start method. Their comparison aims (1) to help the researchers from the evolutionary community to choose the right opponent for their algorithm (to choose an opponent that would constitute a hard-to-beat baseline algorithm), (2) to describe individual features of these algorithms and show how they influence the algorithm on different problems, and (3) to provide inspiration for the hybridization of evolutionary algorithms with these local optimizers. The recently proposed Comparing Continuous Optimizers (COCO) methodology was adopted as the basis for the comparison. The results show that in low dimensional spaces, the old method of Nelder and Mead is still the most successful among those compared, while in spaces of higher dimensions, it is better to choose an algorithm based on quadratic modeling, such as NEWUOA or a quasi-Newton method.
18

Kostromitsky, S. M., I. N. Davydzenko, and A. A. Dyatko. "Equivalent forms of writing of processing algorithms of adaptive antenna array." Proceedings of the National Academy of Sciences of Belarus, Physical-Technical Series 67, no. 2 (July 2, 2022): 230–38. http://dx.doi.org/10.29235/1561-8358-2022-67-2-230-238.

Abstract:
The article is devoted to obtaining equivalent forms of writing the processing algorithms of adaptive antenna arrays, considering the algorithms as varieties of some generalized LMS algorithm; this facilitates a comparative analysis of the algorithms' characteristics. The following operation algorithms are considered: LMS, NLMS, LMS-Newton, SMI, RLS. The article contains the initial operation algorithms of adaptive antenna arrays, derivations of the equivalent processing algorithms, and an equivalent block diagram of the generalized LMS algorithm. Equivalent forms of writing the operation algorithms of adaptive antenna arrays and their parameters are also presented in tabular form. Of particular interest is the equivalent operation algorithm in the case of the SMI algorithm, which differs most from the LMS algorithm. Equivalent algorithms differ only by the scalar convergence coefficient and the matrix normalizing factor. For the LMS-Newton, SMI, and RLS algorithms, the matrix normalizing factor is the same: it is determined by inverting the estimate of the correlation matrix of the input signals, and it reduces the dependence of the characteristics of the algorithms on the parameters of the correlation matrix. The scalar convergence coefficient of the equivalent algorithms in the case of the SMI and RLS algorithms depends on the iteration number and tends to zero for the SMI algorithm and to some non-zero value for the RLS algorithm. The dependence of the convergence coefficient on the iteration number makes it possible to optimize the characteristics of the algorithms at the transition stage. The tendency of the convergence coefficient to zero in the case of the SMI algorithm makes it effective only for stationary input signals. The non-zero steady-state value of the convergence coefficient in the case of the RLS algorithm allows its effective use in a non-stationary environment.
19

Sabharwal, Chaman Lal. "An Iterative Hybrid Algorithm for Roots of Non-Linear Equations." Eng 2, no. 1 (March 8, 2021): 80–98. http://dx.doi.org/10.3390/eng2010007.

Abstract:
Finding the roots of non-linear and transcendental equations is an important problem in the engineering sciences. In general, such problems do not have an analytic solution, so researchers resort to numerical techniques. We design and implement a three-way hybrid algorithm that is a blend of the Newton–Raphson algorithm and a two-way blended algorithm (a blend of two methods, Bisection and False Position). The hybrid algorithm is a new single-pass iterative approach. The method takes advantage of the best of the three algorithms in each iteration to estimate an approximate value closer to the root. We show that the new algorithm outperforms the Bisection, Regula Falsi, Newton–Raphson, quadrature-based, undetermined-coefficients-based, and decomposition-based algorithms. The new hybrid root-finding algorithm is guaranteed to converge. The experimental results and empirical evidence show that the complexity of the hybrid algorithm is far less than that of the other algorithms. Several functions cited in the literature are used as benchmarks to compare and confirm the simplicity, efficiency, and performance of the proposed method.
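A simplified sketch of a three-way blend in the spirit the abstract describes (the paper's exact selection rule is not reproduced; the candidate ordering below, which prefers a Newton step and falls back to false position and then bisection, is an illustrative assumption):

```python
def hybrid_root(f, df, a, b, tol=1e-12, iters=100):
    """Blend of Newton-Raphson, False Position and Bisection.
    Each iteration shrinks the bracket [a, b] around the sign change,
    proposes a Newton step and a false-position point, and falls back
    to the bisection midpoint if a proposal leaves the bracket, so
    convergence is guaranteed."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    x = 0.5 * (a + b)
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # keep the bracket valid around the sign change
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        # candidate points: Newton, false position, bisection midpoint
        cands = [x - fx / df(x)] if df(x) != 0 else []
        cands.append((a * fb - b * fa) / (fb - fa))  # false position
        cands.append(0.5 * (a + b))                  # bisection (always valid)
        # take the first candidate strictly inside the bracket
        x = next(c for c in cands if a < c < b)
    return x
```

On Newton's classic test equation x^3 - 2x - 5 = 0 bracketed by [2, 3], the Newton candidate is accepted on nearly every iteration, while the bracket fallback supplies the guaranteed convergence the abstract emphasises.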
20

Essanhaji, A., and M. Errachid. "Lagrange Multivariate Polynomial Interpolation: A Random Algorithmic Approach." Journal of Applied Mathematics 2022 (March 14, 2022): 1–8. http://dx.doi.org/10.1155/2022/8227086.

Abstract:
The problems of polynomial interpolation with several variables present more difficulties than those of one-dimensional interpolation. The first problem is to study the regularity of the interpolation schemes. In fact, it is well known that, in contrast to the univariate case, there is no universal space of polynomials which admits unique Lagrange interpolation for all point sets of a given cardinality, and so the interpolation space will depend on the set Z of interpolation points. Techniques of univariate Newton interpolating polynomials are extended to multivariate data points by different generalizations and practical algorithms. The Newton basis format, with the divided-difference algorithm for coefficients, generalizes in a straightforward way when interpolating at nodes on a grid within certain schemes. In this work, we propose a random algorithm for computing several interpolating multivariate Lagrange polynomials, called RLMVPIA (Random Lagrange Multivariate Polynomial Interpolation Algorithm), for any finite interpolation set. We will use a Newton-type polynomial basis, and we will introduce a new concept called the Z,z-partition. All the given algorithms are tested on examples. RLMVPIA is easy to implement and requires no storage.
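The univariate Newton scheme that the paper generalises, divided-difference coefficients plus Horner-style evaluation of the Newton form, can be sketched as follows (the sample nodes and values are illustrative):

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients of the univariate Newton form
    p(x) = c0 + c1*(x - x0) + c2*(x - x0)*(x - x1) + ...
    computed in place, column by column of the difference table."""
    c = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(c, xs, x):
    """Horner-style evaluation of the Newton form at x."""
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = p * (x - xs[i]) + c[i]
    return p

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]     # samples of x**2 + 1
```

Because a new node only appends one coefficient, the Newton basis is the natural starting point for the incremental multivariate constructions the abstract describes.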
21

Wu, Zeguan, Mohammadhossein Mohammadisiahroudi, Brandon Augustino, Xiu Yang, and Tamás Terlaky. "An Inexact Feasible Quantum Interior Point Method for Linearly Constrained Quadratic Optimization." Entropy 25, no. 2 (February 10, 2023): 330. http://dx.doi.org/10.3390/e25020330.

Abstract:
Quantum linear system algorithms (QLSAs) have the potential to speed up algorithms that rely on solving linear systems. Interior point methods (IPMs) yield a fundamental family of polynomial-time algorithms for solving optimization problems. IPMs solve a Newton linear system at each iteration to compute the search direction; thus, QLSAs can potentially speed up IPMs. Due to the noise in contemporary quantum computers, quantum-assisted IPMs (QIPMs) only admit an inexact solution to the Newton linear system. Typically, an inexact search direction leads to an infeasible solution, so, to overcome this, we propose an inexact-feasible QIPM (IF-QIPM) for solving linearly constrained quadratic optimization problems. We also apply the algorithm to ℓ1-norm soft margin support vector machine (SVM) problems, and demonstrate that our algorithm enjoys a speedup in the dimension over existing approaches. This complexity bound is better than any existing classical or quantum algorithm that produces a classical solution.
22

Rodi, William, and Randall L. Mackie. "Nonlinear conjugate gradients algorithm for 2-D magnetotelluric inversion." GEOPHYSICS 66, no. 1 (January 2001): 174–87. http://dx.doi.org/10.1190/1.1444893.

Abstract:
We investigate a new algorithm for computing regularized solutions of the 2-D magnetotelluric inverse problem. The algorithm employs a nonlinear conjugate gradients (NLCG) scheme to minimize an objective function that penalizes data residuals and second spatial derivatives of resistivity. We compare this algorithm theoretically and numerically to two previous algorithms for constructing such “minimum‐structure” models: the Gauss‐Newton method, which solves a sequence of linearized inverse problems and has been the standard approach to nonlinear inversion in geophysics, and an algorithm due to Mackie and Madden, which solves a sequence of linearized inverse problems incompletely using a (linear) conjugate gradients technique. Numerical experiments involving synthetic and field data indicate that the two algorithms based on conjugate gradients (NLCG and Mackie‐Madden) are more efficient than the Gauss‐Newton algorithm in terms of both computer memory requirements and CPU time needed to find accurate solutions to problems of realistic size. This owes largely to the fact that the conjugate gradients‐based algorithms avoid two computationally intensive tasks that are performed at each step of a Gauss‐Newton iteration: calculation of the full Jacobian matrix of the forward modeling operator, and complete solution of a linear system on the model space. The numerical tests also show that the Mackie‐Madden algorithm reduces the objective function more quickly than our new NLCG algorithm in the early stages of minimization, but NLCG is more effective in the later computations. To help understand these results, we describe the Mackie‐Madden and new NLCG algorithms in detail and couch each as a special case of a more general conjugate gradients scheme for nonlinear inversion.
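A generic Fletcher-Reeves NLCG sketch on a toy quadratic objective (not the paper's regularized magnetotelluric objective; the test function and line search constants are illustrative assumptions):

```python
import numpy as np

def nlcg(f, grad, x0, iters=500, tol=1e-10):
    """Nonlinear conjugate gradients (Fletcher-Reeves) with a
    backtracking line search: each new direction mixes the negative
    gradient with the previous direction, so no Jacobian or Hessian
    is ever formed or stored."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0                              # Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if g_new @ d >= 0:                   # restart if not a descent direction
            d = -g_new
        g = g_new
    return x

f = lambda v: (v[0] - 1.0)**2 + 10.0 * (v[1] + 2.0)**2
grad = lambda v: np.array([2.0 * (v[0] - 1.0), 20.0 * (v[1] + 2.0)])
x_hat = nlcg(f, grad, [0.0, 0.0])
```

Avoiding the Jacobian is exactly the memory and CPU saving the abstract credits to the NLCG and Mackie-Madden schemes relative to Gauss-Newton.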
23

Jia, Yan Fei, Xiao Dong Yang, Li Yue Xu, and Li Quan Zhao. "An Improved Independent Component Analysis with Reference." Applied Mechanics and Materials 667 (October 2014): 64–67. http://dx.doi.org/10.4028/www.scientific.net/amm.667.64.

Abstract:
Independent component analysis with reference is a general framework that incorporates a priori information about the interesting source signal into the cost function as constraint terms to form an augmented Lagrange function, and uses Newton's method to optimize the cost function. Compared with the traditional independent component analysis method, it can extract any interesting source signal without extracting all source signals. In this paper, two improved algorithms are presented to accelerate the convergence of independent component analysis with reference. The new algorithms first whiten the observed signals to avoid matrix inversion and reduce algorithm complexity, then use an improved Newton method with fast convergence speed to optimize the cost function, and finally derive the improved independent-component-analysis-with-reference algorithms. Simulation results demonstrate that the new algorithms have faster convergence speed and smaller error compared with the original method.
24

Chu, Xiaofei. "A Distributed Online Newton Step Algorithm for Multi-Agent Systems." Mathematical Problems in Engineering 2022 (October 28, 2022): 1–14. http://dx.doi.org/10.1155/2022/1007032.

Abstract:
Most current algorithms for solving distributed online optimization problems are based on first-order methods, which are simple to compute but slow to converge. Newton's algorithm, with its fast convergence speed, needs to calculate the Hessian matrix and its inverse, which is computationally complex. A distributed online optimization algorithm based on the Newton step is proposed in this paper, which constructs a positive definite matrix from the first-order information of the objective function to replace the inverse of the Hessian matrix in Newton's method. The convergence of the algorithm is proved theoretically and the regret bound of the algorithm is obtained. Finally, numerical experiments are used to verify the feasibility and efficiency of the proposed algorithm. The experimental results show that the proposed algorithm performs efficiently on practical problems compared to several existing gradient descent algorithms.
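The paper's distributed multi-agent construction is not reproduced here, but the underlying single-agent idea, replacing the inverse Hessian with the inverse of a positive definite matrix accumulated from first-order information only, can be sketched as follows (the streaming least-squares setting, the constants eps and eta, and the data are all illustrative assumptions):

```python
import numpy as np

def online_newton_step(stream, dim, eps=1.0, eta=0.1):
    """Single-agent sketch of an online Newton-type step: the Hessian
    inverse is replaced by the inverse of A_t = eps*I + sum of outer
    products of past gradients, built from first-order information."""
    w = np.zeros(dim)
    A = eps * np.eye(dim)                 # positive definite by construction
    for x, y in stream:
        g = 2.0 * (x @ w - y) * x         # gradient of the loss (x.w - y)^2
        A += np.outer(g, g)               # first-order curvature surrogate
        w -= eta * np.linalg.solve(A, g)  # Newton-like step, no true Hessian
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
stream = [(x, x @ w_true) for x in rng.standard_normal((400, 3))]
w_hat = online_newton_step(stream, dim=3)
```

Because A only accumulates rank-one gradient terms, each agent avoids forming or inverting a true Hessian, which is the computational saving the abstract highlights.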
25

Ramponi, Giorgia, and Marcello Restelli. "Newton Optimization on Helmholtz Decomposition for Continuous Games." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11325–33. http://dx.doi.org/10.1609/aaai.v35i13.17350.

Abstract:
Many learning problems involve multiple agents optimizing different interactive functions. In these problems, the standard policy gradient algorithms fail due to the non-stationarity of the setting and the different interests of each agent. In fact, algorithms must take into account the complex dynamics of these systems to guarantee rapid convergence towards a (local) Nash equilibrium. In this paper, we propose NOHD (Newton Optimization on Helmholtz Decomposition), a Newton-like algorithm for multi-agent learning problems based on the decomposition of the dynamics of the system in its irrotational (Potential) and solenoidal (Hamiltonian) component. This method ensures quadratic convergence in purely irrotational systems and pure solenoidal systems. Furthermore, we show that NOHD is attracted to stable fixed points in general multi-agent systems and repelled by strict saddle ones. Finally, we empirically compare the NOHD's performance with that of state-of-the-art algorithms on some bimatrix games and continuous Gridworlds environment.
APA, Harvard, Vancouver, ISO, and other styles
26

Tien Tay, Lea, William Ong Chew Fen, and Lilik Jamilatul Awalin. "Improved Newton-Raphson with Schur complement methods for load flow analysis." Indonesian Journal of Electrical Engineering and Computer Science 16, no. 2 (November 1, 2019): 699. http://dx.doi.org/10.11591/ijeecs.v16.i2.pp699-605.

Full text
Abstract:
<p>The determination of power and voltage in the power load flow for the design and operation of the power system is crucial in the assessment of actual or predicted generation and load conditions. Load flow studies are of the utmost importance, and the analysis has been carried out by computer programming to obtain accurate results within a very short period in a simple and convenient way. In this paper, the Newton-Raphson method, which is the most common, widely used and reliable algorithm for load flow analysis, is revised and modified to improve the speed and simplicity of the algorithm. Four Newton-Raphson algorithms are investigated, namely Newton-Raphson, Newton-Raphson with constant Jacobian, Newton-Raphson with Schur complement, and Newton-Raphson with Schur complement and constant Jacobian. All the methods are implemented on the IEEE 14-, 30-, 57- and 118-bus systems for comparative analysis using MATLAB programming. The simulation results are then compared using computation time and convergence rate as measurement parameters. Newton-Raphson with Schur complement and constant Jacobian requires the shortest computation time.</p>
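As a minimal illustration of the basic iteration behind the variants above (a generic root-finder with an optional constant-Jacobian mode, not the authors' load-flow code; the two-equation test system is invented):

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50, constant_jacobian=False):
    """Newton-Raphson for f(x) = 0.  With constant_jacobian=True the
    Jacobian from the initial point is reused every iteration, trading
    convergence rate for cheaper iterations (as in the entry above)."""
    x = np.asarray(x0, dtype=float)
    J = jac(x)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx, np.inf) < tol:
            return x
        if not constant_jacobian:
            J = jac(x)                       # refresh the Jacobian
        x = x + np.linalg.solve(J, -fx)      # solve J*dx = -f(x); never invert J
    return x

# Illustrative system: circle x^2 + y^2 = 4 intersected with parabola y = x^2
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[1] - x[0]**2])
jac = lambda x: np.array([[2.0*x[0], 2.0*x[1]], [-2.0*x[0], 1.0]])
root = newton_raphson(f, jac, [1.0, 1.0])
```

Solving the linear system at each step (rather than forming the inverse) is the standard numerical practice; the constant-Jacobian mode converges only linearly but avoids re-factorizing.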
APA, Harvard, Vancouver, ISO, and other styles
27

Wothke, Werner, George Burket, Li-Sue Chen, Furong Gao, Lianghua Shu, and Mike Chia. "Multimodal Likelihoods in Educational Assessment." Journal of Educational and Behavioral Statistics 36, no. 6 (December 2011): 736–54. http://dx.doi.org/10.3102/1076998610381400.

Full text
Abstract:
It has been known for some time that in item response theory (IRT) models the likelihood function of a respondent's ability may have multiple modes, flat modes, or both. These conditions, often associated with guessing on multiple-choice (MC) questions, can introduce uncertainty and bias into ability estimation by maximum likelihood (ML) when standard Newton solutions are used. This article evaluates the performance of several maximization methods, including initial (grid) searches probing the function slopes, simulated annealing, exhaustive likelihood evaluation, and the standard Newton algorithm. In extensive studies, involving several million records of both generated and real data, the algorithms were evaluated with respect to precision and speed. Two methods, exhaustive search and grid search followed by Newton steps, yielded ML estimates at the required precision. At today's computer speeds, either of these algorithms may be considered for high-volume response pattern scoring.
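The winning strategy above, a coarse grid search followed by Newton polishing, can be sketched for a one-dimensional, possibly multimodal function; the bimodal test function below is an invented stand-in for an IRT likelihood:

```python
import numpy as np

def grid_then_newton(ll, dll, d2ll, lo, hi, n_grid=50, max_steps=25):
    """Maximize a possibly multimodal function: a grid probes the basins,
    then Newton's method polishes the best grid point."""
    grid = np.linspace(lo, hi, n_grid)
    theta = grid[np.argmax([ll(t) for t in grid])]   # best basin found
    for _ in range(max_steps):
        h = d2ll(theta)
        if h >= 0.0:                  # not locally concave: keep the grid point
            break
        step = dll(theta) / h
        theta -= step                 # Newton step toward the stationary point
        if abs(step) < 1e-12:
            break
    return theta

# Invented bimodal stand-in: modes near t = -1 and t = +1, tilted so the
# right-hand mode is global
ll   = lambda t: -(t**2 - 1.0)**2 + 0.5*t
dll  = lambda t: -4.0*t**3 + 4.0*t + 0.5
d2ll = lambda t: -12.0*t**2 + 4.0
mode = grid_then_newton(ll, dll, d2ll, -2.0, 2.0)
```

The grid protects against Newton converging to a minor mode or diverging from a flat region, which is exactly the failure mode the entry describes.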
APA, Harvard, Vancouver, ISO, and other styles
28

Driesen, Johan, and Kay Hameyer. "Newton and quasi‐Newton algorithms for non‐linear electromagnetic–thermal coupled problems." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 21, no. 1 (March 2002): 116–25. http://dx.doi.org/10.1108/03321640210410788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Karim, Hesam, Sharareh R. Niakan, and Reza Safdari. "Comparison of Neural Network Training Algorithms for Classification of Heart Diseases." IAES International Journal of Artificial Intelligence (IJ-AI) 7, no. 4 (October 2, 2018): 185. http://dx.doi.org/10.11591/ijai.v7.i4.pp185-189.

Full text
Abstract:
<span lang="EN-US">Heart disease is the leading cause of death in many countries. The artificial neural network (ANN) technique can be used to predict or classify patients with heart disease. There are different training algorithms for ANNs. We compared eight neural network training algorithms for the classification of heart disease data from the UCI repository containing 303 samples. Performance measures for each algorithm, comprising training speed, number of epochs, accuracy, and mean square error (MSE), were obtained and analyzed. Our results showed that training time for gradient descent algorithms was longer than for the other training algorithms (8-10 seconds). In contrast, quasi-Newton algorithms were faster than the others (&lt;=0 second). The MSE for all algorithms was between 0.117 and 0.228. While there was a significant association between training algorithm and training time (p&lt;0.05), the number of neurons in the hidden layer had no significant effect on the MSE and/or accuracy of the models (p&gt;0.05). Based on our findings, when developing an ANN classification model for heart diseases, it is best to use quasi-Newton training algorithms because of their superior speed and accuracy.</span>
APA, Harvard, Vancouver, ISO, and other styles
30

Kuczma, Marek, and Halina Światak. "Newton-like algorithms for kth root calculation." Annales Polonici Mathematici 52, no. 3 (1991): 303–12. http://dx.doi.org/10.4064/ap-52-3-303-312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Xia, E. Z., and R. A. Saleh. "Parallel waveform-Newton algorithms for circuit simulation." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 11, no. 4 (April 1992): 432–42. http://dx.doi.org/10.1109/43.125091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Wu, J., Venkat Srinivasan, J. Xu, and C. Y. Wang. "Newton-Krylov-Multigrid Algorithms for Battery Simulation." Journal of The Electrochemical Society 149, no. 10 (2002): A1342. http://dx.doi.org/10.1149/1.1505635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Padovan, Joe, and Ralph Moscarello. "Locally bound constrained Newton-Raphson solution algorithms." Computers & Structures 23, no. 2 (January 1986): 181–97. http://dx.doi.org/10.1016/0045-7949(86)90211-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Brown, Peter N., and Youcef Saad. "Convergence Theory of Nonlinear Newton–Krylov Algorithms." SIAM Journal on Optimization 4, no. 2 (May 1994): 297–330. http://dx.doi.org/10.1137/0804017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Bhatnagar, Shalabh, and L. A. Prashanth. "Simultaneous Perturbation Newton Algorithms for Simulation Optimization." Journal of Optimization Theory and Applications 164, no. 2 (December 17, 2013): 621–43. http://dx.doi.org/10.1007/s10957-013-0507-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Khosla, Pradeep K., and Charles P. Neuman. "Computational requirements of customized Newton-Euler algorithms." Journal of Robotic Systems 2, no. 3 (1985): 309–27. http://dx.doi.org/10.1002/rob.4620020308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Souza, Luiz Antonio Farani de, Emerson Vitor Castelani, and Wesley Vagner Inês Shirabayashi. "Adaptation of the Newton-Raphson and Potra-Pták methods for the solution of nonlinear systems." Semina: Ciências Exatas e Tecnológicas 42, no. 1 (June 2, 2021): 63. http://dx.doi.org/10.5433/1679-0375.2021v42n1p63.

Full text
Abstract:
In this paper we adapt the Newton-Raphson and Potra-Pták algorithms by combining them with the modified Newton-Raphson method through the insertion of a condition. Systems of sparse nonlinear equations are solved using the algorithms implemented in the Matlab® environment. In addition, the methods are adapted and applied to space truss problems with geometrically nonlinear behavior. Structures are discretized by the Finite Element Positional Method, and nonlinear responses are obtained in an incremental and iterative process using the Linear Arc-Length path-following technique. For the studied problems, the proposed algorithms showed good computational performance, reaching the solution in less processing time and with fewer iterations until convergence to a given tolerance, when compared to the standard Newton-Raphson and Potra-Pták algorithms.
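For reference, the unmodified two-step Potra-Pták iteration reads as follows; it reuses one Jacobian per iteration and converges cubically. The small periodic test system is illustrative, not one of the paper's truss problems:

```python
import numpy as np

def potra_ptak(f, jac, x0, tol=1e-12, max_iter=30):
    """Potra-Pták iteration for f(x) = 0: a Newton predictor followed by a
    corrector that reuses the same Jacobian, giving third-order convergence."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx, np.inf) < tol:
            return x
        J = jac(x)
        y = x + np.linalg.solve(J, -fx)             # Newton predictor
        x = x + np.linalg.solve(J, -(fx + f(y)))    # corrector, same Jacobian
    return x

# Illustrative sparse system: x_i^2 + x_{i+1} = 2 with wrap-around indexing,
# whose solution is the all-ones vector
def f(x):
    return x**2 + np.roll(x, -1) - 2.0

def jac(x):
    n = len(x)
    return np.diag(2.0*x) + np.eye(n, k=1) + np.eye(n, k=1-n)

root = potra_ptak(f, jac, 1.5*np.ones(4))
```

Each iteration costs one Jacobian evaluation and two back-substitutions, which is why the method can beat plain Newton-Raphson in iteration counts at similar per-step cost.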
APA, Harvard, Vancouver, ISO, and other styles
38

Wei, Yanfang, Qiang He, Yonghui Sun, Yanzhou Sun, and Cong Ji. "Improved Power Flow Algorithm for VSC-HVDC System Based on High-Order Newton-Type Method." Mathematical Problems in Engineering 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/235316.

Full text
Abstract:
Voltage source converter (VSC) based high-voltage direct-current (HVDC) transmission is a new technique with highly promising applications in power systems and power electronics. Given the importance of power flow analysis of the VSC-HVDC system for its utilization and exploitation, improved power flow algorithms for the VSC-HVDC system based on third-order and sixth-order Newton-type methods are presented. The steady-state power model of the VSC-HVDC system is introduced first. Then the derivation of the multivariable-matrix solution formats for the third-order and sixth-order Newton-type power flow methods of the VSC-HVDC system is given. These formats achieve third-order and sixth-order convergence while building on Newton's method. Further, based on automatic differentiation technology and the third-order Newton method, a new improved algorithm is given, which helps improve the program development, computational efficiency, maintainability, and flexibility of the power flow. Simulations of AC/DC power systems in two-terminal, multi-terminal, and multi-infeed DC configurations with VSC-HVDC are carried out for the modified IEEE bus systems, which show the effectiveness and practicality of the presented algorithms for the VSC-HVDC system.
APA, Harvard, Vancouver, ISO, and other styles
39

Tavakoli, Reza, and Albert C. Reynolds. "History Matching With Parameterization Based on the Singular Value Decomposition of a Dimensionless Sensitivity Matrix." SPE Journal 15, no. 02 (December 17, 2009): 495–508. http://dx.doi.org/10.2118/118952-pa.

Full text
Abstract:
In gradient-based automatic history matching, calculation of the derivatives (sensitivities) of all production data with respect to gridblock rock properties and other model parameters is not feasible for large-scale problems. Thus, the Gauss-Newton (GN) method and Levenberg-Marquardt (LM) algorithm, which require calculation of all sensitivities to form the Hessian, are seldom viable. For such problems, the quasi-Newton and nonlinear conjugate gradient algorithms present reasonable alternatives because these two methods do not require explicit calculation of the complete sensitivity matrix or the Hessian. Another possibility, the one explored here, is to define a new parameterization to radically reduce the number of model parameters. We provide a theoretical argument that indicates that reparameterization based on the principal right singular vectors of the dimensionless sensitivity matrix provides an optimal basis for reparameterization of the vector of model parameters. We develop and illustrate algorithms based on this parameterization. Like limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS), these algorithms avoid explicit computation of individual sensitivity coefficients. Explicit computation of the sensitivities is avoided by using a partial singular value decomposition (SVD) based on a form of the Lanczos algorithm. At least for all synthetic problems that we have considered, the reliability, computational efficiency, and robustness of the methods presented here are as good as those obtained with quasi-Newton methods.
APA, Harvard, Vancouver, ISO, and other styles
40

Said Solaiman, Obadah, Rami Sihwail, Hisham Shehadeh, Ishak Hashim, and Kamal Alieyan. "Hybrid Newton–Sperm Swarm Optimization Algorithm for Nonlinear Systems." Mathematics 11, no. 6 (March 17, 2023): 1473. http://dx.doi.org/10.3390/math11061473.

Full text
Abstract:
Nonlinear equation systems (NESs) arise in many problems, including real-life issues in chemistry and neurophysiology. However, the accuracy of solutions is highly dependent on the efficiency of the algorithm used. In this paper, a Modified Sperm Swarm Optimization Algorithm called MSSO is introduced to solve NESs. MSSO combines Newton's second-order iterative method with the Sperm Swarm Optimization Algorithm (SSO). Through this combination, MSSO's search mechanism is improved, its convergence rate is accelerated, local optima are avoided, and more accurate solutions are provided. The method overcomes several drawbacks of Newton's method, such as sensitivity to the selection of initial points, falling into the trap of local optima, and divergence. In this study, MSSO was evaluated using eight NES benchmarks that are commonly used in the literature, three of which come from real-life applications. Furthermore, MSSO was compared with several well-known optimization algorithms, including the original SSO, Harris Hawks Optimization (HHO), the Butterfly Optimization Algorithm (BOA), the Ant Lion Optimizer (ALO), Particle Swarm Optimization (PSO), and Equilibrium Optimization (EO). According to the results, MSSO outperformed the compared algorithms across all selected benchmark systems in four aspects: stability, fitness values, best solutions, and convergence speed.
APA, Harvard, Vancouver, ISO, and other styles
41

Zou, Xiang, Kai Li, and Bing Pan. "The Effect of Low-pass Pre-filtering on Subvoxel Registration Algorithms in Digital Volume Correlation: A revisited study." Measurement Science Review 20, no. 5 (October 1, 2020): 202–9. http://dx.doi.org/10.2478/msr-2020-0025.

Full text
Abstract:
In digital volume correlation (DVC), random image noise in volumetric images leads to increased systematic and random errors in the displacements measured by subvoxel registration algorithms. Previous studies in DIC have shown that applying low-pass pre-filtering to the images prior to correlation analysis can effectively mitigate the systematic error associated with the classical forward additive Newton-Raphson (FA-NR) algorithm. However, the effect of low-pass pre-filtering on the state-of-the-art inverse compositional Gauss-Newton (IC-GN) algorithm has not been investigated so far. In this work, we focus on the effect of low-pass pre-filtering on two mainstream subvoxel registration algorithms (i.e., the 3D FA-NR algorithm and the 3D IC-GN algorithm) used in DVC. Basic principles and theoretical error analyses of the two algorithms are described first. Then, based on numerical experiments with precisely controlled subvoxel displacements and noise levels, the influence of image noise on the displacements measured by the two subvoxel algorithms is examined. Further, the effects of low-pass pre-filtering on these two subvoxel algorithms are examined for simulated image sets with different noise levels and deformation modes. The results show that low-pass pre-filtering can effectively suppress the systematic errors of the 3D FA-NR algorithm, which is consistent with the conclusion previously drawn in DIC. On the contrary, unlike the 3D FA-NR algorithm, the 3D IC-GN algorithm itself can reduce the influence of image noise, and the effect of low-pass pre-filtering on it is not as pronounced as on the 3D FA-NR algorithm.
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Yong, Hui-Wen Gu, Hai-Long Wu, and Xiang-Yang Yu. "Comparison of the performances of several commonly used algorithms for second-order calibration." Analytical Methods 10, no. 39 (2018): 4801–12. http://dx.doi.org/10.1039/c8ay01443d.

Full text
Abstract:
The present study compared six commonly used algorithms, namely, alternating trilinear decomposition (ATLD), self-weighted alternating trilinear decomposition (SWATLD), alternating coupled two unequal residual functions (ACTUF), parallel factor analysis (PARAFAC), damped Gauss-Newton (dGN) and algorithm combination methodology (ACM).
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Chen, Ximing Sun, and Xian Du. "The Aero-Engine Component-Level Modelling Research Based on NSDE Hybrid Damping Newton Method." International Journal of Aerospace Engineering 2022 (September 10, 2022): 1–13. http://dx.doi.org/10.1155/2022/8212150.

Full text
Abstract:
Advanced aero-engine component-level models are characterized by strong nonlinearity and many variables, and traditional iterative algorithms cannot meet the requirements of convergence, real-time performance, and accuracy at the same time. To improve convergence and alleviate the dependence on initial values, a hybrid damped Newton algorithm based on neighborhood-based speciation differential evolution (NSDE) is proposed in this paper for solving the aero-engine component-level model. The computational efficiency and convergence of the hybrid damped Newton algorithm and the NSDE hybrid damped Newton algorithm under four typical steady-state operating-point conditions are analyzed, and the accuracy of the model is then verified. It is demonstrated that the hybrid damped Newton method has the advantages of low initial-value sensitivity and high computational efficiency under large-deviation conditions. The hybrid damped Newton method is more efficient than the Broyden algorithm in terms of iteration efficiency, faster than the traditional N-R algorithm in terms of computation speed, and has the highest computational convergence rate under the four typical operating conditions, but it cannot eliminate the initial-value dependence. The NSDE hybrid damped Newton method offers high simulation accuracy and greatly improves computational real-time performance under large-deviation conditions, and the maximum error between the numerical simulation results and the experimental reference values is 8.1%. This study provides advanced theoretical support for component-level modelling and has certain engineering application value.
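The damped Newton building block discussed above can be sketched generically: the full Newton step is scaled by a damping factor that is halved until the residual norm decreases. The scalar arctangent example, where undamped Newton diverges, is an illustrative assumption, not the engine model:

```python
import numpy as np

def damped_newton(f, jac, x0, tol=1e-10, max_iter=100):
    """Damped Newton: backtrack on the damping factor lam until the
    residual norm decreases, which widens the region of convergence."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        norm = np.linalg.norm(fx)
        if norm < tol:
            return x
        dx = np.linalg.solve(jac(x), -fx)
        lam = 1.0                           # start with the full Newton step
        while lam > 1e-8 and np.linalg.norm(f(x + lam*dx)) >= norm:
            lam *= 0.5                      # halve the damping factor
        x = x + lam*dx
    return x

# arctan(x) = 0: undamped Newton diverges from |x0| > ~1.39; damping fixes it
f = lambda x: np.arctan(x)
jac = lambda x: np.diag(1.0/(1.0 + x**2))
root = damped_newton(f, jac, np.array([3.0]))
```

Once the iterate enters the basin of attraction the full step is accepted again and the usual quadratic convergence of Newton's method takes over.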
APA, Harvard, Vancouver, ISO, and other styles
44

Tostado-Véliz, Marcos, Salah Kamel, Ibrahim B. M. Taha, and Francisco Jurado. "Exploiting the S-Iteration Process for Solving Power Flow Problems: Novel Algorithms and Comprehensive Analysis." Electronics 10, no. 23 (December 2, 2021): 3011. http://dx.doi.org/10.3390/electronics10233011.

Full text
Abstract:
In recent studies, the competitiveness of Newton-S-Iteration-Process (Newton-SIP) techniques for efficiently solving power flow (PF) problems in both well- and ill-conditioned systems has been highlighted, concluding that these methods may be suitable for industrial applications. This paper aims to tackle some of the open topics raised for this kind of technique. Different PF techniques are proposed based on the most recently developed Newton-SIP methods. In addition, a convergence analysis and a comparative study of four different Newton-SIP PF techniques are presented. To assess the features of the considered PF techniques, several numerical experiments are carried out. Results show that the considered Newton-SIP techniques can achieve up to eighth-order convergence and are typically more efficient and robust than the Newton-Raphson (NR) technique. Finally, it is shown that the overall performance of the considered PF techniques is strongly influenced by the values of the parameters involved in the iterative procedure.
APA, Harvard, Vancouver, ISO, and other styles
45

Cheerla, Sreevardhan, and D. Venkata Ratnam. "RSS Based Wi-Fi Positioning Method Using Recursive Least Square (RLS) Algorithm." International Journal of Engineering & Technology 7, no. 2.24 (April 25, 2018): 492. http://dx.doi.org/10.14419/ijet.v7i2.24.12144.

Full text
Abstract:
The rapid increase in demand for services that depend on the exact location of devices has led to the development of numerous Wi-Fi positioning systems. It is very difficult to find the accurate position of a device in an indoor environment due to the substantial development of structures. There are many algorithms for determining indoor location, but they require expensive software and hardware. Hence, received signal strength (RSS) based algorithms are implemented for self-positioning. In this paper, the Newton-Raphson, Gauss-Newton and steepest descent algorithms are implemented to find the accurate location of a Wi-Fi receiver at Koneru Lakshmaiah (K L) University, Guntur, Andhra Pradesh, India. From the results it is evident that the Newton-Raphson method provides better position estimates.
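As a hedged sketch of the Gauss-Newton variant named above, applied to range-based positioning (the anchor layout and noiseless ranges are invented; in practice RSS readings would first be converted to distances via a path-loss model):

```python
import numpy as np

def gauss_newton_position(anchors, distances, x0, iters=20):
    """Gauss-Newton least-squares position fix from ranges to known anchors."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                      # shape (n_anchors, 2)
        ranges = np.linalg.norm(diff, axis=1)
        r = ranges - distances                  # range residuals
        J = diff / ranges[:, None]              # Jacobian of ranges w.r.t. x
        # Normal equations of the linearized least-squares subproblem
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x

# Invented setup: three anchors, receiver at (3, 4), noiseless ranges
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - true_pos, axis=1)
est = gauss_newton_position(anchors, d, x0=[5.0, 5.0])
```

With noisy ranges the same iteration minimizes the sum of squared range residuals rather than reproducing the true position exactly.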
APA, Harvard, Vancouver, ISO, and other styles
46

Geraldo, Issa Cherif, Edoh Katchekpele, and Tchilabalo Abozou Kpanzou. "A Fast and Efficient Estimation of the Parameters of a Model of Accident Frequencies via an MM Algorithm." Journal of Applied Mathematics 2023 (April 19, 2023): 1–10. http://dx.doi.org/10.1155/2023/3377201.

Full text
Abstract:
In this paper, we consider a multivariate statistical model of accident frequencies having a variable number of parameters and whose parameters are dependent and subject to box constraints and linear equality constraints. We design a minorization-maximization (MM) algorithm and an accelerated MM algorithm to compute the maximum likelihood estimates of the parameters. We illustrate, through simulations, the performance of our proposed MM algorithm and its accelerated version by comparing them to Newton-Raphson (NR) and quasi-Newton algorithms. The results suggest that the MM algorithm and its accelerated version are better in terms of convergence proportion and, as the number of parameters increases, they are also better in terms of computation time.
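The MM principle used above, replacing the objective at each step by a surrogate that majorizes or minorizes it at the current iterate, can be illustrated with a classic instance unrelated to the accident-frequency model: Weiszfeld's geometric-median iteration, whose surrogate is a weighted least-squares problem with a closed-form minimizer. The triangle data are invented:

```python
import numpy as np

def geometric_median(points, max_iter=500, eps=1e-12):
    """MM iteration (Weiszfeld): each surrogate is a weighted sum of squared
    distances that majorizes the sum of Euclidean distances at the current
    iterate, so the objective never increases."""
    x = points.mean(axis=0)
    for _ in range(max_iter):
        d = np.maximum(np.linalg.norm(points - x, axis=1), eps)
        w = 1.0 / d                                  # weights of the surrogate
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < 1e-13:
            break
        x = x_new
    return x

pts = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
med = geometric_median(pts)
```

The descent property of MM (monotone objective) is what the acceleration schemes compared in the entry try to speed up without sacrificing.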
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Houzhe, Defeng Gu, Xiaojun Duan, Kai Shao, and Chunbo Wei. "The application of nonlinear least-squares estimation algorithms in atmospheric density model calibration." Aircraft Engineering and Aerospace Technology 92, no. 7 (May 22, 2020): 993–1000. http://dx.doi.org/10.1108/aeat-06-2019-0133.

Full text
Abstract:
Purpose: The purpose of this paper is to focus on the performance of three typical nonlinear least-squares estimation algorithms in atmospheric density model calibration.
Design/methodology/approach: The error of the Jacchia-Roberts atmospheric density model is expressed as an objective function of temperature parameters. The estimation of parameter corrections is a typical nonlinear least-squares problem. Three algorithms for nonlinear least-squares problems, the Gauss-Newton (G-N), damped Gauss-Newton (damped G-N) and Levenberg-Marquardt (L-M) algorithms, are adopted to estimate temperature parameter corrections of Jacchia-Roberts for model calibration.
Findings: The results show that the G-N algorithm does not converge at some sampling points. The main reason is the nonlinear relationship between Jacchia-Roberts and its temperature parameters. The damped G-N and L-M algorithms converge at all sampling points. The G-N, damped G-N and L-M algorithms reduce the root mean square error of Jacchia-Roberts from 20.4% to 9.3%, 9.4% and 9.4%, respectively. The average numbers of iterations of the G-N, damped G-N and L-M algorithms are 3.0, 2.8 and 2.9, respectively.
Practical implications: This study is expected to provide guidance for the selection of nonlinear least-squares estimation methods in atmospheric density model calibration.
Originality/value: The study analyses the performance of three typical nonlinear least-squares estimation methods in the calibration of the atmospheric density model. The non-convergence of the G-N algorithm is discovered and explained. The damped G-N and L-M algorithms are more suitable than the G-N algorithm for the nonlinear least-squares problems in model calibration, and the first two algorithms require slightly fewer iterations.
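The damped G-N / L-M family evaluated above can be sketched as follows; the exponential-decay fitting problem is an invented stand-in for the density-model objective:

```python
import numpy as np

def levenberg_marquardt(res, jac, x0, lam=1e-3, max_iter=100):
    """Minimal Levenberg-Marquardt: adding lam*I to the Gauss-Newton normal
    equations blends Newton-like and gradient-descent behaviour; lam shrinks
    after accepted steps and grows after rejected ones."""
    x = np.asarray(x0, dtype=float)
    cost = 0.5*np.sum(res(x)**2)
    for _ in range(max_iter):
        r, J = res(x), jac(x)
        step = np.linalg.solve(J.T @ J + lam*np.eye(len(x)), -J.T @ r)
        x_new = x + step
        cost_new = 0.5*np.sum(res(x_new)**2)
        if cost_new < cost:
            x, cost, lam = x_new, cost_new, lam*0.3   # accept: trust more
        else:
            lam *= 10.0                               # reject: damp harder
    return x

# Invented fit: y = a*exp(b*t) with true (a, b) = (2, -1), noiseless data
t = np.linspace(0.0, 2.0, 20)
y = 2.0*np.exp(-t)
res = lambda p: p[0]*np.exp(p[1]*t) - y
jac = lambda p: np.column_stack([np.exp(p[1]*t), p[0]*t*np.exp(p[1]*t)])
p_hat = levenberg_marquardt(res, jac, [1.0, 0.0])
```

Undamped Gauss-Newton amounts to fixing lam = 0, which is exactly where the non-convergence reported in the findings can occur.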
APA, Harvard, Vancouver, ISO, and other styles
48

Arthur, C. K., V. A. Temeng, and Y. Y. Ziggah. "Performance Evaluation of Training Algorithms in Backpropagation Neural Network Approach to Blast-Induced Ground Vibration Prediction." Ghana Mining Journal 20, no. 1 (July 7, 2020): 20–33. http://dx.doi.org/10.4314/gm.v20i1.3.

Full text
Abstract:
Backpropagation Neural Network (BPNN) is an artificial intelligence technique that has seen several applications in many fields of science and engineering. It is well known that the critical task in developing an effective and accurate BPNN model is choosing an appropriate training algorithm, transfer function, number of hidden layers and number of hidden neurons. Despite the numerous factors contributing to the development of a BPNN model, the training algorithm is key to achieving optimum model performance. This study is focused on evaluating and comparing the performance of 13 training algorithms for BPNN in the prediction of blast-induced ground vibration. The training algorithms considered include: Levenberg-Marquardt, Bayesian Regularisation, Broyden-Fletcher-Goldfarb-Shanno (BFGS) Quasi-Newton, Resilient Backpropagation, Scaled Conjugate Gradient, Conjugate Gradient with Powell/Beale Restarts, Fletcher-Powell Conjugate Gradient, Polak-Ribière Conjugate Gradient, One Step Secant, Gradient Descent with Adaptive Learning Rate, Gradient Descent with Momentum, Gradient Descent, and Gradient Descent with Momentum and Adaptive Learning Rate. Using ranking values for the performance indicators of mean squared error (MSE), correlation coefficient (R), number of training epochs (iterations) and time to convergence, the performance of the various training algorithms used to build the BPNN models was evaluated. The overall ranking results showed that the BFGS Quasi-Newton algorithm outperformed the other training algorithms, even though the Levenberg-Marquardt algorithm had the best computational speed and used the smallest number of epochs. Keywords: Artificial Intelligence, Blast-induced Ground Vibration, Backpropagation Training Algorithms
APA, Harvard, Vancouver, ISO, and other styles
49

Gayathri, S. S., R. Kumar, and Samiappan Dhanalakshmi. "Efficient Floating-point Division Quantum Circuit using Newton-Raphson Division." Journal of Physics: Conference Series 2335, no. 1 (September 1, 2022): 012058. http://dx.doi.org/10.1088/1742-6596/2335/1/012058.

Full text
Abstract:
The development of quantum algorithms is facilitated by quantum circuit designs. A floating-point number can represent a wide range of values and is extremely useful in digital signal processing. A quantum circuit model implementing floating-point division using the Newton-Raphson division algorithm is proposed in this paper. The proposed division circuit offers significant savings in T-gates and qubits when compared with state-of-the-art works on fast division algorithms. The qubit savings are estimated at around 17% and 20%, and the T-count savings at around 59.03% and 20.31%. Similarly, the T-depth savings are estimated at around 77.45% and 24.33% over the existing works.
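The classical Newton-Raphson division that the circuit implements iterates x <- x*(2 - d*x), which converges quadratically to the reciprocal 1/d. A plain floating-point sketch, where the scaling into [0.5, 1) and the 48/17 - 32/17*d seed are the textbook choices rather than details taken from the paper:

```python
def nr_divide(n, d, iterations=5):
    """Newton-Raphson division: compute n/d by iterating on the reciprocal.
    The divisor is first scaled into [0.5, 1) so a fixed linear seed works."""
    assert d != 0
    sign = -1.0 if (n < 0) ^ (d < 0) else 1.0
    n, d = abs(n), abs(d)
    shift = 0
    while d >= 1.0:
        d /= 2.0; shift += 1
    while d < 0.5:
        d *= 2.0; shift -= 1
    x = 48.0/17.0 - 32.0/17.0 * d       # classic linear initial estimate
    for _ in range(iterations):
        x = x * (2.0 - d * x)           # quadratically converging update
    return sign * n * x / (2.0 ** shift)
```

Since the error squares on every pass, about five iterations suffice for double precision, which is why hardware (and the quantum circuit above) can fix the iteration count in advance.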
APA, Harvard, Vancouver, ISO, and other styles
50

Ge, Dawei. "Kinematics modeling of redundant manipulator based on screw theory and Newton-Raphson method." Journal of Physics: Conference Series 2246, no. 1 (April 1, 2022): 012068. http://dx.doi.org/10.1088/1742-6596/2246/1/012068.

Full text
Abstract:
In this paper, forward and inverse kinematics algorithms are proposed to address the problem that a redundant manipulator has more degrees of freedom than a traditional manipulator, so its inverse kinematics cannot be solved analytically in a direct way. Firstly, the forward kinematics model is established through screw theory; secondly, the Newton-Raphson method is used to solve the inverse kinematics of the manipulator. Finally, the algorithms for the redundant manipulator are verified through an example simulated with the Matlab Robotics Toolbox. The results show that the kinematic algorithms are correct, which provides a good algorithmic basis for subsequent dynamic control.
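A Newton-Raphson inverse-kinematics step can be sketched with a finite-difference Jacobian and a pseudo-inverse (the natural choice under redundancy). The planar two-link arm below is an invented example, not the paper's screw-theory model:

```python
import numpy as np

def ik_newton(fk, theta0, target, tol=1e-10, max_iter=100):
    """Inverse kinematics by Newton-Raphson on the forward-kinematics map,
    with a finite-difference Jacobian and pseudo-inverse update."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        err = target - fk(theta)
        if np.linalg.norm(err) < tol:
            return theta
        h = 1e-7                 # finite-difference Jacobian of fk
        J = np.column_stack([
            (fk(theta + h*e) - fk(theta)) / h
            for e in np.eye(len(theta))
        ])
        theta = theta + np.linalg.pinv(J) @ err   # least-norm Newton update
    return theta

# Planar 2-link arm with unit link lengths (illustrative only)
def fk(th):
    return np.array([np.cos(th[0]) + np.cos(th[0] + th[1]),
                     np.sin(th[0]) + np.sin(th[0] + th[1])])

sol = ik_newton(fk, [0.2, 1.3], target=np.array([1.0, 1.0]))
```

For a truly redundant arm the Jacobian is wide rather than square, and the pseudo-inverse picks the minimum-norm joint update among the infinitely many solutions.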
APA, Harvard, Vancouver, ISO, and other styles