Journal articles on the topic 'Quasi-newton search'

To see the other types of publications on this topic, follow the link: Quasi-newton search.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Quasi-newton search.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Bao, Wang, and Wang Yanxin. "Sub-Direction Parallel Search Quasi-Newton Algorithm." International Journal of Modeling and Optimization 8, no. 3 (June 2018): 131–37. http://dx.doi.org/10.7763/ijmo.2018.v8.637.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wills, Adrian G., and Thomas B. Schön. "Stochastic quasi-Newton with line-search regularisation." Automatica 127 (May 2021): 109503. http://dx.doi.org/10.1016/j.automatica.2021.109503.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hassan, Basim Abbas, Kanikar Muangchoo, Fadhil Alfarag, Abdulkarim Hassan Ibrahim, and Auwal Bala Abubakar. "An improved quasi-Newton equation on the quasi-Newton methods for unconstrained optimizations." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 2 (May 1, 2021): 997. http://dx.doi.org/10.11591/ijeecs.v22.i2.pp997-1005.

Full text
Abstract:
Quasi-Newton methods are a class of numerical methods for solving unconstrained optimization problems. To improve the overall efficiency of the resulting algorithms, we use quasi-Newton methods built on a quasi-Newton equation. In this manuscript, we present a modified BFGS update formula based on a new quasi-Newton equation, which gives a new search direction for solving unconstrained optimization problems. We analyse the convergence rate of the quasi-Newton method under some mild conditions. Numerical experiments on a set of test problems demonstrate the efficiency of the new methods. The results indicate that the proposed method is competitive with the BFGS method, as it yields fewer iterations and fewer function evaluations.
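For context, the standard BFGS scheme that such modified quasi-Newton equations build on can be sketched as follows. This is a minimal generic illustration, not the authors' modified formula; the test function, tolerances, and all parameter values are chosen here for demonstration only:

```python
import numpy as np

def bfgs_minimize(f, grad, x0, tol=1e-8, max_iter=200):
    """Minimal BFGS: H approximates the inverse Hessian and is updated
    so that the secant (quasi-Newton) equation H_new @ y = s holds."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    H = np.eye(n)                      # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # quasi-Newton search direction
        # backtracking line search with the Armijo condition
        alpha, c = 1.0, 1e-4
        while f(x + alpha * p) > f(x) + c * alpha * (g @ p):
            alpha *= 0.5
        s = alpha * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature condition keeps H positive definite
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# convex quadratic test problem with minimizer at (1, -2)
f = lambda x: (x[0] - 1)**2 + 2 * (x[1] + 2)**2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
x_star = bfgs_minimize(f, grad, [0.0, 0.0])
```

The modified methods in papers of this kind typically replace `y` (or the secant equation itself) with a corrected quantity while keeping the same rank-two update structure.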
APA, Harvard, Vancouver, ISO, and other styles
4

Price, Christopher John. "A direct search quasi-Newton method for nonsmooth unconstrained optimization." ANZIAM Journal 59 (January 3, 2018): 215. http://dx.doi.org/10.21914/anziamj.v59i0.10651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

PRICE, C. J. "A DIRECT SEARCH QUASI-NEWTON METHOD FOR NONSMOOTH UNCONSTRAINED OPTIMIZATION." ANZIAM Journal 59, no. 2 (October 2017): 215–31. http://dx.doi.org/10.1017/s1446181117000323.

Full text
Abstract:
A direct search quasi-Newton algorithm is presented for local minimization of Lipschitz continuous black-box functions. The method estimates the gradient via central differences using a maximal frame around each iterate. When nonsmoothness prevents progress, a global direction search is used to locate a descent direction. Almost sure convergence to Clarke stationary point(s) is shown, where convergence is independent of the accuracy of the gradient estimates. Numerical results show that the method is effective in practice.
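The central-difference gradient estimation used by such direct search methods can be sketched generically as follows. This is a plain coordinate-wise central difference, not the maximal-frame construction of the paper; the step `h` and the test function are illustrative assumptions:

```python
import numpy as np

def central_diff_grad(f, x, h=1e-5):
    """Estimate the gradient of a black-box function f at x using
    central differences: (f(x + h e_i) - f(x - h e_i)) / (2 h)."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h                       # perturb one coordinate at a time
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# central differences are second-order accurate; exact for quadratics
# up to rounding error
f = lambda x: 3 * x[0]**2 + x[0] * x[1] + 2 * x[1]**2
g_est = central_diff_grad(f, [1.0, 2.0])   # true gradient: [8.0, 9.0]
```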
APA, Harvard, Vancouver, ISO, and other styles
6

Shi, Zhen-Jun. "Convergence of quasi-Newton method with new inexact line search." Journal of Mathematical Analysis and Applications 315, no. 1 (March 2006): 120–31. http://dx.doi.org/10.1016/j.jmaa.2005.05.077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wu, Ting, and Linping Sun. "A quasi-Newton based pattern search algorithm for unconstrained optimization." Applied Mathematics and Computation 183, no. 1 (December 2006): 685–94. http://dx.doi.org/10.1016/j.amc.2006.05.107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Moghrabi, Issam A. R. "Extra multistep BFGS updates in quasi-Newton methods." International Journal of Mathematics and Mathematical Sciences 2006 (2006): 1–8. http://dx.doi.org/10.1155/ijmms/2006/12583.

Full text
Abstract:
This note focuses on developing quasi-Newton methods that combine m + 1 multistep and single-step updates on a single iteration for the sake of constructing the new approximation to the Hessian matrix to be used on the next iteration in computing the search direction. The approach considered here exploits the merits of the multistep methods and those of El-Baali (1999) to create a hybrid technique. Our numerical results are encouraging and reveal that our proposed approach is promising. The new methods compete well with El-Baali's extra update algorithms (1999).
APA, Harvard, Vancouver, ISO, and other styles
9

Beigi, H. S. M., and C. J. Li. "Learning Algorithms for Neural Networks Based on Quasi-Newton Methods With Self-Scaling." Journal of Dynamic Systems, Measurement, and Control 115, no. 1 (March 1, 1993): 38–43. http://dx.doi.org/10.1115/1.2897405.

Full text
Abstract:
Previous studies have suggested that, for moderate sized neural networks, the use of classical Quasi-Newton methods yields the best convergence properties among all the state-of-the-art [1]. This paper describes a set of even better learning algorithms based on a class of Quasi-Newton optimization techniques called Self-Scaling Variable Metric (SSVM) methods. One of the characteristics of SSVM methods is that they provide a set of search directions which are invariant under the scaling of the objective function. With an XOR benchmark and an encoder benchmark, simulations using the SSVM algorithms for the learning of general feedforward neural networks were carried out to study their performance. It is shown that the SSVM method reduces the number of iterations required for convergence to 40 to 60 percent of that required by classical Quasi-Newton methods, which in turn generally converge two to three orders of magnitude faster than steepest descent techniques.
APA, Harvard, Vancouver, ISO, and other styles
10

Pošík, Petr, and Waltraud Huyer. "Restarted Local Search Algorithms for Continuous Black Box Optimization." Evolutionary Computation 20, no. 4 (December 2012): 575–607. http://dx.doi.org/10.1162/evco_a_00087.

Full text
Abstract:
Several local search algorithms for real-valued domains (axis parallel line search, Nelder-Mead simplex search, Rosenbrock's algorithm, quasi-Newton method, NEWUOA, and VXQR) are described and thoroughly compared in this article, embedding them in a multi-start method. Their comparison aims (1) to help the researchers from the evolutionary community to choose the right opponent for their algorithm (to choose an opponent that would constitute a hard-to-beat baseline algorithm), (2) to describe individual features of these algorithms and show how they influence the algorithm on different problems, and (3) to provide inspiration for the hybridization of evolutionary algorithms with these local optimizers. The recently proposed Comparing Continuous Optimizers (COCO) methodology was adopted as the basis for the comparison. The results show that in low dimensional spaces, the old method of Nelder and Mead is still the most successful among those compared, while in spaces of higher dimensions, it is better to choose an algorithm based on quadratic modeling, such as NEWUOA or a quasi-Newton method.
APA, Harvard, Vancouver, ISO, and other styles
11

Cheng, Lin, Yasunori Iida, Nobuhiro Uno, and Wei Wang. "Alternative Quasi-Newton Methods for Capacitated User Equilibrium Assignment." Transportation Research Record: Journal of the Transportation Research Board 1857, no. 1 (January 2003): 109–16. http://dx.doi.org/10.3141/1857-13.

Full text
Abstract:
Two quasi-Newton methods are proposed to deal with traffic assignment in a capacitated network. The methods combine Newton formula, column generation, and penalty techniques. The first method uses the gradient of the objective function to obtain an improving feasible direction scaled by the second-order derivatives. The second one uses a Rosen gradient to obtain an improving direction scaled by the corresponding origin–destination demand. Both methods make a line search to obtain an optimal step size to guarantee the feasibility of either path or link flow. The proposed methods are of fast convergence and high accuracy at the expense of saving path information. Numerical examples verify their efficiency and stability. The quasi-Newton method with a straight gradient demonstrates more stability than the Rosen gradient for capacitated traffic assignment.
APA, Harvard, Vancouver, ISO, and other styles
12

Ibrahim, Mohd Asrul Hery, Mustafa Mamat, and Wah June Leong. "The Hybrid BFGS-CG Method in Solving Unconstrained Optimization Problems." Abstract and Applied Analysis 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/507102.

Full text
Abstract:
In solving large scale problems, the quasi-Newton method is known as the most efficient method in solving unconstrained optimization problems. Hence, a new hybrid method, known as the BFGS-CG method, has been created based on these properties, combining the search direction between conjugate gradient methods and quasi-Newton methods. In comparison to standard BFGS methods and conjugate gradient methods, the BFGS-CG method shows significant improvement in the total number of iterations and CPU time required to solve large scale unconstrained optimization problems. We also prove that the hybrid method is globally convergent.
APA, Harvard, Vancouver, ISO, and other styles
13

Saito, Kazumi, and Ryohei Nakano. "Partial BFGS Update and Efficient Step-Length Calculation for Three-Layer Neural Networks." Neural Computation 9, no. 1 (January 1, 1997): 123–41. http://dx.doi.org/10.1162/neco.1997.9.1.123.

Full text
Abstract:
Second-order learning algorithms based on quasi-Newton methods have two problems. First, standard quasi-Newton methods are impractical for large-scale problems because they require N2 storage space to maintain an approximation to an inverse Hessian matrix (N is the number of weights). Second, a line search to calculate a reasonably accurate step length is indispensable for these algorithms. In order to provide desirable performance, an efficient and reasonably accurate line search is needed. To overcome these problems, we propose a new second-order learning algorithm. Descent direction is calculated on the basis of a partial Broyden-Fletcher-Goldfarb-Shanno (BFGS) update with 2Ns memory space (s « N), and a reasonably accurate step length is efficiently calculated as the minimal point of a second-order approximation to the objective function with respect to the step length. Our experiments, which use a parity problem and a speech synthesis problem, have shown that the proposed algorithm outperformed major learning algorithms. Moreover, it turned out that an efficient and accurate step-length calculation plays an important role in the convergence of quasi-Newton algorithms, and a partial BFGS update greatly saves storage space without losing convergence performance.
APA, Harvard, Vancouver, ISO, and other styles
14

Li, Pengyuan, Junyu Lu, and Haishan Feng. "The Global Convergence of a Modified BFGS Method under Inexact Line Search for Nonconvex Functions." Mathematical Problems in Engineering 2021 (May 4, 2021): 1–9. http://dx.doi.org/10.1155/2021/8342536.

Full text
Abstract:
Among the quasi-Newton algorithms, the BFGS method is often discussed by related scholars. However, in the case of inexact Wolfe line searches or even exact line search, the global convergence of the BFGS method for nonconvex functions has still not been proven. Based on the aforementioned issues, we propose a new quasi-Newton algorithm to obtain a better convergence property; it is designed according to the following essentials: (1) a modified BFGS formula is designed to guarantee that B_{k+1} inherits the positive definiteness of B_k; (2) a modified weak Wolfe–Powell line search is recommended; (3) a parabola, which is considered as the projection plane to avoid using an invalid direction, is proposed, and the next point x_{k+1} is designed by a projection technique; (4) to obtain the global convergence of the proposed algorithm more easily, the projection point is used at all the next iteration points instead of the current modified BFGS update formula; and (5) the global convergence of the given algorithm is established under suitable conditions. Numerical results show that the proposed algorithm is efficient.
APA, Harvard, Vancouver, ISO, and other styles
15

Saeed Chilmeran, Hamsa Th, Huda I. Ahmed, Eman T. Hamed, and Abbas Y. Al-Bayati. "A new modification of the quasi-newton method for unconstrained optimization." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 3 (March 10, 2021): 1683. http://dx.doi.org/10.11591/ijeecs.v21.i3.pp1683-1691.

Full text
Abstract:
In this work we propose and analyze a hybrid conjugate gradient (CG) method in which the CG parameter is computed as a linear combination of the Hager–Zhang (HZ) and Dai–Liao (DL) parameters. We use this proposed method to modify the BFGS method and to prove the positive definiteness and QN-conditions of the matrix. Theoretical analysis confirms that the new search directions are descent directions under some conditions, and that the new search directions are globally convergent under the strong Wolfe conditions. The numerical experiments show that the proposed method is promising and outperforms similar CG methods on the Dolan–Moré performance profile.
APA, Harvard, Vancouver, ISO, and other styles
16

Memon, Ghulam Q., and A. G. R. Bullen. "Multivariate Optimization Strategies for Real-Time Traffic Control Signals." Transportation Research Record: Journal of the Transportation Research Board 1554, no. 1 (January 1996): 36–42. http://dx.doi.org/10.1177/0361198196155400105.

Full text
Abstract:
The application of modern heuristic techniques, neural networks, simulated annealing, tabu search, and genetic algorithms for multivariate optimization is receiving increased attention compared with traditional techniques like hill climbing and gradient search. In the present research the efficiency and effectiveness of genetic algorithms are investigated for their application to the real-time optimization of traffic control signal timings and are compared with the efficiency and effectiveness of the Quasi-Newton gradient search method. The development, testing, comparison, and evaluation of these two multivariate optimization techniques for inclusion in the real-time traffic adaptive control system LOCAL model developed at the University of Pittsburgh as a part of an FHWA contract to a consortium led by the University of Maryland are described. The measures of effectiveness used in the comparisons include optimum total stopped delay, percentage of improvement in total stopped delay, optimal phase timings, execution time, and code size. Testing and evaluation results indicate that genetic algorithms are more efficient and effective than the Quasi-Newton method for this real-time application.
APA, Harvard, Vancouver, ISO, and other styles
17

Öztoprak, Figen, and Ş. İlker Birbil. "A symmetric rank-one quasi-Newton line-search method using negative curvature directions." Optimization Methods and Software 26, no. 3 (June 2011): 455–86. http://dx.doi.org/10.1080/10556788.2010.544311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Sun, Li, Guoping He, Yongli Wang, and Liang Fang. "An active set quasi-Newton method with projected search for bound constrained minimization." Computers & Mathematics with Applications 58, no. 1 (July 2009): 161–70. http://dx.doi.org/10.1016/j.camwa.2009.03.085.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

WANG, Xianbin. "Vehicle Dynamics Equilibriums Solution Search Based on Hybridization of Genetic Algorithm and Quasi-Newton Method." Journal of Mechanical Engineering 50, no. 4 (2014): 120. http://dx.doi.org/10.3901/jme.2014.04.120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Zhang, Xinming, Zihao Fu, Haiyan Chen, Wentao Mao, Shangwang Liu, and Guoqi Liu. "Lévy Flight Shuffle Frog Leaping Algorithm Based on Differential Perturbation and Quasi-Newton Search." IEEE Access 7 (2019): 116078–93. http://dx.doi.org/10.1109/access.2019.2936254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ek, David, and Anders Forsgren. "Exact linesearch limited-memory quasi-Newton methods for minimizing a quadratic function." Computational Optimization and Applications 79, no. 3 (April 28, 2021): 789–816. http://dx.doi.org/10.1007/s10589-021-00277-4.

Full text
Abstract:
The main focus in this paper is exact linesearch methods for minimizing a quadratic function whose Hessian is positive definite. We give a class of limited-memory quasi-Newton Hessian approximations which generate search directions parallel to those of the BFGS method, or equivalently, to those of the method of preconditioned conjugate gradients. In the setting of reduced Hessians, the class provides a dynamical framework for the construction of limited-memory quasi-Newton methods. These methods attain finite termination on quadratic optimization problems in exact arithmetic. We show performance of the methods within this framework in finite precision arithmetic by numerical simulations on sequences of related systems of linear equations, which originate from the CUTEst test collection. In addition, we give a compact representation of the Hessian approximations in the full Broyden class for the general unconstrained optimization problem. This representation consists of explicit matrices and gradients only as vector components.
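For a quadratic objective with a symmetric positive definite Hessian, the exact linesearch step such methods rely on has a simple closed form. A minimal sketch follows; the matrix `A` and vector `b` are illustrative assumptions, not data from the paper:

```python
import numpy as np

def exact_linesearch_step(A, b, x, p):
    """For q(x) = 0.5 x^T A x - b^T x with SPD A, the minimizer of
    q(x + alpha p) over alpha is alpha* = -(g^T p) / (p^T A p)."""
    g = A @ x - b                      # gradient of q at x
    return -(g @ p) / (p @ A @ p)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
p = b - A @ x                          # steepest-descent direction at x
alpha = exact_linesearch_step(A, b, x, p)
x_new = x + alpha * p
# exact-linesearch property: the new gradient is orthogonal to p
```

This orthogonality of successive gradients to past directions is what lets exact-linesearch quasi-Newton methods terminate finitely on quadratics.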
APA, Harvard, Vancouver, ISO, and other styles
22

Hamed, Eman T., Huda I. Ahmed, and Abbas Y. Al-Bayati. "A New Hybrid Algorithm for Convex Nonlinear Unconstrained Optimization." Journal of Applied Mathematics 2019 (April 1, 2019): 1–6. http://dx.doi.org/10.1155/2019/8728196.

Full text
Abstract:
In this study, we propose a new hybrid algorithm which combines the search directions of the Steepest Descent (SD) and Quasi-Newton (QN) methods. First, we develop a new search direction for combined conjugate gradient (CG) and QN strategies. Second, we describe a new positive CG method that possesses the adequate descent property under the strong Wolfe line search. We also prove a new theorem to ensure the global convergence property under some given conditions. Our numerical results show that the new algorithm is powerful compared to other standard large-scale CG methods.
APA, Harvard, Vancouver, ISO, and other styles
23

Zahhad, Mohammed Abo, Sabah M. Ahamed, Ahmad F. Al Ajlouni, and Nabil Sabor. "Digital filters design educational software based on immune, genetic and quasi-Newton line search algorithms." International Journal of Innovation and Learning 9, no. 1 (2011): 35. http://dx.doi.org/10.1504/ijil.2011.037191.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Zhou, Weijun. "A modified BFGS type quasi-Newton method with line search for symmetric nonlinear equations problems." Journal of Computational and Applied Mathematics 367 (March 2020): 112454. http://dx.doi.org/10.1016/j.cam.2019.112454.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Gui, Zhuqing, Chunyan Hu, and Zhibin Zhu. "A Mixed Line Search Smoothing Quasi-Newton Method for Solving Linear Second-Order Cone Programming Problem." ISRN Operations Research 2013 (March 27, 2013): 1–9. http://dx.doi.org/10.1155/2013/230717.

Full text
Abstract:
Firstly, we give the Karush-Kuhn-Tucker (KKT) optimality condition of the primal problem and briefly introduce Jordan algebra. On the basis of Jordan algebra, we extend the smoothing Fischer-Burmeister (F-B) function to Jordan algebra and smooth the complementarity condition. The first-order optimality condition can thus be reformulated as a nonlinear system. Secondly, we use the mixed line search quasi-Newton method to solve this nonlinear system. Finally, we prove the global and locally superlinear convergence of the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
26

Geng, Mingchao, Tieshi Zhao, Chang Wang, Yuhang Chen, and Erwei Li. "Forward kinematics analysis of parallel mechanisms with restricted workspace." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 229, no. 14 (December 7, 2014): 2561–72. http://dx.doi.org/10.1177/0954406214560420.

Full text
Abstract:
The iterative search method (Newton-Raphson or Quasi-Newton) is an important numerical method for solving the forward kinematics problem of parallel mechanisms. But the iterative search method may fail when solving the forward kinematics problems of a class of mechanisms whose workspace is restricted. The extreme displacement singularity in the limbs is one reason for the workspace restriction. An equivalent method is proposed to remove the extreme displacement singularity in the limbs, and the forward kinematics solutions of two representative 6-degree-of-freedom mechanisms are given to illustrate the mechanism equivalence. For the coupled fewer-degree-of-freedom mechanisms, the coupled motion is another reason for the workspace restriction. The virtual mechanism method and modified Jacobian matrix method are applied to solve the forward kinematics problems of this class of mechanisms. Numerical examples are given to validate the theories proposed above.
APA, Harvard, Vancouver, ISO, and other styles
27

Lian, Zhigang, Songhua Wang, and Yangquan Chen. "A Velocity-Combined Local Best Particle Swarm Optimization Algorithm for Nonlinear Equations." Mathematical Problems in Engineering 2020 (August 25, 2020): 1–9. http://dx.doi.org/10.1155/2020/6284583.

Full text
Abstract:
Many people use traditional methods such as the quasi-Newton method and Gauss–Newton-based BFGS to solve nonlinear equations. In this paper, we present an improved particle swarm optimization algorithm to solve nonlinear equations. The novel algorithm introduces the historical and local optimum information of particles to update a particle’s velocity. Five sets of typical nonlinear equations are employed to test the quality and reliability of the novel algorithm, comparing it with the PSO algorithm. Numerical results show that the proposed method is effective for the given test problems. The new algorithm can be used as a new tool to solve nonlinear equations, continuous function optimization, combinatorial optimization problems, etc. The global convergence of the given method is established.
APA, Harvard, Vancouver, ISO, and other styles
28

Wu, Ting, and Linping Sun. "A new quasi-Newton pattern search method based on symmetric rank-one update for unconstrained optimization." Computers & Mathematics with Applications 55, no. 6 (March 2008): 1201–14. http://dx.doi.org/10.1016/j.camwa.2007.06.012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Nokleby, Scott B., and Ron P. Podhorodeski. "Optimization-Based Synthesis of Grashof Geared Five-Bar Mechanisms." Journal of Mechanical Design 123, no. 4 (July 1, 1999): 529–34. http://dx.doi.org/10.1115/1.1401736.

Full text
Abstract:
A quasi-Newton optimization routine and Grashof criteria for geared five-bar mechanisms are used to develop a Grashof five-bar mechanism synthesis routine. Sequential transformations mapping Grashof mechanism parameters satisfying sub-type specific upper and lower constraints are used. Convergence criteria of: (i) objective function value change, (ii) mechanism parameter change, and (iii) task satisfaction are used. These criteria, combined with search restarts, ensure the synthesis of an acceptable mechanism. Example results demonstrate the effectiveness of the routine.
APA, Harvard, Vancouver, ISO, and other styles
30

Clason, Christian, and Andrej Klassen. "Quasi-solution of linear inverse problems in non-reflexive Banach spaces." Journal of Inverse and Ill-posed Problems 26, no. 5 (October 1, 2018): 689–702. http://dx.doi.org/10.1515/jiip-2018-0026.

Full text
Abstract:
We consider the method of quasi-solutions (also referred to as Ivanov regularization) for the regularization of linear ill-posed problems in non-reflexive Banach spaces. Using the equivalence to a metric projection onto the image of the forward operator, it is possible to show regularization properties and to characterize parameter choice rules that lead to a convergent regularization method, which includes the Morozov discrepancy principle. Convergence rates in a suitably chosen Bregman distance can be obtained as well. We also address the numerical computation of quasi-solutions to inverse source problems for partial differential equations in {L^{\infty}(\Omega)} using a semi-smooth Newton method and a backtracking line search for the parameter choice according to the discrepancy principle. Numerical examples illustrate the behavior of quasi-solutions in this setting.
APA, Harvard, Vancouver, ISO, and other styles
31

Yu, Zhensheng, Zilun Wang, and Ke Su. "A Double Nonmonotone Quasi-Newton Method for Nonlinear Complementarity Problem Based on Piecewise NCP Functions." Mathematical Problems in Engineering 2020 (December 16, 2020): 1–13. http://dx.doi.org/10.1155/2020/6642725.

Full text
Abstract:
In this paper, a double nonmonotone quasi-Newton method is proposed for the nonlinear complementarity problem. By using 3-1 piecewise and 4-1 piecewise nonlinear complementarity functions, the nonlinear complementarity problem is reformulated into a smooth equation. By a double nonmonotone line search, a smooth Broyden-like algorithm is proposed, where a single solution of a smooth equation at each iteration is required with the reduction in the scale of the calculation. Under suitable conditions, the global convergence of the algorithm is proved, and numerical results with some practical applications are given to show the efficiency of the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
32

Griewank, Andreas. "The “global” convergence of Broyden-like methods with suitable line search." Journal of the Australian Mathematical Society. Series B. Applied Mathematics 28, no. 1 (July 1986): 75–92. http://dx.doi.org/10.1017/s0334270000005208.

Full text
Abstract:
Iterative methods for solving a square system of nonlinear equations g(x) = 0 often require that the sum of squares residual γ(x) ≡ ½∥g(x)∥² be reduced at each step. Since the gradient of γ depends on the Jacobian ∇g, this stabilization strategy is not easily implemented if only approximations Bk to ∇g are available. Therefore most quasi-Newton algorithms either include special updating steps or reset Bk to a divided difference estimate of ∇g whenever no satisfactory progress is made. Here the need for such back-up devices is avoided by a derivative-free line search in the range of g. Assuming that the Bk are generated from an arbitrary B0 by fixed scale updates, we establish superlinear convergence from within any compact level set of γ on which g has a differentiable inverse function g−1.
APA, Harvard, Vancouver, ISO, and other styles
33

Yang, Tianshan, Pengyuan Li, and Xiaoliang Wang. "Convergence Analysis of an Improved BFGS Method and Its Application in the Muskingum Model." Mathematical Problems in Engineering 2020 (August 18, 2020): 1–9. http://dx.doi.org/10.1155/2020/4519274.

Full text
Abstract:
The BFGS method is one of the most effective quasi-Newton algorithms for minimization-optimization problems. In this paper, an improved BFGS method with a modified weak Wolfe–Powell line search technique is used to solve convex minimization problems and its convergence analysis is established. Seventy-four academic test problems and the Muskingum model are implemented in the numerical experiment. The numerical results show that our algorithm is comparable to the usual BFGS algorithm in terms of the number of iterations and the time consumed, which indicates our algorithm is effective and reliable.
APA, Harvard, Vancouver, ISO, and other styles
34

Namala, Dheeraj Kumar, and V. Surendranath. "Parameter Estimation of Spring-Damping System using Unconstrained Optimization by the Quasi-Newton Methods using Line Search Techniques." Advanced Journal of Graduate Research 5, no. 1 (September 9, 2018): 1–7. http://dx.doi.org/10.21467/ajgr.5.1.1-7.

Full text
Abstract:
Optimization is a basic tool for studying the behaviour of many complicated mechanical systems through the differential equations which determine the system. The purpose of this paper is to present a method for estimating parameters such as the spring constant and damping coefficient of a spring-damper system by unconstrained optimization, using derivative-based quasi-Newton methods with the Broyden–Fletcher–Goldfarb–Shanno and Davidon–Fletcher–Powell Hessian updating formulas, together with a backtracking line search satisfying Armijo's condition. It uses the output-error approximation procedure and shows the convergence of the different methods used to estimate the parameters and how accurately the parameters are measured.
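A backtracking line search with Armijo's sufficient-decrease condition, as used in such quasi-Newton parameter-estimation schemes, can be sketched generically as follows. The test function and the parameter values `c` and `rho` are illustrative assumptions only:

```python
import numpy as np

def backtracking_armijo(f, grad, x, p, alpha0=1.0, c=1e-4, rho=0.5):
    """Shrink the step alpha until the Armijo sufficient-decrease condition
    f(x + alpha p) <= f(x) + c * alpha * grad(x)^T p holds."""
    alpha = alpha0
    fx, gp = f(x), grad(x) @ p         # gp < 0 for a descent direction
    while f(x + alpha * p) > fx + c * alpha * gp:
        alpha *= rho                   # backtrack
    return alpha

f = lambda x: x[0]**2 + 4 * x[1]**2
grad = lambda x: np.array([2 * x[0], 8 * x[1]])
x = np.array([1.0, 1.0])
p = -grad(x)                           # steepest-descent direction
alpha = backtracking_armijo(f, grad, x, p)
```

The loop is guaranteed to terminate for any descent direction on a continuously differentiable function, which is why Armijo backtracking is a common default inside BFGS and DFP implementations.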
APA, Harvard, Vancouver, ISO, and other styles
35

Guo, Jie, and Zhong Wan. "A new three-term spectral conjugate gradient algorithm with higher numerical performance for solving large scale optimization problems based on Quasi-Newton equation." International Journal of Modeling, Simulation, and Scientific Computing 12, no. 05 (May 31, 2021): 2150053. http://dx.doi.org/10.1142/s1793962321500537.

Full text
Abstract:
A new spectral three-term conjugate gradient algorithm in virtue of the Quasi-Newton equation is developed for solving large-scale unconstrained optimization problems. It is proved that the search directions in this algorithm always satisfy a sufficient descent condition independent of any line search. Global convergence is established for general objective functions if the strong Wolfe line search is used. Numerical experiments are employed to show its high numerical performance in solving large-scale optimization problems. Particularly, the developed algorithm is implemented to solve the 100 benchmark test problems from CUTE with different sizes from 1000 to 10,000, in comparison with some similar ones in the literature. The numerical results demonstrate that our algorithm outperforms the state-of-the-art ones in terms of less CPU time, fewer iterations, or fewer function evaluations.
APA, Harvard, Vancouver, ISO, and other styles
36

Tatsumi, Keiji, and Tetsuzo Tanino. "A Perturbation Based Chaotic System Exploiting the Quasi-Newton Method for Global Optimization." International Journal of Bifurcation and Chaos 27, no. 04 (April 2017): 1750047. http://dx.doi.org/10.1142/s021812741750047x.

Full text
Abstract:
The chaotic system has been exploited in metaheuristic methods of solving continuous global optimization problems. Recently, the gradient method with perturbation (GP) was proposed, which was derived from the steepest descent method for the problem with additional perturbation terms, and it was reported that chaotic metaheuristics with the GP have good performances of solving some benchmark problems. Moreover, the sufficient condition of its parameter values was theoretically shown under which its updating system is chaotic. However, the sufficient condition of its chaoticity and the width of strange attractor around each local minimum, which are important properties for exploiting the chaotic system in optimization, deeply depend on the eigenvalues of the Hessian matrix of the objective function at the local minimum. Thus, if the eigenvalues of different local minima are widely different from each other, or if it is different in different problems, such properties can cause the difficulty of selecting appropriate parameter values for an effective search. Therefore, in this paper, we propose modified GPs based on the quasi-Newton method instead of the steepest descent method, where their chaoticities and the width of strange attractor do not depend on the eigenvalue of the Hessian matrix at any local minimum due to the scale invariant of the quasi-Newton method. In addition, we empirically demonstrate that the parameter selection of the proposed methods is easier than the original GP, especially with respect to the step-size, and the chaotic metaheuristics with the proposed methods can find better solutions for some multimodal functions.
APA, Harvard, Vancouver, ISO, and other styles
37

Babaie-Kafaki, Saman, and Reza Ghanbari. "Descent Symmetrization of the Dai–Liao Conjugate Gradient Method." Asia-Pacific Journal of Operational Research 33, no. 02 (April 2016): 1650008. http://dx.doi.org/10.1142/s0217595916500081.

Full text
Abstract:
Symmetrizing the Dai–Liao (DL) search direction matrix by a rank-one modification, we propose a one-parameter class of nonlinear conjugate gradient (CG) methods which includes the memoryless Broyden–Fletcher–Goldfarb–Shanno (MLBFGS) quasi-Newton updating formula. Then, conducting an eigenvalue analysis, we suggest two choices for the parameter of the proposed class of CG methods which simultaneously guarantee the descent property and well-conditioning of the search direction matrix. A global convergence analysis is made for uniformly convex objective functions. Computational experiments are done on a set of unconstrained optimization test problems of the CUTEr collection. Results of numerical comparisons made by the Dolan–Moré performance profile show that proper choices for the mentioned parameter may lead to promising computational performances.
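A minimal sketch of the memoryless BFGS direction that this class of CG methods contains: d = −Hg, where H is one BFGS update of the identity built from the latest step s and gradient difference y (the function name is illustrative):

```python
def memoryless_bfgs_direction(g, s, y):
    # H = (I - s y^T/rho)(I - y s^T/rho) + s s^T/rho with rho = s^T y > 0;
    # expanding H g gives the matrix-free expression below
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    rho = dot(s, y)                      # curvature condition: must be positive
    sg, yg, yy = dot(s, g), dot(y, g), dot(y, y)
    return [-(gi - (sg / rho) * yi - (yg / rho) * si
              + (yy * sg / rho ** 2) * si + (sg / rho) * si)
            for gi, si, yi in zip(g, s, y)]
```

Because H is positive definite whenever s·y > 0, the resulting direction is a descent direction (g·d < 0).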
APA, Harvard, Vancouver, ISO, and other styles
38

Dauda, Muhammad Kabir, Mustafa Mamat, Mohamad Afendee Mohamed, and Mahammad Yusuf Waziri. "Improved quasi-newton method via SR1 update for solving symmetric systems of nonlinear equations." Malaysian Journal of Fundamental and Applied Sciences 15, no. 1 (February 13, 2019): 117–20. http://dx.doi.org/10.11113/mjfas.v15n2019.1085.

Full text
Abstract:
Systems of nonlinear equations emerge from many areas of computing, scientific, and engineering research applications. A variety of iterative methods for solving such systems has been developed, including the famous Newton method. Unfortunately, the Newton method suffers setbacks, including storing a matrix at each iteration and computing the Jacobian matrix, which may be difficult or even impossible. To overcome the drawbacks bedeviling the Newton method, a modification to the SR1 update was proposed in this study. With the aid of the inexact line search procedure of Li and Fukushima, the modification was achieved by simply approximating the inverse Hessian matrix with an identity matrix, without computing the Jacobian. Unlike the classical SR1 method, the modification neither requires storing a matrix at each iteration nor computing the Jacobian matrix. In finding the solution to nonlinear problems of this form, 40 benchmark test problems were solved. A comparison was made with two other methods based on CPU time and number of iterations. In this study, the proposed method solved 37 problems effectively in terms of number of iterations. In terms of CPU time, the proposed method also outperformed the existing methods. The methodology yielded a method suitable for solving symmetric systems of nonlinear equations. The derivative-free feature of the proposed method gives it an advantage in solving relatively large-scale problems (10,000 variables) compared to the existing methods. From the preliminary numerical results, the proposed method turned out to be significantly faster, effective, and suitable for solving large-scale symmetric nonlinear equations.
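A rough sketch of the core idea (the exact Li–Fukushima acceptance rule and the modified SR1 details are in the paper; the condition below is a simplified norm-descent variant): take d = −F(x), i.e. approximate the inverse Jacobian by the identity, and backtrack on the residual norm:

```python
import math

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

def derivative_free_solve(F, x, sigma=1e-4, tol=1e-8, max_iter=500):
    # x+ = x + alpha*d with d = -F(x); no Jacobian, no stored matrix
    Fx = F(x)
    for _ in range(max_iter):
        if norm(Fx) < tol:
            break
        d = [-fi for fi in Fx]
        alpha = 1.0
        while True:
            x_new = [xi + alpha * di for xi, di in zip(x, d)]
            F_new = F(x_new)
            # accept when the residual norm decreases sufficiently
            if norm(F_new) <= norm(Fx) - sigma * alpha ** 2 * norm(d) ** 2:
                break
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x, Fx = x_new, F_new
    return x
```

On the toy system F(x) = eˣ − 1 (componentwise, with a symmetric Jacobian) the iteration converges to the root at the origin.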
APA, Harvard, Vancouver, ISO, and other styles
39

Bilinskas, Mykolas J., Gintautas Dzemyda, and Martynas Sabaliauskas. "Speeding-up the Fitting of the Model Defining the Ribs-bounded Contour." Applied Computer Systems 21, no. 1 (May 24, 2017): 66–70. http://dx.doi.org/10.1515/acss-2017-0009.

Full text
Abstract:
The method for analysing transversal plane images from computer tomography scans is considered in the paper. This method allows not only approximating the ribs-bounded contour but also evaluating patient rotation around the vertical axis during a scan. In this method, a mathematical model describing the ribs-bounded contour was created, and the problem of approximation has been solved by finding the optimal parameters of the mathematical model using a least-squares-type objective function. The local search has been performed using local descent by quasi-Newton methods. The benefits of analytical derivatives of the function are disclosed in the paper.
APA, Harvard, Vancouver, ISO, and other styles
40

Mosconi, Denis, Maira Martins Silva, and Adriano Almeida Gonçalves Siqueira. "Optimal tuning of a proportional controller for DC motor position control via Fibonacci Search Method." Journal of Mechatronics Engineering 4, no. 1 (April 12, 2021): 12–21. http://dx.doi.org/10.21439/jme.v4i1.88.

Full text
Abstract:
One of the advantages of using DC motors is the ease of controlling their position and speed by manipulating the input voltage. To this end, feedback PID-based controllers can be used. However, adjusting these controllers can be challenging and requires reasonable effort from the controller designer. This work proposes an algorithm based on the Fibonacci Search Method to determine the optimal gain of a proportional controller applied to the position control of a DC motor. The project specifications were minimum settling time with zero overshoot. The results obtained showed that the proposed method is valid for determining the Kp gain according to the plant and the conditions involved. The proposed method was compared with other optimization techniques such as Golden Section, Quasi-Newton, and Grey Wolf Optimization, standing out for its simplicity of implementation, low number of iterations, and fast convergence.
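The Fibonacci search itself is easy to sketch; a minimal one-dimensional version is shown below (the controller-tuning objective of the paper would replace the toy function):

```python
def fibonacci_search(f, a, b, n=30):
    # Minimise a unimodal f on [a, b] using n Fibonacci interval reductions
    fib = [1, 1]
    for _ in range(n):
        fib.append(fib[-1] + fib[-2])
    x1 = a + fib[n - 1] / fib[n + 1] * (b - a)   # ~0.382 of the interval
    x2 = a + fib[n] / fib[n + 1] * (b - a)       # ~0.618 of the interval
    f1, f2 = f(x1), f(x2)
    for k in range(1, n):
        if f1 < f2:                  # minimiser lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + fib[n - k - 1] / fib[n - k + 1] * (b - a)
            f1 = f(x1)
        else:                        # minimiser lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + fib[n - k] / fib[n - k + 1] * (b - a)
            f2 = f(x2)
    return (a + b) / 2
```

Each iteration reuses one of the two previous function evaluations, which is what makes the method cheap per step.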
APA, Harvard, Vancouver, ISO, and other styles
41

Sparks, A. G., and D. S. Bernstein. "Optimal Rejection of Stochastic and Deterministic Disturbances." Journal of Dynamic Systems, Measurement, and Control 119, no. 1 (March 1, 1997): 140–43. http://dx.doi.org/10.1115/1.2801207.

Full text
Abstract:
The problem of optimal H2 rejection of noisy disturbances while asymptotically rejecting constant or sinusoidal disturbances is considered. The internal model principle is used to ensure that the expected value of the output approaches zero asymptotically in the presence of persistent deterministic disturbances. Necessary conditions are given for dynamic output feedback controllers that minimize an H2 disturbance rejection cost plus an upper bound on the integral square output cost for transient performance. The necessary conditions provide expressions for the gradients of the cost with respect to each of the control gains. These expressions are then used in a quasi-Newton gradient search algorithm to find the optimal feedback gains.
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Pengyuan, Zhan Wang, Dan Luo, and Hongtruong Pham. "Global Convergence of a Modified Two-Parameter Scaled BFGS Method with Yuan-Wei-Lu Line Search for Unconstrained Optimization." Mathematical Problems in Engineering 2020 (August 26, 2020): 1–15. http://dx.doi.org/10.1155/2020/9280495.

Full text
Abstract:
The BFGS method is one of the most efficient quasi-Newton methods for solving small- and medium-size unconstrained optimization problems. To explore more of its interesting properties, a modified two-parameter scaled BFGS method is presented in this paper. The intention of the modified scaled BFGS method is to improve the eigenvalue structure of the BFGS update. In this method, the first two terms and the last term of the standard BFGS update formula are scaled with two different positive parameters, and a new value of yk is given. Meanwhile, the Yuan-Wei-Lu line search is also proposed. Under this line search, the modified two-parameter scaled BFGS method is globally convergent for nonconvex functions. Extensive numerical experiments show that this form of the scaled BFGS method outperforms the standard BFGS method and some similar scaled methods.
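For reference, the standard BFGS update that the two-parameter scaling modifies (both scale parameters set to 1 here) satisfies the secant equation B⁺s = y exactly:

```python
def bfgs_update(B, s, y):
    # B+ = B - (B s s^T B)/(s^T B s) + (y y^T)/(y^T s)
    # The paper scales these terms by two positive parameters; here both are 1.
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    sBs = sum(s[i] * Bs[i] for i in range(n))
    ys = sum(y[i] * s[i] for i in range(n))     # must be > 0 for definiteness
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys
             for j in range(n)] for i in range(n)]
```

Applying B⁺ to s reproduces y term by term, since the first two terms cancel Bs and the last term contributes exactly y.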
APA, Harvard, Vancouver, ISO, and other styles
43

Qu, Quan, Xianfeng Ding, and Xinyi Wang. "A Filter and Nonmonotone Adaptive Trust Region Line Search Method for Unconstrained Optimization." Symmetry 12, no. 4 (April 21, 2020): 656. http://dx.doi.org/10.3390/sym12040656.

Full text
Abstract:
In this paper, a new nonmonotone adaptive trust region algorithm is proposed for unconstrained optimization by combining a multidimensional filter and the Goldstein-type line search technique. A modified trust region ratio is presented which results in more reasonable consistency between the accurate model and the approximate model. When a trial step is rejected, we use a multidimensional filter to increase the likelihood that the trial step is accepted. If the trial step is still not successful with the filter, a nonmonotone Goldstein-type line search is used in the direction of the rejected trial step. The approximation of the Hessian matrix is updated by the modified Quasi-Newton formula (CBFGS). Under appropriate conditions, the proposed algorithm is globally convergent and superlinearly convergent. The new algorithm shows better performance in terms of the Dolan–Moré performance profile. Numerical results demonstrate the efficiency and robustness of the proposed algorithm for solving unconstrained optimization problems.
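The standard actual-to-predicted reduction ratio that the paper's modified version departs from can be sketched as follows (names illustrative, not the paper's modified ratio):

```python
def trust_region_ratio(f, x, s, g, B):
    # rho = (f(x) - f(x+s)) / (m(0) - m(s)), with quadratic model
    # m(s) = f(x) + g.s + 0.5 * s.B.s
    n = len(x)
    x_new = [xi + si for xi, si in zip(x, s)]
    gs = sum(gi * si for gi, si in zip(g, s))
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    sBs = sum(s[i] * Bs[i] for i in range(n))
    predicted = -(gs + 0.5 * sBs)        # m(0) - m(s)
    return (f(x) - f(x_new)) / predicted
```

When the quadratic model is exact (a quadratic objective with B equal to the true Hessian), the ratio is exactly 1, which is why values near 1 signal a trustworthy model.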
APA, Harvard, Vancouver, ISO, and other styles
44

Pan, Wenyong, Kristopher A. Innanen, and Wenyuan Liao. "Accelerating Hessian-free Gauss-Newton full-waveform inversion via l-BFGS preconditioned conjugate-gradient algorithm." GEOPHYSICS 82, no. 2 (March 1, 2017): R49—R64. http://dx.doi.org/10.1190/geo2015-0595.1.

Full text
Abstract:
Full-waveform inversion (FWI) has emerged as a powerful strategy for estimating subsurface model parameters by iteratively minimizing the difference between synthetic data and observed data. The Hessian-free (HF) optimization method represents an attractive alternative to Newton-type and gradient-based optimization methods. At each iteration, the HF approach obtains the search direction by approximately solving the Newton linear system using a matrix-free conjugate-gradient (CG) algorithm. The main drawback with HF optimization is that the CG algorithm requires many iterations. In our research, we develop and compare different preconditioning schemes for the CG algorithm to accelerate the HF Gauss-Newton (GN) method. Traditionally, preconditioners are designed as diagonal Hessian approximations. We additionally use a new pseudo diagonal GN Hessian as a preconditioner, making use of the reciprocal property of Green’s function. Furthermore, we have developed an l-BFGS inverse Hessian preconditioning strategy with the diagonal Hessian approximations as an initial guess. Several numerical examples are carried out. We determine that the quasi-Newton l-BFGS preconditioning scheme with the pseudo diagonal GN Hessian as the initial guess is most effective in speeding up the HF GN FWI. We examine the sensitivity of this preconditioning strategy to random noise with numerical examples. Finally, in the case of multiparameter acoustic FWI, we find that the l-BFGS preconditioned HF GN method can reconstruct velocity and density models better and more efficiently compared with the nonpreconditioned method.
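The inner machinery can be sketched with a Jacobi (diagonal) preconditioned CG solve of the Newton system H d = −g; the paper's l-BFGS preconditioner would replace the diagonal one (all names illustrative):

```python
def pcg(H, b, M_inv_diag, tol=1e-10, max_iter=100):
    # Preconditioned conjugate gradient for H x = b,
    # with a diagonal preconditioner given by M_inv_diag = 1 / diag(M)
    n = len(b)
    x = [0.0] * n
    matvec = lambda A, v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = b[:]                                   # residual b - H*0
    z = [mi * ri for mi, ri in zip(M_inv_diag, r)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Hp = matvec(H, p)
        alpha = rz / sum(pi * hi for pi, hi in zip(p, Hp))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * hi for ri, hi in zip(r, Hp)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv_diag, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

In a Hessian-free setting the explicit matrix H would be replaced by a Hessian-vector product routine; only `matvec` changes.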
APA, Harvard, Vancouver, ISO, and other styles
45

Selvan, S. Easter, S. Thomas George, and R. Balakrishnan. "Range-Based ICA Using a Nonsmooth Quasi-Newton Optimizer for Electroencephalographic Source Localization in Focal Epilepsy." Neural Computation 27, no. 3 (March 2015): 628–71. http://dx.doi.org/10.1162/neco_a_00700.

Full text
Abstract:
Independent component analysis (ICA) aims at separating a multivariate signal into independent nongaussian signals by optimizing a contrast function with no knowledge on the mixing mechanism. Despite the availability of a constellation of contrast functions, a Hartley-entropy–based ICA contrast endowed with the discriminacy property makes it an appealing choice as it guarantees the absence of mixing local optima. Fueled by an outstanding source separation performance of this contrast function in practical instances, a succession of optimization techniques has recently been adopted to solve the ICA problem. Nevertheless, the nondifferentiability of the considered contrast restricts the choice of the optimizer to the class of derivative-free methods. On the contrary, this letter concerns a Riemannian quasi-Newton scheme involving an explicit expression for the gradient to optimize the contrast function that is differentiable almost everywhere. Furthermore, the inexact line search insisting on the weak Wolfe condition and a terminating criterion befitting the partly smooth function optimization have been generalized to manifold settings, leaving the previous results intact. The investigations with diversified images and the electroencephalographic (EEG) data acquired from 45 focal epileptic subjects demonstrate the efficacy of our approach in terms of computational savings and reliable EEG source localization, respectively. Additional experimental results are available in the online supplement.
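The weak Wolfe conditions mentioned here can be sketched in the scalar case with a standard bisection-style search (the paper generalizes this to Riemannian manifolds; parameter names are the usual ones):

```python
def weak_wolfe(f, fprime, x, d, c1=1e-4, c2=0.9):
    # Find a satisfying f(x+a d) <= f(x) + c1*a*f'(x).d   (sufficient decrease)
    # and              f'(x+a d).d >= c2*f'(x).d          (weak curvature)
    a, lo, hi = 1.0, 0.0, float('inf')
    f0, g0 = f(x), fprime(x) * d          # g0 is the directional derivative
    for _ in range(100):
        if f(x + a * d) > f0 + c1 * a * g0:
            hi = a                         # sufficient decrease fails: shrink
        elif fprime(x + a * d) * d < c2 * g0:
            lo = a                         # curvature fails: grow
        else:
            return a                       # both weak Wolfe conditions hold
        a = (lo + hi) / 2.0 if hi != float('inf') else 2.0 * a
    return a
```

The bisection-or-double structure is what makes this search robust for functions that are only almost-everywhere differentiable.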
APA, Harvard, Vancouver, ISO, and other styles
46

Dalla, Carlos Eduardo Rambalducci, Wellington Betencurte da Silva, Júlio Cesar Sampaio Dutra, and Marcelo José Colaço. "A comparative study of gradient-based and meta-heuristic optimization methods using Griewank benchmark function/ Um estudo comparativo de métodos de otimização baseados em gradientes e meta-heurísticos usando a função de benchmark do Griewank." Brazilian Journal of Development 7, no. 6 (June 7, 2021): 55341–50. http://dx.doi.org/10.34117/bjdv7n6-102.

Full text
Abstract:
Optimization methods are frequently applied to solve real-world problems in engineering design, computer science, and computational chemistry. This paper compares gradient-based algorithms and the meta-heuristic particle swarm optimization in minimizing the multidimensional benchmark Griewank function, a multimodal function with widespread local minima. Several gradient-based approaches were compared: steepest descent, conjugate gradient with the Fletcher-Reeves and Polak-Ribiere formulations, and the quasi-Newton Davidon-Fletcher-Powell approach. The results showed that the meta-heuristic method is recommended for functions with this behavior because no prior information about the search space is needed. The performance comparison includes computation time and convergence to global and local optima.
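The Griewank benchmark used in this comparison is straightforward to define; its global minimum is 0 at the origin, with the cosine-product term creating many shallow local minima:

```python
import math

def griewank(x):
    # f(x) = sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i))) + 1, with i from 1
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0
```

The quadratic term dominates at large scales while the oscillatory product dominates locally, which is exactly what traps pure gradient descent.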
APA, Harvard, Vancouver, ISO, and other styles
47

Aghamiry, H. S., A. Gholami, and S. Operto. "Full waveform inversion by proximal Newton method using adaptive regularization." Geophysical Journal International 224, no. 1 (September 11, 2020): 169–80. http://dx.doi.org/10.1093/gji/ggaa434.

Full text
Abstract:
Regularization is necessary for solving non-linear ill-posed inverse problems arising in different fields of geosciences. The base of a suitable regularization is the prior expressed by the regularizer, which can be non-adaptive or adaptive (data-driven), smooth or non-smooth, variational-based or not. Nevertheless, tailoring a suitable and easy-to-implement prior for describing geophysical models is a non-trivial task. In this paper, we propose two generic optimization algorithms to implement arbitrary regularization in non-linear inverse problems such as full-waveform inversion (FWI), where the regularization task is recast as a denoising problem. We assess these optimization algorithms with the plug-and-play block matching (BM3D) regularization algorithm, which determines empirical priors adaptively without any optimization formulation. The non-linear inverse problem is solved with a proximal Newton method, which generalizes the traditional Newton step in such a way as to involve the gradients/subgradients of a (possibly non-differentiable) regularization function through operator splitting and proximal mappings. Furthermore, it requires accounting for the Hessian matrix in the regularized least-squares optimization problem. We propose two different splitting algorithms for this task. In the first, we compute the Newton search direction with an iterative method based upon the first-order generalized iterative shrinkage-thresholding algorithm (ISTA), and hence Newton-ISTA (NISTA). The iterations require only Hessian-vector products to compute the gradient step of the quadratic approximation of the non-linear objective function. The second relies on the alternating direction method of multipliers (ADMM), and hence Newton-ADMM (NADMM), where the least-squares optimization subproblem and the regularization subproblem in the composite objective function are decoupled through an auxiliary variable and solved in an alternating mode.
The least-squares subproblem can be solved with exact, inexact, or quasi-Newton methods. We compare NISTA and NADMM numerically by solving FWI with BM3D regularization. The tests show promising results obtained by both algorithms. However, NADMM shows a faster convergence rate than NISTA when using L-BFGS to solve the Newton system.
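The ISTA building block inside NISTA is the proximal (soft-thresholding) step; a scalar-separable sketch with an l1 regularizer is shown below (the paper plugs in the BM3D denoiser as the prox instead):

```python
def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1, applied componentwise
    return [max(abs(vi) - tau, 0.0) * ((vi > 0) - (vi < 0)) for vi in v]

def ista(b, lam, t=0.5, iters=200):
    # ISTA on 0.5*||x - b||^2 + lam*||x||_1: gradient step, then prox
    x = [0.0] * len(b)
    for _ in range(iters):
        grad = [xi - bi for xi, bi in zip(x, b)]
        x = soft_threshold([xi - t * gi for xi, gi in zip(x, grad)], t * lam)
    return x
```

For this separable quadratic the fixed point is the closed-form shrinkage of b, so the iteration can be checked against soft_threshold(b, lam) directly.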
APA, Harvard, Vancouver, ISO, and other styles
48

Yahaya, Mahmoud Muhammad, Poom Kumam, Aliyu Muhammed Awwal, and Sani Aji. "A structured quasi-Newton algorithm with nonmonotone search strategy for structured NLS problems and its application in robotic motion control." Journal of Computational and Applied Mathematics 395 (October 2021): 113582. http://dx.doi.org/10.1016/j.cam.2021.113582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Mahdavi-Amiri, Nezam, and Mohammad Reza Ansari. "A Superlinearly Convergent Penalty Method with Nonsmooth Line Search for Constrained Nonlinear Least Squares." Sultan Qaboos University Journal for Science [SQUJS] 16 (April 1, 2012): 103. http://dx.doi.org/10.24200/squjs.vol17iss1pp103-124.

Full text
Abstract:
Recently, we have presented a projected structured algorithm for solving constrained nonlinear least squares problems, and established its local two-step Q-superlinear convergence. The approach is based on an adaptive structured scheme due to Mahdavi-Amiri and Bartels of the exact penalty method. The structured adaptation also makes use of the ideas of Nocedal and Overton for handling the quasi-Newton updates of projected Hessians and appropriates the structuring scheme of Dennis, Martinez and Tapia. Here, for robustness, we present a specific nonsmooth line search strategy, taking account of the least squares objective. We also discuss the details of our new nonsmooth line search strategy, implementation details of the algorithm, and provide comparative results obtained by the testing of our program and three nonlinear programming codes from KNITRO on test problems (both small and large residuals) from Hock and Schittkowski, Lukšan and Vlček and some randomly generated ones due to Bartels and Mahdavi-Amiri. The results indeed affirm the practical relevance of our special considerations for the inherent structure of the least squares.
APA, Harvard, Vancouver, ISO, and other styles
50

Song, Kang, Jun Bi Liao, Chang Qing Lin, Xue Dong Cao, Yang Yu, and Rui Ji. "Study of Improvement of Measurement Precision for Roundness Measuring Instrument." Applied Mechanics and Materials 530-531 (February 2014): 117–26. http://dx.doi.org/10.4028/www.scientific.net/amm.530-531.117.

Full text
Abstract:
Based on the requirement of small errors in measuring roundness (cylindricity) error, a leveling methodology using a dual-point vertical layout has been put forward and analyzed. According to the direction cosine of the axis of the workpiece, the amount of leveling has been defined and calculated, which overcomes the problems of manual adjustment and forms the theoretical basis of fast, accurate leveling and high-precision measurement. The assessment of roundness (cylindricity) error is a search for a center (cylinder axis) that satisfies the minimum condition. Because the Nelder-Mead simplex method depends on the initial solution and converges relatively slowly in both precision and rate, a combination of the Quasi-Newton and N-M simplex methods has been proposed, which achieves a fast, accurate search for global optima. Simulations of classical test functions in Matlab, together with measured data, certify that the combination of both methods effectively enhances the convergence rate and precision, ensuring and improving the measuring precision of the workpiece.
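The minimum-zone objective being searched is simply the radial zone width for a candidate centre; a synthetic check on an ideal circle (not the paper's measured data) looks like this:

```python
import math

def roundness_error(center, points):
    # Radial zone width: max radius minus min radius about the candidate centre.
    # Minimum-zone assessment searches the centre minimising this value.
    cx, cy = center
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    return max(radii) - min(radii)
```

At the true centre of a perfect circle the zone width is zero, and it grows as the candidate centre is displaced, which is what the simplex/quasi-Newton search exploits.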
APA, Harvard, Vancouver, ISO, and other styles