Journal articles on the topic 'Gradient search'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Gradient search.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Stepanenko, Svetlana, and Bernd Engels. "Gradient tabu search." Journal of Computational Chemistry 28, no. 2 (2006): 601–11. http://dx.doi.org/10.1002/jcc.20564.

2

Arnold, Dirk V., and Ralf Salomon. "Evolutionary Gradient Search Revisited." IEEE Transactions on Evolutionary Computation 11, no. 4 (August 2007): 480–95. http://dx.doi.org/10.1109/tevc.2006.882427.

3

Zhang, Xiaochen, Yi Sun, and Jizhong Xiao. "Adaptive source search in a gradient field." Robotica 33, no. 08 (April 25, 2014): 1589–608. http://dx.doi.org/10.1017/s0263574714000903.

Abstract:
Most existing source search algorithms suffer from a high travel cost, and few of them have been analyzed for performance in noisy environments where local basins are present. In this paper, the Theseus gradient search (TGS) is proposed to effectively overcome local basins during the search. Analytical performances of TGS and of gradient ascent with correlated random walk (GACRW), a variant of correlated random walk, are derived and compared. A gradient field model is proposed as an analytical tool that makes it feasible to analyze these performances. The analytical average search costs of GACRW and TGS are obtained for the first time for this class of algorithms in environments with local basins. The costs, expressed as functions of search space size, local basin size, and local basin number, are confirmed by simulation results. The performances of GACRW, TGS, and two chemotaxis algorithms are compared in the gradient field and in a scenario of indoor radio source search in a hallway, driven by real signal-strength data. The results illustrate that GACRW and TGS are robust to noisy gradients and are more competitive than the chemotaxis-based algorithms in real applications. Both analytical and simulation results indicate that, in the presence of local basins, TGS almost always incurs the lowest cost.
4

Pajarinen, Joni, Hong Linh Thai, Riad Akrour, Jan Peters, and Gerhard Neumann. "Compatible natural gradient policy search." Machine Learning 108, no. 8-9 (May 20, 2019): 1443–66. http://dx.doi.org/10.1007/s10994-019-05807-0.

5

Schwabacher, Mark, and Andrew Gelsey. "Intelligent gradient-based search of incompletely defined design spaces." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 11, no. 3 (June 1997): 199–210. http://dx.doi.org/10.1017/s0890060400003127.

Abstract:
Gradient-based numerical optimization of complex engineering designs offers the promise of rapidly producing better designs. However, such methods generally assume that the objective function and constraint functions are continuous, smooth, and defined everywhere. Unfortunately, realistic simulators tend to violate these assumptions. We present a rule-based technique for intelligently computing gradients in the presence of such pathologies in the simulators, and show how this gradient computation method can be used as part of a gradient-based numerical optimization system. We tested the resulting system in the domain of conceptual design of supersonic transport aircraft, and found that using rule-based gradients can decrease the cost of design space search by one or more orders of magnitude.
6

Konnov, I. V. "Conditional Gradient Method Without Line-Search." Russian Mathematics 62, no. 1 (January 2018): 82–85. http://dx.doi.org/10.3103/s1066369x18010127.

7

D'Oro, Pierluca, Alberto Maria Metelli, Andrea Tirinzoni, Matteo Papini, and Marcello Restelli. "Gradient-Aware Model-Based Policy Search." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3801–8. http://dx.doi.org/10.1609/aaai.v34i04.5791.

Abstract:
Traditional model-based reinforcement learning approaches learn a model of the environment dynamics without explicitly considering how it will be used by the agent. In the presence of misspecified model classes, this can lead to poor estimates, as some relevant available information is ignored. In this paper, we introduce a novel model-based policy search approach that exploits the knowledge of the current agent policy to learn an approximate transition model, focusing on the portions of the environment that are most relevant for policy improvement. We leverage a weighting scheme, derived from the minimization of the error on the model-based policy gradient estimator, in order to define a suitable objective function that is optimized for learning the approximate transition model. Then, we integrate this procedure into a batch policy improvement algorithm, named Gradient-Aware Model-based Policy Search (GAMPS), which iteratively learns a transition model and uses it, together with the collected trajectories, to compute the new policy parameters. Finally, we empirically validate GAMPS on benchmark domains analyzing and discussing its properties.
8

Tadić, Vladislav B., and Arnaud Doucet. "Asymptotic bias of stochastic gradient search." Annals of Applied Probability 27, no. 6 (December 2017): 3255–304. http://dx.doi.org/10.1214/16-aap1272.

9

Li, Xiangli, and Feng Ding. "Signal modeling using the gradient search." Applied Mathematics Letters 26, no. 8 (August 2013): 807–13. http://dx.doi.org/10.1016/j.aml.2013.02.012.

10

Moguerza, Javier M., and Francisco J. Prieto. "Combining search directions using gradient flows." Mathematical Programming 96, no. 3 (June 1, 2003): 529–59. http://dx.doi.org/10.1007/s10107-002-0367-1.

11

Ringuest, Jeffrey L., and Thomas R. Gulledge. "An interactive multi-objective gradient search." Operations Research Letters 12, no. 1 (July 1992): 53–58. http://dx.doi.org/10.1016/0167-6377(92)90022-u.

12

Nickels, W., W. Rödder, L. Xu, and H. J. Zimmermann. "Intelligent gradient search in linear programming." European Journal of Operational Research 22, no. 3 (December 1985): 293–303. http://dx.doi.org/10.1016/0377-2217(85)90248-6.

13

Guo, Longyuan, Guoyun Zhang, and Jianhu Wu. "Variable Search Region Base on Disparity Gradient." Procedia Engineering 29 (2012): 2151–55. http://dx.doi.org/10.1016/j.proeng.2012.01.278.

14

Fateen, Seif-Eddeen K., and Adrián Bonilla-Petriciolet. "Gradient-Based Cuckoo Search for Global Optimization." Mathematical Problems in Engineering 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/493740.

Abstract:
One of the major advantages of stochastic global optimization methods is that they do not need the gradient of the objective function. However, in some cases this gradient is readily available and can be used to improve the numerical performance of stochastic optimization methods, especially the quality and precision of the global optimal solution. In this study, we propose a gradient-based modification to the cuckoo search algorithm, which is a nature-inspired swarm-based stochastic global optimization method. We introduce the gradient-based cuckoo search (GBCS) and evaluate its performance vis-à-vis the original algorithm in solving twenty-four benchmark functions. The use of GBCS improved the reliability and effectiveness of the algorithm in all but four of the tested benchmark problems. GBCS proved to be a strong candidate for solving difficult optimization problems for which the gradient of the objective function is readily available.
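The gradient-biasing idea described in this abstract can be illustrated with a toy sketch. The code below is not the authors' GBCS update; it only shows a random search whose step signs are taken from the local gradient, with the Rosenbrock function and step sizes chosen purely for illustration:

```python
import numpy as np

def rosenbrock(x):
    # Classic two-dimensional test function with an analytic gradient.
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

def gradient_biased_random_search(f, grad, x0, step=0.05, iters=2000, seed=0):
    """Random-walk minimization whose step signs follow the descent direction."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        # Random step magnitudes, signs taken from -grad so the move is downhill-biased.
        magnitude = step * np.abs(rng.standard_normal(x.size))
        candidate = x - np.sign(grad(x)) * magnitude
        fc = f(candidate)
        if fc < fx:  # greedy acceptance
            x, fx = candidate, fc
    return x, fx

x_best, f_best = gradient_biased_random_search(rosenbrock, rosenbrock_grad, [-1.2, 1.0])
print(x_best, f_best)
```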
15

Kumar, Yogendra, and H. S. Chauhan. "Gradient Search Technique for Land Leveling Design." Transactions of the ASAE 30, no. 2 (1987): 0391–93. http://dx.doi.org/10.13031/2013.31959.

16

Luus, Rein, and Paul Brenek. "Incorporation of gradient into random search optimization." Chemical Engineering & Technology - CET 12, no. 1 (1989): 309–18. http://dx.doi.org/10.1002/ceat.270120143.

17

Stanimirović, Predrag S., and Marko B. Miladinović. "Accelerated gradient descent methods with line search." Numerical Algorithms 54, no. 4 (December 10, 2009): 503–20. http://dx.doi.org/10.1007/s11075-009-9350-8.

18

Shi, Zhen-Jun, and Jie Shen. "Memory gradient method with Goldstein line search." Computers & Mathematics with Applications 53, no. 1 (January 2007): 28–40. http://dx.doi.org/10.1016/j.camwa.2007.02.001.

19

Zhang, Chuheng, Yuanqi Li, and Jian Li. "Policy Search by Target Distribution Learning for Continuous Control." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6770–77. http://dx.doi.org/10.1609/aaai.v34i04.6156.

Abstract:
It is known that existing policy gradient methods (such as vanilla policy gradient, PPO, A2C) may suffer from overly large gradients when the current policy is close to deterministic, leading to an unstable training process. We show that such instability can happen even in a very simple environment. To address this issue, we propose a new method, called target distribution learning (TDL), for policy improvement in reinforcement learning. TDL alternates between proposing a target distribution and training the policy network to approach the target distribution. TDL is more effective in constraining the KL divergence between updated policies, and hence leads to more stable policy improvements over iterations. Our experiments show that TDL algorithms perform comparably to (or better than) state-of-the-art algorithms for most continuous control tasks in the MuJoCo environment while being more stable in training.
20

Ghose, Debraj, Katherine Jacobs, Samuel Ramirez, Timothy Elston, and Daniel Lew. "Chemotactic movement of a polarity site enables yeast cells to find their mates." Proceedings of the National Academy of Sciences 118, no. 22 (May 28, 2021): e2025445118. http://dx.doi.org/10.1073/pnas.2025445118.

Abstract:
How small eukaryotic cells can interpret dynamic, noisy, and spatially complex chemical gradients to orient growth or movement is poorly understood. We address this question using Saccharomyces cerevisiae, where cells orient polarity up pheromone gradients during mating. Initial orientation is often incorrect, but polarity sites then move around the cortex in a search for partners. We find that this movement is biased by local pheromone gradients across the polarity site: that is, movement of the polarity site is chemotactic. A bottom-up computational model recapitulates this biased movement. The model reveals how even though pheromone-bound receptors do not mimic the shape of external pheromone gradients, nonlinear and stochastic effects combine to generate effective gradient tracking. This mechanism for gradient tracking may be applicable to any cell that searches for a target in a complex chemical landscape.
21

Sellami, Badreddine, and Mohamed Chiheb Eddine Sellami. "Global convergence of a modified Fletcher–Reeves conjugate gradient method with Wolfe line search." Asian-European Journal of Mathematics 13, no. 04 (April 4, 2019): 2050081. http://dx.doi.org/10.1142/s1793557120500813.

Abstract:
In this paper, we are concerned with conjugate gradient methods for solving unconstrained optimization problems. We propose a modified Fletcher–Reeves (FR) [Function minimization by conjugate gradients, Comput. J. 7 (1964) 149–154] conjugate gradient algorithm satisfying a parametrized sufficient descent condition with a parameter [Formula: see text]. The parameter [Formula: see text] is computed by means of the conjugacy condition, yielding an algorithm that is a positive multiplicative modification of the Hestenes–Stiefel (HS) [Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards Sec. B 48 (1952) 409–436] algorithm and that produces a descent search direction at every iteration provided that the line search satisfies the Wolfe conditions. Under appropriate conditions, we show that the modified FR method with the strong Wolfe line search is globally convergent for uniformly convex functions. We also present extensive preliminary numerical experiments to show the efficiency of the proposed method.
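For readers who want the baseline that this entry modifies, here is a minimal sketch of a textbook Fletcher–Reeves conjugate gradient loop with a Wolfe line search (via scipy.optimize.line_search). The paper's parametrized descent condition and modified coefficient are not reproduced, and the quadratic test problem is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import line_search

def fr_conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=500):
    """Textbook Fletcher-Reeves CG with a Wolfe line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]   # step satisfying the Wolfe conditions
        if alpha is None:                       # line search failed: restart with steepest descent
            d = -g
            alpha = line_search(f, grad, x, d)[0] or 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        d = -g_new + beta_fr * d
        x, g = x_new, g_new
    return x

# Example: minimize a simple convex quadratic.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(fr_conjugate_gradient(f, grad, np.array([4.0, -3.0])))
```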
22

Bellavia, Stefania, Nataša Krklec Jerinkić, and Greta Malaspina. "Subsampled Nonmonotone Spectral Gradient Methods." Communications in Applied and Industrial Mathematics 11, no. 1 (January 1, 2020): 19–34. http://dx.doi.org/10.2478/caim-2020-0002.

Abstract:
This paper deals with subsampled spectral gradient methods for minimizing finite sums. Subsampled function and gradient approximations are employed in order to reduce the overall computational cost of the classical spectral gradient methods. Global convergence is enforced by a nonmonotone line search procedure and is proved provided that functions and gradients are approximated with increasing accuracy. R-linear convergence and worst-case iteration complexity are investigated in the case of a strongly convex objective function. Numerical results on well-known binary classification problems are given to show the effectiveness of this framework and to analyze the effect of different spectral coefficient approximations arising from the variable-sample nature of this procedure.
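A minimal full-batch sketch of the underlying machinery, a Barzilai–Borwein (spectral) step safeguarded by a nonmonotone Armijo-type line search over the last few function values, may help fix ideas. The subsampling of functions and gradients that is the paper's main subject is omitted, and the ill-conditioned quadratic test problem is an assumption:

```python
import numpy as np

def nonmonotone_spectral_gradient(f, grad, x0, memory=10, gamma=1e-4,
                                  max_iter=500, tol=1e-8):
    """Spectral (Barzilai-Borwein) gradient descent with a nonmonotone
    Armijo-type line search against the max of the last `memory` values."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1.0                                  # initial spectral step
    history = [f(x)]
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        f_ref = max(history[-memory:])           # nonmonotone reference value
        t = 1.0
        while f(x - t * alpha * g) > f_ref - gamma * t * alpha * (g @ g) and t > 1e-12:
            t *= 0.5                             # backtrack
        x_new = x - t * alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if s @ y > 0 else 1.0   # BB1 spectral step
        x, g = x_new, g_new
        history.append(f(x))
    return x

# Example: ill-conditioned quadratic.
d = np.array([1.0, 100.0])
print(nonmonotone_spectral_gradient(lambda x: 0.5 * d @ (x * x),
                                    lambda x: d * x,
                                    np.array([1.0, 1.0])))
```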
23

Lee, Jongmin, Wonseok Jeon, Geon-Hyeong Kim, and Kee-Eung Kim. "Monte-Carlo Tree Search in Continuous Action Spaces with Value Gradients." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4561–68. http://dx.doi.org/10.1609/aaai.v34i04.5885.

Abstract:
Monte-Carlo Tree Search (MCTS) is the state-of-the-art online planning algorithm for large problems with discrete action spaces. However, many real-world problems involve continuous action spaces, where MCTS is not as effective as in discrete action spaces. This is mainly due to common practices such as coarse discretization of the entire action space and failure to exploit local smoothness. In this paper, we introduce Value-Gradient UCT (VG-UCT), which combines traditional MCTS with gradient-based optimization of action particles. VG-UCT simultaneously performs a global search via UCT with respect to the finitely sampled set of actions and performs a local improvement via action value gradients. In the experiments, we demonstrate that our approach outperforms existing MCTS methods and other strong baseline algorithms for continuous action spaces.
24

Smaili, Ahmad A., Nadim A. Diab, and Naji A. Atallah. "Optimum Synthesis of Mechanisms Using Tabu-Gradient Search Algorithm." Journal of Mechanical Design 127, no. 5 (November 29, 2004): 917–23. http://dx.doi.org/10.1115/1.1904640.

Abstract:
A tabu-gradient search is herein presented for optimum synthesis of planar mechanisms. The solution generated by a recency-based, short-term-memory tabu search is used to start a gradient search that drives the solution ever closer to the global minimum. A brief overview of the tabu-search method is first presented. A tabu-gradient algorithm is then used to synthesize four-bar mechanisms for path generation tasks by way of three examples, including two benchmark examples used before to test other deterministic and intelligent optimization schemes. Compared with the corresponding results generated by other schemes, the tabu-gradient search rendered the best solutions of all.
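The two-stage pattern used here, a global metaheuristic whose best solution seeds a gradient-based refinement, is easy to sketch. In the sketch below plain random sampling stands in for the tabu search, and the Himmelblau function stands in for a mechanism-synthesis objective; both are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import minimize

def himmelblau(x):
    # A multimodal test function standing in for a mechanism-synthesis objective.
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

def global_then_gradient(f, bounds, n_samples=200, seed=0):
    """Two-stage search: a crude global phase (random sampling, a stand-in
    for tabu search) seeds a local gradient-based refinement."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    samples = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    x0 = min(samples, key=f)                 # best point found by the global phase
    return minimize(f, x0, method="BFGS")    # gradient-based local refinement

result = global_then_gradient(himmelblau, bounds=[(-5, 5), (-5, 5)])
print(result.x, result.fun)
```

The same seed-then-refine structure also describes the memetic approach of the next entry, where several metaheuristics feed a generalized reduced gradient step.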
25

Yoon, Yourim, and Zong Woo Geem. "Parameter Optimization of Single-Diode Model of Photovoltaic Cell Using Memetic Algorithm." International Journal of Photoenergy 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/963562.

Abstract:
This study proposes a memetic approach for optimally determining the parameter values of the single-diode-equivalent solar cell model. The memetic algorithm, which combines metaheuristic and gradient-based techniques, has the merit of good performance in both global and local searches. First, 10 single algorithms were considered, including genetic algorithm, simulated annealing, particle swarm optimization, harmony search, differential evolution, cuckoo search, the least squares method, and pattern search; their final solutions were then used as initial vectors for the generalized reduced gradient technique. With this memetic approach, we could further improve the accuracy of the estimated solar cell parameters compared with the single-algorithm approaches.
26

Li, Xilin, Xianda Zhang, and Peisheng Li. "Gradient search non-orthogonal approximate joint diagonalization algorithm." Tsinghua Science and Technology 12, no. 6 (December 2007): 669–73. http://dx.doi.org/10.1016/s1007-0214(07)70173-6.

27

Li, Q. "Efficient Stochastic Gradient Search for Automatic Image Registration." International Journal of Simulation Modelling 6, no. 2 (June 5, 2007): 114–23. http://dx.doi.org/10.2507/ijsimm06(2)s.06.

28

Hu, J., and R. S. Blum. "A gradient guided search algorithm for multiuser detection." IEEE Communications Letters 4, no. 11 (November 2000): 340–42. http://dx.doi.org/10.1109/4234.892195.

29

Salomon, R. "Evolutionary algorithms and gradient search: similarities and differences." IEEE Transactions on Evolutionary Computation 2, no. 2 (July 1998): 45–55. http://dx.doi.org/10.1109/4235.728207.

30

Hager, William W., and Soonchul Park. "The Gradient Projection Method with Exact Line Search." Journal of Global Optimization 30, no. 1 (September 2004): 103–18. http://dx.doi.org/10.1023/b:jogo.0000049118.13265.9b.

31

Wills, Adrian, Brett Ninness, and Stuart Gibson. "On Gradient-Based Search for Multivariable System Estimates." IFAC Proceedings Volumes 38, no. 1 (2005): 832–37. http://dx.doi.org/10.3182/20050703-6-cz-1902.00140.

32

Wills, Adrian, and Brett Ninness. "On Gradient-Based Search for Multivariable System Estimates." IEEE Transactions on Automatic Control 53, no. 1 (February 2008): 298–306. http://dx.doi.org/10.1109/tac.2007.914953.

33

Winn, Karina M., David G. Bourne, and James G. Mitchell. "Vibrio coralliilyticus Search Patterns across an Oxygen Gradient." PLoS ONE 8, no. 7 (July 10, 2013): e67975. http://dx.doi.org/10.1371/journal.pone.0067975.

34

Farina, D. J., and R. P. Flam. "A self-normalizing gradient search adaptive array algorithm." IEEE Transactions on Aerospace and Electronic Systems 27, no. 6 (1991): 901–5. http://dx.doi.org/10.1109/7.104256.

35

Saxena, Anupam. "Topology design with negative masks using gradient search." Structural and Multidisciplinary Optimization 44, no. 5 (May 7, 2011): 629–49. http://dx.doi.org/10.1007/s00158-011-0649-4.

36

Stanimirovic, Predrag, Marko Miladinovic, and Snezana Djordjevic. "Multiplicative parameters in gradient descent methods." Filomat 23, no. 3 (2009): 23–36. http://dx.doi.org/10.2298/fil0903023s.

Abstract:
We introduce an algorithm for unconstrained optimization based on reducing the modified Newton method with line search to a gradient descent method. The main idea used in constructing the algorithm is the approximation of the Hessian by a diagonal matrix. The step length calculation is based on a Taylor expansion at two successive iterates and a backtracking line search procedure.
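A rough sketch of a gradient descent step scaled by a diagonal Hessian approximation and safeguarded by backtracking may clarify the idea. Here the diagonal is estimated by finite differences rather than by the paper's Taylor-expansion-based step length, so this captures only the spirit of the method, and the anisotropic quadratic example is an assumption:

```python
import numpy as np

def diag_scaled_gradient_descent(f, grad, x0, eps=1e-6, beta=0.5,
                                 c=1e-4, max_iter=200, tol=1e-8):
    """Gradient descent scaled by a diagonal Hessian approximation,
    safeguarded by a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Finite-difference estimate of the Hessian diagonal.
        diag = np.array([(grad(x + eps * e)[i] - g[i]) / eps
                         for i, e in enumerate(np.eye(x.size))])
        diag = np.maximum(np.abs(diag), 1e-8)   # keep the scaling positive
        d = -g / diag                            # diagonally scaled descent direction
        t = 1.0
        while f(x + t * d) > f(x) + c * t * (g @ d) and t > 1e-12:
            t *= beta                            # backtracking line search
        x = x + t * d
    return x

# Example: anisotropic quadratic, where diagonal scaling helps.
d_vec = np.array([1.0, 50.0])
print(diag_scaled_gradient_descent(lambda x: 0.5 * d_vec @ (x * x),
                                   lambda x: d_vec * x,
                                   np.array([2.0, 2.0])))
```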
37

Ma, Shiqing, Ping Yang, Boheng Lai, Chunxuan Su, Wang Zhao, Kangjian Yang, Ruiyan Jin, Tao Cheng, and Bing Xu. "Adaptive Gradient Estimation Stochastic Parallel Gradient Descent Algorithm for Laser Beam Cleanup." Photonics 8, no. 5 (May 19, 2021): 165. http://dx.doi.org/10.3390/photonics8050165.

Abstract:
For a high-power slab solid-state laser, high output power and high output beam quality are the most important indicators. Adaptive optics systems can significantly improve beam quality by compensating for the phase distortions of the laser beams. In this paper, we developed an improved algorithm called Adaptive Gradient Estimation Stochastic Parallel Gradient Descent (AGESPGD) for beam cleanup of a solid-state laser. A second-order gradient of the search point was introduced to modify the gradient estimation, and it was combined with an adaptive gain coefficient method in the classical Stochastic Parallel Gradient Descent (SPGD) algorithm. The improved algorithm accelerates convergence of the search and prevents it from falling into a local extremum. Simulation and experimental results show that this method reduces the number of iterations by 40%, and the algorithm's stability is also improved compared with the original SPGD method.
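For context, the classical two-sided SPGD update that the paper improves on looks roughly like the sketch below; the "beam quality" metric is a toy stand-in, and the adaptive gain and second-order gradient estimate of AGESPGD are not included:

```python
import numpy as np

def spgd_maximize(metric, u0, delta=0.05, gain=0.3, iters=500, seed=0):
    """Classical two-sided SPGD: apply +/- random perturbations to the
    control vector, observe the metric change, and step along the
    estimated gradient."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        du = delta * rng.choice([-1.0, 1.0], size=u.size)  # Bernoulli perturbation
        dJ = metric(u + du) - metric(u - du)                # two-sided metric difference
        u = u + gain * dJ * du                              # stochastic parallel gradient step
    return u

# Toy "beam quality" metric peaked at a known set of actuator commands.
target = np.array([0.3, -0.7, 0.1, 0.5])
metric = lambda u: -np.sum((u - target)**2)
print(spgd_maximize(metric, np.zeros(4)))
```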
38

Chen, Yuan-Yuan, and Shou-Qiang Du. "Nonlinear Conjugate Gradient Methods with Wolfe Type Line Search." Abstract and Applied Analysis 2013 (2013): 1–5. http://dx.doi.org/10.1155/2013/742815.

Abstract:
The nonlinear conjugate gradient method is one of the useful methods for unconstrained optimization problems. In this paper, we consider three kinds of nonlinear conjugate gradient methods with Wolfe-type line search for unconstrained optimization problems. Under some mild assumptions, global convergence results for the given methods are established. The numerical results show that the nonlinear conjugate gradient methods with Wolfe-type line search are efficient for some unconstrained optimization problems.
39

Li, Xia, and Xiongda Chen. "Global Convergence of Shortest-Residual Family of Conjugate Gradient Methods without Line Search." Asia-Pacific Journal of Operational Research 22, no. 04 (December 2005): 529–38. http://dx.doi.org/10.1142/s0217595905000716.

Abstract:
The shortest-residual family of conjugate gradient methods was first proposed by Hestenes and was studied by Pytlak, and Dai and Yuan. Recently, a no-line-search scheme in conjugate gradient methods was given by Sun and Zhang, and Chen and Sun. In this paper, we show the global convergence of two shortest-residual conjugate gradient methods (FRSR and PRPSR) without line search. In addition, computational results are presented to show that the methods with line search have similar numerical behavior to the methods without line search.
40

Zhao, Tingting, Hirotaka Hachiya, Voot Tangkaratt, Jun Morimoto, and Masashi Sugiyama. "Efficient Sample Reuse in Policy Gradients with Parameter-Based Exploration." Neural Computation 25, no. 6 (June 2013): 1512–47. http://dx.doi.org/10.1162/neco_a_00452.

Abstract:
The policy gradient approach is a flexible and powerful reinforcement learning method particularly for problems with continuous actions such as robot control. A common challenge is how to reduce the variance of policy gradient estimates for reliable policy updates. In this letter, we combine the following three ideas and give a highly effective policy gradient method: (1) policy gradients with parameter-based exploration, a recently proposed policy search method with low variance of gradient estimates; (2) an importance sampling technique, which allows us to reuse previously gathered data in a consistent way; and (3) an optimal baseline, which minimizes the variance of gradient estimates with their unbiasedness being maintained. For the proposed method, we give a theoretical analysis of the variance of gradient estimates and show its usefulness through extensive experiments.
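A bare-bones sketch of policy gradients with parameter-based exploration (PGPE) using a simple mean-return baseline is given below. The toy episodic return and hyperparameters are assumptions, and the paper's importance-sampled sample reuse and optimal baseline are omitted:

```python
import numpy as np

def pgpe(episode_return, dim, iters=300, pop=20, lr=0.1, seed=0):
    """Plain PGPE: sample policy parameters from a Gaussian search
    distribution, estimate gradients of the expected return with respect
    to its mean and standard deviation, and subtract the mean return as
    a baseline."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        thetas = mu + sigma * rng.standard_normal((pop, dim))   # sampled policies
        returns = np.array([episode_return(t) for t in thetas])
        b = returns.mean()                                       # baseline
        diff = thetas - mu
        grad_mu = ((returns - b)[:, None] * diff / sigma**2).mean(axis=0)
        grad_sigma = ((returns - b)[:, None] *
                      (diff**2 - sigma**2) / sigma**3).mean(axis=0)
        mu = mu + lr * grad_mu
        sigma = np.maximum(sigma + lr * grad_sigma, 1e-3)        # keep sigma positive
    return mu

# Toy episodic return: higher when the policy parameters are near a target.
target = np.array([1.0, -2.0, 0.5])
ret = lambda theta: -np.sum((theta - target)**2)
print(pgpe(ret, dim=3))
```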
41

Dehghan, Mehdi, and Masoud Hajarian. "Convergence of the descent Dai–Yuan conjugate gradient method for unconstrained optimization." Journal of Vibration and Control 18, no. 9 (September 23, 2011): 1249–53. http://dx.doi.org/10.1177/1077546311405750.

Abstract:
The conjugate gradient method is one of the most useful and the earliest-discovered techniques for solving large-scale nonlinear optimization problems. Many variants of this method have been proposed, and some are widely used in practice. In this article, we study the descent Dai–Yuan conjugate gradient method which guarantees the sufficient descent condition for any line search. With exact line search, the introduced conjugate gradient method reduces to the Dai–Yuan conjugate gradient method. Finally, a global convergence result is established when the line search fulfils the Goldstein conditions.
42

Belloufi, Mohammed, Rachid Benzine, and Laskri Yamina. "Modification of the Armijo line search to satisfy the convergence properties of HS method." An International Journal of Optimization and Control: Theories & Applications (IJOCTA) 3, no. 2 (May 29, 2013): 145–52. http://dx.doi.org/10.11121/ijocta.01.2013.00141.

Abstract:
The Hestenes–Stiefel (HS) conjugate gradient algorithm is a useful tool for unconstrained numerical optimization, which has good numerical performance but no global convergence result under traditional line searches. This paper proposes a line search technique that guarantees the global convergence of the HS conjugate gradient method. Numerical tests are presented to validate the different approaches.
43

Guo, Jiaojiao, Mingyong Liu, Kun Liu, Yun Niu, and Mengfan Wang. "Research of Bio-Inspired Geomagnetic Navigation for AUV Based on Evolutionary Gradient Search." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 37, no. 5 (October 2019): 865–70. http://dx.doi.org/10.1051/jnwpu/20193750865.

Abstract:
At present, bio-inspired geomagnetic navigation is mostly based on evolutionary strategies, which leads to long navigation times and low efficiency. To solve this problem, a bio-inspired geomagnetic navigation method for AUVs based on evolutionary gradient search is proposed. Combining the bionic evolutionary search algorithm with the classical gradient algorithm to search for the function extremum not only ensures the global character of the search but also gives fast convergence, which improves the efficiency of bio-inspired geomagnetic navigation. The simulation results show that this method does not need prior geomagnetic information and can complete navigation tasks by following the geomagnetic trend. Compared with the evolutionary search strategy, the effectiveness and superiority of the evolutionary gradient search strategy are verified.
44

Yuan, Gonglin. "A Conjugate Gradient Method for Unconstrained Optimization Problems." International Journal of Mathematics and Mathematical Sciences 2009 (2009): 1–14. http://dx.doi.org/10.1155/2009/329623.

Abstract:
A hybrid method combining the FR conjugate gradient method and the WYL conjugate gradient method is proposed for unconstrained optimization problems. The presented method possesses the sufficient descent property under the strong Wolfe–Powell (SWP) line search rule with a relaxed parameter σ < 1. Under suitable conditions, global convergence with the SWP line search rule and the weak Wolfe–Powell (WWP) line search rule is established for nonconvex functions. Numerical results show that this method is better than the FR method and the WYL method.
45

Zhang, Z., C. G. Koh, and W. H. Duan. "Uniformly sampled genetic algorithm with gradient search for structural identification – Part I: Global search." Computers & Structures 88, no. 15-16 (August 2010): 949–62. http://dx.doi.org/10.1016/j.compstruc.2010.05.001.

46

Zhang, Z., C. G. Koh, and W. H. Duan. "Uniformly sampled genetic algorithm with gradient search for structural identification – Part II: Local search." Computers & Structures 88, no. 19-20 (October 2010): 1149–61. http://dx.doi.org/10.1016/j.compstruc.2010.07.004.

47

Yang, Jian-Bo. "Gradient projection and local region search for multiobjective optimisation." European Journal of Operational Research 112, no. 2 (January 1999): 432–59. http://dx.doi.org/10.1016/s0377-2217(97)00451-7.

48

Regalia, P. A., and M. Mboup. "A gradient search interpretation of the super-exponential algorithm." IEEE Transactions on Information Theory 46, no. 7 (2000): 2731–34. http://dx.doi.org/10.1109/18.887889.

49

Trishchenko, Alexander P. "Reprojection of VIIRS SDR Imagery Using Concurrent Gradient Search." IEEE Transactions on Geoscience and Remote Sensing 56, no. 7 (July 2018): 4016–24. http://dx.doi.org/10.1109/tgrs.2018.2819825.

50

Zhou, Enlu, and Jiaqiao Hu. "Gradient-Based Adaptive Stochastic Search for Non-Differentiable Optimization." IEEE Transactions on Automatic Control 59, no. 7 (July 2014): 1818–32. http://dx.doi.org/10.1109/tac.2014.2310052.
