Academic literature on the topic 'Gradient search'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gradient search.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Gradient search"

1

Stepanenko, Svetlana, and Bernd Engels. "Gradient tabu search." Journal of Computational Chemistry 28, no. 2 (2006): 601–11. http://dx.doi.org/10.1002/jcc.20564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Arnold, Dirk V., and Ralf Salomon. "Evolutionary Gradient Search Revisited." IEEE Transactions on Evolutionary Computation 11, no. 4 (August 2007): 480–95. http://dx.doi.org/10.1109/tevc.2006.882427.

3

Zhang, Xiaochen, Yi Sun, and Jizhong Xiao. "Adaptive source search in a gradient field." Robotica 33, no. 08 (April 25, 2014): 1589–608. http://dx.doi.org/10.1017/s0263574714000903.

Abstract:
Most existing source search algorithms suffer from a high travel cost, and few of them have had their performance analyzed in noisy environments where local basins are present. In this paper, the Theseus gradient search (TGS) is proposed to effectively overcome local basins during search. Analytical performances of TGS and the gradient ascent with correlated random walk (GACRW), which is a variant of correlated random walk, are derived and compared. A gradient field model is proposed as an analytical tool that makes it feasible to analyze the performances. The analytical average searching costs of GACRW and TGS are obtained for the first time for this class of algorithms in environments with local basins. The costs, expressed as functions of searching space size, local basin size, and local basin number, are confirmed by simulation results. The performances of GACRW, TGS, and two chemotaxis algorithms are compared in the gradient field and in a scenario of indoor radio source search in a hallway driven by real data of signal strengths. The results illustrate that GACRW and TGS are robust to noisy gradients and are more competitive than the chemotaxis-based algorithms in real applications. Both analytical and simulation results indicate that in the presence of local basins, TGS almost always has the lowest cost.
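The setting of this entry can be illustrated with a minimal sketch: a searcher climbs a scalar signal field using finite-difference gradient estimates corrupted by sensor noise. This is a generic illustration, not the paper's TGS or GACRW algorithms; the field function, step sizes, and noise level below are invented for the example.

```python
import numpy as np

def noisy_gradient_ascent(field, x0, step=0.1, probe=0.5, sigma=0.01,
                          iters=300, seed=0):
    """Climb a 2-D scalar field using noisy finite-difference gradients.

    `field` maps a position to signal strength; `sigma` models sensor
    noise on each probe; `probe` is the finite-difference offset.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        grad = np.zeros(2)
        for i in range(2):
            e = np.zeros(2)
            e[i] = probe
            # Each field reading is perturbed by Gaussian sensor noise.
            fp = field(x + e) + rng.normal(0.0, sigma)
            fm = field(x - e) + rng.normal(0.0, sigma)
            grad[i] = (fp - fm) / (2 * probe)
        # Take a fixed-length step along the estimated uphill direction.
        x = x + step * grad / (np.linalg.norm(grad) + 1e-12)
    return x
```

With a smooth unimodal field the searcher drifts toward the source; near the peak the true gradient vanishes and noise dominates, which is exactly the regime where the algorithms analyzed in the paper differ.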
4

Pajarinen, Joni, Hong Linh Thai, Riad Akrour, Jan Peters, and Gerhard Neumann. "Compatible natural gradient policy search." Machine Learning 108, no. 8-9 (May 20, 2019): 1443–66. http://dx.doi.org/10.1007/s10994-019-05807-0.

5

Schwabacher, Mark, and Andrew Gelsey. "Intelligent gradient-based search of incompletely defined design spaces." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 11, no. 3 (June 1997): 199–210. http://dx.doi.org/10.1017/s0890060400003127.

Abstract:
Gradient-based numerical optimization of complex engineering designs offers the promise of rapidly producing better designs. However, such methods generally assume that the objective function and constraint functions are continuous, smooth, and defined everywhere. Unfortunately, realistic simulators tend to violate these assumptions. We present a rule-based technique for intelligently computing gradients in the presence of such pathologies in the simulators, and show how this gradient computation method can be used as part of a gradient-based numerical optimization system. We tested the resulting system in the domain of conceptual design of supersonic transport aircraft, and found that using rule-based gradients can decrease the cost of design space search by one or more orders of magnitude.
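As a loose illustration of the problem this entry addresses (not the authors' rule-based scheme), the sketch below computes a finite-difference gradient that degrades gracefully when the objective is undefined at a probe point. The fallback rules and tolerances are invented for the example.

```python
import math

def safe_gradient(f, x, h=1e-4):
    """Finite-difference gradient for an objective that may be undefined
    (raise, or return NaN) at some probe points: prefer central
    differences, fall back to one-sided differences when a probe fails.
    """
    def try_eval(point):
        try:
            v = f(point)
            return None if math.isnan(v) else v
        except (ValueError, ZeroDivisionError, OverflowError):
            return None

    fx = try_eval(list(x))
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        fp, fm = try_eval(xp), try_eval(xm)
        if fp is not None and fm is not None:
            grad.append((fp - fm) / (2 * h))   # central difference
        elif fp is not None and fx is not None:
            grad.append((fp - fx) / h)         # forward difference
        elif fm is not None and fx is not None:
            grad.append((fx - fm) / h)         # backward difference
        else:
            raise RuntimeError(f"objective undefined around coordinate {i}")
    return grad
```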
6

Konnov, I. V. "Conditional Gradient Method Without Line-Search." Russian Mathematics 62, no. 1 (January 2018): 82–85. http://dx.doi.org/10.3103/s1066369x18010127.

7

D'Oro, Pierluca, Alberto Maria Metelli, Andrea Tirinzoni, Matteo Papini, and Marcello Restelli. "Gradient-Aware Model-Based Policy Search." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3801–8. http://dx.doi.org/10.1609/aaai.v34i04.5791.

Abstract:
Traditional model-based reinforcement learning approaches learn a model of the environment dynamics without explicitly considering how it will be used by the agent. In the presence of misspecified model classes, this can lead to poor estimates, as some relevant available information is ignored. In this paper, we introduce a novel model-based policy search approach that exploits the knowledge of the current agent policy to learn an approximate transition model, focusing on the portions of the environment that are most relevant for policy improvement. We leverage a weighting scheme, derived from the minimization of the error on the model-based policy gradient estimator, in order to define a suitable objective function that is optimized for learning the approximate transition model. Then, we integrate this procedure into a batch policy improvement algorithm, named Gradient-Aware Model-based Policy Search (GAMPS), which iteratively learns a transition model and uses it, together with the collected trajectories, to compute the new policy parameters. Finally, we empirically validate GAMPS on benchmark domains analyzing and discussing its properties.
8

Tadić, Vladislav B., and Arnaud Doucet. "Asymptotic bias of stochastic gradient search." Annals of Applied Probability 27, no. 6 (December 2017): 3255–304. http://dx.doi.org/10.1214/16-aap1272.

9

Li, Xiangli, and Feng Ding. "Signal modeling using the gradient search." Applied Mathematics Letters 26, no. 8 (August 2013): 807–13. http://dx.doi.org/10.1016/j.aml.2013.02.012.

10

Moguerza, Javier M., and Francisco J. Prieto. "Combining search directions using gradient flows." Mathematical Programming 96, no. 3 (June 1, 2003): 529–59. http://dx.doi.org/10.1007/s10107-002-0367-1.


Dissertations / Theses on the topic "Gradient search"

1

Kevorkiants, Rouslan. "Linear scaling conjugate gradient density matrix search: implementation, validation, and application with semiempirical molecular orbital methods." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968547028.

2

Resmer, Frank. "A gradient and RF system for open access low field MRI." Thesis, University of Aberdeen, 2004. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU208240.

Abstract:
This thesis describes work carried out to improve the operation of a 0.01 T open access low field MRI system. Two planar surface gradient coil sets and a new three-channel gradient amplifier were designed, constructed and tested. Surface transmit coils, surface receive coils and a balanced preamplifier were developed to allow RF pulse generation and signal detection without compromising the single-sided access of the system. The properties of litz wire were investigated by simulation and testing, and it was shown that the established litz wire theory can also be applied to the design of cooled coils. Images were obtained to show the operation of the new gradient amplifier and to demonstrate the combined operation of a surface gradient coil, surface receive coil, surface transmit coil and the low noise preamplifier. A large surface receive coil was used to acquire in vivo images, for the first time, of surface regions of a volunteer's head. Images obtained with a cryogenic receive coil showed an SNR improvement relative to a room temperature coil, in agreement with theoretical predictions.
3

Bedida, Kirthi. "AN APPROACH TO INVERSE MODELING THROUGH THE INTEGRATION OF ARTIFICIAL NEURAL NETWORKS AND GENETIC ALGORITHMS." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/493.

Abstract:
A hybrid model integrating the predictive capabilities of an Artificial Neural Network (ANN) with the optimization features of a Genetic Algorithm (GA) is developed for the purpose of inverse modeling. The proposed approach is applied to superplastic forming of materials to predict the material properties which characterize the performance of a material. The study is carried out on two problems. For the first problem, an ANN is trained to predict the strain rate sensitivity index m given the temperature and the strain rate. The performance of different gradient search methods used in training the ANN model is demonstrated. A similar approach is used for the second problem, the objective of which is to predict the input parameters, i.e. the strain rate and temperature corresponding to a given flow stress value. An attempt to address one of the major drawbacks of ANNs, the black-box behavior of the model, is made by collecting information about the weights and biases used in training and formulating a mathematical expression. The results from the two problems were compared to the experimental data and validated, indicating close agreement.
4

Al-Mudhaf, Ali F. "A feed forward neural network approach for matrix computations." Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5010.

Abstract:
A new neural network approach for performing matrix computations is presented. The idea of this approach is to construct a feed-forward neural network (FNN) and then train it by matching a desired set of patterns. The solution of the problem is the converged weight of the FNN. Accordingly, unlike conventional FNN research that concentrates on external properties (mappings) of the networks, this study concentrates on the internal properties (weights) of the network. The present network is linear and its weights are usually strongly constrained; hence, a complicated overlapped network needs to be constructed. It should be noticed, however, that the present approach depends highly on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, encounter many deficiencies when applied to matrix algebra problems, e.g., slow convergence due to improper choice of learning rates (LR). Thus, this study focuses on the development of new efficient and accurate FNN training methods. One improvement suggested to alleviate the problem of LR choice is the use of a line search with the steepest descent method, namely bracketing with the golden section method. This provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training process of the neural network. The computational feasibility of these methods is assessed on two matrix problems: the LU-decomposition of both band and square ill-conditioned unsymmetric matrices, and the inversion of square ill-conditioned unsymmetric matrices. In this study, two performance indexes have been considered, namely learning speed and convergence accuracy.
Extensive computer simulations have been carried out using the following training methods: the steepest descent with line search (SDLS) method, the conventional back-propagation (BP) algorithm, and conjugate gradient (CG) methods, specifically the Fletcher-Reeves conjugate gradient (CGFR) method and the Polak-Ribière conjugate gradient (CGPR) method. The performance comparisons between these minimization methods have demonstrated that the CG training methods give better convergence accuracy and are by far superior with respect to learning time; they offer speed-ups of between 3 and 4 over SDLS, depending on the severity of the error goal chosen and the size of the problem. Furthermore, when using Powell's restart criteria with the CG methods, the problem of wrong convergence directions usually encountered in pure CG learning methods is alleviated. In general, CG methods with restarts have shown the best performance among all other methods in training the FNN for LU-decomposition and matrix inversion. Consequently, it is concluded that CG methods are good candidates for training FNNs for matrix computations, in particular the Polak-Ribière conjugate gradient method with Powell's restart criteria.
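The abstract above can be made concrete with a small sketch of a Polak-Ribière-type nonlinear conjugate gradient loop with a backtracking line search and a steepest-descent restart. This is a generic textbook version, not the thesis's implementation; the Armijo constant and step factors are illustrative.

```python
import numpy as np

def cg_minimize(f, grad, x0, iters=200, tol=1e-8):
    """Polak-Ribiere (PR+) nonlinear conjugate gradient with a backtracking
    Armijo line search and a steepest-descent restart on bad directions.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search along the current direction d.
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
            if t < 1e-12:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ update
        d = -g_new + beta * d
        if g_new @ d >= 0:      # not a descent direction: restart
            d = -g_new
        x, g = x_new, g_new
    return x
```

Clamping beta at zero (the "PR+" variant) acts as a built-in restart, playing a role similar to Powell's restart criteria mentioned in the abstract.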
5

Jacmenovic, Dennis. "Optimisation of Active Microstrip Patch Antennas." RMIT University, Electrical and Computer Engineering, 2004. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20060307.144507.

Abstract:
This thesis presents a study of impedance optimisation of active microstrip patch antennas to multiple frequency points. A single layered aperture coupled microstrip patch antenna has been optimised to match the source reflection coefficient of a transistor in designing an active antenna. The active aperture coupled microstrip patch antenna was optimised to satisfy Global Positioning System (GPS) frequency specifications. A rudimentary aperture coupled microstrip patch antenna consists of a rectangular antenna element etched on the top surface of two dielectric substrates. The substrates are separated by a ground plane and a microstrip feed is etched on the bottom surface. A rectangular aperture in the ground plane provides coupling between the feed and the antenna element. This type of antenna, which conveniently isolates any circuit at the feed from the antenna element, is suitable for integrated circuit design and is simple to fabricate. An active antenna design directly couples an antenna to an active device, therefore saving real estate and power. This thesis focuses on designing an aperture coupled patch antenna directly coupled to a low noise amplifier as part of the front end of a GPS receiver. In this work an in-house software package, dubbed ACP by its creator Dr Rod Waterhouse, for calculating aperture coupled microstrip patch antenna performance parameters was linked to HP-EEsof, a microwave computer aided design and simulation package by Hewlett-Packard. An ANSI C module in HP-EEsof was written to bind the two packages. This process affords the client the benefit of powerful analysis tools offered in HP-EEsof and the fast analysis of ACP for seamless system design. Moreover, the optimisation algorithms in HP-EEsof were employed to investigate which algorithms are best suited for optimising patch antennas. 
The active antenna design presented in this study evades an input matching network, which is accomplished by designing the antenna to represent the desired source termination of a transistor. It has been demonstrated that a dual-band microstrip patch antenna can be successfully designed to match the source reflection coefficient, avoiding the need to insert a matching network. Maximum power transfer in electrical circuits is accomplished by matching the impedance between entities, which is generally achieved with the use of a matching network. Passive matching networks employed in amplifier design generally consist of discrete components up to the low GHz frequency range or distributed elements at greater frequencies. The source termination for a low noise amplifier will greatly influence its noise, gain and linearity, which are controlled by designing a suitable input matching network. Ten diverse search methods offered in HP-EEsof were used to optimise an active aperture coupled microstrip patch antenna. This study has shown that the algorithms based on randomised search techniques and the Genetic algorithm provide the most robust performance. The optimisation results were used to design an active dual-band antenna.
6

Fischer, Manfred M. "Learning in neural spatial interaction models: A statistical perspective." Springer, 2002. http://epub.wu.ac.at/5503/1/neural.pdf.

Abstract:
In this paper we view learning as an unconstrained non-linear minimization problem in which the objective function is defined by the negative log-likelihood function and the search space by the parameter space of an origin-constrained product unit neural spatial interaction model. We consider Alopex-based global search, as opposed to local search based upon backpropagation of gradient descents, each in combination with the bootstrapping pairs approach to solve the maximum likelihood learning problem. Interregional telecommunication traffic flow data from Austria are used as a test bed for comparing the performance of the two learning procedures. The study illustrates the superiority of Alopex-based global search, measured in terms of Kullback and Leibler's information criterion.
7

Clausner, André. "Anwendung von Line-Search-Strategien zur Formoptimierung und Parameteridentifikation." Master's thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-114858.

Abstract:
The continuous development and improvement of technical processes today builds on stochastic and deterministic optimization strategies combined with numerical simulation of these processes. Since FE simulation of forming processes is usually very time-consuming, deterministic methods are attractive for optimizing such processes, as they require fewer optimization steps and hence fewer FE simulations. An important requirement for such optimization methods is global convergence to local minima, since the optimal parameter sets are not always approximately known. The two most important strategies for extending the limited convergence radius of the basic optimization methods (Newton-step-based methods and gradient methods) are the line-search strategy and the trust-region strategy. The fundamentals of the line-search strategy are reviewed and the most important sub-algorithms implemented. This method is then examined for an efficient combination of sub-algorithms and method parameters. Subsequently, the performance of an optimization method with the line-search strategy is compared with that of an optimization method with a scaled trust-region strategy, which was also implemented. After integrating the implemented methods into the program SPC-Opt, the tests are carried out on a least-squares problem from material parameter identification and on the shape optimization of a forming tool.
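A standard derivative-free line-search building block in this literature is the golden-section search: given a bracket on which the one-dimensional merit function is unimodal, it shrinks the bracket by the golden ratio each iteration. The sketch below is a generic textbook version, independent of any particular thesis implementation.

```python
import math

def golden_section(phi, a, b, tol=1e-6):
    """Golden-section search for the minimizer of a unimodal function phi
    on the interval [a, b]; returns the midpoint of the final bracket.
    """
    invphi = (math.sqrt(5) - 1) / 2   # 1/golden-ratio, about 0.618
    c = b - invphi * (b - a)          # interior probe points
    d = a + invphi * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            # Minimum lies in [a, d]: shrink from the right.
            b = d
        else:
            # Minimum lies in [c, b]: shrink from the left.
            a = c
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
    return (a + b) / 2
```

In an optimization loop, phi(t) would be f(x + t*d) for the current iterate x and search direction d.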
8

O'Neal, Jerome W. "The use of preconditioned iterative linear solvers in interior-point methods and related topics." Diss., Georgia Institute of Technology, 2005. http://etd.gatech.edu/theses/available/etd-06242005-162854/.

Abstract:
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2006.
Parker, R. Gary, Committee Member ; Shapiro, Alexander, Committee Member ; Nemirovski, Arkadi, Committee Member ; Green, William, Committee Member ; Monteiro, Renato, Committee Chair.
9

Zoltan, Pap. "Projektivni postupci tipa konjugovanih gradijenata za rešavanje nelinearnih monotonih sistema velikih dimenzija." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=110614&source=NDLTD&language=en.

Abstract:
Projection-based CG methods for solving large-scale nonlinear monotone systems are considered in this thesis. These methods combine the hyperplane projection technique with conjugate gradient (CG) search directions. The hyperplane projection method is suitable for monotone systems because it enables simple globalization, while CG directions are efficient for large-scale nonlinear systems due to their low memory requirements. Projection-based CG methods are function-value based: they use neither a merit function nor derivatives, so they are also suitable for solving nonsmooth monotone systems. The global convergence of these methods is ensured without additional regularity assumptions, so they can also be used for solving systems with singular solutions. Three new three-term search directions of Fletcher-Reeves type and two new hybrid search directions of Hu-Storey type are defined. A PCG algorithm with five new CG-type directions is proposed and its global convergence is established. The numerical performance of the methods is tested on relevant examples from the literature. The results show that the new projection-based CG methods have good computational performance; they are efficient, robust and competitive with other methods.
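The hyperplane-projection globalization described above can be sketched as follows, using the plain residual direction d = -F(x) rather than the conjugate-gradient-type directions studied in the thesis; the line-search constants are illustrative, and the scheme follows the classic Solodov-Svaiter pattern.

```python
import numpy as np

def projection_method(F, x0, beta=0.5, sigma=1e-4, iters=500, tol=1e-8):
    """Hyperplane-projection scheme for a monotone system F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx
        # Line search: find t with -F(x + t*d)@d >= sigma * t * ||d||^2,
        # so the hyperplane through z separates x from the solution set.
        t = 1.0
        while True:
            z = x + t * d
            if -F(z) @ d >= sigma * t * (d @ d):
                break
            t *= beta
            if t < 1e-12:
                break
        Fz = F(z)
        # Project x onto the separating hyperplane {y : Fz@(y - z) = 0}.
        x = x - (Fz @ (x - z)) / (Fz @ Fz + 1e-32) * Fz
    return x
```

By monotonicity, every solution lies on the far side of the hyperplane, so each projection step cannot move away from the solution set, which is what makes the globalization work without regularity assumptions.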
10

Beddiaf, Salah. "Continuous steepest descent path for traversing non-convex regions." Thesis, University of Hertfordshire, 2016. http://hdl.handle.net/2299/17175.

Abstract:
In this thesis, we investigate methods of finding a local minimum for unconstrained problems of non-convex functions with n variables, by following the solution curve of a system of ordinary differential equations. The motivation for this was the fact that existing methods (e.g. those based on Newton methods with line search) sometimes terminate at a non-stationary point when applied to functions f(x) that do not have a positive-definite Hessian ∇²f(x) for all x. Even when methods terminate at a stationary point it could be a saddle or maximum rather than a minimum. The only method which makes intuitive sense in a non-convex region is the trust region approach, where we seek a step which minimises a quadratic model subject to a restriction on the two-norm of the step size. This gives a well-defined search direction but at the expense of a costly evaluation. The algorithms derived in this thesis are gradient-based methods which require systems of equations to be solved at each step but which do not use a line search in the usual sense. Progress along the Continuous Steepest Descent Path (CSDP) is governed both by the decrease in the function value and by measures of accuracy of a local quadratic model. Numerical results on specially constructed test problems and a number of standard test problems from CUTEr [38] show that the approaches we have considered are more promising when compared with routines in the optimization toolbox of MATLAB [46], namely the trust region method and the quasi-Newton method. In particular, they perform well in comparison with the superficially similar gradient-flow method proposed by Behrman [7].
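The continuous steepest descent path in the abstract is the solution of the gradient-flow ODE dx/dt = -∇f(x). A minimal stand-in for the thesis's more careful ODE-following schemes is explicit Euler integration with a fixed step; the step size and iteration budget below are hypothetical.

```python
import numpy as np

def gradient_flow(grad, x0, dt=0.01, steps=5000, tol=1e-8):
    """Follow the continuous steepest-descent path dx/dt = -grad f(x)
    with explicit Euler steps, stopping when the gradient is small.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - dt * g   # one Euler step along the flow
    return x
```

Unlike a line-search iteration, the trajectory here is a discretization of a smooth curve, which is what allows such methods to traverse non-convex regions without committing to a full Newton step.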

Books on the topic "Gradient search"

1

Savage, Michael. SEEK - a Fortran optimization program using a feasible directions gradient search. Cleveland, Ohio: Lewis Research Center, 1995.

2

Savage, M. SEEK-a Fortran optimization program using a feasible directions gradient search. [Washington, DC]: National Aeronautics and Space Administration, Office of Management, Scientific and Technical Information Program, 1995.

3

Brenek, Paul. Improvement of random search optimization procedures by incorporating the gradient. 1986.

4

United States. National Aeronautics and Space Administration. Scientific and Technical Information Program, ed. SEEK-a Fortran optimization program using a feasible directions gradient search. [Washington, DC]: National Aeronautics and Space Administration, Office of Management, Scientific and Technical Information Program, 1995.


Book chapters on the topic "Gradient search"

1

Hong, Chin-Ming, Chih-Ming Chen, and Heng-Kang Fan. "A New Gradient-Based Search Method: Grey-Gradient Search Method." In Multiple Approaches to Intelligent Systems, 185–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-540-48765-4_22.

2

Sokolovska, Nataliya. "Sparse Gradient-Based Direct Policy Search." In Neural Information Processing, 212–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34478-7_27.

3

Tian, Yuan, Yong-quan Liang, and Yan-jun Peng. "Cuckoo Search Algorithm Based on Stochastic Gradient Descent." In Proceedings of the Fifth Euro-China Conference on Intelligent Data Analysis and Applications, 90–99. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03766-6_10.

4

Peng, Yiming, Gang Chen, Mengjie Zhang, and Shaoning Pang. "Generalized Compatible Function Approximation for Policy Gradient Search." In Neural Information Processing, 615–22. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46687-3_68.

5

Salomon, Ralf. "Accelerating the evolutionary-gradient-search procedure: Individual step sizes." In Lecture Notes in Computer Science, 408–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0056883.

6

Salomon, Ralf, and Dirk V. Arnold. "The Evolutionary-Gradient-Search Procedure in Theory and Practice." In Nature-Inspired Algorithms for Optimisation, 77–101. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00267-0_3.

7

Trillos, Nicolás García, Félix Morales, and Javier Morales. "Traditional and Accelerated Gradient Descent for Neural Architecture Search." In Lecture Notes in Computer Science, 507–14. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80209-7_55.

8

Lara, Adriana, Sergio Alvarado, Shaul Salomon, Gideon Avigad, Carlos A. Coello Coello, and Oliver Schütze. "The Gradient Free Directed Search Method as Local Search within Multi-Objective Evolutionary Algorithms." In Advances in Intelligent Systems and Computing, 153–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-31519-0_10.

9

Yuan, Gonglin, Wujie Hu, and Zhou Sheng. "A Conjugate Gradient Algorithm with Yuan-Wei-Lu Line Search." In Cloud Computing and Security, 738–46. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68542-7_64.

10

Salman, K. H., Arun K. Pujari, Vikas Kumar, and Sowmini Devi Veeramachaneni. "Combining Swarm with Gradient Search for Maximum Margin Matrix Factorization." In PRICAI 2016: Trends in Artificial Intelligence, 167–79. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42911-3_14.


Conference papers on the topic "Gradient search"

1

Li, Boyang, Yew-Soon Ong, Minh Nghia Le, and Chi Keong Goh. "Memetic Gradient Search." In 2008 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2008. http://dx.doi.org/10.1109/cec.2008.4631187.

2

Sun, Yi, Daan Wierstra, Tom Schaul, and Jürgen Schmidhuber. "Stochastic search using the natural gradient." In Proceedings of the 26th Annual International Conference on Machine Learning. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1553374.1553522.

3

Tadic, Vladislav B., and A. Doucet. "Asymptotic bias of stochastic gradient search." In 2011 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC 2011). IEEE, 2011. http://dx.doi.org/10.1109/cdc.2011.6160812.

4

Chandramouli, K., K. J. Prabuchandran, D. Sai Koti Reddy, and Shalabh Bhatnagar. "Generalized Deterministic Perturbations For Stochastic Gradient Search." In 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018. http://dx.doi.org/10.1109/cdc.2018.8619736.

5

Zhou, Enlu, Shalabh Bhatnagar, and Xi Chen. "Simulation optimization via gradient-based stochastic search." In 2014 Winter Simulation Conference - (WSC 2014). IEEE, 2014. http://dx.doi.org/10.1109/wsc.2014.7020213.

6

Zhou, Enlu, and Jiaqiao Hu. "Combining gradient-based optimization with stochastic search." In 2012 Winter Simulation Conference - (WSC 2012). IEEE, 2012. http://dx.doi.org/10.1109/wsc.2012.6465032.

7

Li, Haobin, Loo Hay Lee, and Ek Peng Chew. "Optimization via gradient oriented polar random search." In 2012 Winter Simulation Conference - (WSC 2012). IEEE, 2012. http://dx.doi.org/10.1109/wsc.2012.6465039.

8

Xia, Jun-bo. "Template matching algorithm based on gradient search." In 2014 International Conference on Mechatronics and Control (ICMC). IEEE, 2014. http://dx.doi.org/10.1109/icmc.2014.7231800.

9

Santucci, Valentino, Josu Ceberio, and Marco Baioletti. "Gradient search in the space of permutations." In GECCO '20: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3377929.3398094.

10

Martins, Almeida, and Silva. "Coordinated maneuver for gradient search using multiple AUVs." In Oceans 2003. Celebrating the Past ... Teaming Toward the Future. IEEE, 2003. http://dx.doi.org/10.1109/oceans.2003.178583.


Reports on the topic "Gradient search"

1

Homaifar, Abdollah, Albert Esterline, and Bahram Kimiaghalam. Hybrid Projected Gradient-Evolutionary Search Algorithm for Mixed Integer Nonlinear Optimization Problems. Fort Belvoir, VA: Defense Technical Information Center, April 2005. http://dx.doi.org/10.21236/ada455904.
