Academic literature on the topic 'Steplength selection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Steplength selection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Steplength selection"

1. Curtis, Frank, and Jorge Nocedal. "Steplength selection in interior-point methods for quadratic programming." Applied Mathematics Letters 20, no. 5 (May 2007): 516–23. http://dx.doi.org/10.1016/j.aml.2006.05.020.

2. di Serafino, Daniela, Valeria Ruggiero, Gerardo Toraldo, and Luca Zanni. "On the steplength selection in gradient methods for unconstrained optimization." Applied Mathematics and Computation 318 (February 2018): 176–95. http://dx.doi.org/10.1016/j.amc.2017.07.037.

3. Crisci, Serena, Valeria Ruggiero, and Luca Zanni. "Steplength selection in gradient projection methods for box-constrained quadratic programs." Applied Mathematics and Computation 356 (September 2019): 312–27. http://dx.doi.org/10.1016/j.amc.2019.03.039.

4. Gunzburger, Max D., and Janet S. Peterson. "Predictor and steplength selection in continuation methods for the Navier-Stokes equations." Computers & Mathematics with Applications 22, no. 8 (1991): 73–81. http://dx.doi.org/10.1016/0898-1221(91)90015-v.

5. Porta, Federica, Marco Prato, and Luca Zanni. "A New Steplength Selection for Scaled Gradient Methods with Application to Image Deblurring." Journal of Scientific Computing 65, no. 3 (January 30, 2015): 895–919. http://dx.doi.org/10.1007/s10915-015-9991-9.

6. Loris, I., M. Bertero, C. De Mol, R. Zanella, and L. Zanni. "Accelerating gradient projection methods for ℓ1-constrained signal recovery by steplength selection rules." Applied and Computational Harmonic Analysis 27, no. 2 (September 2009): 247–54. http://dx.doi.org/10.1016/j.acha.2009.02.003.

7. Franchini, Giorgia, Valeria Ruggiero, Federica Porta, and Luca Zanni. "Neural architecture search via standard machine learning methodologies." Mathematics in Engineering 5, no. 1 (2022): 1–21. http://dx.doi.org/10.3934/mine.2023012.

Abstract:
In the context of deep learning, the most expensive computational phase is the full training of the learning methodology. Indeed, its effectiveness depends on the choice of proper values for the so-called hyperparameters, namely the parameters that are not trained during the learning process, and such a selection typically requires an extensive numerical investigation with the execution of a significant number of experimental trials. The aim of the paper is to investigate how to choose the hyperparameters related to both the architecture of a Convolutional Neural Network (CNN), such as the number of filters and the kernel size at each convolutional layer, and the optimisation algorithm employed to train the CNN itself, such as the steplength, the mini-batch size and the potential adoption of variance reduction techniques. The main contribution of the paper consists in introducing an automatic Machine Learning technique to set these hyperparameters in such a way that a measure of the CNN performance can be optimised. In particular, given a set of values for the hyperparameters, we propose a low-cost strategy to predict the performance of the corresponding CNN, based on its behaviour after only a few steps of the training process. To achieve this goal, we generate a dataset whose input samples are provided by a limited number of hyperparameter configurations together with the corresponding CNN measures of performance obtained with only a few steps of the CNN training process, while the label of each input sample is the performance corresponding to a complete training of the CNN. This dataset is used as a training set for Support Vector Machines for Regression and/or Random Forest techniques to predict the performance of the considered learning methodology, given its performance at the initial iterations of its learning process. Furthermore, by a probabilistic exploration of the hyperparameter space, we are able to find, at quite low cost, the setting of the CNN hyperparameters which provides the optimal performance. The results of an extensive numerical experimentation, carried out on CNNs, together with the use of our performance predictor with NAS-Bench-101, highlight how the proposed methodology for the hyperparameter setting appears very promising.
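To illustrate the core idea of this paper (predicting the fully trained performance of a network from measurements taken after only a few training steps), here is a minimal regression sketch using scikit-learn. The dataset is synthetic, and the feature choices (steplength, mini-batch size, number of filters, early accuracy) are illustrative assumptions rather than the authors' actual pipeline or benchmark data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical dataset: each row is a hyperparameter configuration
# (steplength, mini-batch size, number of filters) together with the
# validation accuracy measured after only a few training steps.
n_configs = 200
steplength = rng.uniform(1e-4, 1e-1, n_configs)
batch_size = rng.integers(16, 257, n_configs)
n_filters = rng.integers(8, 129, n_configs)
early_accuracy = rng.uniform(0.2, 0.6, n_configs)
X = np.column_stack([steplength, batch_size, n_filters, early_accuracy])

# Synthetic label: the accuracy after full training, correlated here with
# the early measurement so that the regression has something to learn.
y = np.clip(early_accuracy + 0.3 + 0.05 * rng.standard_normal(n_configs), 0.0, 1.0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the two predictors mentioned in the abstract (SVR, Random Forest)
# to map a configuration plus its early performance to its final performance.
for model in (SVR(C=10.0), RandomForestRegressor(random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, "test R^2:", round(model.score(X_test, y_test), 3))
```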
8. Franchini, Giorgia, Valeria Ruggiero, and Luca Zanni. "Ritz-like values in steplength selections for stochastic gradient methods." Soft Computing 24, no. 23 (August 18, 2020): 17573–88. http://dx.doi.org/10.1007/s00500-020-05219-6.

9. Pellegrini, Riccardo, Andrea Serani, Giampaolo Liuzzi, Francesco Rinaldi, Stefano Lucidi, and Matteo Diez. "A Derivative-Free Line-Search Algorithm for Simulation-Driven Design Optimization Using Multi-Fidelity Computations." Mathematics 10, no. 3 (February 2, 2022): 481. http://dx.doi.org/10.3390/math10030481.

Abstract:
The paper presents a multi-fidelity extension of a local line-search-based derivative-free algorithm for nonsmooth constrained optimization (MF-CS-DFN). The method is intended for use in the simulation-driven design optimization (SDDO) context, where multi-fidelity computations are used to evaluate the objective function. The proposed algorithm starts using low-fidelity evaluations and automatically switches to higher-fidelity evaluations based on the line-search step length. The multi-fidelity algorithm is driven by a suitably defined threshold and initialization values for the step length, which are associated with each fidelity level. These are selected to increase the accuracy of the objective evaluations while progressing towards the optimal solution. The method is demonstrated on a multi-fidelity SDDO benchmark, namely the hull-form optimization of a destroyer-type vessel aiming at resistance minimization in calm water at fixed speed. Numerical simulations are based on a linear potential flow solver, where seven fidelity levels are obtained by selecting systematically refined computational grids for the hull and the free surface. The method's performance is assessed by varying the steplength threshold and the initialization approach. Specifically, four MF-CS-DFN setups are tested, and the optimization results are compared to those of its single-fidelity (high-fidelity-based) counterpart (CS-DFN). The MF-CS-DFN results are promising, achieving a resistance reduction of about 12% and showing a faster convergence than CS-DFN. Specifically, the MF extension is between one and two orders of magnitude faster than the original single-fidelity algorithm. For low computational budgets, MF-CS-DFN optimized designs exhibit a resistance that is about 6% lower than that achieved by CS-DFN.
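The fidelity-switching logic summarized above, starting with cheap evaluations and moving to a finer model once the line-search step length falls below a per-fidelity threshold, can be sketched as follows. This is a schematic reading of the idea rather than the MF-CS-DFN algorithm itself; the toy objective, the thresholds, and the step-halving rule are all illustrative assumptions.

```python
import numpy as np

def objective(x, fidelity):
    # Stand-in multi-fidelity objective: a higher fidelity level means a
    # smaller perturbation of the exact function sum(x^2).
    noise = 10.0 ** (-float(fidelity))
    return float(np.sum(x**2) + noise * np.sin(40.0 * np.sum(x)))

def mf_coordinate_search(x, thresholds=(1e-1, 1e-3, 1e-6),
                         initial_steps=(0.5, 0.05, 0.005), max_iter=200):
    """Derivative-free coordinate search that switches to a finer fidelity
    whenever the step length drops below the current fidelity's threshold."""
    level = 1                        # coarsest fidelity
    step = initial_steps[0]
    f = objective(x, level)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):      # poll along each coordinate direction
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = objective(trial, level)
                if f_trial < f:
                    x, f, improved = trial, f_trial, True
        if not improved:
            step *= 0.5              # no descent found: shrink the step
        if step < thresholds[level - 1]:
            if level == len(thresholds):
                break                # finest fidelity exhausted: stop
            level += 1                       # switch to a finer model
            step = initial_steps[level - 1]  # re-initialise the step length
            f = objective(x, level)          # re-evaluate at the new fidelity
    return x, f, level

x_best, f_best, level = mf_coordinate_search(np.array([2.0, -1.5]))
print("solution:", x_best, "value:", f_best, "final fidelity:", level)
```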
10. Crisci, Serena, Federica Porta, Valeria Ruggiero, and Luca Zanni. "Hybrid limited memory gradient projection methods for box-constrained optimization problems." Computational Optimization and Applications, September 4, 2022. http://dx.doi.org/10.1007/s10589-022-00409-4.

Abstract:
Gradient projection methods represent effective tools for solving large-scale constrained optimization problems thanks to their simple implementation and low computational cost per iteration. Despite these good properties, a slow convergence rate can affect gradient projection schemes, especially when highly accurate solutions are needed. A strategy to mitigate this drawback consists in properly selecting the values for the steplength along the negative gradient. In this paper, we consider the class of gradient projection methods with line search along the projected arc for box-constrained minimization problems and we analyse different strategies to define the steplength. It is well known in the literature that steplength selection rules able to approximate, at each iteration, the eigenvalues of the inverse of a suitable submatrix of the Hessian of the objective function can improve the performance of gradient projection methods. In this perspective, we propose an automatic hybrid steplength selection technique that employs a proper alternation of standard Barzilai–Borwein rules, when the final active set is not well approximated, and a generalized limited memory strategy based on the Ritz-like values of the Hessian matrix restricted to the inactive constraints, when the final active set is reached. Numerical experiments on quadratic and non-quadratic test problems show the effectiveness of the proposed steplength scheme.
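The standard Barzilai-Borwein rules alternated in the proposed hybrid scheme have a compact closed form: with s = x_k - x_{k-1} and y = grad f(x_k) - grad f(x_{k-1}), the BB1 rule sets alpha = (s's)/(s'y) and the BB2 rule sets alpha = (s'y)/(y'y). The sketch below applies alternated, safeguarded BB steps within a plain gradient projection iteration on a box-constrained quadratic; the paper's line search along the projected arc and its limited memory Ritz-value phase are omitted, so this illustrates the BB ingredient only.

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi] (componentwise clipping).
    return np.minimum(np.maximum(x, lo), hi)

def gp_bb(A, b, lo, hi, x0, max_iter=500, tol=1e-8,
          alpha_min=1e-10, alpha_max=1e10):
    """Gradient projection for min 0.5*x'Ax - b'x over the box [lo, hi],
    with alternated, safeguarded Barzilai-Borwein steplengths."""
    x = project_box(x0, lo, hi)
    g = A @ x - b                    # gradient of the quadratic
    alpha = 1.0 / np.linalg.norm(g)  # heuristic initial steplength
    for k in range(max_iter):
        x_new = project_box(x - alpha * g, lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            break
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 0.0:
            # Alternate BB1 = s's/s'y (even k) and BB2 = s'y/y'y (odd k).
            alpha = (s @ s) / sy if k % 2 == 0 else sy / (y @ y)
            alpha = min(max(alpha, alpha_min), alpha_max)  # safeguard
        else:
            alpha = alpha_max        # no positive curvature: long step
        x, g = x_new, g_new
    return x

# Example: a random strictly convex quadratic restricted to the unit box.
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M.T @ M + np.eye(20)             # symmetric positive definite Hessian
b = rng.standard_normal(20)
x = gp_bb(A, b, lo=np.zeros(20), hi=np.ones(20), x0=np.full(20, 0.5))
print("stationarity:", np.linalg.norm(project_box(x - (A @ x - b), 0.0, 1.0) - x))
```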

Dissertations / Theses on the topic "Steplength selection"

1. De Asmundis, Roberta. "New gradient methods: spectral properties and steplength selection." Doctoral thesis, 2014. http://hdl.handle.net/11573/918064.

2. Sgattoni, Cristina. "Solving systems of nonlinear equations via spectral residual methods." Doctoral thesis, 2021. http://hdl.handle.net/2158/1238325.

Abstract:
This thesis addresses the numerical solution of systems of nonlinear equations via spectral residual methods. Spectral residual methods are iterative procedures that use the residual vector as search direction and a spectral steplength, i.e., a steplength related to the spectrum of the average matrices associated with the Jacobian matrix of the system. Such procedures are widely studied and employed since they are derivative-free and have a low cost per iteration. The first aim of the work is to analyze the properties of the spectral residual steplengths and study how they affect the performance of the methods. This aim is addressed both from a theoretical and an experimental point of view. The main contributions in this direction are: the theoretical analysis of the steplengths proposed in the literature and of their impact on the methods' behaviour; the analysis of the performance of spectral methods with various rules for updating the steplengths. We propose and extensively test different steplength strategies. Rules based on adaptive strategies that suitably combine small and large steplengths prove by far more effective than rules based on static choices of the steplength. Numerical experience is conducted on sequences of nonlinear systems arising from rolling contact models, which play a central role in many important applications, such as rolling bearings and wheel-rail interaction. Solving these models gives rise to sequences consisting of a large number of medium-size nonlinear systems and represents a relevant benchmark test set for the purpose of the thesis. The second purpose of the thesis is to propose a variant of the derivative-free spectral residual method used in the first part and obtain a general scheme that is globally convergent under more general conditions. The robustness of the new method is potentially improved with respect to the previous version. Numerical experiments are conducted both on the problems arising in rolling contact models and on a set of problems commonly used for testing solvers for nonlinear systems.
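As a minimal illustration of the spectral residual iteration studied in this thesis: the search direction is the residual F(x_k) itself, and the steplength is the spectral coefficient sigma = (s's)/(s'y) with s = x_k - x_{k-1} and y = F(x_k) - F(x_{k-1}). The bare sketch below omits the nonmonotone line search that makes derivative-free spectral methods globally convergent, so it is illustrative rather than robust; the test system is an assumed toy example.

```python
import numpy as np

def spectral_residual(F, x0, max_iter=500, tol=1e-10,
                      sigma_min=1e-10, sigma_max=1e10):
    """Bare spectral residual iteration x+ = x - sigma*F(x) for F(x) = 0,
    where sigma is the safeguarded spectral steplength s's / s'y."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    sigma = 1.0                              # initial steplength
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        x_new = x - sigma * Fx               # residual used as direction
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        sy = s @ y
        if abs(sy) > 1e-16:
            sigma = min(max((s @ s) / sy, sigma_min), sigma_max)
        x, Fx = x_new, F_new
    return x, Fx

# Assumed toy system with a root at the origin.
def F(x):
    return np.array([np.exp(x[0]) - 1.0 + 0.1 * x[1],
                     x[1] + 0.1 * np.sin(x[0])])

x, Fx = spectral_residual(F, x0=[0.4, -0.3])
print("residual norm:", np.linalg.norm(Fx))
```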

Book chapters on the topic "Steplength selection"

1. Franchini, Giorgia, Valeria Ruggiero, and Luca Zanni. "On the Steplength Selection in Stochastic Gradient Methods." In Lecture Notes in Computer Science, 186–97. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-39081-5_17.

2. Franchini, Giorgia, Valeria Ruggiero, and Luca Zanni. "Steplength and Mini-batch Size Selection in Stochastic Gradient Methods." In Machine Learning, Optimization, and Data Science, 259–63. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_22.

3. Franchini, Giorgia, Valeria Ruggiero, and Ilaria Trombini. "Thresholding Procedure via Barzilai-Borwein Rules for the Steplength Selection in Stochastic Gradient Methods." In Machine Learning, Optimization, and Data Science, 277–82. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95470-3_21.
