Academic literature on the topic "Steplength selection"

Create a proper reference in APA, MLA, Chicago, Harvard, and several other citation styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Steplength selection".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Steplength selection"

1. Curtis, Frank, and Jorge Nocedal. "Steplength selection in interior-point methods for quadratic programming." Applied Mathematics Letters 20, no. 5 (May 2007): 516–23. http://dx.doi.org/10.1016/j.aml.2006.05.020.

2. di Serafino, Daniela, Valeria Ruggiero, Gerardo Toraldo, and Luca Zanni. "On the steplength selection in gradient methods for unconstrained optimization." Applied Mathematics and Computation 318 (February 2018): 176–95. http://dx.doi.org/10.1016/j.amc.2017.07.037.

3. Crisci, Serena, Valeria Ruggiero, and Luca Zanni. "Steplength selection in gradient projection methods for box-constrained quadratic programs." Applied Mathematics and Computation 356 (September 2019): 312–27. http://dx.doi.org/10.1016/j.amc.2019.03.039.

4. Gunzburger, Max D., and Janet S. Peterson. "Predictor and steplength selection in continuation methods for the Navier-Stokes equations." Computers & Mathematics with Applications 22, no. 8 (1991): 73–81. http://dx.doi.org/10.1016/0898-1221(91)90015-v.

5. Porta, Federica, Marco Prato, and Luca Zanni. "A New Steplength Selection for Scaled Gradient Methods with Application to Image Deblurring." Journal of Scientific Computing 65, no. 3 (January 30, 2015): 895–919. http://dx.doi.org/10.1007/s10915-015-9991-9.

6. Loris, I., M. Bertero, C. De Mol, R. Zanella, and L. Zanni. "Accelerating gradient projection methods for ℓ1-constrained signal recovery by steplength selection rules." Applied and Computational Harmonic Analysis 27, no. 2 (September 2009): 247–54. http://dx.doi.org/10.1016/j.acha.2009.02.003.

7. Franchini, Giorgia, Valeria Ruggiero, Federica Porta, and Luca Zanni. "Neural architecture search via standard machine learning methodologies." Mathematics in Engineering 5, no. 1 (2022): 1–21. http://dx.doi.org/10.3934/mine.2023012.

Abstract:
In the context of deep learning, the most expensive computational phase is the full training of the learning methodology. Indeed, its effectiveness depends on the choice of proper values for the so-called hyperparameters, namely the parameters that are not trained during the learning process, and such a selection typically requires an extensive numerical investigation with the execution of a significant number of experimental trials. The aim of the paper is to investigate how to choose the hyperparameters related to both the architecture of a Convolutional Neural Network (CNN), such as the number of filters and the kernel size at each convolutional layer, and the optimisation algorithm employed to train the CNN itself, such as the steplength, the mini-batch size and the potential adoption of variance reduction techniques. The main contribution of the paper consists in introducing an automatic Machine Learning technique to set these hyperparameters in such a way that a measure of the CNN performance can be optimised. In particular, given a set of values for the hyperparameters, we propose a low-cost strategy to predict the performance of the corresponding CNN, based on its behavior after only a few steps of the training process. To achieve this goal, we generate a dataset whose input samples are provided by a limited number of hyperparameter configurations, together with the corresponding CNN measures of performance obtained with only a few steps of the CNN training process, while the label of each input sample is the performance corresponding to a complete training of the CNN. This dataset is used as a training set for Support Vector Machines for Regression and/or Random Forest techniques to predict the performance of the considered learning methodology, given its performance at the initial iterations of its learning process. Furthermore, by a probabilistic exploration of the hyperparameter space, we are able to find, at quite a low cost, the setting of the CNN hyperparameters which provides the optimal performance. The results of an extensive numerical experimentation, carried out on CNNs, together with the use of our performance predictor with NAS-Bench-101, highlight how the proposed methodology for the hyperparameter setting appears very promising.
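
The core idea can be illustrated with a minimal sketch (this is not the authors' code: the feature layout, the data, and the choice of scikit-learn's SVR are placeholder assumptions standing in for the regression machinery the abstract mentions):

```python
# Minimal sketch: predict the final CNN performance from a hyperparameter
# configuration plus its early-training accuracy. Everything below is
# synthetic and illustrative.
import numpy as np
from sklearn.svm import SVR  # a Random Forest regressor would work similarly

rng = np.random.default_rng(0)

# Each input row: [steplength, mini-batch size, #filters, kernel size,
# accuracy after a few training steps] -- all synthetic here.
X = rng.random((200, 5))
# Label: accuracy after a complete training run (synthetic proxy).
y = 0.8 * X[:, -1] + 0.1 * rng.random(200)

predictor = SVR(kernel="rbf").fit(X, y)

# Score a new hyperparameter configuration from its early behaviour only,
# avoiding the cost of a full training run.
candidate = rng.random((1, 5))
print("predicted final accuracy:", predictor.predict(candidate)[0])
```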
8. Franchini, Giorgia, Valeria Ruggiero, and Luca Zanni. "Ritz-like values in steplength selections for stochastic gradient methods." Soft Computing 24, no. 23 (August 18, 2020): 17573–88. http://dx.doi.org/10.1007/s00500-020-05219-6.

9. Pellegrini, Riccardo, Andrea Serani, Giampaolo Liuzzi, Francesco Rinaldi, Stefano Lucidi, and Matteo Diez. "A Derivative-Free Line-Search Algorithm for Simulation-Driven Design Optimization Using Multi-Fidelity Computations." Mathematics 10, no. 3 (February 2, 2022): 481. http://dx.doi.org/10.3390/math10030481.

Abstract:
The paper presents a multi-fidelity extension of a local line-search-based derivative-free algorithm for nonsmooth constrained optimization (MF-CS-DFN). The method is intended for use in the simulation-driven design optimization (SDDO) context, where multi-fidelity computations are used to evaluate the objective function. The proposed algorithm starts using low-fidelity evaluations and automatically switches to higher-fidelity evaluations based on the line-search step length. The multi-fidelity algorithm is driven by suitably defined threshold and initialization values for the step length, associated with each fidelity level. These are selected to increase the accuracy of the objective evaluations while progressing to the optimal solution. The method is demonstrated on a multi-fidelity SDDO benchmark, namely the hull-form optimization of a destroyer-type vessel, aiming at resistance minimization in calm water at fixed speed. Numerical simulations are based on a linear potential flow solver, where seven fidelity levels are used, selecting systematically refined computational grids for the hull and the free surface. The method's performance is assessed by varying the steplength threshold and initialization approach. Specifically, four MF-CS-DFN setups are tested, and the optimization results are compared to those of its single-fidelity (high-fidelity-based) counterpart (CS-DFN). The MF-CS-DFN results are promising, achieving a resistance reduction of about 12% and showing a faster convergence than CS-DFN. Specifically, the MF extension is between one and two orders of magnitude faster than the original single-fidelity algorithm. For low computational budgets, MF-CS-DFN optimized designs exhibit a resistance that is about 6% lower than that achieved by CS-DFN.
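
The switching logic can be sketched as follows. This is a simplified coordinate-search stand-in, not the authors' MF-CS-DFN implementation: the actual line search, constraint handling, and parameters differ, and `f_levels`, `thresholds`, and `init_steps` are hypothetical names for the per-fidelity evaluators and steplength settings the abstract describes:

```python
# Sketch: start on the cheapest model and move to a finer one once the
# line-search steplength contracts below that fidelity's threshold.
def mf_coordinate_search(f_levels, thresholds, init_steps, x0, budget=1000):
    level, x = 0, list(x0)
    step = init_steps[level]
    fx = f_levels[level](x)
    for _ in range(budget):
        improved = False
        for i in range(len(x)):              # poll along coordinate directions
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = f_levels[level](trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5                      # contract the steplength
            if step < thresholds[level]:     # threshold reached: refine fidelity
                if level + 1 == len(f_levels):
                    break                    # highest fidelity exhausted
                level += 1
                step = init_steps[level]
                fx = f_levels[level](x)      # re-evaluate on the finer model
    return x, fx, level
```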
10. Crisci, Serena, Federica Porta, Valeria Ruggiero, and Luca Zanni. "Hybrid limited memory gradient projection methods for box-constrained optimization problems." Computational Optimization and Applications, September 4, 2022. http://dx.doi.org/10.1007/s10589-022-00409-4.

Abstract:
Gradient projection methods represent effective tools for solving large-scale constrained optimization problems thanks to their simple implementation and low computational cost per iteration. Despite these good properties, a slow convergence rate can affect gradient projection schemes, especially when highly accurate solutions are needed. A strategy to mitigate this drawback consists in properly selecting the values for the steplength along the negative gradient. In this paper, we consider the class of gradient projection methods with line search along the projected arc for box-constrained minimization problems and we analyse different strategies to define the steplength. It is well known in the literature that steplength selection rules able to approximate, at each iteration, the eigenvalues of the inverse of a suitable submatrix of the Hessian of the objective function can improve the performance of gradient projection methods. In this perspective, we propose an automatic hybrid steplength selection technique that employs a proper alternation of standard Barzilai–Borwein rules, when the final active set is not well approximated, and a generalized limited memory strategy based on the Ritz-like values of the Hessian matrix restricted to the inactive constraints, when the final active set is reached. Numerical experiments on quadratic and non-quadratic test problems show the effectiveness of the proposed steplength scheme.
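
For reference, the two standard Barzilai–Borwein rules that such hybrid schemes alternate (these are the classical definitions from the literature, not reproduced in the abstract) are, with s_{k-1} = x_k - x_{k-1} and y_{k-1} = ∇f(x_k) - ∇f(x_{k-1}):

```latex
\alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
\qquad
\alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}}.
```

BB1 tends to yield larger steps and BB2 smaller ones, which is why alternating between them is an effective strategy.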

Theses on the topic "Steplength selection"

1. De Asmundis, Roberta. "New gradient methods: spectral properties and steplength selection." Doctoral thesis, 2014. http://hdl.handle.net/11573/918064.

2. Sgattoni, Cristina. "Solving systems of nonlinear equations via spectral residual methods." Doctoral thesis, 2021. http://hdl.handle.net/2158/1238325.

Abstract:
This thesis addresses the numerical solution of systems of nonlinear equations via spectral residual methods. Spectral residual methods are iterative procedures that use the residual vector as search direction and a spectral steplength, i.e., a steplength related to the spectrum of the average matrices associated with the Jacobian of the system. Such procedures are widely studied and employed since they are derivative-free and have a low cost per iteration. The first aim of the work is to analyze the properties of the spectral residual steplengths and study how they affect the performance of the methods. This aim is addressed from both a theoretical and an experimental point of view. The main contributions in this direction are: the theoretical analysis of the steplengths proposed in the literature and of their impact on the methods' behaviour; and the analysis of the performance of spectral methods under various rules for updating the steplengths. We propose and extensively test different steplength strategies. Rules based on adaptive strategies that suitably combine small and large steplengths prove far more effective than rules based on static choices of the steplength. Numerical experiments are conducted on sequences of nonlinear systems arising from rolling contact models, which play a central role in many important applications, such as rolling bearings and wheel-rail interaction. Solving these models gives rise to sequences consisting of a large number of medium-size nonlinear systems, which represent a relevant benchmark test set for the purpose of the thesis. The second purpose of the thesis is to propose a variant of the derivative-free spectral residual method used in the first part and obtain a general scheme that is globally convergent under more general conditions. The robustness of the new method is potentially improved with respect to the previous version. Numerical experiments are conducted both on the problems arising in rolling contact models and on a set of problems commonly used for testing solvers for nonlinear systems.
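
A bare-bones spectral residual iteration, in the spirit of the methods studied here, looks as follows. This is a sketch only: it omits the nonmonotone line search and the handling of negative spectral coefficients that give the actual methods their global convergence, and the safeguards are illustrative:

```python
import numpy as np

def spectral_residual(F, x0, tol=1e-8, max_iter=500):
    """Sketch of a spectral residual iteration for F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    sigma = 1.0                                  # initial spectral steplength
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        x_new = x - sigma * Fx                   # residual vector as search direction
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        sty = s @ y
        # Spectral coefficient: Rayleigh-quotient information on the average
        # Jacobian over the step (cf. the Barzilai-Borwein rules).
        sigma = (s @ s) / sty if sty > 1e-30 else 1.0
        sigma = min(max(sigma, 1e-10), 1e6)      # crude safeguard
        x, Fx = x_new, F_new
    return x, Fx

# Example: linear residual F(x) = A x - b with A symmetric positive definite.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
sol, res = spectral_residual(lambda v: A @ v - b, np.zeros(2))
print(sol, np.linalg.norm(res))                  # approximately [0.2, 0.4]
```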

Book chapters on the topic "Steplength selection"

1. Franchini, Giorgia, Valeria Ruggiero, and Luca Zanni. "On the Steplength Selection in Stochastic Gradient Methods." In Lecture Notes in Computer Science, 186–97. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-39081-5_17.

2. Franchini, Giorgia, Valeria Ruggiero, and Luca Zanni. "Steplength and Mini-batch Size Selection in Stochastic Gradient Methods." In Machine Learning, Optimization, and Data Science, 259–63. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_22.

3. Franchini, Giorgia, Valeria Ruggiero, and Ilaria Trombini. "Thresholding Procedure via Barzilai-Borwein Rules for the Steplength Selection in Stochastic Gradient Methods." In Machine Learning, Optimization, and Data Science, 277–82. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95470-3_21.
