Academic literature on the topic "Hyperparameter selection and optimization"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Hyperparameter selection and optimization".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button and we will automatically generate the bibliographic reference for the chosen work in whichever citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Hyperparameter selection and optimization"

1

Sun, Yunlei, Huiquan Gong, Yucong Li, and Dalin Zhang. "Hyperparameter Importance Analysis based on N-RReliefF Algorithm". International Journal of Computers Communications & Control 14, no. 4 (August 5, 2019): 557–73. http://dx.doi.org/10.15837/ijccc.2019.4.3593.

Full text
Abstract
Hyperparameter selection has always been the key to machine learning. The Bayesian optimization algorithm has recently achieved great success, but it has certain constraints and limitations in selecting hyperparameters. In response to these constraints and limitations, this paper proposed the N-RReliefF algorithm, which can evaluate the importance of hyperparameters and the importance weights between hyperparameters. The N-RReliefF algorithm estimates the contribution of a single hyperparameter to the performance according to the influence degree of each hyperparameter on the performance and calculates the weight of importance between the hyperparameters according to the improved normalization formula. The N-RReliefF algorithm analyses the hyperparameter configuration and performance set generated by Bayesian optimization, and obtains the important hyperparameters in random forest algorithm and SVM algorithm. The experimental results verify the effectiveness of the N-RReliefF algorithm.
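As a rough, hand-rolled illustration of post-hoc hyperparameter importance analysis (a random-forest feature-importance proxy, not the N-RReliefF algorithm from the paper), one can fit a surrogate to an HPO history and inspect which hyperparameters drive performance; the configuration space and scores below are invented for the example.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical HPO history for an SVM: columns are log10(C) and log10(gamma),
# the target is validation accuracy. Replace with a real Bayesian-optimization trace.
rng = np.random.default_rng(0)
configs = rng.uniform(low=[-3, -5], high=[3, 1], size=(200, 2))
accuracy = 0.7 + 0.2 * np.exp(-(configs[:, 1] + 2) ** 2) + 0.01 * rng.normal(size=200)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(configs, accuracy)

# Feature importances act as a crude stand-in for hyperparameter importance weights.
for name, weight in zip(["log10(C)", "log10(gamma)"], surrogate.feature_importances_):
    print(f"{name}: importance {weight:.3f}")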
2

Bengio, Yoshua. "Gradient-Based Optimization of Hyperparameters". Neural Computation 12, no. 8 (August 1, 2000): 1889–900. http://dx.doi.org/10.1162/089976600300015187.

Full text
Abstract
Many machine learning algorithms can be formulated as the minimization of a training criterion that involves a hyperparameter. This hyperparameter is usually chosen by trial and error with a model selection criterion. In this article we present a methodology to optimize several hyper-parameters, based on the computation of the gradient of a model selection criterion with respect to the hyperparameters. In the case of a quadratic training criterion, the gradient of the selection criterion with respect to the hyperparameters is efficiently computed by backpropagating through a Cholesky decomposition. In the more general case, we show that the implicit function theorem can be used to derive a formula for the hyper-parameter gradient involving second derivatives of the training criterion.
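For reference, the hyperparameter gradient obtained from the implicit function theorem can be written as follows (our notation, not the paper's): if \theta^*(\lambda) minimizes the training criterion C(\theta, \lambda) and E(\lambda) = V(\theta^*(\lambda)) is the model selection criterion, then

\frac{dE}{d\lambda} = -\,\nabla_\theta V(\theta^*)^\top \left[ \nabla^2_{\theta\theta} C(\theta^*, \lambda) \right]^{-1} \nabla^2_{\theta\lambda} C(\theta^*, \lambda),

which involves only first derivatives of the selection criterion and second derivatives of the training criterion, as the abstract states.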
3

Nystrup, Peter, Erik Lindström, and Henrik Madsen. "Hyperparameter Optimization for Portfolio Selection". Journal of Financial Data Science 2, no. 3 (June 18, 2020): 40–54. http://dx.doi.org/10.3905/jfds.2020.1.035.

Full text
4

Li, Yang, Jiawei Jiang, Jinyang Gao, Yingxia Shao, Ce Zhang, and Bin Cui. "Efficient Automatic CASH via Rising Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4763–71. http://dx.doi.org/10.1609/aaai.v34i04.5910.

Full text
Abstract
The Combined Algorithm Selection and Hyperparameter optimization (CASH) is one of the most fundamental problems in Automatic Machine Learning (AutoML). The existing Bayesian optimization (BO) based solutions turn the CASH problem into a Hyperparameter Optimization (HPO) problem by combining the hyperparameters of all machine learning (ML) algorithms, and use BO methods to solve it. As a result, these methods suffer from the low-efficiency problem due to the huge hyperparameter space in CASH. To alleviate this issue, we propose the alternating optimization framework, where the HPO problem for each ML algorithm and the algorithm selection problem are optimized alternately. In this framework, the BO methods are used to solve the HPO problem for each ML algorithm separately, incorporating a much smaller hyperparameter space for BO methods. Furthermore, we introduce Rising Bandits, a CASH-oriented Multi-Armed Bandits (MAB) variant, to model the algorithm selection in CASH. This framework can take the advantages of both BO in solving the HPO problem with a relatively small hyperparameter space and the MABs in accelerating the algorithm selection. Moreover, we further develop an efficient online algorithm to solve the Rising Bandits with provably theoretical guarantees. The extensive experiments on 30 OpenML datasets demonstrate the superiority of the proposed approach over the competitive baselines.
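To make the alternating structure concrete, the sketch below lets a plain UCB bandit decide which algorithm to tune next, and each pull runs one random hyperparameter trial for that algorithm. This is only an illustration of the idea, not the Rising Bandits method or its guarantees, and the dataset and search spaces are arbitrary.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

def trial_svc():
    # One random hyperparameter configuration for the SVM arm.
    c = 10 ** rng.uniform(-2, 2)
    return cross_val_score(SVC(C=c), X, y, cv=3).mean()

def trial_rf():
    # One random hyperparameter configuration for the random-forest arm.
    depth = int(rng.integers(2, 20))
    clf = RandomForestClassifier(max_depth=depth, n_estimators=50, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

arms = [trial_svc, trial_rf]
best = [0.0, 0.0]   # best score seen so far per algorithm
counts = [0, 0]     # number of trials spent per algorithm

for t in range(1, 21):
    ucb = [best[i] + np.sqrt(2 * np.log(t) / counts[i]) if counts[i] else np.inf
           for i in range(len(arms))]
    i = int(np.argmax(ucb))   # algorithm selection step
    score = arms[i]()         # one HPO trial for the chosen algorithm
    counts[i] += 1
    best[i] = max(best[i], score)

print("best score per algorithm:", best)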
5

Li, Yuqi. "Discrete Hyperparameter Optimization Model Based on Skewed Distribution". Mathematical Problems in Engineering 2022 (August 9, 2022): 1–10. http://dx.doi.org/10.1155/2022/2835596.

Full text
Abstract
As for the machine learning algorithm, one of the main factors restricting its further large-scale application is the value of hyperparameter. Therefore, researchers have done a lot of original numerical optimization algorithms to ensure the validity of hyperparameter selection. Based on previous studies, this study innovatively puts forward a model generated using skewed distribution (gamma distribution) as hyperparameter fitting and combines the Bayesian estimation method and Gauss hypergeometric function to propose a mathematically optimal solution for discrete hyperparameter selection. The results show that under strict mathematical conditions, the value of discrete hyperparameters can be given a reasonable expected value. This heuristic parameter adjustment method based on prior conditions can improve the accuracy of some traditional models in experiments and then improve the application value of models. At the same time, through the empirical study of relevant datasets, the effectiveness of the parameter adjustment strategy proposed in this study is further proved.
6

Mohapatra, Shubhankar, Sajin Sasy, Xi He, Gautam Kamath, and Om Thakkar. "The Role of Adaptive Optimizers for Honest Private Hyperparameter Selection". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7806–13. http://dx.doi.org/10.1609/aaai.v36i7.20749.

Full text
Abstract
Hyperparameter optimization is a ubiquitous challenge in machine learning, and the performance of a trained model depends crucially upon their effective selection. While a rich set of tools exist for this purpose, there are currently no practical hyperparameter selection methods under the constraint of differential privacy (DP). We study honest hyperparameter selection for differentially private machine learning, in which the process of hyperparameter tuning is accounted for in the overall privacy budget. To this end, we i) show that standard composition tools outperform more advanced techniques in many settings, ii) empirically and theoretically demonstrate an intrinsic connection between the learning rate and clipping norm hyperparameters, iii) show that adaptive optimizers like DPAdam enjoy a significant advantage in the process of honest hyperparameter tuning, and iv) draw upon novel limiting behaviour of Adam in the DP setting to design a new and more efficient optimizer.
7

Kurnia, Deni, Muhammad Itqan Mazdadi, Dwi Kartini, Radityo Adi Nugroho, and Friska Abadi. "Seleksi Fitur dengan Particle Swarm Optimization pada Klasifikasi Penyakit Parkinson Menggunakan XGBoost". Jurnal Teknologi Informasi dan Ilmu Komputer 10, no. 5 (October 17, 2023): 1083–94. http://dx.doi.org/10.25126/jtiik.20231057252.

Full text
Abstract
Parkinson's disease is a disorder of the central nervous system that affects the motor system. Diagnosis of this disease is quite difficult because the symptoms are similar to those of other diseases. Currently, diagnosis can be done using machine learning by utilizing patient voice recordings. The features generated from the extraction of voice recordings are relatively numerous, so feature selection needs to be done to avoid deteriorating the performance of a model. In this research, Particle Swarm Optimization is used for feature selection, while XGBoost is used as the classification model. In addition, SMOTE is applied to overcome the problem of class imbalance, and hyperparameter tuning is performed on XGBoost to obtain optimal hyperparameters. The test results show that the AUC value of the model with feature selection but without SMOTE and hyperparameter tuning is 0.9325, while the model without feature selection only reaches an AUC of 0.9250. However, when both SMOTE and hyperparameter tuning are used together, feature selection still improves the model: the model with feature selection reaches an AUC of 0.9483, while the model without feature selection only reaches an AUC of 0.9366.
8

Prochukhan, Dmytro. "IMPLEMENTATION OF TECHNOLOGY FOR IMPROVING THE QUALITY OF SEGMENTATION OF MEDICAL IMAGES BY SOFTWARE ADJUSTMENT OF CONVOLUTIONAL NEURAL NETWORK HYPERPARAMETERS". Information and Telecommunication Sciences, no. 1 (June 24, 2023): 59–63. http://dx.doi.org/10.20535/2411-2976.12023.59-63.

Full text
Abstract
Background. The scientists have built effective convolutional neural networks in their research, but the issue of optimal setting of the hyperparameters of these neural networks remains insufficiently researched. Hyperparameters affect model selection. They have the greatest impact on the number and size of hidden layers. Effective selection of hyperparameters improves the speed and quality of the learning algorithm. It is also necessary to pay attention to the fact that the hyperparameters of the convolutional neural network are interconnected. That is why it is very difficult to manually select the effective values of hyperparameters, which will ensure the maximum efficiency of the convolutional neural network. It is necessary to automate the process of selecting hyperparameters, to implement a software mechanism for setting hyperparameters of a convolutional neural network. The author has successfully implemented the specified task. Objective. The purpose of the paper is to develop a technology for selecting hyperparameters of a convolutional neural network to improve the quality of segmentation of medical images.. Methods. Selection of a convolutional neural network model that will enable effective segmentation of medical images, modification of the Keras Tuner library by developing an additional function, use of convolutional neural network optimization methods and hyperparameters, compilation of the constructed model and its settings, selection of the model with the best hyperparameters. Results. A comparative analysis of U-Net and FCN-32 convolutional neural networks was carried out. U-Net was selected as the tuning network due to its higher quality and accuracy of image segmentation. Modified the Keras Tuner library by developing an additional function for tuning hyperparameters. To optimize hyperparameters, the use of the Hyperband method is justified. The optimal number of epochs was selected - 20. In the process of setting hyperparameters, the best model with an accuracy index of 0.9665 was selected. The hyperparameter start_neurons is set to 80, the hyperparameter net_depth is 5, the activation function is Mish, the hyperparameter dropout is set to False, and the hyperparameter bn_after_act is set to True. Conclusions. The convolutional neural network U-Net, which is configured with the specified parameters, has a significant potential in solving the problems of segmentation of medical images. The prospect of further research is the use of a modified network for the diagnosis of symptoms of the coronavirus disease COVID-19, pneumonia, cancer and other complex medical diseases.
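For orientation, a Hyperband search with Keras Tuner looks roughly like the sketch below. The hyperparameter names echo the abstract (start_neurons, net_depth, dropout), but the model is a small placeholder classifier rather than the paper's U-Net, and the ranges, dataset and settings are invented; it assumes tensorflow and keras_tuner are installed.

import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    start_neurons = hp.Int("start_neurons", 16, 96, step=16)
    net_depth = hp.Int("net_depth", 2, 5)
    use_dropout = hp.Boolean("dropout")
    model = tf.keras.Sequential([tf.keras.layers.Flatten(input_shape=(28, 28))])
    for d in range(net_depth):
        model.add(tf.keras.layers.Dense(start_neurons * (d + 1), activation="relu"))
        if use_dropout:
            model.add(tf.keras.layers.Dropout(0.25))
    model.add(tf.keras.layers.Dense(10, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.Hyperband(build_model, objective="val_accuracy",
                     max_epochs=20, factor=3, project_name="hp_demo")
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
tuner.search(x_train / 255.0, y_train, validation_split=0.1)
print(tuner.get_best_hyperparameters(1)[0].values)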
9

Raji, Ismail Damilola, Habeeb Bello-Salau, Ime Jarlath Umoh, Adeiza James Onumanyi, Mutiu Adesina Adegboye, and Ahmed Tijani Salawudeen. "Simple Deterministic Selection-Based Genetic Algorithm for Hyperparameter Tuning of Machine Learning Models". Applied Sciences 12, no. 3 (January 24, 2022): 1186. http://dx.doi.org/10.3390/app12031186.

Full text
Abstract
Hyperparameter tuning is a critical function necessary for the effective deployment of most machine learning (ML) algorithms. It is used to find the optimal hyperparameter settings of an ML algorithm in order to improve its overall output performance. To this effect, several optimization strategies have been studied for fine-tuning the hyperparameters of many ML algorithms, especially in the absence of model-specific information. However, because most ML training procedures need a significant amount of computational time and memory, it is frequently necessary to build an optimization technique that converges within a small number of fitness evaluations. As a result, a simple deterministic selection genetic algorithm (SDSGA) is proposed in this article. The SDSGA was realized by ensuring that both chromosomes and their accompanying fitness values in the original genetic algorithm are selected in an elitist-like way. We assessed the SDSGA over a variety of mathematical test functions. It was then used to optimize the hyperparameters of two well-known machine learning models, namely, the convolutional neural network (CNN) and the random forest (RF) algorithm, with application on the MNIST and UCI classification datasets. The SDSGA’s efficiency was compared to that of the Bayesian Optimization (BO) and three other popular metaheuristic optimization algorithms (MOAs), namely, the genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO) algorithms. The results obtained reveal that the SDSGA performed better than the other MOAs in solving 11 of the 17 known benchmark functions considered in our study. While optimizing the hyperparameters of the two ML models, it performed marginally better in terms of accuracy than the other methods while taking less time to compute.
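As a point of reference for readers unfamiliar with the baseline, a bare-bones elitist genetic algorithm on a toy benchmark (the sphere function) is sketched below; it is a generic GA, not the SDSGA proposed in the paper, and all settings are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
dim, pop_size, n_gen = 5, 30, 100
pop = rng.uniform(-5, 5, size=(pop_size, dim))

def fitness(x):
    # Sphere function: lower is better.
    return np.sum(x ** 2, axis=-1)

for _ in range(n_gen):
    scores = fitness(pop)
    elite = pop[np.argsort(scores)[: pop_size // 2]]   # keep the better half
    parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
    alpha = rng.random((pop_size, dim))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
    children += rng.normal(scale=0.1, size=children.shape)          # Gaussian mutation
    pop = children
    pop[0] = elite[0]   # elitism: the best individual always survives

print("best value found:", fitness(pop).min())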
10

Ridho, Akhmad, and Alamsyah Alamsyah. "Chaotic Whale Optimization Algorithm in Hyperparameter Selection in Convolutional Neural Network Algorithm". Journal of Advances in Information Systems and Technology 4, no. 2 (March 10, 2023): 156–69. http://dx.doi.org/10.15294/jaist.v4i2.60595.

Full text
Abstract
In several previous studies, metaheuristic methods were used to search for CNN hyperparameters. However, this research only focuses on searching for CNN hyperparameters in the type of network architecture, network structure, and initializing network weights. Therefore, in this article, we only focus on searching for CNN hyperparameters with network architecture type, and network structure with additional regularization. In this article, the CNN hyperparameter search with regularization uses CWOA on the MNIST and FashionMNIST datasets. Each dataset consists of 60,000 training data and 10,000 testing data. Then during the research, the training data was only taken 50% of the total data, then the data was divided again by 10% for data validation and the rest for training data. The results of the research on the MNIST CWOA dataset have an error value of 0.023 and an accuracy of 99.63. Then the FashionMNIST CWOA dataset has an error value of 0.23 and an accuracy of 91.36.

Theses on the topic "Hyperparameter selection and optimization"

1

Ndiaye, Eugene. "Safe optimization algorithms for variable selection and hyperparameter tuning". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT004/document.

Full text
Abstract
Massive and automatic data processing requires the development of techniques able to filter the most important information. Among these methods, those with sparse structures have been shown to improve the statistical and computational efficiency of estimators in a context of large dimension. They can often be expressed as a solution of regularized empirical risk minimization and generally lead to non differentiable optimization problems in the form of a sum of a smooth term, measuring the quality of the fit, and a non-smooth term, penalizing complex solutions. Although it has considerable advantages, such a way of including prior information, unfortunately introduces many numerical difficulties both for solving the underlying optimization problem and to calibrate the level of regularization. Solving these issues has been at the heart of this thesis. A recently introduced technique, called "Screening Rules", proposes to ignore some variables during the optimization process by benefiting from the expected sparsity of the solutions. These elimination rules are said to be safe when the procedure guarantees to not reject any variable wrongly. In this work, we propose a unified framework for identifying important structures in these convex optimization problems and we introduce the "Gap Safe Screening Rules". They allows to obtain significant gains in computational time thanks to the dimensionality reduction induced by this method. In addition, they can be easily inserted into iterative algorithms and apply to a large number of problems.To find a good compromise between minimizing risk and introducing a learning bias, (exact) homotopy continuation algorithms offer the possibility of tracking the curve of the solutions as a function of the regularization parameters. However, they exhibit numerical instabilities due to several matrix inversions and are often expensive in large dimension. Another weakness is that a worst-case analysis shows that they have exact complexities that are exponential in the dimension of the model parameter. Allowing approximated solutions makes possible to circumvent the aforementioned drawbacks by approximating the curve of the solutions. In this thesis, we revisit the approximation techniques of the regularization paths given a predefined tolerance and we propose an in-depth analysis of their complexity w.r.t. the regularity of the loss functions involved. Hence, we propose optimal algorithms as well as various strategies for exploring the parameters space. We also provide calibration method (for the regularization parameter) that enjoys globalconvergence guarantees for the minimization of the empirical risk on the validation data.Among sparse regularization methods, the Lasso is one of the most celebrated and studied. Its statistical theory suggests choosing the level of regularization according to the amount of variance in the observations, which is difficult to use in practice because the variance of the model is oftenan unknown quantity. In such case, it is possible to jointly optimize the regression parameter as well as the level of noise. These concomitant estimates, appeared in the literature under the names of Scaled Lasso or Square-Root Lasso, and provide theoretical results as sharp as that of theLasso while being independent of the actual noise level of the observations. Although presenting important advances, these methods are numerically unstable and the currently available algorithms are expensive in computation time. 
We illustrate these difficulties and we propose modifications based on smoothing techniques to increase stability of these estimators as well as to introduce a faster algorithm
2

Thornton, Chris. "Auto-WEKA : combined selection and hyperparameter optimization of supervised machine learning algorithms". Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/46177.

Full text
Abstract
Many different machine learning algorithms exist; taking into account each algorithm's set of hyperparameters, there is a staggeringly large number of possible choices. This project considers the problem of simultaneously selecting a learning algorithm and setting its hyperparameters. Previous works attack these issues separately, but this problem can be addressed by a fully automated approach, in particular by leveraging recent innovations in Bayesian optimization. The WEKA software package provides an implementation for a number of feature selection and supervised machine learning algorithms, which we use inside our automated tool, Auto-WEKA. Specifically, we examined the 3 search and 8 evaluator methods for feature selection, as well as all of the classification and regression methods, spanning 2 ensemble methods, 10 meta-methods, 27 base algorithms, and their associated hyperparameters. On 34 popular datasets from the UCI repository, the Delve repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, our method produces classification and regression performance often much better than obtained using state-of-the-art algorithm selection and hyperparameter optimization methods from the literature. Using this integrated approach, users can more effectively identify not only the best machine learning algorithm, but also the corresponding hyperparameter settings and feature selection methods appropriate for that algorithm, and hence achieve improved performance for their specific classification or regression task.
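The CASH problem tackled here is usually stated as the following joint minimization (standard notation from the AutoML literature, not a quotation from the thesis): choose an algorithm A^{(j)} from a portfolio \mathcal{A} and hyperparameters \lambda from its space \Lambda^{(j)} so as to minimize the average validation loss over k cross-validation folds,

A^{*}_{\lambda^{*}} \in \operatorname*{arg\,min}_{A^{(j)} \in \mathcal{A},\; \lambda \in \Lambda^{(j)}} \; \frac{1}{k} \sum_{i=1}^{k} \mathcal{L}\big(A^{(j)}_{\lambda},\; \mathcal{D}^{(i)}_{\mathrm{train}},\; \mathcal{D}^{(i)}_{\mathrm{valid}}\big).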
3

Bertrand, Quentin. "Hyperparameter selection for high dimensional sparse learning : application to neuroimaging". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG054.

Full text
Abstract
Due to non-invasiveness and excellent time resolution, magneto- and electroencephalography (M/EEG) have emerged as tools of choice to monitor brain activity. Reconstructing brain signals from M/EEG measurements can be cast as a high dimensional ill-posed inverse problem. Typical estimators of brain signals involve challenging optimization problems, composed of the sum of a data-fidelity term, and a sparsity promoting term. Because of their notoriously hard to tune regularization hyperparameters, sparsity-based estimators are currently not massively used by practitioners. The goal of this thesis is to provide a simple, fast, and automatic way to calibrate sparse linear models. We first study some properties of coordinate descent: model identification, local linear convergence, and acceleration. Relying on Anderson extrapolation schemes, we propose an effective way to speed up coordinate descent in theory and practice. We then explore a statistical approach to set the regularization parameter of Lasso-type problems. A closed-form formula can be derived for the optimal regularization parameter of L1 penalized linear regressions. Unfortunately, it relies on the true noise level, unknown in practice. To remove this dependency, one can resort to estimators for which the regularization parameter does not depend on the noise level. However, they require to solve challenging "nonsmooth + nonsmooth" optimization problems. We show that partial smoothing preserves their statistical properties and we propose an application to M/EEG source localization problems. Finally we investigate hyperparameter optimization, encompassing held-out or cross-validation hyperparameter selection. It requires tackling bilevel optimization with nonsmooth inner problems. Such problems are canonically solved using zeros order techniques, such as grid-search or random-search. We present an efficient technique to solve these challenging bilevel optimization problems using first-order methods
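The bilevel problem mentioned at the end of the abstract can be sketched, for a Lasso inner problem, as

\lambda^{*} \in \operatorname*{arg\,min}_{\lambda > 0} \; \mathcal{C}\big(\hat{\beta}^{(\lambda)}\big)
\quad \text{subject to} \quad
\hat{\beta}^{(\lambda)} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{p}} \; \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1,

where \mathcal{C} is a held-out or cross-validation criterion; this is a generic statement of the problem, not the thesis's exact notation.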
4

Thomas, Janek [Verfasser], and Bernd [Akademischer Betreuer] Bischl. "Gradient boosting in automatic machine learning: feature selection and hyperparameter optimization / Janek Thomas ; Betreuer: Bernd Bischl". München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2019. http://d-nb.info/1189584808/34.

Full text
5

Nakisa, Bahareh. "Emotion classification using advanced machine learning techniques applied to wearable physiological signals data". Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/129875/9/Bahareh%20Nakisa%20Thesis.pdf.

Full text
Abstract
This research contributed to the development of advanced feature selection model, hyperparameter optimization and temporal multimodal deep learning model to improve the performance of dimensional emotion recognition. This study adopts different approaches based on portable wearable physiological sensors. It identified best models for feature selection and best hyperparameter values for Long Short-Term Memory network and how to fuse multi-modal sensors efficiently for assessing emotion recognition. All methods of this thesis collectively deliver better algorithms and maximize the use of miniaturized sensors to provide an accurate measurement of emotion recognition.
6

Klein, Aaron [Verfasser], and Frank [Akademischer Betreuer] Hutter. "Efficient bayesian hyperparameter optimization". Freiburg : Universität, 2020. http://d-nb.info/1214592961/34.

Full text
7

Gousseau, Clément. "Hyperparameter Optimization for Convolutional Neural Networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272107.

Full text
Abstract
Training algorithms for artificial neural networks depend on parameters called the hyperparameters. They can have a strong influence on the trained model but are often chosen manually with trial and error experiments. This thesis, conducted at Orange Labs Lannion, presents and evaluates three algorithms that aim at solving this task: a naive approach (random search), a Bayesian approach (Tree Parzen Estimator) and an evolutionary approach (Particle Swarm Optimization). A well-known dataset for handwritten digit recognition (MNIST) is used to compare these algorithms. These algorithms are also evaluated on audio classification, which is one of the main activities in the company team where the thesis was conducted. The evolutionary algorithm (PSO) showed better results than the two other methods.
8

Lévesque, Julien-Charles. "Bayesian hyperparameter optimization : overfitting, ensembles and conditional spaces". Doctoral thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/28364.

Full text
Abstract
In this thesis, we consider the analysis and extension of Bayesian hyperparameter optimization methodology to various problems related to supervised machine learning. The contributions of the thesis are attached to 1) the overestimation of the generalization accuracy of hyperparameters and models resulting from Bayesian optimization, 2) an application of Bayesian optimization to ensemble learning, and 3) the optimization of spaces with a conditional structure such as found in automatic machine learning (AutoML) problems. Generally, machine learning algorithms have some free parameters, called hyperparameters, allowing to regulate or modify these algorithms’ behaviour. For the longest time, hyperparameters were tuned by hand or with exhaustive search algorithms. Recent work highlighted the conceptual advantages in optimizing hyperparameters with more rational methods, such as Bayesian optimization. Bayesian optimization is a very versatile framework for the optimization of unknown and non-derivable functions, grounded strongly in probabilistic modelling and uncertainty estimation, and we adopt it for the work in this thesis. We first briefly introduce Bayesian optimization with Gaussian processes (GP) and describe its application to hyperparameter optimization. Next, original contributions are presented on the dangers of overfitting during hyperparameter optimization, where the optimization ends up learning the validation folds. We show that there is indeed overfitting during the optimization of hyperparameters, even with cross-validation strategies, and that it can be reduced by methods such as a reshuffling of the training and validation splits at every iteration of the optimization. Another promising method is demonstrated in the use of a GP’s posterior mean for the selection of final hyperparameters, rather than directly returning the model with the minimal crossvalidation error. Both suggested approaches are demonstrated to deliver significant improvements in the generalization accuracy of the final selected model on a benchmark of 118 datasets. The next contributions are provided by an application of Bayesian hyperparameter optimization for ensemble learning. Stacking methods have been exploited for some time to combine multiple classifiers in a meta classifier system. Those can be applied to the end result of a Bayesian hyperparameter optimization pipeline by keeping the best classifiers and combining them at the end. Our Bayesian ensemble optimization method consists in a modification of the Bayesian optimization pipeline to search for the best hyperparameters to use for an ensemble, which is different from optimizing hyperparameters for the performance of a single model. The approach has the advantage of not requiring the training of more models than a regular Bayesian hyperparameter optimization. Experiments show the potential of the suggested approach on three different search spaces and many datasets. The last contributions are related to the optimization of more complex hyperparameter spaces, namely spaces that contain a structure of conditionality. Conditions arise naturally in hyperparameter optimization when one defines a model with multiple components – certain hyperparameters then only need to be specified if their parent component is activated. One example of such a space is the combined algorithm selection and hyperparameter optimization, now better known as AutoML, where the objective is to choose the base model and optimize its hyperparameters. 
We thus highlight techniques and propose new kernels for GPs that handle structure in such spaces in a principled way. Contributions are also supported by experimental evaluation on many datasets. Overall, the thesis regroups several works directly related to Bayesian hyperparameter optimization. The thesis showcases novel ways to apply Bayesian optimization for ensemble learning, as well as methodologies to reduce overfitting or optimize more complex spaces.
9

Nygren, Rasmus. "Evaluation of hyperparameter optimization methods for Random Forest classifiers". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301739.

Full text
Abstract
In order to create a machine learning model, one is often tasked with selecting certain hyperparameters which configure the behavior of the model. The performance of the model can vary greatly depending on how these hyperparameters are selected, thus making it relevant to investigate the effects of hyperparameter optimization on the classification accuracy of a machine learning model. In this study, we train and evaluate a Random Forest classifier whose hyperparameters are set to default values and compare its classification accuracy to another classifier whose hyperparameters are obtained through the use of the hyperparameter optimization (HPO) methods Random Search, Bayesian Optimization and Particle Swarm Optimization. This is done on three different datasets, and each HPO method is evaluated based on the classification accuracy change it induces across the datasets. We found that every HPO method yielded a total classification accuracy increase of approximately 2-3% across all datasets compared to the accuracies obtained using the default hyperparameters. However, due to limitations of time, data and computational resources, no assertions can be made as to whether the observed positive effect is generalizable at a larger scale. Instead, we could conclude that the utility of HPO methods is dependent on the dataset at hand.
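As a minimal example of one of the compared approaches, random search over Random Forest hyperparameters can be run with scikit-learn as sketched below; the search space, dataset and budget are arbitrary choices, not those used in the thesis.

from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)
param_distributions = {
    "n_estimators": randint(50, 500),
    "max_depth": randint(2, 20),
    "min_samples_split": randint(2, 11),
}
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_distributions, n_iter=30, cv=5,
                            scoring="accuracy", random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)  # compare against default hyperparameters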
10

Matosevic, Antonio. "On Bayesian optimization and its application to hyperparameter tuning". Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-74962.

Full text
Abstract
This thesis introduces the concept of Bayesian optimization, primarily used in optimizing costly black-box functions. Besides theoretical treatment of the topic, the focus of the thesis is on two numerical experiments. Firstly, different types of acquisition functions, which are the key components responsible for the performance, are tested and compared. Special emphasis is on the analysis of a so-called exploration-exploitation trade-off. Secondly, one of the most recent applications of Bayesian optimization concerns hyperparameter tuning in machine learning algorithms, where the objective function is expensive to evaluate and not given analytically. However, some results indicate that much simpler methods can give similar results. Our contribution is therefore a statistical comparison of simple random search and Bayesian optimization in the context of finding the optimal set of hyperparameters in support vector regression. It has been found that there is no significant difference in performance of these two methods.
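For a concrete instance of the acquisition functions compared in the first experiment, expected improvement for minimization under a Gaussian process posterior with mean \mu(x) and standard deviation \sigma(x) is commonly written as

\mathrm{EI}(x) = \big(f_{\min} - \mu(x)\big)\,\Phi(z) + \sigma(x)\,\varphi(z),
\qquad z = \frac{f_{\min} - \mu(x)}{\sigma(x)},

where f_{\min} is the best observation so far and \Phi, \varphi are the standard normal CDF and PDF; the balance between the two terms is one way to read the exploration-exploitation trade-off discussed above.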

Books on the topic "Hyperparameter selection and optimization"

1

Agrawal, Tanay. Hyperparameter Optimization in Machine Learning. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6579-6.

Full text
2

Zheng, Minrui. Spatially Explicit Hyperparameter Optimization for Neural Networks. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5399-5.

Full text
3

Pappalardo, Elisa, Panos M. Pardalos, and Giovanni Stracquadanio. Optimization Approaches for Solving String Selection Problems. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-9053-1.

Full text
4

Li︠a︡tkher, V. M. Wind power: Turbine design, selection, and optimization. Hoboken, New Jersey: Scrivener Publishing, Wiley, 2014.

Search full text
5

East, Donald R. Optimization technology for leach and liner selection. Littleton, CO: Society of Mining Engineers, 1987.

Search full text
6

Zheng, Maosheng, Haipeng Teng, Jie Yu, Ying Cui, and Yi Wang. Probability-Based Multi-objective Optimization for Material Selection. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-3351-6.

Full text
7

Membranes for membrane reactors: Preparation, optimization, and selection. Chichester, West Sussex: Wiley, 2011.

Search full text
8

Zheng, Maosheng, Jie Yu, Haipeng Teng, Ying Cui, and Yi Wang. Probability-Based Multi-objective Optimization for Material Selection. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-3939-8.

Full text
9

Toy, Ayhan Özgür. Route, aircraft prioritization and selection for airlift mobility optimization. Monterey, Calif: Naval Postgraduate School, 1996.

Search full text
10

S, Handen Jeffrey, ed. Industrialization of drug discovery: From target selection through lead optimization. New York: Dekker/CRC Press, 2005.

Search full text

Book chapters on the topic "Hyperparameter selection and optimization"

1

Brazdil, Pavel, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren. "Metalearning for Hyperparameter Optimization". In Metalearning, 103–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-67024-5_6.

Full text
Abstract
SummaryThis chapter describes various approaches for the hyperparameter optimization (HPO) and combined algorithm selection and hyperparameter optimization problems (CASH). It starts by presenting some basic hyperparameter optimization methods, including grid search, random search, racing strategies, successive halving and hyperband. Next, it discusses Bayesian optimization, a technique that learns from the observed performance of previously tried hyperparameter settings on the current task. This knowledge is used to build a meta-model (surrogate model) that can be used to predict which unseen configurations may work better on that task. This part includes the description sequential model-based optimization (SMBO). This chapter also covers metalearning techniques that extend the previously discussed optimization techniques with the ability to transfer knowledge across tasks. This includes techniques such as warm-starting the search, or transferring previously learned meta-models that were trained on prior (similar) tasks. A key question here is how to establish how similar prior tasks are to the new task. This can be done on the basis of past experiments, but can also exploit the information gained from recent experiments on the target task. This chapter presents an overview of some recent methods proposed in this area.
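To ground one of the basic methods the chapter lists, a schematic successive halving loop is sketched below; evaluate() is a stand-in the reader would replace with actual training under a given budget, and the initial pool size, elimination ratio and budget schedule are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def evaluate(config, budget):
    # Placeholder for "train this configuration with the given budget and return a score";
    # here it is faked with noise that shrinks as the budget grows.
    return config["quality"] + rng.normal(scale=1.0 / np.sqrt(budget))

configs = [{"quality": rng.uniform(0, 1)} for _ in range(27)]
budget = 1
while len(configs) > 1:
    scores = [evaluate(c, budget) for c in configs]
    keep = np.argsort(scores)[::-1][: max(1, len(configs) // 3)]  # keep the top third
    configs = [configs[i] for i in keep]
    budget *= 3  # give the survivors a larger budget
print("selected configuration:", configs[0])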
2

Brazdil, Pavel, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren. "Metalearning Approaches for Algorithm Selection I (Exploiting Rankings)". In Metalearning, 19–37. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-67024-5_2.

Abstract
This chapter discusses an approach to the problem of algorithm selection that exploits the performance metadata of algorithms (workflows) on prior tasks to generate recommendations for a given target dataset. The recommendations take the form of rankings of candidate algorithms. The methodology involves two phases. In the first, rankings of algorithms/workflows are elaborated on the basis of historical performance data on different datasets; these are subsequently aggregated into a single ranking (e.g., an average ranking). In the second phase, the average ranking is used to schedule tests on the target dataset with the objective of identifying the best-performing algorithm. This approach requires that an appropriate evaluation measure, such as accuracy, is set beforehand. The chapter also describes a method that builds this ranking based on a combination of accuracy and runtime, yielding good anytime performance. While this approach is rather simple, it can still provide good recommendations to the user. Although the examples in this chapter are from the classification domain, the approach can be applied to other tasks besides algorithm selection, namely hyperparameter optimization (HPO) and the combined algorithm selection and hyperparameter optimization (CASH) problem. As this approach works with discrete data, continuous hyperparameters need to be discretized first.
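
As a rough illustration of the two-phase methodology summarized above, the sketch below builds an average ranking from a toy table of historical accuracies and then schedules tests on a target dataset under a small budget. The accuracy table, the evaluate_on_target placeholder, and the budget are assumptions invented for the example, not data from the chapter.

import numpy as np

# Toy metadata: accuracy of 4 candidate algorithms on 5 prior datasets.
# Real metadata would come from historical experiments (e.g., OpenML runs).
algorithms = ["rf", "svm", "knn", "nb"]
accuracy = np.array([
    [0.91, 0.88, 0.85, 0.80],
    [0.78, 0.82, 0.75, 0.70],
    [0.95, 0.93, 0.90, 0.85],
    [0.66, 0.71, 0.69, 0.60],
    [0.84, 0.80, 0.83, 0.79],
])

# Phase 1: rank algorithms per dataset (1 = best), then average the ranks.
ranks = (-accuracy).argsort(axis=1).argsort(axis=1) + 1
average_rank = ranks.mean(axis=0)
schedule = [algorithms[i] for i in np.argsort(average_rank)]
print("test schedule on the target dataset:", schedule)

# Phase 2: follow the schedule on the target dataset under a small budget,
# keeping the best algorithm evaluated so far.
def evaluate_on_target(name):
    # Placeholder: in practice, cross-validate algorithm `name` on the target data.
    return {"rf": 0.87, "svm": 0.89, "knn": 0.81, "nb": 0.74}[name]

budget = 2
best = max(schedule[:budget], key=evaluate_on_target)
print("recommended algorithm:", best)
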
3

Goshtasbpour, Shirin, and Fernando Perez-Cruz. "Optimization of Annealed Importance Sampling Hyperparameters". In Machine Learning and Knowledge Discovery in Databases, 174–90. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26419-1_11.

Abstract
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models. Although AIS is guaranteed to provide an unbiased estimate for any set of hyperparameters, common implementations rely on simple heuristics, such as geometric-average bridging distributions between the initial and target distributions, which affect estimation performance when the computation budget is limited. In order to reduce the number of sampling iterations, we present a parametric AIS process with flexible intermediary distributions defined by a residual density with respect to the geometric mean path. Our method allows parameter sharing between annealing distributions, the use of a fixed linear schedule for discretization, and amortization of hyperparameter selection in latent variable models. We assess the performance of Optimized-Path AIS for marginal likelihood estimation of deep generative models and compare it to more computationally intensive AIS.
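
The following is a minimal, self-contained AIS sketch with the standard geometric bridging path and a fixed linear schedule, i.e., the baseline setting that this paper improves upon; it is not the authors' Optimized-Path method. The one-dimensional Gaussian target and the random-walk Metropolis transitions are assumptions chosen so that the true normalizer is known and the estimate can be checked.

import numpy as np

rng = np.random.default_rng(0)

# Initial (normalized) density: standard normal.
def log_p0(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

# Unnormalized target: Gaussian with mean 3 and std 0.5.
# Its true normalizer is sqrt(2*pi)*0.5, so the estimate can be verified.
def log_f(x):
    return -0.5 * ((x - 3.0) / 0.5) ** 2

def log_intermediate(x, beta):
    # Geometric path between p0 and the target.
    return (1.0 - beta) * log_p0(x) + beta * log_f(x)

def metropolis_step(x, beta, step=0.5):
    prop = x + step * rng.normal(size=x.shape)
    log_accept = log_intermediate(prop, beta) - log_intermediate(x, beta)
    accept = np.log(rng.uniform(size=x.shape)) < log_accept
    return np.where(accept, prop, x)

def ais_estimate(n_particles=2000, n_steps=200):
    betas = np.linspace(0.0, 1.0, n_steps + 1)   # fixed linear schedule
    x = rng.normal(size=n_particles)             # exact samples from p0
    log_w = np.zeros(n_particles)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Accumulate the importance weight, then move under the next level.
        log_w += (b - b_prev) * (log_f(x) - log_p0(x))
        for _ in range(2):
            x = metropolis_step(x, b)
    # Log-mean-exp of the weights estimates log Z of the target.
    return np.logaddexp.reduce(log_w) - np.log(n_particles)

print("AIS log Z estimate:", ais_estimate())
print("true log Z:", np.log(np.sqrt(2 * np.pi) * 0.5))
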
4

Kotthoff, Lars, Chris Thornton, Holger H. Hoos, Frank Hutter, and Kevin Leyton-Brown. "Auto-WEKA: Automatic Model Selection and Hyperparameter Optimization in WEKA". In Automated Machine Learning, 81–95. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-05318-5_4.

5

Taubert, Oskar, Marie Weiel, Daniel Coquelin, Anis Farshian, Charlotte Debus, Alexander Schug, Achim Streit, and Markus Götz. "Massively Parallel Genetic Optimization Through Asynchronous Propagation of Populations". In Lecture Notes in Computer Science, 106–24. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-32041-5_6.

Abstract
We present Propulate, an evolutionary optimization algorithm and software package for global optimization and in particular hyperparameter search. For efficient use of HPC resources, Propulate omits the synchronization after each generation that is done in conventional genetic algorithms. Instead, it steers the search with the complete population present at the time of breeding new individuals. We provide an MPI-based implementation of our algorithm, which features variants of selection, mutation, crossover, and migration and is easy to extend with custom functionality. We compare Propulate to an established optimization tool and find that it is up to three orders of magnitude faster without sacrificing solution accuracy, demonstrating the efficiency and efficacy of our lazy synchronization approach. Code and documentation are available at https://github.com/Helmholtz-AI-Energy/propulate/.
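
Propulate itself is an MPI-parallel, asynchronous implementation; as a much simpler point of reference, the sketch below shows a conventional synchronous genetic algorithm with the operators named in the abstract (tournament selection, crossover, mutation) applied to a toy two-dimensional hyperparameter space. The fitness function, bounds, and operator settings are assumptions for illustration only and do not reproduce Propulate's behavior.

import numpy as np

rng = np.random.default_rng(0)
BOUNDS = np.array([[1e-4, 1.0],    # learning rate
                   [8.0, 512.0]])  # hidden units

def fitness(ind):
    # Toy stand-in for a validation loss; real use would train a model.
    lr, units = ind
    return (np.log10(lr) + 2.5) ** 2 + ((units - 128) / 128) ** 2

def random_individual():
    return rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1])

def tournament(pop, scores, k=3):
    # Pick k individuals at random, return the fittest of them.
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmin(scores[idx])]]

def crossover(a, b):
    # Uniform crossover: each gene comes from one parent at random.
    mask = rng.uniform(size=a.shape) < 0.5
    return np.where(mask, a, b)

def mutate(ind, rate=0.2):
    noise = rng.normal(scale=0.1 * (BOUNDS[:, 1] - BOUNDS[:, 0]), size=ind.shape)
    mutated = np.where(rng.uniform(size=ind.shape) < rate, ind + noise, ind)
    return np.clip(mutated, BOUNDS[:, 0], BOUNDS[:, 1])

pop = np.array([random_individual() for _ in range(20)])
for generation in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    children = [mutate(crossover(tournament(pop, scores), tournament(pop, scores)))
                for _ in range(len(pop))]
    pop = np.array(children)

scores = np.array([fitness(ind) for ind in pop])
print("best hyperparameters:", pop[np.argmin(scores)])
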
6

Esuli, Andrea, Alessandro Fabris, Alejandro Moreo, and Fabrizio Sebastiani. "Evaluation of Quantification Algorithms". In The Information Retrieval Series, 33–54. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-20467-8_3.

Abstract
In this chapter we discuss the experimental evaluation of quantification systems. We look at evaluation measures for the various types of quantification systems (binary, single-label multiclass, multi-label multiclass, ordinal), but also at evaluation protocols for quantification, which essentially consist of ways to extract multiple testing samples, for use in quantification evaluation, from a single classification test set. The chapter ends with a discussion on how to perform model selection (i.e., hyperparameter optimization) in a quantification-specific way.
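
A common protocol of this kind draws many test samples at controlled class prevalences from a single labeled test set and scores the quantifier against the true prevalences. The sketch below illustrates the idea with a toy one-feature dataset and the simplest classify-and-count quantifier; the data, the decision threshold, and the absolute-error measure are assumptions for the example, not the book's experimental setup.

import numpy as np

rng = np.random.default_rng(0)

# Toy labeled pool: one feature, positives centered at 1, negatives at -1.
pos = rng.normal(1.0, 1.0, size=5000)
neg = rng.normal(-1.0, 1.0, size=5000)

def classify_and_count(x, threshold=0.0):
    # Simplest quantifier: classify every item, return the predicted prevalence.
    return float(np.mean(x > threshold))

def sample_at_prevalence(p, size=500):
    # Artificial-prevalence protocol: draw a test sample with prevalence p.
    n_pos = int(round(p * size))
    return np.concatenate([rng.choice(pos, n_pos),
                           rng.choice(neg, size - n_pos)])

prevalences = np.linspace(0.0, 1.0, 11)
errors = []
for p in prevalences:
    sample = sample_at_prevalence(p)
    p_hat = classify_and_count(sample)
    errors.append(abs(p_hat - p))

print("mean absolute error across prevalences:", np.mean(errors))
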
7

Ponnuru, Suchith, and Lekha S. Nair. "Feature Extraction and Selection with Hyperparameter Optimization for Mitosis Detection in Breast Histopathology Images". In Data Intelligence and Cognitive Informatics, 727–49. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-6004-8_55.

8

Guan, Ruei-Sing, Yu-Chee Tseng, Jen-Jee Chen, and Po-Tsun Kuo. "Combined Bayesian and RNN-Based Hyperparameter Optimization for Efficient Model Selection Applied for autoML". In Communications in Computer and Information Science, 86–97. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-9582-8_8.

9

Martinez-de-Pison, F. J., R. Gonzalez-Sendino, J. Ferreiro, E. Fraile, and A. Pernia-Espinoza. "GAparsimony: An R Package for Searching Parsimonious Models by Combining Hyperparameter Optimization and Feature Selection". In Lecture Notes in Computer Science, 62–73. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92639-1_6.

10

Martinez-de-Pison, Francisco Javier, Ruben Gonzalez-Sendino, Alvaro Aldama, Javier Ferreiro, and Esteban Fraile. "Hybrid Methodology Based on Bayesian Optimization and GA-PARSIMONY for Searching Parsimony Models by Combining Hyperparameter Optimization and Feature Selection". In Lecture Notes in Computer Science, 52–62. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59650-1_5.


Conference proceedings on the topic "Hyperparameter selection and optimization"

1

Izaú, Leonardo, Mariana Fortes, Vitor Ribeiro, Celso Marques, Carla Oliveira, Eduardo Bezerra, Fabio Porto, Rebecca Salles, and Eduardo Ogasawara. "Towards Robust Cluster-Based Hyperparameter Optimization". In Simpósio Brasileiro de Banco de Dados. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/sbbd.2022.224330.

Abstract
Hyperparameter optimization is a fundamental step in machine learning pipelines, since it can influence the predictive performance of the resulting models. However, the setup generally selected by classical hyperparameter optimization, which is based on minimizing an objective function, may not be robust to overfitting. This work proposes CHyper, a novel clustering-based approach to hyperparameter selection. CHyper derives a candidate cluster of close or similar hyperparameters with low prediction errors on the validation dataset; the hyperparameters chosen are likely to produce models that generalize the inherent behavior of the data. CHyper was evaluated with two different clustering techniques, namely k-means and spectral clustering, in the context of time series prediction of annual fertilizer consumption. Complementing the minimization of an objective function, cluster-based hyperparameter selection achieved robustness to negative overfitting effects and contributed to lowering the generalization error.
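
CHyper's exact clustering and selection rules are not spelled out in this abstract, so the sketch below shows just one plausible reading of the idea: keep the low-error hyperparameter configurations, cluster them with k-means, and return a configuration near the center of the largest cluster. The synthetic configurations, the error model, the quantile cut-off, and the use of scikit-learn's KMeans are all assumptions for illustration, not the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Candidate configurations (e.g., from random search) and their validation errors.
# Both are toy stand-ins; a real run would produce these while tuning a model.
configs = rng.uniform(0.0, 1.0, size=(200, 2))   # two scaled hyperparameters
val_error = 0.2 + (configs[:, 0] - 0.3) ** 2 + 0.05 * rng.normal(size=200)

# Keep only the low-error configurations, then cluster them.
good = configs[val_error < np.quantile(val_error, 0.2)]
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(good)

# Choose the largest cluster of similar good configurations and
# return the member closest to its centroid.
labels, counts = np.unique(kmeans.labels_, return_counts=True)
biggest = labels[np.argmax(counts)]
members = good[kmeans.labels_ == biggest]
center = kmeans.cluster_centers_[biggest]
chosen = members[np.argmin(np.linalg.norm(members - center, axis=1))]
print("selected hyperparameters:", chosen)
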
2

Takenaga, Shintaro, Yoshihiko Ozaki, and Masaki Onishi. "Dynamic Fidelity Selection for Hyperparameter Optimization". In GECCO '23 Companion: Companion Conference on Genetic and Evolutionary Computation. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583133.3596320.

3

Owoyele, Opeoluwa, Pinaki Pal, and Alvaro Vidal Torreira. "An Automated Machine Learning-Genetic Algorithm (AutoML-GA) Framework With Active Learning for Design Optimization". In ASME 2020 Internal Combustion Engine Division Fall Technical Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/icef2020-3000.

Abstract
The use of machine learning (ML)-based surrogate models is a promising technique to significantly accelerate simulation-based design optimization of IC engines, due to the high computational cost of running computational fluid dynamics (CFD) simulations. However, surrogate-based optimization for IC engine applications suffers from two main issues. First, training ML models requires hyperparameter selection, often involving trial and error combined with domain expertise. The second issue is that the data required to train these models is often unknown a priori. In this work, we present an automated hyperparameter selection technique coupled with an active learning approach to address these challenges. The technique presented in this study involves the use of a Bayesian approach to optimize the hyperparameters of the base learners that make up a Super Learner model in order to obtain better test performance. In addition to performing hyperparameter optimization (HPO), an active learning approach is employed, where the process of data generation using simulations, ML training, and surrogate optimization is performed repeatedly to refine the solution in the vicinity of the predicted optimum. The proposed approach is applied to the optimization of a compression ignition engine with control parameters relating to fuel injection, in-cylinder flow, and thermodynamic conditions. It is demonstrated that by automatically selecting the best values of the hyperparameters, a 1.6% improvement in merit value is obtained, compared to an improvement of 1.0% with default hyperparameters. Overall, the framework introduced in this study reduces the need for technical expertise in training ML models for optimization, while also reducing the number of simulations needed for performing surrogate-based design optimization.
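
As a rough sketch of the surrogate-plus-active-learning loop described above (not the authors' AutoML-GA implementation, which uses a Super Learner ensemble and a genetic algorithm), the example below fits a random-forest surrogate to an initial design, optimizes the surrogate over a cheap candidate pool, evaluates the predicted optimum with a stand-in simulator, and repeats. The run_simulation function, the two-parameter search space, and the iteration counts are assumptions made for illustration.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def run_simulation(x):
    # Stand-in for an expensive CFD evaluation of a design point
    # (e.g., injection timing and swirl ratio, both scaled to [0, 1]).
    return -((x[0] - 0.6) ** 2 + (x[1] - 0.4) ** 2) + 0.01 * rng.normal()

# Initial design of experiments.
X = rng.uniform(size=(20, 2))
y = np.array([run_simulation(x) for x in X])

for iteration in range(10):
    # Fit the surrogate on all data gathered so far.
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # Optimize the surrogate over a cheap candidate pool (a GA could be used here).
    candidates = rng.uniform(size=(2000, 2))
    best = candidates[np.argmax(surrogate.predict(candidates))]

    # Active learning: evaluate the predicted optimum with the expensive
    # simulator and add the result to the training data.
    X = np.vstack([X, best])
    y = np.append(y, run_simulation(best))

print("best design found:", X[np.argmax(y)], "merit:", y.max())
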
4

Frey, Nathan C., Dan Zhao, Simon Axelrod, Michael Jones, David Bestor, Vijay Gadepally, Rafael Gomez-Bombarelli, and Siddharth Samsi. "Energy-aware neural architecture selection and hyperparameter optimization". In 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2022. http://dx.doi.org/10.1109/ipdpsw55747.2022.00125.

5

Costa, Victor O., and Cesar R. Rodrigues. "Hierarchical Ant Colony for Simultaneous Classifier Selection and Hyperparameter Optimization". In 2018 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2018. http://dx.doi.org/10.1109/cec.2018.8477834.

6

Sunkad, Zubin A., and Soujanya. "Feature Selection and Hyperparameter Optimization of SVM for Human Activity Recognition". In 2016 3rd International Conference on Soft Computing & Machine Intelligence (ISCMI). IEEE, 2016. http://dx.doi.org/10.1109/iscmi.2016.30.

7

Hagemann, Simon, Atakan Sunnetcioglu, Tobias Fahse, and Rainer Stark. "Neural Network Hyperparameter Optimization for the Assisted Selection of Assembly Equipment". In 2019 23rd International Conference on Mechatronics Technology (ICMT). IEEE, 2019. http://dx.doi.org/10.1109/icmect.2019.8932099.

8

Sandru, Elena-Diana, and Emilian David. "Unified Feature Selection and Hyperparameter Bayesian Optimization for Machine Learning based Regression". In 2019 International Symposium on Signals, Circuits and Systems (ISSCS). IEEE, 2019. http://dx.doi.org/10.1109/isscs.2019.8801728.

9

Kam, Yasin, Mert Bayraktar, and Umit Deniz Ulusar. "Swarm Optimization-Based Hyperparameter Selection for Machine Learning Algorithms in Indoor Localization". In 2023 8th International Conference on Computer Science and Engineering (UBMK). IEEE, 2023. http://dx.doi.org/10.1109/ubmk59864.2023.10286800.

10

Baghirov, Elshan. "Comprehensive Framework for Malware Detection: Leveraging Ensemble Methods, Feature Selection and Hyperparameter Optimization". In 2023 IEEE 17th International Conference on Application of Information and Communication Technologies (AICT). IEEE, 2023. http://dx.doi.org/10.1109/aict59525.2023.10313179.


Reports on the topic "Hyperparameter selection and optimization"

1

Filippov, A., I. Goumiri, and B. Priest. Genetic Algorithm for Hyperparameter Optimization in Gaussian Process Modeling. Office of Scientific and Technical Information (OSTI), August 2020. http://dx.doi.org/10.2172/1659396.

2

Kamath, C. Intelligent Sampling for Surrogate Modeling, Hyperparameter Optimization, and Data Analysis. Office of Scientific and Technical Information (OSTI), December 2021. http://dx.doi.org/10.2172/1836193.

3

Tropp, Joel A. Column Subset Selection, Matrix Factorization, and Eigenvalue Optimization. Fort Belvoir, VA: Defense Technical Information Center, July 2008. http://dx.doi.org/10.21236/ada633832.

4

Edwards, D. A., and M. J. Syphers. Parameter selection for the SSC trade-offs and optimization. Office of Scientific and Technical Information (OSTI), October 1991. http://dx.doi.org/10.2172/67463.

5

Li, Zhenjiang, and J. J. Garcia-Luna-Aceves. A Distributed Approach for Multi-Constrained Path Selection and Routing Optimization. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada467530.

6

Knapp, Adam C., and Kevin J. Johnson. Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods. Fort Belvoir, VA: Defense Technical Information Center, November 2016. http://dx.doi.org/10.21236/ada640843.

7

Selbach-Allen, Megan E. Using Biomechanical Optimization To Interpret Dancers' Pose Selection For A Partnered Spin. Fort Belvoir, VA: Defense Technical Information Center, May 2009. http://dx.doi.org/10.21236/ada548785.

8

Cole, J. Vernon, Abhra Roy, Ashok Damle, Hari Dahr, Sanjiv Kumar, Kunal Jain, and Ned Djilai. Water Transport in PEM Fuel Cells: Advanced Modeling, Material Selection, Testing and Design Optimization. Office of Scientific and Technical Information (OSTI), October 2012. http://dx.doi.org/10.2172/1052343.

9

Weller, Joel I., Ignacy Misztal, and Micha Ron. Optimization of methodology for genomic selection of moderate and large dairy cattle populations. United States Department of Agriculture, March 2015. http://dx.doi.org/10.32747/2015.7594404.bard.

Abstract
The main objectives of this research were to detect the specific polymorphisms responsible for observed quantitative trait loci and to develop optimal strategies for genomic evaluations and selection for moderate (Israel) and large (US) dairy cattle populations. A joint evaluation using all phenotypic, pedigree, and genomic data is the optimal strategy. The specific objectives were: 1) to apply strategies for determination of the causative polymorphisms based on the "a posteriori granddaughter design" (APGD), 2) to develop methods to derive unbiased estimates of gene effects from SNP chip analyses, 3) to derive optimal single-stage methods to estimate breeding values of animals based on marker, phenotypic, and pedigree data, 4) to extend these methods to multi-trait genetic evaluations, and 5) to evaluate the results of long-term genomic selection as compared to traditional selection. Nearly all of these objectives were met.

The major achievements were as follows. The APGD and the modified granddaughter designs were applied to the US Holstein population, and regions harboring segregating quantitative trait loci (QTL) were identified for all economic traits of interest. The APGD was able to find segregating QTL for all the economic traits analyzed, and confidence intervals for QTL location ranged from ~5 to 35 million base pairs. Genomic estimated breeding values (GEBV) for milk production traits in the Israeli Holstein population were computed by the single-step method and compared to results for the two-step method. The single-step method was extended to derive GEBV for multi-parity evaluation. Long-term analysis of genomic selection demonstrated that inclusion of pedigree data from previous generations may result in less accurate GEBV.

Major conclusions are: predictions using single-step genomic best linear unbiased prediction (GBLUP) were the least biased, and that method appears to be the best tool for genomic evaluation of a small population, as it automatically accounts for parental index and allows for inclusion of female genomic information without additional steps. None of the methods applied to the Israeli Holstein population were able to derive GEBV for young bulls that were significantly better than parent averages. Thus we confirm previous studies that the main limiting factor for the accuracy of GEBV is the number of bulls with genotypes and progeny tests. Although 36 of the grandsires included in the APGD were genotyped for the BovineHD BeadChip, which includes 777,000 SNPs, we were not able to determine the causative polymorphism for any of the detected QTL. The number of valid unique markers on the BovineHD BeadChip is not sufficient for a reasonable probability of finding the causative polymorphisms. Complete resequencing of the genome of approximately 50 bulls will be required, but this could not be accomplished within the framework of the current project due to funding constraints. Inclusion of pedigree data from older generations in the derivation of GEBV may result in less accurate evaluations.
10

Crisman, Everett E. Semiconductor Selection and Optimization for use in a Laser Induced Pulsed Pico-Second Electromagnetic Source. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada408051.
