Academic literature on the topic "Hyperparameter selection and optimization"
Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Hyperparameter selection and optimization".
You can also download the full text of each academic publication in PDF format and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Hyperparameter selection and optimization"
Sun, Yunlei, Huiquan Gong, Yucong Li, and Dalin Zhang. "Hyperparameter Importance Analysis based on N-RReliefF Algorithm". International Journal of Computers Communications & Control 14, no. 4 (August 5, 2019): 557–73. http://dx.doi.org/10.15837/ijccc.2019.4.3593.
Bengio, Yoshua. "Gradient-Based Optimization of Hyperparameters". Neural Computation 12, no. 8 (August 1, 2000): 1889–900. http://dx.doi.org/10.1162/089976600300015187.
Nystrup, Peter, Erik Lindström, and Henrik Madsen. "Hyperparameter Optimization for Portfolio Selection". Journal of Financial Data Science 2, no. 3 (June 18, 2020): 40–54. http://dx.doi.org/10.3905/jfds.2020.1.035.
Li, Yang, Jiawei Jiang, Jinyang Gao, Yingxia Shao, Ce Zhang, and Bin Cui. "Efficient Automatic CASH via Rising Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4763–71. http://dx.doi.org/10.1609/aaai.v34i04.5910.
Li, Yuqi. "Discrete Hyperparameter Optimization Model Based on Skewed Distribution". Mathematical Problems in Engineering 2022 (August 9, 2022): 1–10. http://dx.doi.org/10.1155/2022/2835596.
Mohapatra, Shubhankar, Sajin Sasy, Xi He, Gautam Kamath, and Om Thakkar. "The Role of Adaptive Optimizers for Honest Private Hyperparameter Selection". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7806–13. http://dx.doi.org/10.1609/aaai.v36i7.20749.
Kurnia, Deni, Muhammad Itqan Mazdadi, Dwi Kartini, Radityo Adi Nugroho, and Friska Abadi. "Seleksi Fitur dengan Particle Swarm Optimization pada Klasifikasi Penyakit Parkinson Menggunakan XGBoost". Jurnal Teknologi Informasi dan Ilmu Komputer 10, no. 5 (October 17, 2023): 1083–94. http://dx.doi.org/10.25126/jtiik.20231057252.
Prochukhan, Dmytro. "IMPLEMENTATION OF TECHNOLOGY FOR IMPROVING THE QUALITY OF SEGMENTATION OF MEDICAL IMAGES BY SOFTWARE ADJUSTMENT OF CONVOLUTIONAL NEURAL NETWORK HYPERPARAMETERS". Information and Telecommunication Sciences, no. 1 (June 24, 2023): 59–63. http://dx.doi.org/10.20535/2411-2976.12023.59-63.
Raji, Ismail Damilola, Habeeb Bello-Salau, Ime Jarlath Umoh, Adeiza James Onumanyi, Mutiu Adesina Adegboye, and Ahmed Tijani Salawudeen. "Simple Deterministic Selection-Based Genetic Algorithm for Hyperparameter Tuning of Machine Learning Models". Applied Sciences 12, no. 3 (January 24, 2022): 1186. http://dx.doi.org/10.3390/app12031186.
Ridho, Akhmad, and Alamsyah Alamsyah. "Chaotic Whale Optimization Algorithm in Hyperparameter Selection in Convolutional Neural Network Algorithm". Journal of Advances in Information Systems and Technology 4, no. 2 (March 10, 2023): 156–69. http://dx.doi.org/10.15294/jaist.v4i2.60595.
Texto completoTesis sobre el tema "Hyperparameter selection and optimization"
Ndiaye, Eugene. "Safe optimization algorithms for variable selection and hyperparameter tuning". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT004/document.
Massive and automatic data processing requires the development of techniques able to filter the most important information. Among these methods, those with sparse structures have been shown to improve the statistical and computational efficiency of estimators in a high-dimensional context. They can often be expressed as a solution of regularized empirical risk minimization and generally lead to non-differentiable optimization problems in the form of a sum of a smooth term, measuring the quality of the fit, and a non-smooth term, penalizing complex solutions. Although it has considerable advantages, such a way of including prior information unfortunately introduces many numerical difficulties, both for solving the underlying optimization problem and for calibrating the level of regularization. Solving these issues has been at the heart of this thesis. A recently introduced technique, called "Screening Rules", proposes to ignore some variables during the optimization process by exploiting the expected sparsity of the solutions. These elimination rules are said to be safe when the procedure is guaranteed not to reject any variable wrongly. In this work, we propose a unified framework for identifying important structures in these convex optimization problems and we introduce the "Gap Safe Screening Rules". They allow significant gains in computational time thanks to the dimensionality reduction induced by this method. In addition, they can easily be inserted into iterative algorithms and apply to a large number of problems. To find a good compromise between minimizing risk and introducing a learning bias, (exact) homotopy continuation algorithms offer the possibility of tracking the curve of the solutions as a function of the regularization parameters. However, they exhibit numerical instabilities due to several matrix inversions and are often expensive in large dimension. Another weakness is that a worst-case analysis shows that they have exact complexities that are exponential in the dimension of the model parameter. Allowing approximate solutions makes it possible to circumvent the aforementioned drawbacks by approximating the curve of the solutions. In this thesis, we revisit the approximation techniques of the regularization paths given a predefined tolerance and we propose an in-depth analysis of their complexity with respect to the regularity of the loss functions involved. Hence, we propose optimal algorithms as well as various strategies for exploring the parameter space. We also provide a calibration method (for the regularization parameter) that enjoys global convergence guarantees for the minimization of the empirical risk on the validation data. Among sparse regularization methods, the Lasso is one of the most celebrated and studied. Its statistical theory suggests choosing the level of regularization according to the amount of variance in the observations, which is difficult to use in practice because the variance of the model is often an unknown quantity. In such cases, it is possible to jointly optimize the regression parameter as well as the noise level. These concomitant estimates have appeared in the literature under the names of Scaled Lasso or Square-Root Lasso, and provide theoretical results as sharp as those of the Lasso while being independent of the actual noise level of the observations. Although presenting important advances, these methods are numerically unstable and the currently available algorithms are expensive in computation time.
We illustrate these difficulties and propose modifications based on smoothing techniques to increase the stability of these estimators as well as to introduce a faster algorithm.
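For orientation, the kind of rule this abstract refers to can be written down compactly. The following is a sketch in our own notation (not taken from the thesis) of the Lasso problem and a gap-based safe screening test of the form popularized as "Gap Safe" rules; exact constants depend on the chosen scaling of the data-fidelity term.

```latex
% Lasso estimator (illustrative notation)
\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p}
  \tfrac{1}{2} \| y - X\beta \|_2^2 + \lambda \| \beta \|_1 .
% Given a primal point \beta and a dual feasible point \theta with duality gap G(\beta,\theta),
% a Gap Safe sphere test discards feature j (i.e., guarantees \hat{\beta}_j = 0) whenever
| x_j^\top \theta | + \| x_j \|_2 \sqrt{ \frac{2\, G(\beta,\theta)}{\lambda^2} } < 1 .
```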
Thornton, Chris. "Auto-WEKA : combined selection and hyperparameter optimization of supervised machine learning algorithms". Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/46177.
Bertrand, Quentin. "Hyperparameter selection for high dimensional sparse learning : application to neuroimaging". Electronic Thesis or Diss., Université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG054.
Due to their non-invasiveness and excellent time resolution, magneto- and electroencephalography (M/EEG) have emerged as tools of choice to monitor brain activity. Reconstructing brain signals from M/EEG measurements can be cast as a high-dimensional ill-posed inverse problem. Typical estimators of brain signals involve challenging optimization problems, composed of the sum of a data-fidelity term and a sparsity-promoting term. Because of their notoriously hard-to-tune regularization hyperparameters, sparsity-based estimators are currently not massively used by practitioners. The goal of this thesis is to provide a simple, fast, and automatic way to calibrate sparse linear models. We first study some properties of coordinate descent: model identification, local linear convergence, and acceleration. Relying on Anderson extrapolation schemes, we propose an effective way to speed up coordinate descent in theory and practice. We then explore a statistical approach to set the regularization parameter of Lasso-type problems. A closed-form formula can be derived for the optimal regularization parameter of L1-penalized linear regressions. Unfortunately, it relies on the true noise level, unknown in practice. To remove this dependency, one can resort to estimators for which the regularization parameter does not depend on the noise level. However, they require solving challenging "nonsmooth + nonsmooth" optimization problems. We show that partial smoothing preserves their statistical properties and we propose an application to M/EEG source localization problems. Finally, we investigate hyperparameter optimization, encompassing held-out or cross-validation hyperparameter selection. It requires tackling bilevel optimization with nonsmooth inner problems. Such problems are canonically solved using zero-order techniques, such as grid search or random search. We present an efficient technique to solve these challenging bilevel optimization problems using first-order methods.
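As a compact restatement of the bilevel problem mentioned at the end of this abstract, here is a sketch in our own illustrative notation, with a held-out validation criterion as the outer objective and the Lasso on the training split as the inner problem:

```latex
% Outer problem: choose the regularization parameter that minimizes the validation loss
\lambda^\star \in \arg\min_{\lambda > 0}\;
  \tfrac{1}{2} \big\| y^{\mathrm{val}} - X^{\mathrm{val}} \hat{\beta}^{(\lambda)} \big\|_2^2
% Inner problem: the (nonsmooth) Lasso fitted on the training split
\quad \text{s.t.} \quad
\hat{\beta}^{(\lambda)} \in \arg\min_{\beta \in \mathbb{R}^p}
  \tfrac{1}{2} \big\| y^{\mathrm{train}} - X^{\mathrm{train}} \beta \big\|_2^2 + \lambda \| \beta \|_1 .
```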
Thomas, Janek [Verfasser], and Bernd [Akademischer Betreuer] Bischl. "Gradient boosting in automatic machine learning: feature selection and hyperparameter optimization / Janek Thomas ; Betreuer: Bernd Bischl". München: Universitätsbibliothek der Ludwig-Maximilians-Universität, 2019. http://d-nb.info/1189584808/34.
Nakisa, Bahareh. "Emotion classification using advanced machine learning techniques applied to wearable physiological signals data". Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/129875/9/Bahareh%20Nakisa%20Thesis.pdf.
Klein, Aaron [Verfasser], and Frank [Akademischer Betreuer] Hutter. "Efficient bayesian hyperparameter optimization". Freiburg: Universität, 2020. http://d-nb.info/1214592961/34.
Gousseau, Clément. "Hyperparameter Optimization for Convolutional Neural Networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272107.
Hyperparameter optimization is an important but difficult task when training an artificial neural network. This degree project, carried out at Orange Labs Lannion, presents and evaluates three algorithms that aim to solve this task: a naive strategy (random search), a Bayesian method (TPE), and an evolutionary strategy (PSO). The MNIST dataset was used to compare these algorithms. The algorithms were also evaluated on an audio classification task, which is the core business of the company where the project was carried out. The evolutionary algorithm (PSO) gave better results than the other two methods.
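For context, a minimal sketch of the TPE-style search this abstract refers to, written with the hyperopt library as an assumed tool; the objective function is a stand-in for training a CNN and returning its validation error, and the search space is purely illustrative:

```python
# Minimal TPE sketch (assumes the hyperopt package; objective and space are illustrative).
from hyperopt import fmin, tpe, hp, STATUS_OK

def objective(params):
    # In the thesis setting this would train a CNN (e.g. on MNIST or audio data)
    # and return its validation error; here a toy analytic stand-in is used.
    lr, dropout = params["lr"], params["dropout"]
    val_error = (lr - 0.01) ** 2 + (dropout - 0.3) ** 2
    return {"loss": val_error, "status": STATUS_OK}

space = {
    "lr": hp.loguniform("lr", -7, -2),          # learning rate sampled on a log scale
    "dropout": hp.uniform("dropout", 0.0, 0.7),
}

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print("best hyperparameters found:", best)
```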
Lévesque, Julien-Charles. "Bayesian hyperparameter optimization : overfitting, ensembles and conditional spaces". Doctoral thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/28364.
In this thesis, we consider the analysis and extension of Bayesian hyperparameter optimization methodology to various problems related to supervised machine learning. The contributions of the thesis are attached to 1) the overestimation of the generalization accuracy of hyperparameters and models resulting from Bayesian optimization, 2) an application of Bayesian optimization to ensemble learning, and 3) the optimization of spaces with a conditional structure such as found in automatic machine learning (AutoML) problems. Generally, machine learning algorithms have some free parameters, called hyperparameters, which allow regulating or modifying these algorithms' behaviour. For the longest time, hyperparameters were tuned by hand or with exhaustive search algorithms. Recent work highlighted the conceptual advantages in optimizing hyperparameters with more rational methods, such as Bayesian optimization. Bayesian optimization is a very versatile framework for the optimization of unknown and non-differentiable functions, grounded strongly in probabilistic modelling and uncertainty estimation, and we adopt it for the work in this thesis. We first briefly introduce Bayesian optimization with Gaussian processes (GP) and describe its application to hyperparameter optimization. Next, original contributions are presented on the dangers of overfitting during hyperparameter optimization, where the optimization ends up learning the validation folds. We show that there is indeed overfitting during the optimization of hyperparameters, even with cross-validation strategies, and that it can be reduced by methods such as a reshuffling of the training and validation splits at every iteration of the optimization. Another promising method is demonstrated in the use of a GP's posterior mean for the selection of final hyperparameters, rather than directly returning the model with the minimal cross-validation error. Both suggested approaches are demonstrated to deliver significant improvements in the generalization accuracy of the final selected model on a benchmark of 118 datasets. The next contributions are provided by an application of Bayesian hyperparameter optimization for ensemble learning. Stacking methods have been exploited for some time to combine multiple classifiers in a meta-classifier system. Those can be applied to the end result of a Bayesian hyperparameter optimization pipeline by keeping the best classifiers and combining them at the end. Our Bayesian ensemble optimization method consists of a modification of the Bayesian optimization pipeline to search for the best hyperparameters to use for an ensemble, which is different from optimizing hyperparameters for the performance of a single model. The approach has the advantage of not requiring the training of more models than a regular Bayesian hyperparameter optimization. Experiments show the potential of the suggested approach on three different search spaces and many datasets. The last contributions are related to the optimization of more complex hyperparameter spaces, namely spaces that contain a structure of conditionality. Conditions arise naturally in hyperparameter optimization when one defines a model with multiple components – certain hyperparameters then only need to be specified if their parent component is activated. One example of such a space is the combined algorithm selection and hyperparameter optimization, now better known as AutoML, where the objective is to choose the base model and optimize its hyperparameters.
We thus highlight techniques and propose new kernels for GPs that handle structure in such spaces in a principled way. Contributions are also supported by experimental evaluation on many datasets. Overall, the thesis regroups several works directly related to Bayesian hyperparameter optimization. The thesis showcases novel ways to apply Bayesian optimization for ensemble learning, as well as methodologies to reduce overfitting or optimize more complex spaces.
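To make the methodology discussed above concrete, here is a small, self-contained sketch of Bayesian optimization with a Gaussian-process surrogate and an Expected Improvement acquisition, in the spirit of the GP-based approach the abstract describes; the one-dimensional toy objective and all settings are illustrative assumptions, not the thesis's experimental setup:

```python
# Hedged sketch: GP surrogate + Expected Improvement for a 1-D hyperparameter (illustrative).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Stand-in for "train a model with hyperparameter x and return its validation error".
    return np.sin(3 * x) + 0.1 * (x - 2.0) ** 2

rng = np.random.default_rng(0)
lower, upper = 0.0, 5.0
X = rng.uniform(lower, upper, size=(3, 1))      # small initial design
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(X, y)
    cand = np.linspace(lower, upper, 1000).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    imp = y.min() - mu                           # improvement over the incumbent (minimization)
    z = np.where(sigma > 1e-12, imp / sigma, 0.0)
    ei = np.where(sigma > 1e-12, imp * norm.cdf(z) + sigma * norm.pdf(z), 0.0)
    x_next = cand[np.argmax(ei)]                 # maximize Expected Improvement
    X = np.vstack([X, x_next.reshape(1, 1)])
    y = np.append(y, objective(x_next[0]))

print("best hyperparameter:", X[np.argmin(y)][0], "objective:", y.min())
```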
Nygren, Rasmus. "Evaluation of hyperparameter optimization methods for Random Forest classifiers". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301739.
To create a machine learning model, one often needs to choose various hyperparameters that configure the properties of the model. The performance of such a model depends strongly on the choice of these hyperparameters, which is why it is relevant to investigate how hyperparameter optimization can affect the classification accuracy of a machine learning model. In this study, we train and evaluate a Random Forest classifier whose hyperparameters are set to specified default values and compare it with a classifier whose hyperparameters are determined by three different hyperparameter optimization (HPO) methods - Random Search, Bayesian Optimization, and Particle Swarm Optimization. This is done on three different datasets, and each HPO method is evaluated based on the change in classification accuracy it yields across these datasets. We found that each HPO method resulted in an overall increase in classification accuracy of roughly 2-3% across all datasets compared with the accuracy the classifier obtained with the default hyperparameter values. Due to limitations in time and data, we could not determine whether this positive effect generalizes to a larger scale. The conclusion drawn instead was that the usefulness of hyperparameter optimization methods depends on the dataset to which they are applied.
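As a small illustration of the kind of comparison this abstract describes (default hyperparameters versus a tuned configuration), here is a hedged scikit-learn sketch using random search; the dataset and the search space are illustrative choices, not those of the thesis:

```python
# Sketch: default RandomForest vs. a random-search-tuned one (illustrative setup).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Baseline: library default hyperparameters, scored with 5-fold cross-validation.
default_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()

# Tuned: random search over a small illustrative hyperparameter space.
param_distributions = {
    "n_estimators": [100, 200, 400],
    "max_depth": [None, 5, 10, 20],
    "max_features": ["sqrt", "log2", None],
    "min_samples_leaf": [1, 2, 4],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=20,
    cv=5,
    random_state=0,
)
search.fit(X, y)

print(f"default accuracy: {default_acc:.3f}")
print(f"tuned accuracy:   {search.best_score_:.3f} with {search.best_params_}")
```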
Matosevic, Antonio. "On Bayesian optimization and its application to hyperparameter tuning". Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-74962.
Texto completoLibros sobre el tema "Hyperparameter selection and optimization"
Agrawal, Tanay. Hyperparameter Optimization in Machine Learning. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6579-6.
Zheng, Minrui. Spatially Explicit Hyperparameter Optimization for Neural Networks. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5399-5.
Pappalardo, Elisa, Panos M. Pardalos, and Giovanni Stracquadanio. Optimization Approaches for Solving String Selection Problems. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-9053-1.
Lyatkher, V. M. Wind power: Turbine design, selection, and optimization. Hoboken, New Jersey: Scrivener Publishing, Wiley, 2014.
East, Donald R. Optimization technology for leach and liner selection. Littleton, CO: Society of Mining Engineers, 1987.
Zheng, Maosheng, Haipeng Teng, Jie Yu, Ying Cui, and Yi Wang. Probability-Based Multi-objective Optimization for Material Selection. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-3351-6.
Membranes for membrane reactors: Preparation, optimization, and selection. Chichester, West Sussex: Wiley, 2011.
Zheng, Maosheng, Jie Yu, Haipeng Teng, Ying Cui, and Yi Wang. Probability-Based Multi-objective Optimization for Material Selection. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-3939-8.
Toy, Ayhan Özgür. Route, aircraft prioritization and selection for airlift mobility optimization. Monterey, Calif: Naval Postgraduate School, 1996.
Handen, Jeffrey S., ed. Industrialization of drug discovery: From target selection through lead optimization. New York: Dekker/CRC Press, 2005.
Book chapters on the topic "Hyperparameter selection and optimization"
Brazdil, Pavel, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren. "Metalearning for Hyperparameter Optimization". In Metalearning, 103–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-67024-5_6.
Brazdil, Pavel, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren. "Metalearning Approaches for Algorithm Selection I (Exploiting Rankings)". In Metalearning, 19–37. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-67024-5_2.
Goshtasbpour, Shirin, and Fernando Perez-Cruz. "Optimization of Annealed Importance Sampling Hyperparameters". In Machine Learning and Knowledge Discovery in Databases, 174–90. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26419-1_11.
Kotthoff, Lars, Chris Thornton, Holger H. Hoos, Frank Hutter, and Kevin Leyton-Brown. "Auto-WEKA: Automatic Model Selection and Hyperparameter Optimization in WEKA". In Automated Machine Learning, 81–95. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-05318-5_4.
Taubert, Oskar, Marie Weiel, Daniel Coquelin, Anis Farshian, Charlotte Debus, Alexander Schug, Achim Streit, and Markus Götz. "Massively Parallel Genetic Optimization Through Asynchronous Propagation of Populations". In Lecture Notes in Computer Science, 106–24. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-32041-5_6.
Esuli, Andrea, Alessandro Fabris, Alejandro Moreo, and Fabrizio Sebastiani. "Evaluation of Quantification Algorithms". In The Information Retrieval Series, 33–54. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-20467-8_3.
Ponnuru, Suchith, and Lekha S. Nair. "Feature Extraction and Selection with Hyperparameter Optimization for Mitosis Detection in Breast Histopathology Images". In Data Intelligence and Cognitive Informatics, 727–49. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-6004-8_55.
Guan, Ruei-Sing, Yu-Chee Tseng, Jen-Jee Chen, and Po-Tsun Kuo. "Combined Bayesian and RNN-Based Hyperparameter Optimization for Efficient Model Selection Applied for autoML". In Communications in Computer and Information Science, 86–97. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-9582-8_8.
Martinez-de-Pison, F. J., R. Gonzalez-Sendino, J. Ferreiro, E. Fraile, and A. Pernia-Espinoza. "GAparsimony: An R Package for Searching Parsimonious Models by Combining Hyperparameter Optimization and Feature Selection". In Lecture Notes in Computer Science, 62–73. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92639-1_6.
Martinez-de-Pison, Francisco Javier, Ruben Gonzalez-Sendino, Alvaro Aldama, Javier Ferreiro, and Esteban Fraile. "Hybrid Methodology Based on Bayesian Optimization and GA-PARSIMONY for Searching Parsimony Models by Combining Hyperparameter Optimization and Feature Selection". In Lecture Notes in Computer Science, 52–62. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59650-1_5.
Texto completoActas de conferencias sobre el tema "Hyperparameter selection and optimization"
Izaú, Leonardo, Mariana Fortes, Vitor Ribeiro, Celso Marques, Carla Oliveira, Eduardo Bezerra, Fabio Porto, Rebecca Salles, and Eduardo Ogasawara. "Towards Robust Cluster-Based Hyperparameter Optimization". In Simpósio Brasileiro de Banco de Dados. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/sbbd.2022.224330.
Takenaga, Shintaro, Yoshihiko Ozaki, and Masaki Onishi. "Dynamic Fidelity Selection for Hyperparameter Optimization". In GECCO '23 Companion: Companion Conference on Genetic and Evolutionary Computation. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583133.3596320.
Owoyele, Opeoluwa, Pinaki Pal, and Alvaro Vidal Torreira. "An Automated Machine Learning-Genetic Algorithm (AutoML-GA) Framework With Active Learning for Design Optimization". In ASME 2020 Internal Combustion Engine Division Fall Technical Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/icef2020-3000.
Frey, Nathan C., Dan Zhao, Simon Axelrod, Michael Jones, David Bestor, Vijay Gadepally, Rafael Gomez-Bombarelli, and Siddharth Samsi. "Energy-aware neural architecture selection and hyperparameter optimization". In 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2022. http://dx.doi.org/10.1109/ipdpsw55747.2022.00125.
Costa, Victor O., and Cesar R. Rodrigues. "Hierarchical Ant Colony for Simultaneous Classifier Selection and Hyperparameter Optimization". In 2018 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2018. http://dx.doi.org/10.1109/cec.2018.8477834.
Sunkad, Zubin A., and Soujanya. "Feature Selection and Hyperparameter Optimization of SVM for Human Activity Recognition". In 2016 3rd International Conference on Soft Computing & Machine Intelligence (ISCMI). IEEE, 2016. http://dx.doi.org/10.1109/iscmi.2016.30.
Hagemann, Simon, Atakan Sunnetcioglu, Tobias Fahse, and Rainer Stark. "Neural Network Hyperparameter Optimization for the Assisted Selection of Assembly Equipment". In 2019 23rd International Conference on Mechatronics Technology (ICMT). IEEE, 2019. http://dx.doi.org/10.1109/icmect.2019.8932099.
Sandru, Elena-Diana, and Emilian David. "Unified Feature Selection and Hyperparameter Bayesian Optimization for Machine Learning based Regression". In 2019 International Symposium on Signals, Circuits and Systems (ISSCS). IEEE, 2019. http://dx.doi.org/10.1109/isscs.2019.8801728.
Kam, Yasin, Mert Bayraktar, and Umit Deniz Ulusar. "Swarm Optimization-Based Hyperparameter Selection for Machine Learning Algorithms in Indoor Localization". In 2023 8th International Conference on Computer Science and Engineering (UBMK). IEEE, 2023. http://dx.doi.org/10.1109/ubmk59864.2023.10286800.
Baghirov, Elshan. "Comprehensive Framework for Malware Detection: Leveraging Ensemble Methods, Feature Selection and Hyperparameter Optimization". In 2023 IEEE 17th International Conference on Application of Information and Communication Technologies (AICT). IEEE, 2023. http://dx.doi.org/10.1109/aict59525.2023.10313179.
Texto completoInformes sobre el tema "Hyperparameter selection and optimization"
Filippov, A., I. Goumiri, and B. Priest. Genetic Algorithm for Hyperparameter Optimization in Gaussian Process Modeling. Office of Scientific and Technical Information (OSTI), August 2020. http://dx.doi.org/10.2172/1659396.
Kamath, C. Intelligent Sampling for Surrogate Modeling, Hyperparameter Optimization, and Data Analysis. Office of Scientific and Technical Information (OSTI), December 2021. http://dx.doi.org/10.2172/1836193.
Tropp, Joel A. Column Subset Selection, Matrix Factorization, and Eigenvalue Optimization. Fort Belvoir, VA: Defense Technical Information Center, July 2008. http://dx.doi.org/10.21236/ada633832.
Edwards, D. A., and M. J. Syphers. Parameter selection for the SSC trade-offs and optimization. Office of Scientific and Technical Information (OSTI), October 1991. http://dx.doi.org/10.2172/67463.
Li, Zhenjiang, and J. J. Garcia-Luna-Aceves. A Distributed Approach for Multi-Constrained Path Selection and Routing Optimization. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada467530.
Knapp, Adam C., and Kevin J. Johnson. Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods. Fort Belvoir, VA: Defense Technical Information Center, November 2016. http://dx.doi.org/10.21236/ada640843.
Selbach-Allen, Megan E. Using Biomechanical Optimization To Interpret Dancers' Pose Selection For A Partnered Spin. Fort Belvoir, VA: Defense Technical Information Center, May 2009. http://dx.doi.org/10.21236/ada548785.
Cole, J. Vernon, Abhra Roy, Ashok Damle, Hari Dahr, Sanjiv Kumar, Kunal Jain, and Ned Djilai. Water Transport in PEM Fuel Cells: Advanced Modeling, Material Selection, Testing and Design Optimization. Office of Scientific and Technical Information (OSTI), October 2012. http://dx.doi.org/10.2172/1052343.
Weller, Joel I., Ignacy Misztal, and Micha Ron. Optimization of methodology for genomic selection of moderate and large dairy cattle populations. United States Department of Agriculture, March 2015. http://dx.doi.org/10.32747/2015.7594404.bard.
Crisman, Everett E. Semiconductor Selection and Optimization for use in a Laser Induced Pulsed Pico-Second Electromagnetic Source. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada408051.