Academic literature on the topic 'Selection of hyperparameters'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Selection of hyperparameters.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Selection of hyperparameters"
Sun, Yunlei, Huiquan Gong, Yucong Li, and Dalin Zhang. "Hyperparameter Importance Analysis based on N-RReliefF Algorithm." International Journal of Computers Communications & Control 14, no. 4 (August 5, 2019): 557–73. http://dx.doi.org/10.15837/ijccc.2019.4.3593.
Bengio, Yoshua. "Gradient-Based Optimization of Hyperparameters." Neural Computation 12, no. 8 (August 1, 2000): 1889–900. http://dx.doi.org/10.1162/089976600300015187.
Lohvithee, Manasavee, Wenjuan Sun, Stephane Chretien, and Manuchehr Soleimani. "Ant Colony-Based Hyperparameter Optimisation in Total Variation Reconstruction in X-ray Computed Tomography." Sensors 21, no. 2 (January 15, 2021): 591. http://dx.doi.org/10.3390/s21020591.
Adewole, Ayoade I., and Olusoga A. Fasoranbaku. "Determination of Quantile Range of Optimal Hyperparameters Using Bayesian Estimation." Tanzania Journal of Science 47, no. 3 (August 13, 2021): 988–98. http://dx.doi.org/10.4314/tjs.v47i3.10.
Johnson, Kara Layne, and Nicole Bohme Carnegie. "Calibration of an Adaptive Genetic Algorithm for Modeling Opinion Diffusion." Algorithms 15, no. 2 (January 28, 2022): 45. http://dx.doi.org/10.3390/a15020045.
Raji, Ismail Damilola, Habeeb Bello-Salau, Ime Jarlath Umoh, Adeiza James Onumanyi, Mutiu Adesina Adegboye, and Ahmed Tijani Salawudeen. "Simple Deterministic Selection-Based Genetic Algorithm for Hyperparameter Tuning of Machine Learning Models." Applied Sciences 12, no. 3 (January 24, 2022): 1186. http://dx.doi.org/10.3390/app12031186.
Lu, Wanjie, Hongpeng Mao, Fanhao Lin, Zilin Chen, Hua Fu, and Yaosong Xu. "Recognition of rolling bearing running state based on genetic algorithm and convolutional neural network." Advances in Mechanical Engineering 14, no. 4 (April 2022): 168781322210956. http://dx.doi.org/10.1177/16878132221095635.
Han, Junjie, Cedric Gondro, and Juan Steibel. "98 Using differential evolution to improve predictive accuracy of deep learning models applied to pig production data." Journal of Animal Science 98, Supplement_3 (November 2, 2020): 27. http://dx.doi.org/10.1093/jas/skaa054.048.
Wang, Chung-Ying, Chien-Yao Huang, and Yen-Han Chiang. "Solutions of Feature and Hyperparameter Model Selection in the Intelligent Manufacturing." Processes 10, no. 5 (April 27, 2022): 862. http://dx.doi.org/10.3390/pr10050862.
Hendriks, Jacob, and Patrick Dumond. "Exploring the Relationship between Preprocessing and Hyperparameter Tuning for Vibration-Based Machine Fault Diagnosis Using CNNs." Vibration 4, no. 2 (April 3, 2021): 284–309. http://dx.doi.org/10.3390/vibration4020019.
Full textDissertations / Theses on the topic "Selection of hyperparameters"
Ndiaye, Eugene. "Safe optimization algorithms for variable selection and hyperparameter tuning." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT004/document.
Massive and automatic data processing requires the development of techniques able to filter out the most important information. Among these methods, those with sparse structures have been shown to improve the statistical and computational efficiency of estimators in a high-dimensional context. They can often be expressed as a solution of regularized empirical risk minimization and generally lead to non-differentiable optimization problems in the form of a sum of a smooth term, measuring the quality of the fit, and a non-smooth term, penalizing complex solutions. Although it has considerable advantages, such a way of including prior information unfortunately introduces many numerical difficulties, both for solving the underlying optimization problem and for calibrating the level of regularization. Solving these issues has been at the heart of this thesis. A recently introduced technique, called "screening rules", proposes to ignore some variables during the optimization process by exploiting the expected sparsity of the solutions. These elimination rules are said to be safe when the procedure is guaranteed not to reject any variable wrongly. In this work, we propose a unified framework for identifying important structures in these convex optimization problems, and we introduce the "Gap Safe Screening Rules". They yield significant gains in computational time thanks to the dimensionality reduction this method induces. Moreover, they can easily be inserted into iterative algorithms and apply to a large number of problems. To find a good compromise between minimizing risk and introducing a learning bias, (exact) homotopy continuation algorithms offer the possibility of tracking the curve of the solutions as a function of the regularization parameter. However, they exhibit numerical instabilities due to several matrix inversions and are often expensive in large dimension. Another weakness is that a worst-case analysis shows their exact complexity to be exponential in the dimension of the model parameter. Allowing approximate solutions makes it possible to circumvent these drawbacks by approximating the curve of the solutions. In this thesis, we revisit techniques for approximating the regularization path to a predefined tolerance, and we propose an in-depth analysis of their complexity with respect to the regularity of the loss functions involved. Hence, we propose optimal algorithms as well as various strategies for exploring the parameter space. We also provide a calibration method (for the regularization parameter) that enjoys global convergence guarantees for the minimization of the empirical risk on the validation data. Among sparse regularization methods, the Lasso is one of the most celebrated and studied. Its statistical theory suggests choosing the level of regularization according to the amount of variance in the observations, which is difficult to use in practice because the variance of the model is often an unknown quantity. In such cases, it is possible to jointly optimize the regression parameter and the noise level. These concomitant estimates have appeared in the literature under the names of Scaled Lasso and Square-Root Lasso, and provide theoretical results as sharp as those of the Lasso while being independent of the actual noise level of the observations. Although they represent important advances, these methods are numerically unstable, and the currently available algorithms are expensive in computation time.
We illustrate these difficulties and propose modifications based on smoothing techniques, both to increase the stability of these estimators and to obtain a faster algorithm.
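Since this abstract turns on the mechanics of Gap Safe screening, a small illustration may help. The following Python/NumPy sketch applies a gap-safe test to the standard Lasso formulation; the function name and the residual-rescaling choice of dual point are illustrative assumptions, not the thesis's exact construction.

```python
import numpy as np

def gap_safe_mask(X, y, beta, lam):
    """Gap Safe screening sketch for the Lasso
    P(beta) = 0.5*||y - X @ beta||^2 + lam*||beta||_1.
    Returns a boolean mask of features that can be safely discarded
    (their coefficients are provably zero at the optimum)."""
    residual = y - X @ beta
    # Rescale the residual so theta lies in the dual feasible set
    # {theta : ||X^T theta||_inf <= 1}.
    theta = residual / max(lam, np.max(np.abs(X.T @ residual)))
    primal = 0.5 * residual @ residual + lam * np.abs(beta).sum()
    dual = 0.5 * y @ y - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)          # duality gap at (beta, theta)
    radius = np.sqrt(2.0 * gap) / lam      # Gap Safe sphere radius
    # Feature j is safely eliminated when |x_j^T theta| + r * ||x_j|| < 1.
    scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0
```

Inside an iterative solver, such a mask would be recomputed as the duality gap shrinks, so more and more features are eliminated as the iterate approaches the optimum.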
Thornton, Chris. "Auto-WEKA: combined selection and hyperparameter optimization of supervised machine learning algorithms." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/46177.
Thomas, Janek. "Gradient boosting in automatic machine learning: feature selection and hyperparameter optimization." Supervised by Bernd Bischl. München: Universitätsbibliothek der Ludwig-Maximilians-Universität, 2019. http://d-nb.info/1189584808/34.
Full textРадюк, Павло Михайлович, and Pavlo Radiuk. "Інформаційна технологія раннього діагностування пневмонії за індивідуальним підбором параметрів моделі класифікації медичних зображень легень." Дисертація, Хмельницький національний університет, 2021. http://elar.khnu.km.ua/jspui/handle/123456789/11937.
The present thesis is devoted to solving the topical scientific and applied problem of automating the diagnosis of viral pneumonia from medical images of the lungs, through the development of an information technology for early diagnosis of pneumonia based on the individual selection of parameters of the lung image classification model. Applying the developed information technology to medical images of the human chest in clinical practice increases the accuracy and reliability of identifying pneumonia in its early stages.
Maillard, Guillaume. "Hold-out and Aggregated Hold-out." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM005.
In statistics, it is often necessary to choose between different estimators (estimator selection) or to combine them (aggregation). For risk-minimization problems, a simple method, called hold-out or validation, is to leave out some of the data, using it to estimate the risk of the estimators in order to select the estimator with minimal risk. This method requires the statistician to arbitrarily select a subset of the data to form the "validation sample". The influence of this choice can be reduced by averaging several hold-out estimators (aggregated hold-out, Agghoo). In this thesis, the hold-out and Agghoo are studied in various settings. First, theoretical guarantees for the hold-out (and Agghoo) are extended to two settings where the risk is unbounded: kernel methods and sparse linear regression. Secondly, a comprehensive analysis of the risk of both methods is carried out in a particular case: least-squares density estimation using Fourier series. It is proved that aggregated hold-out can perform better than the best estimator in the given collection, something that is clearly impossible for a procedure, such as hold-out or cross-validation, which selects only one estimator.
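Because the abstract hinges on how Agghoo averages hold-out-selected estimators, a brief sketch may make the procedure concrete. The snippet below uses scikit-learn ridge regression over a grid of regularization values as an illustrative estimator collection; the function name and all parameters are assumptions, not the thesis's setting (which covers kernel methods and least-squares density estimation).

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def agghoo_predict(X, y, X_new, alphas, n_splits=5, seed=0):
    """Aggregated hold-out: each random split selects one estimator by
    hold-out risk, and the selected estimators' predictions are averaged."""
    rng = np.random.RandomState(seed)
    preds = []
    for _ in range(n_splits):
        X_tr, X_val, y_tr, y_val = train_test_split(
            X, y, test_size=0.2, random_state=rng.randint(1 << 30))
        # Hold-out step: fit every candidate on the training part and
        # keep the one with minimal risk on the validation part.
        fitted = [Ridge(alpha=a).fit(X_tr, y_tr) for a in alphas]
        risks = [mean_squared_error(y_val, m.predict(X_val)) for m in fitted]
        preds.append(fitted[int(np.argmin(risks))].predict(X_new))
    # Aggregation step: average the hold-out-selected predictors.
    return np.mean(preds, axis=0)
```

With n_splits=1 this reduces to the plain hold-out; averaging over several splits is what reduces the influence of the arbitrary choice of validation sample.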
Rusch, Thomas, Patrick Mair, and Kurt Hornik. "The STOPS framework for structure-based hyperparameter selection in multidimensional scaling." 2018. http://epub.wu.ac.at/6399/4/stops%2Ddssv18.pdf.
Full textChen, Jing-Wun, and 陳靖玟. "Exploring Effects of Optimizer Selection and Their Hyperparameter Tuning on Performance of Deep Neural Networks for Image Recognition." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/fpx347.
National Central University, Department of Mathematics, academic year 107 (2018–2019).
In recent years, deep learning has flourished, and people have begun to use it to solve problems. Deep neural networks can be used for speech recognition, image recognition, object detection, face recognition, or driverless vehicles. The most basic neural network is the multilayer perceptron (MLP), which consists of multiple layers of nodes, each fully connected to the next; one drawback of the MLP is that it ignores the shape of the data, which is important for image data. Compared to traditional neural networks, the convolutional neural network (CNN) has additional convolution and pooling layers, which preserve and capture image features. The prediction accuracy of a neural network depends on many factors, such as the network architecture, the cost function, and the selection of an optimizer. The goal of this work is to investigate the effects of optimizer selection and hyperparameter tuning on the performance of deep neural networks for image recognition problems. We use three data sets, including MNIST, CIFAR-10, and train-route scenarios, as test problems and test six optimizers (gradient descent, Momentum, the adaptive gradient algorithm (Adagrad), Adadelta, root mean square propagation (RMSProp), and Adam). Our numerical results show that Adam is a good choice because of its efficiency and robustness.
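As a rough illustration of this kind of optimizer comparison, the sketch below trains one small Keras CNN on MNIST under each of the six optimizers; the architecture, epoch count, and batch size are arbitrary assumptions, not the thesis's protocol.

```python
import tensorflow as tf

def test_accuracy(optimizer):
    """Train a small CNN on MNIST with a given optimizer and return
    test accuracy. Architecture and schedule are illustrative only."""
    (x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
    x_tr, x_te = x_tr[..., None] / 255.0, x_te[..., None] / 255.0
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_tr, y_tr, epochs=2, batch_size=128, verbose=0)
    return model.evaluate(x_te, y_te, verbose=0)[1]

# Five of the six optimizers are available as Keras strings; gradient
# descent with momentum is configured explicitly.
candidates = {
    "sgd": "sgd",
    "momentum": tf.keras.optimizers.SGD(momentum=0.9),
    "adagrad": "adagrad",
    "adadelta": "adadelta",
    "rmsprop": "rmsprop",
    "adam": "adam",
}
for name, opt in candidates.items():
    print(name, test_accuracy(opt))
```

Even on a toy setup like this, one would typically see the adaptive methods (RMSProp, Adam) reach higher accuracy within a fixed budget, in line with the thesis's conclusion about Adam's efficiency and robustness.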
Rawat, Waseem. "Optimization of convolutional neural networks for image classification using genetic algorithms and bayesian optimization." Diss., 2018. http://hdl.handle.net/10500/24977.
Electrical and Mining Engineering
M. Tech. (Electrical Engineering)
Book chapters on the topic "Selection of hyperparameters"
Brazdil, Pavel, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren. "Metalearning Approaches for Algorithm Selection I (Exploiting Rankings)." In Metalearning, 19–37. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-67024-5_2.
Efimova, Valeria, Andrey Filchenkov, and Anatoly Shalyto. "Reinforcement-Based Simultaneous Algorithm and Its Hyperparameters Selection." In Communications in Computer and Information Science, 15–27. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-35400-8_2.
Cheng, Jian, Jiansheng Qian, and Yi-nan Guo. "Adaptive Chaotic Cultural Algorithm for Hyperparameters Selection of Support Vector Regression." In Emerging Intelligent Computing Technology and Applications. With Aspects of Artificial Intelligence, 286–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04020-7_31.
Dernoncourt, Franck, Shamim Nemati, Elias Baedorf Kassis, and Mohammad Mahdi Ghassemi. "Hyperparameter Selection." In Secondary Analysis of Electronic Health Records, 419–27. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43742-2_29.
Lehrer, Steven F., Tian Xie, and Guanxi Yi. "Do the Hype of the Benefits from Using New Data Science Tools Extend to Forecasting Extremely Volatile Assets?" In Data Science for Economics and Finance, 287–330. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66891-4_13.
Montesinos López, Osval Antonio, Abelardo Montesinos López, and Jose Crossa. "Random Forest for Genomic Prediction." In Multivariate Statistical Machine Learning Methods for Genomic Prediction, 633–81. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89010-0_15.
Ting, Michael. "Hyperparameter Selection Using the SURE Criterion." In Molecular Imaging in Nano MRI, 43–52. Hoboken, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118760949.ch4.
Brazdil, Pavel, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren. "Metalearning for Hyperparameter Optimization." In Metalearning, 103–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-67024-5_6.
Pacula, Maciej, Jason Ansel, Saman Amarasinghe, and Una-May O’Reilly. "Hyperparameter Tuning in Bandit-Based Adaptive Operator Selection." In Applications of Evolutionary Computation, 73–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29178-4_8.
Lendasse, Amaury, Yongnan Ji, Nima Reyhani, and Michel Verleysen. "LS-SVM Hyperparameter Selection with a Nonparametric Noise Estimator." In Lecture Notes in Computer Science, 625–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11550907_99.
Full textConference papers on the topic "Selection of hyperparameters"
Owoyele, Opeoluwa, Pinaki Pal, and Alvaro Vidal Torreira. "An Automated Machine Learning-Genetic Algorithm (AutoML-GA) Framework With Active Learning for Design Optimization." In ASME 2020 Internal Combustion Engine Division Fall Technical Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/icef2020-3000.
Ashrafi, Parivash, Yi Sun, Neil Davey, Rod Adams, Marc B. Brown, Maria Prapopoulou, and Gary Moss. "The importance of hyperparameters selection within small datasets." In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280645.
Egorov, Aleksej D., and Maksim S. Reznik. "Selection of Hyperparameters and Data Augmentation Method for Diverse Backbone Models Mask R-CNN." In 2021 IV International Conference on Control in Technical Systems (CTS). IEEE, 2021. http://dx.doi.org/10.1109/cts53513.2021.9562845.
Salvador, Rodolfo C., Elmer P. Dadios, Irister M. Javel, and Antipas T. Teologo. "PULSE: A Pulsar Searching Model with Genetic Algorithm Implementation for Best Pipeline Selection and Hyperparameters Optimization." In 2019 IEEE 11th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM). IEEE, 2019. http://dx.doi.org/10.1109/hnicem48295.2019.9072764.
Alghamdi, Muataz. "AI Driven Approach to Predict Sonic Response Utilizing Typical Formation Evaluation Logs." In International Petroleum Technology Conference. IPTC, 2022. http://dx.doi.org/10.2523/iptc-22132-ea.
Rakhshani, Hojjat, Lhassane Idoumghar, Julien Lepagnot, Mathieu Brevilliers, and Edward Keedwell. "Automatic hyperparameter selection in Autodock." In 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2018. http://dx.doi.org/10.1109/bibm.2018.8621172.
Moore, Gregory M., Charles Bergeron, and Kristin P. Bennett. "Nonsmooth Bilevel Programming for Hyperparameter Selection." In 2009 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2009. http://dx.doi.org/10.1109/icdmw.2009.74.
Zhang, Hongbao, Baoping Lu, Lulu Liao, Hongzhi Bao, Zhifa Wang, Xutian Hou, Amol Mulunjkar, and Xin Jin. "Combining Machine Learning and Classic Drilling Theories to Improve Rate of Penetration Prediction." In SPE/IADC Middle East Drilling Technology Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/202202-ms.
Smets, Koen, Brigitte Verdonk, and Elsa M. Jordaan. "Evaluation of Performance Measures for SVR Hyperparameter Selection." In 2007 International Joint Conference on Neural Networks. IEEE, 2007. http://dx.doi.org/10.1109/ijcnn.2007.4371031.
Kronvall, Ted, and Andreas Jakobsson. "Hyperparameter-selection for sparse regression: A probabilistic approach." In 2017 51st Asilomar Conference on Signals, Systems, and Computers. IEEE, 2017. http://dx.doi.org/10.1109/acssc.2017.8335469.