
Journal articles on the topic "Hyperparameter selection and optimization"


Listed below are the top 50 journal articles for research on the topic "Hyperparameter selection and optimization".


Wherever the metadata make them available, the full text of each publication can be downloaded in PDF format and its abstract read online.


1

Sun, Yunlei, Huiquan Gong, Yucong Li, and Dalin Zhang. "Hyperparameter Importance Analysis based on N-RReliefF Algorithm". International Journal of Computers Communications & Control 14, no. 4 (August 5, 2019): 557–73. http://dx.doi.org/10.15837/ijccc.2019.4.3593.

Abstract
Hyperparameter selection has always been key to machine learning. The Bayesian optimization algorithm has recently achieved great success, but it has certain constraints and limitations in selecting hyperparameters. In response to these constraints and limitations, this paper proposes the N-RReliefF algorithm, which can evaluate the importance of hyperparameters and the importance weights between them. The N-RReliefF algorithm estimates the contribution of a single hyperparameter to performance according to the influence of each hyperparameter on performance, and calculates the importance weights between hyperparameters according to an improved normalization formula. The N-RReliefF algorithm analyses the hyperparameter configurations and performance sets generated by Bayesian optimization and identifies the important hyperparameters of the random forest and SVM algorithms. The experimental results verify the effectiveness of the N-RReliefF algorithm.
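As a rough illustration of the idea described above — attributing performance differences between nearby hyperparameter configurations to the hyperparameters in which they differ — the following Python sketch computes RReliefF-style importance weights from a set of (configuration, score) pairs. The neighbor count, weighting rule, and averaging are simplifying assumptions, not the authors' exact N-RReliefF.

import numpy as np

def rrelieff_importance(configs, scores, k=10):
    # configs: (n, d) hyperparameter settings scaled to [0, 1]
    # scores:  (n,) model performance for each configuration
    n, d = configs.shape
    weights = np.zeros(d)
    score_range = scores.max() - scores.min() + 1e-12
    for i in range(n):
        dists = np.linalg.norm(configs - configs[i], axis=1)
        neighbors = np.argsort(dists)[1:k + 1]  # k nearest configurations
        for j in neighbors:
            diff_perf = abs(scores[i] - scores[j]) / score_range
            diff_attr = np.abs(configs[i] - configs[j])
            # hyperparameters that differ when performance differs gain weight;
            # those that differ when performance is similar lose weight
            weights += diff_attr * diff_perf - diff_attr * (1 - diff_perf)
    return weights / n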
2

Bengio, Yoshua. "Gradient-Based Optimization of Hyperparameters". Neural Computation 12, no. 8 (August 1, 2000): 1889–900. http://dx.doi.org/10.1162/089976600300015187.

Abstract
Many machine learning algorithms can be formulated as the minimization of a training criterion that involves a hyperparameter. This hyperparameter is usually chosen by trial and error with a model selection criterion. In this article we present a methodology to optimize several hyperparameters, based on the computation of the gradient of a model selection criterion with respect to the hyperparameters. In the case of a quadratic training criterion, the gradient of the selection criterion with respect to the hyperparameters is efficiently computed by backpropagating through a Cholesky decomposition. In the more general case, we show that the implicit function theorem can be used to derive a formula for the hyperparameter gradient involving second derivatives of the training criterion.
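For the quadratic case mentioned above, the hypergradient has a closed form via the implicit function theorem. The following sketch differentiates the validation loss of ridge regression with respect to the penalty lambda; the synthetic data, step size, and multiplicative update are illustrative assumptions, not the paper's experimental setup.

import numpy as np

def ridge_hypergradient(Xtr, ytr, Xva, yva, lam):
    # implicit differentiation of the training optimum w*(lambda)
    d = Xtr.shape[1]
    A = Xtr.T @ Xtr + lam * np.eye(d)      # Hessian of the training criterion
    w = np.linalg.solve(A, Xtr.T @ ytr)    # training optimum for this lambda
    dw_dlam = -np.linalg.solve(A, w)       # from (X'X + lam*I) w = X'y
    resid = Xva @ w - yva
    return resid @ (Xva @ dw_dlam)         # d/dlam of 0.5 * ||Xva w - yva||^2

rng = np.random.default_rng(0)
Xtr, ytr = rng.normal(size=(80, 5)), rng.normal(size=80)
Xva, yva = rng.normal(size=(40, 5)), rng.normal(size=40)

lam = 1.0
for _ in range(100):  # gradient descent on log(lambda) keeps lambda positive
    g = ridge_hypergradient(Xtr, ytr, Xva, yva, lam)
    lam *= np.exp(np.clip(-0.1 * g * lam, -1.0, 1.0))
print("selected lambda:", lam)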
3

Nystrup, Peter, Erik Lindström, and Henrik Madsen. "Hyperparameter Optimization for Portfolio Selection". Journal of Financial Data Science 2, no. 3 (June 18, 2020): 40–54. http://dx.doi.org/10.3905/jfds.2020.1.035.
4

Li, Yang, Jiawei Jiang, Jinyang Gao, Yingxia Shao, Ce Zhang, and Bin Cui. "Efficient Automatic CASH via Rising Bandits". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4763–71. http://dx.doi.org/10.1609/aaai.v34i04.5910.

Abstract
Combined Algorithm Selection and Hyperparameter optimization (CASH) is one of the most fundamental problems in Automatic Machine Learning (AutoML). Existing Bayesian optimization (BO) based solutions turn the CASH problem into a Hyperparameter Optimization (HPO) problem by combining the hyperparameters of all machine learning (ML) algorithms and using BO methods to solve it. As a result, these methods suffer from low efficiency due to the huge hyperparameter space in CASH. To alleviate this issue, we propose an alternating optimization framework, in which the HPO problem for each ML algorithm and the algorithm selection problem are optimized alternately. In this framework, BO methods solve the HPO problem for each ML algorithm separately, over a much smaller hyperparameter space. Furthermore, we introduce Rising Bandits, a CASH-oriented Multi-Armed Bandits (MAB) variant, to model algorithm selection in CASH. The framework combines the advantages of BO in solving HPO problems over relatively small hyperparameter spaces with those of MABs in accelerating algorithm selection. Moreover, we develop an efficient online algorithm to solve the Rising Bandits with provable theoretical guarantees. Extensive experiments on 30 OpenML datasets demonstrate the superiority of the proposed approach over competitive baselines.
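The alternating scheme above can be sketched compactly: treat each ML algorithm as a bandit arm, spend each iteration on one HPO trial for the chosen arm, and pick arms optimistically. The sketch below uses a plain UCB rule and random-search trials instead of the paper's Rising Bandits and BO, so it illustrates only the structure of the framework; the arms, spaces, and budget are assumptions.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

# each "arm" is an algorithm with its own small hyperparameter space
arms = {
    "rf":  lambda: RandomForestClassifier(n_estimators=int(rng.integers(50, 300))),
    "svc": lambda: SVC(C=10 ** rng.uniform(-2, 2)),
    "knn": lambda: KNeighborsClassifier(n_neighbors=int(rng.integers(1, 15))),
}
best, counts = {a: 0.0 for a in arms}, {a: 1e-9 for a in arms}

for t in range(1, 31):
    # UCB-style selection: untried or promising arms get priority
    ucb = {a: best[a] + np.sqrt(2 * np.log(t) / counts[a]) for a in arms}
    arm = max(ucb, key=ucb.get)
    score = cross_val_score(arms[arm](), X, y, cv=3).mean()  # one HPO trial
    best[arm] = max(best[arm], score)
    counts[arm] += 1

print(max(best, key=best.get), best)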
5

Li, Yuqi. "Discrete Hyperparameter Optimization Model Based on Skewed Distribution". Mathematical Problems in Engineering 2022 (August 9, 2022): 1–10. http://dx.doi.org/10.1155/2022/2835596.

Abstract
For machine learning algorithms, one of the main factors restricting further large-scale application is the choice of hyperparameter values, and researchers have therefore developed many numerical optimization algorithms to ensure valid hyperparameter selection. Building on previous studies, this study puts forward a model that fits hyperparameters with a skewed (gamma) distribution and combines Bayesian estimation with the Gauss hypergeometric function to derive a mathematically optimal solution for discrete hyperparameter selection. The results show that, under strict mathematical conditions, discrete hyperparameters can be given a reasonable expected value. This heuristic, prior-based parameter-adjustment method can improve the accuracy of some traditional models in experiments and thus their practical value. An empirical study on relevant datasets further demonstrates the effectiveness of the proposed tuning strategy.
6

Mohapatra, Shubhankar, Sajin Sasy, Xi He, Gautam Kamath, and Om Thakkar. "The Role of Adaptive Optimizers for Honest Private Hyperparameter Selection". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7806–13. http://dx.doi.org/10.1609/aaai.v36i7.20749.

Abstract
Hyperparameter optimization is a ubiquitous challenge in machine learning, and the performance of a trained model depends crucially on effective hyperparameter selection. While a rich set of tools exists for this purpose, there are currently no practical hyperparameter selection methods under the constraint of differential privacy (DP). We study honest hyperparameter selection for differentially private machine learning, in which the process of hyperparameter tuning is accounted for in the overall privacy budget. To this end, we i) show that standard composition tools outperform more advanced techniques in many settings, ii) empirically and theoretically demonstrate an intrinsic connection between the learning rate and clipping norm hyperparameters, iii) show that adaptive optimizers like DPAdam enjoy a significant advantage in the process of honest hyperparameter tuning, and iv) draw upon novel limiting behaviour of Adam in the DP setting to design a new and more efficient optimizer.
7

Kurnia, Deni, Muhammad Itqan Mazdadi, Dwi Kartini, Radityo Adi Nugroho, and Friska Abadi. "Seleksi Fitur dengan Particle Swarm Optimization pada Klasifikasi Penyakit Parkinson Menggunakan XGBoost". Jurnal Teknologi Informasi dan Ilmu Komputer 10, no. 5 (October 17, 2023): 1083–94. http://dx.doi.org/10.25126/jtiik.20231057252.

Abstract
Parkinson's disease is a disorder of the central nervous system that affects the motor system. Diagnosis of this disease is quite difficult because its symptoms are similar to those of other diseases. Currently, diagnosis can be performed using machine learning on patient voice recordings. The features extracted from these recordings are relatively numerous, so feature selection is needed to avoid degrading model performance. In this research, Particle Swarm Optimization is used for feature selection, while XGBoost is used as the classification model. SMOTE is also applied to address class imbalance, and hyperparameter tuning is performed on XGBoost to obtain optimal hyperparameters. The test results show that the AUC of the model with feature selection but without SMOTE and hyperparameter tuning is 0.9325, while the model without feature selection only reaches an AUC of 0.9250. When SMOTE and hyperparameter tuning are used together, feature selection again improves performance: the model with feature selection reaches an AUC of 0.9483, while the model without feature selection only reaches 0.9366.
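A compact sketch of this pipeline — SMOTE for balance, a hand-rolled binary PSO for feature selection, XGBoost AUC as fitness — follows. The swarm size, inertia and acceleration coefficients, and the synthetic data are assumptions; the sigmoid velocity-to-bit rule is the standard binary-PSO choice, not necessarily the authors' variant.

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=400, n_features=30, weights=[0.8], random_state=0)
X, y = SMOTE(random_state=0).fit_resample(X, y)  # balance the classes first
rng = np.random.default_rng(0)

def fitness(bits):
    mask = bits.astype(bool)
    if not mask.any():
        return 0.0
    model = XGBClassifier(n_estimators=100, eval_metric="logloss")
    return cross_val_score(model, X[:, mask], y, cv=3, scoring="roc_auc").mean()

n_particles, dim = 10, X.shape[1]
vel = rng.normal(0.0, 1.0, (n_particles, dim))
pos = (rng.random((n_particles, dim)) < 0.5).astype(float)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(15):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((n_particles, dim)) < 1 / (1 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest), "AUC:", pbest_fit.max())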
8

Prochukhan, Dmytro. "Implementation of Technology for Improving the Quality of Segmentation of Medical Images by Software Adjustment of Convolutional Neural Network Hyperparameters". Information and Telecommunication Sciences, no. 1 (June 24, 2023): 59–63. http://dx.doi.org/10.20535/2411-2976.12023.59-63.

Abstract
Background. Researchers have built effective convolutional neural networks, but the question of how to optimally set the hyperparameters of these networks remains insufficiently studied. Hyperparameters affect model selection, most notably through the number and size of the hidden layers, and effective hyperparameter selection improves the speed and quality of the learning algorithm. The hyperparameters of a convolutional neural network are also interconnected, which makes it very difficult to manually select values that ensure maximum network efficiency; the selection process therefore needs to be automated by a software mechanism for setting the hyperparameters. The author has successfully implemented this task. Objective. The purpose of the paper is to develop a technology for selecting the hyperparameters of a convolutional neural network to improve the quality of segmentation of medical images. Methods. Selection of a convolutional neural network model suitable for segmenting medical images; modification of the Keras Tuner library by developing an additional function; use of optimization methods for convolutional neural networks and their hyperparameters; compilation and configuration of the constructed model; selection of the model with the best hyperparameters. Results. A comparative analysis of the U-Net and FCN-32 convolutional neural networks was carried out. U-Net was selected as the network to tune due to its higher quality and accuracy of image segmentation. The Keras Tuner library was modified by developing an additional function for tuning hyperparameters, and the Hyperband method was chosen for hyperparameter optimization. The optimal number of epochs, 20, was selected. During tuning, the best model, with an accuracy of 0.9665, was found: the hyperparameter start_neurons is set to 80, net_depth is 5, the activation function is Mish, dropout is set to False, and bn_after_act is set to True. Conclusions. The U-Net convolutional neural network configured with these parameters has significant potential for medical image segmentation. A prospect for further research is the use of the modified network for diagnosing symptoms of the coronavirus disease COVID-19, pneumonia, cancer, and other complex diseases.
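The Keras Tuner workflow the entry describes can be outlined as below. This is a hypothetical reconstruction: the toy model, input size, and tuned ranges merely mimic the start_neurons/net_depth/activation/dropout search named in the abstract, and the "mish" activation string requires a recent TF/Keras.

import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    inputs = tf.keras.Input(shape=(64, 64, 1))
    x = inputs
    filters = hp.Int("start_neurons", 16, 80, step=16)
    activation = hp.Choice("activation", ["relu", "mish"])
    for _ in range(hp.Int("net_depth", 2, 5)):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation=activation)(x)
        if hp.Boolean("dropout"):
            x = tf.keras.layers.Dropout(0.25)(x)
    outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel mask
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.Hyperband(build_model, objective="val_accuracy",
                     max_epochs=20, directory="tuning", project_name="unet_like")
# tuner.search(x_train, y_train, validation_data=(x_val, y_val))  # placeholder data
# best_hp = tuner.get_best_hyperparameters(1)[0]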
9

Raji, Ismail Damilola, Habeeb Bello-Salau, Ime Jarlath Umoh, Adeiza James Onumanyi, Mutiu Adesina Adegboye, and Ahmed Tijani Salawudeen. "Simple Deterministic Selection-Based Genetic Algorithm for Hyperparameter Tuning of Machine Learning Models". Applied Sciences 12, no. 3 (January 24, 2022): 1186. http://dx.doi.org/10.3390/app12031186.

Abstract
Hyperparameter tuning is a critical function necessary for the effective deployment of most machine learning (ML) algorithms. It is used to find the optimal hyperparameter settings of an ML algorithm in order to improve its overall output performance. To this effect, several optimization strategies have been studied for fine-tuning the hyperparameters of many ML algorithms, especially in the absence of model-specific information. However, because most ML training procedures need a significant amount of computational time and memory, it is frequently necessary to build an optimization technique that converges within a small number of fitness evaluations. As a result, a simple deterministic selection genetic algorithm (SDSGA) is proposed in this article. The SDSGA was realized by ensuring that both chromosomes and their accompanying fitness values in the original genetic algorithm are selected in an elitist-like way. We assessed the SDSGA over a variety of mathematical test functions. It was then used to optimize the hyperparameters of two well-known machine learning models, namely, the convolutional neural network (CNN) and the random forest (RF) algorithm, with application on the MNIST and UCI classification datasets. The SDSGA’s efficiency was compared to that of the Bayesian Optimization (BO) and three other popular metaheuristic optimization algorithms (MOAs), namely, the genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO) algorithms. The results obtained reveal that the SDSGA performed better than the other MOAs in solving 11 of the 17 known benchmark functions considered in our study. While optimizing the hyperparameters of the two ML models, it performed marginally better in terms of accuracy than the other methods while taking less time to compute.
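The elitist-flavored selection at the heart of the SDSGA can be imitated in a few lines. The sketch below is a generic real-coded GA with deterministic keep-the-best-half selection on a benchmark (sphere) function; the population size, mutation scale, and operators are assumptions rather than the authors' exact design.

import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # stand-in fitness; lower is better
    return float(np.sum(x ** 2))

pop_size, dim, n_gen = 20, 5, 100
pop = rng.uniform(-5, 5, (pop_size, dim))
fit = np.array([sphere(p) for p in pop])

for _ in range(n_gen):
    # deterministic, elitist-like selection: keep the best half as parents
    order = np.argsort(fit)
    parents = pop[order[: pop_size // 2]]
    # uniform crossover plus Gaussian mutation produce the other half
    mates = parents[rng.permutation(len(parents))]
    mask = rng.random(parents.shape) < 0.5
    children = np.where(mask, parents, mates) + rng.normal(0, 0.1, parents.shape)
    pop = np.vstack([parents, children])
    fit = np.array([sphere(p) for p in pop])

print("best solution:", pop[fit.argmin()], "fitness:", fit.min())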
10

Ridho, Akhmad, and Alamsyah Alamsyah. "Chaotic Whale Optimization Algorithm in Hyperparameter Selection in Convolutional Neural Network Algorithm". Journal of Advances in Information Systems and Technology 4, no. 2 (March 10, 2023): 156–69. http://dx.doi.org/10.15294/jaist.v4i2.60595.

Abstract
In several previous studies, metaheuristic methods have been used to search for CNN hyperparameters, but that work focused only on hyperparameters of the network architecture type, the network structure, and the initialization of network weights. In this article, we therefore focus on searching for CNN hyperparameters of the network architecture type and the network structure, with additional regularization. The CNN hyperparameter search with regularization uses CWOA on the MNIST and FashionMNIST datasets. Each dataset consists of 60,000 training images and 10,000 test images. Only 50% of the training data was used; of that subset, 10% was set aside for validation and the rest was used for training. On MNIST, the CWOA-tuned model achieved an error of 0.023 and an accuracy of 99.63%; on FashionMNIST, an error of 0.23 and an accuracy of 91.36%.
11

Ma, Zhixin, Shengmin Cui, and Inwhee Joe. "An Enhanced Proximal Policy Optimization-Based Reinforcement Learning Method with Random Forest for Hyperparameter Optimization". Applied Sciences 12, no. 14 (July 11, 2022): 7006. http://dx.doi.org/10.3390/app12147006.

Abstract
For most machine learning and deep learning models, the selection of hyperparameters has a significant impact on the performance of the model, so deep learning and data analysis experts have to spend a lot of time on hyperparameter tuning when building a model. Although there are many algorithms for hyperparameter optimization (HPO), these methods require the results of actual trials at each epoch to guide the search. To reduce the number of trials, model-based reinforcement learning adopts a multilayer perceptron (MLP) to capture the relationship between hyperparameter settings and model performance. However, the MLP needs to be carefully designed because of the risk of overfitting. We therefore propose a random forest-enhanced proximal policy optimization (RFEPPO) reinforcement learning algorithm to solve the HPO problem. In addition, reinforcement learning as a solution to HPO encounters the sparse reward problem, which leads to slow convergence; to address this, we employ an intrinsic reward that introduces the prediction error as the reward signal. Experiments carried out on nine tabular datasets and two image classification datasets demonstrate the effectiveness of our model.
12

Aviles, Marcos, Juvenal Rodríguez-Reséndiz, and Danjela Ibrahimi. "Optimizing EMG Classification through Metaheuristic Algorithms". Technologies 11, no. 4 (July 2, 2023): 87. http://dx.doi.org/10.3390/technologies11040087.

Abstract
This work proposes a metaheuristic-based approach to hyperparameter selection in a multilayer perceptron to classify EMG signals. The main goal of the study is to improve the performance of the model by optimizing four important hyperparameters: the number of neurons, the learning rate, the epochs, and the training batches. The approach proposed in this work shows that hyperparameter optimization using particle swarm optimization and the gray wolf optimizer significantly improves the performance of a multilayer perceptron in classifying EMG motion signals. The final model achieves an average classification rate of 93% for the validation phase. The results obtained are promising and suggest that the proposed approach may be helpful for the optimization of deep learning models in other signal processing applications.
13

Jervis, Michael, Mingliang Liu, and Robert Smith. "Deep learning network optimization and hyperparameter tuning for seismic lithofacies classification". Leading Edge 40, no. 7 (July 2021): 514–23. http://dx.doi.org/10.1190/tle40070514.1.

Abstract
Deep learning is increasingly being applied in many aspects of seismic processing and interpretation. Here, we look at a deep convolutional neural network approach to multiclass seismic lithofacies characterization using well logs and seismic data. In particular, we focus on network performance and hyperparameter tuning. Several hyperparameter tuning approaches are compared, including true and directed random search methods such as very fast simulated annealing and Bayesian hyperparameter optimization. The results show that improvements in predictive capability are possible by using automatic optimization compared with manual parameter selection. In addition to evaluating the prediction accuracy's sensitivity to hyperparameters, we test various types of data representations. The choice of input seismic data can significantly impact the overall accuracy and computation speed of the optimized networks for the classification challenge under consideration. This is validated on a 3D synthetic seismic lithofacies example with acoustic and lithologic properties based on real well data and structure from an onshore oil field.
14

Bruni, Renato, Gianpiero Bianchi, and Pasquale Papa. "Hyperparameter Black-Box Optimization to Improve the Automatic Classification of Support Tickets". Algorithms 16, no. 1 (January 10, 2023): 46. http://dx.doi.org/10.3390/a16010046.

Abstract
User requests to a customer service, also known as tickets, are essentially short texts in natural language. They should be grouped by topic to be answered efficiently. The effectiveness increases if this semantic categorization becomes automatic. We pursue this goal by using text mining to extract the features from the tickets, and classification to perform the categorization. This is however a difficult multi-class problem, and the classification algorithm needs a suitable hyperparameter configuration to produce a practically useful categorization. As recently highlighted by several researchers, the selection of these hyperparameters is often the crucial aspect. Therefore, we propose to view the hyperparameter choice as a higher-level optimization problem where the hyperparameters are the decision variables and the objective is the predictive performance of the classifier. However, an explicit analytical model of this problem cannot be defined. Therefore, we propose to solve it as a black-box model by means of derivative-free optimization techniques. We conduct experiments on a relevant application: the categorization of the requests received by the Contact Center of the Italian National Statistics Institute (Istat). Results show that the proposed approach is able to effectively categorize the requests, and that its performance is increased by the proposed hyperparameter optimization.
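Treating the classifier's cross-validated score as a black box, derivative-free optimization takes only a few lines with SciPy. In this hypothetical sketch the digits dataset and an SVM stand in for the Istat ticket features and classifier, and Nelder-Mead stands in for whichever derivative-free method the authors used.

import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

def objective(theta):
    # black-box objective: negated CV accuracy; no gradients available
    C, gamma = 10.0 ** theta          # theta holds log10(C), log10(gamma)
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
    return -score

result = minimize(objective, x0=np.array([0.0, -3.0]), method="Nelder-Mead",
                  options={"maxfev": 40, "xatol": 0.1, "fatol": 1e-3})
print("best log10(C), log10(gamma):", result.x, "CV accuracy:", -result.fun)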
15

Kumar, Suraj, and Kukku Youseff. "Integrated Feature Selection and Hyperparameter Optimization for Multi-Label Classification of Medical Conditions". International Journal of Science and Research (IJSR) 13, no. 3 (March 5, 2024): 408–13. http://dx.doi.org/10.21275/sr24304214035.
16

Johnson, Kara Layne, and Nicole Bohme Carnegie. "Calibration of an Adaptive Genetic Algorithm for Modeling Opinion Diffusion". Algorithms 15, no. 2 (January 28, 2022): 45. http://dx.doi.org/10.3390/a15020045.

Abstract
Genetic algorithms mimic the process of natural selection in order to solve optimization problems with minimal assumptions and perform well when the objective function has local optima on the search space. These algorithms treat potential solutions to the optimization problem as chromosomes, consisting of genes which undergo biologically-inspired operators to identify a better solution. Hyperparameters or control parameters determine the way these operators are implemented. We created a genetic algorithm in order to fit a DeGroot opinion diffusion model using limited data, making use of selection, blending, crossover, mutation, and survival operators. We adapted the algorithm from a genetic algorithm for design of mixture experiments, but the new algorithm required substantial changes due to model assumptions and the large parameter space relative to the design space. In addition to introducing new hyperparameters, these changes mean the hyperparameter values suggested for the original algorithm cannot be expected to result in optimal performance. To make the algorithm for modeling opinion diffusion more accessible to researchers, we conduct a simulation study investigating hyperparameter values. We find the algorithm is robust to the values selected for most hyperparameters and provide suggestions for initial, if not default, values and recommendations for adjustments based on algorithm output.
17

Abbas, Farkhanda, Feng Zhang, Muhammad Ismail, Garee Khan, Javed Iqbal, Abdulwahed Fahad Alrefaei, and Mohammed Fahad Albeshr. "Optimizing Machine Learning Algorithms for Landslide Susceptibility Mapping along the Karakoram Highway, Gilgit Baltistan, Pakistan: A Comparative Study of Baseline, Bayesian, and Metaheuristic Hyperparameter Optimization Techniques". Sensors 23, no. 15 (August 1, 2023): 6843. http://dx.doi.org/10.3390/s23156843.

Abstract
Algorithms for machine learning have found extensive use in numerous fields and applications. One important aspect of effectively utilizing these algorithms is tuning the hyperparameters to match the specific task at hand. The selection and configuration of hyperparameters directly impact the performance of machine learning models. Achieving optimal hyperparameter settings often requires a deep understanding of the underlying models and the appropriate optimization techniques. While there are many automatic optimization techniques available, each with its own advantages and disadvantages, this article focuses on hyperparameter optimization for well-known machine learning models. It explores cutting-edge optimization methods such as metaheuristic algorithms, deep learning-based optimization, Bayesian optimization, and quantum optimization, and our paper focused mainly on metaheuristic and Bayesian optimization techniques and provides guidance on applying them to different machine learning algorithms. The article also presents real-world applications of hyperparameter optimization by conducting tests on spatial data collections for landslide susceptibility mapping. Based on the experiment’s results, both Bayesian optimization and metaheuristic algorithms showed promising performance compared to baseline algorithms. For instance, the metaheuristic algorithm boosted the random forest model’s overall accuracy by 5% and 3%, respectively, from baseline optimization methods GS and RS, and by 4% and 2% from baseline optimization methods GA and PSO. Additionally, for models like KNN and SVM, Bayesian methods with Gaussian processes had good results. When compared to the baseline algorithms RS and GS, the accuracy of the KNN model was enhanced by BO-TPE by 1% and 11%, respectively, and by BO-GP by 2% and 12%, respectively. For SVM, BO-TPE outperformed GS and RS by 6% in terms of performance, while BO-GP improved results by 5%. The paper thoroughly discusses the reasons behind the efficiency of these algorithms. By successfully identifying appropriate hyperparameter configurations, this research paper aims to assist researchers, spatial data analysts, and industrial users in developing machine learning models more effectively. The findings and insights provided in this paper can contribute to enhancing the performance and applicability of machine learning algorithms in various domains.
18

Lu, Wanjie, Hongpeng Mao, Fanhao Lin, Zilin Chen, Hua Fu, and Yaosong Xu. "Recognition of rolling bearing running state based on genetic algorithm and convolutional neural network". Advances in Mechanical Engineering 14, no. 4 (April 2022): 168781322210956. http://dx.doi.org/10.1177/16878132221095635.

Abstract
In this study, the GA-CNN model is proposed to automatically recognize the running state of rolling bearings. First, to avoid over-fitting and gradient dispersion during training of the CNN model, a BN layer and Dropout are introduced into the LeNet-5 model. Second, to automate the selection of hyperparameters in the CNN model, a hyperparameter selection method combined with a genetic algorithm (GA) is proposed. In the proposed method, each hyperparameter is encoded on a chromosome, with a mapping between each hyperparameter and a corresponding gene position. After chromosome selection, crossover, and mutation, a fitness value is calculated to quantify the quality of the current chromosome. Chromosomes with high fitness values are more likely to be selected in the next genetic iteration; in this way, the optimal hyperparameters of the CNN model are obtained. Vibration signals from CWRU are then used for time-frequency analysis, and the resulting time-frequency image set is used to train and test the proposed GA-CNN model. The accuracy of the proposed model reaches 99.85% on average, and training is four times faster than for LeNet-5. Finally, experiments on a laboratory test platform confirm the superiority of the method and the transferability of the optimized model.
19

Abu, Masyitah, Nik Adilah Hanin Zahri, Amiza Amir, Muhammad Izham Ismail, Azhany Yaakub, Said Amirul Anwar, and Muhammad Imran Ahmad. "A Comprehensive Performance Analysis of Transfer Learning Optimization in Visual Field Defect Classification". Diagnostics 12, no. 5 (May 18, 2022): 1258. http://dx.doi.org/10.3390/diagnostics12051258.

Abstract
Numerous studies have demonstrated that Convolutional Neural Network (CNN) models can classify visual field (VF) defects with great accuracy. In this study, we evaluated the performance of different pre-trained models (VGG-Net, MobileNet, ResNet, and DenseNet) in classifying VF defects and produced a comprehensive comparative analysis of the models before and after hyperparameter tuning and fine-tuning. Using a batch size of 32, 50 epochs, and Adam as the optimizer for weights, biases, and learning rate, VGG-16 obtained the highest accuracy of 97.63 percent. Subsequently, Bayesian optimization was used to automate hyperparameter tuning and the fine-tuning of layers of the pre-trained models, in order to determine the optimal hyperparameters and fine-tuning layers for classifying VF defects with the highest accuracy. We found that the combination of hyperparameters and the fine-tuning of the pre-trained models significantly impact performance on this classification task, and that automated selection of optimal hyperparameters and fine-tuning by Bayesian optimization significantly enhanced the performance of the pre-trained models. The best performance was observed for the DenseNet-121 model, with a validation accuracy of 98.46% and a test accuracy of 99.57% on the tested datasets.
20

Hendriks, Jacob, and Patrick Dumond. "Exploring the Relationship between Preprocessing and Hyperparameter Tuning for Vibration-Based Machine Fault Diagnosis Using CNNs". Vibration 4, no. 2 (April 3, 2021): 284–309. http://dx.doi.org/10.3390/vibration4020019.

Abstract
This paper demonstrates the differences between popular transformation-based input representations for vibration-based machine fault diagnosis. It highlights the dependency of different input representations on hyperparameter selection, using the results of training different configurations of classical convolutional neural networks (CNNs) on three common benchmarking datasets. Raw temporal measurements, the Fourier spectrum, the envelope spectrum, and spectrogram input types are each used to train CNNs. Many CNN configurations are trained, with variable input sizes, convolutional kernel sizes, and strides. The results show that each input type favors different combinations of hyperparameters and that each of the datasets studied yields different performance characteristics. Input size is found to be the most significant determinant of whether overfitting will occur. CNNs trained with spectrograms are shown to be less dependent on hyperparameter optimization across all three datasets. This paper demonstrates the wide range of performance achieved by CNNs when the preprocessing method and hyperparameters are varied, as well as their complex interaction, providing researchers with useful background information and a starting place for further optimization.
21

Han, Junjie, Cedric Gondro, and Juan Steibel. "98 Using differential evolution to improve predictive accuracy of deep learning models applied to pig production data". Journal of Animal Science 98, Supplement_3 (November 2, 2020): 27. http://dx.doi.org/10.1093/jas/skaa054.048.

Abstract
Deep learning (DL) is being used for prediction in precision livestock farming and in genomic prediction. However, optimizing hyperparameters in DL models is critical for their predictive performance. Grid search is the traditional approach to selecting hyperparameters in DL, but it requires an exhaustive search over the parameter space. We propose hyperparameter selection using differential evolution (DE), a heuristic algorithm that does not require an exhaustive search. The goal of this study was to design and apply DE to optimize the hyperparameters of DL models for genomic prediction and image analysis in pig production systems. One dataset consisted of 910 pigs genotyped with 28,916 SNP markers, used to predict post-mortem meat pH. Another dataset consisted of 1,334 images of pigs eating inside a single-spaced feeder, classified as "single pig" or "multiple pigs." The accuracy of genomic prediction was defined as the correlation between the predicted and observed pH; the image classification accuracy was the proportion of correctly classified images. For genomic prediction, a multilayer perceptron (MLP) was optimized. For image classification, an MLP and convolutional neural networks (CNN) were optimized. For genomic prediction, the initial hyperparameter set resulted in an accuracy of 0.032; for image classification, the initial accuracy was between 0.72 and 0.76. After optimization using DE, the genomic prediction accuracy was 0.3688, compared to 0.334 using GBLUP. The top selected models included one layer, 60 neurons, sigmoid activation, and an L2 penalty of 0.3. The accuracy of image classification after optimization was between 0.89 and 0.92. Selected models included three layers, the adamax optimizer, and relu or elu activation for the MLP, and one layer, 64 filters, and a 5×5 filter size for the CNN. DE can adapt hyperparameter selection to each problem, dataset, and model, and it significantly increased prediction accuracy with minimal user input.
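SciPy's differential_evolution can reproduce the overall recipe: encode the hyperparameters as a real vector, decode inside the objective, and let DE minimize the negated cross-validated accuracy. The dataset, bounds, and budget below are illustrative stand-ins for the pig-production data and search space used in the study.

import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)

def objective(theta):
    # map the real-valued DE vector onto MLP hyperparameters;
    # return negated CV accuracy because DE minimizes
    n_neurons = int(round(theta[0]))
    lr = 10.0 ** theta[1]          # log-scale learning rate
    alpha = 10.0 ** theta[2]       # log-scale L2 penalty
    model = MLPClassifier(hidden_layer_sizes=(n_neurons,),
                          learning_rate_init=lr, alpha=alpha,
                          max_iter=300, random_state=0)
    return -cross_val_score(model, X, y, cv=3).mean()

bounds = [(5, 100), (-4, -1), (-5, 0)]   # neurons, log10(lr), log10(alpha)
result = differential_evolution(objective, bounds, maxiter=10, popsize=8, seed=0)
print("best hyperparameters:", result.x, "CV accuracy:", -result.fun)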
22

Truger, Felix, Martin Beisel, Johanna Barzen, Frank Leymann, and Vladimir Yussupov. "Selection and Optimization of Hyperparameters in Warm-Started Quantum Optimization for the MaxCut Problem". Electronics 11, no. 7 (March 25, 2022): 1033. http://dx.doi.org/10.3390/electronics11071033.

Abstract
Today’s quantum computers are limited in their capabilities, e.g., the size of executable quantum circuits. The Quantum Approximate Optimization Algorithm (QAOA) addresses these limitations and is, therefore, a promising candidate for achieving a near-term quantum advantage. Warm-starting can further improve QAOA by utilizing classically pre-computed approximations to achieve better solutions at a small circuit depth. However, warm-starting requirements often depend on the quantum algorithm and problem at hand. Warm-started QAOA (WS-QAOA) requires developers to understand how to select approach-specific hyperparameter values that tune the embedding of classically pre-computed approximations. In this paper, we address the problem of hyperparameter selection in WS-QAOA for the maximum cut problem using the classical Goemans–Williamson algorithm for pre-computations. The contributions of this work are as follows: We implement and run a set of experiments to determine how different hyperparameter settings influence the solution quality. In particular, we (i) analyze how the regularization parameter that tunes the bias of the warm-started quantum algorithm towards the pre-computed solution can be selected and optimized, (ii) compare three distinct optimization strategies, and (iii) evaluate five objective functions for the classical optimization, two of which we introduce specifically for our scenario. The experimental results provide insights on efficient selection of the regularization parameter, optimization strategy, and objective function and, thus, support developers in setting up one of the central algorithms of contemporary and near-term quantum computing.
23

Singh, Sandeep Pratap, and Shamik Tiwari. "Optimizing dual modal biometric authentication: hybrid HPO-ANFIS and HPO-CNN framework". Indonesian Journal of Electrical Engineering and Computer Science 33, no. 3 (March 1, 2024): 1676. http://dx.doi.org/10.11591/ijeecs.v33.i3.pp1676-1693.

Abstract
In the realm of secure data access, biometric authentication frameworks are vital. This work proposes a hybrid model, with a 90% confidence interval, that combines "hyperparameter optimization-adaptive neuro-fuzzy inference system (HPO-ANFIS)" parallel and "hyperparameter optimization-convolutional neural network (HPO-CNN)" sequential techniques. This approach addresses challenges in feature selection, hyperparameter optimization (HPO), and classification in dual multimodal biometric authentication. HPO-ANFIS optimizes feature selection, enhancing discriminative abilities, resulting in improved accuracy and reduced false acceptance and rejection rates in the parallel modal architecture. Meanwhile, HPO-CNN focuses on optimizing network designs and parameters in the sequential modal architecture. The hybrid model's 90% confidence interval ensures accurate and statistically significant performance evaluation, enhancing overall system accuracy, precision, recall, F1 score, and specificity. Through rigorous analysis and comparison, the hybrid model surpasses existing approaches across critical criteria, providing an advanced solution for secure and accurate biometric authentication.
24

Zhang, Shuangbo. "Automatic Selection and Parameter Optimization of Mathematical Models Based on Machine Learning". Transactions on Computer Science and Intelligent Systems Research 3 (April 10, 2024): 34–39. http://dx.doi.org/10.62051/nx5n1v79.

Abstract
With the rapid progress of machine learning (ML) technology, more and more ML algorithms have emerged, and the complexity of models is also constantly increasing. This development trend brings two significant challenges in practice: how to choose appropriate algorithm models and how to optimize hyperparameters for these models. In this context, the concept of Automatic Machine Learning (AutoML) has emerged. Due to the applicability of different algorithm models to different data types and problem scenarios, it is crucial to automatically select the most suitable model based on the characteristics of specific tasks. AutoML integrates multiple ML algorithms and automatically filters based on the statistical characteristics of data and task requirements, aiming to provide users with the best model selection solution. Hyperparameters are parameters that ML models need to set before training, such as learning rate, number of iterations, regularization strength, etc., which have a significant impact on the performance of the model. AutoML integrates advanced hyperparameter optimization techniques to automatically find the optimal parameter combination, thereby improving the model's generalization ability and prediction accuracy. This article studies the automatic selection and parameter optimization of mathematical models based on ML.
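A minimal version of this joint model selection plus hyperparameter optimization is expressible with scikit-learn alone, by letting a grid search range over both the estimator choice and each estimator's own grid. The candidate models and grids below are arbitrary examples, not ones taken from the article.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# search jointly over the model choice and each model's hyperparameters
pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])
param_grid = [
    {"clf": [SVC()], "clf__C": [0.1, 1, 10], "clf__gamma": ["scale", 0.1]},
    {"clf": [LogisticRegression(max_iter=500)], "clf__C": [0.1, 1, 10]},
    {"clf": [RandomForestClassifier()], "clf__n_estimators": [100, 300]},
]
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)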
25

Adivarekar, Pravin P., Amarnath Prabhakaran A, Sukhwinder Sharma, Divya P, Muniyandy Elangovan, and Ravi Rastogi. "Automated machine learning and neural architecture optimization". Scientific Temper 14, no. 04 (December 27, 2023): 1345–51. http://dx.doi.org/10.58414/scientifictemper.2023.14.4.42.

Abstract
Automated machine learning (AutoML) and neural architecture optimization (NAO) represent pivotal components in the landscape of machine learning and artificial intelligence. This paper extensively explores these domains, aiming to delineate their significance, methodologies, cutting-edge techniques, challenges, and emerging trends. AutoML streamlines and democratizes machine learning by automating intricate procedures, such as algorithm selection and hyperparameter tuning. Conversely, NAO automates the design of neural network architectures, a critical aspect for optimizing deep learning model performance. Both domains have made substantial advancements, significantly impacting research, industry practices, and societal applications. Through a series of experiments, classifier accuracy, NAO model selection based on hidden unit count, and learning curve analysis were investigated. The results underscored the efficacy of machine learning models, the substantial impact of architectural choices on test accuracy, and the significance of selecting an optimal number of training epochs for model convergence. These findings offer valuable insights into the potential and limitations of AutoML and NAO, emphasizing the transformative potential of automation and optimization within the machine learning field. Additionally, this study highlights the imperative for further research to explore synergies between AutoML and NAO, aiming to bridge the gap between model selection, architecture design, and hyperparameter tuning. Such endeavors hold promise in opening new frontiers in automated machine learning methodologies.
26

Pratomo, Awang Hendrianto, Nur Heri Cahyana, and Septi Nur Indrawati. "Optimizing CNN hyperparameters with genetic algorithms for face mask usage classification". Science in Information Technology Letters 4, no. 1 (May 30, 2023): 54–64. http://dx.doi.org/10.31763/sitech.v4i1.1182.

Abstract
Convolutional Neural Networks (CNNs) have gained significant traction in the field of image categorization, particularly in the domains of health and safety. This study aims to categorize the utilization of face masks, which is a vital determinant of respiratory health. Convolutional neural networks (CNNs) possess a high level of complexity, making it crucial to execute hyperparameter adjustment in order to optimize the performance of the model. The conventional approach of trial-and-error hyperparameter configuration often yields suboptimal outcomes and is time-consuming. Genetic Algorithms (GA), an optimization technique grounded in the principles of natural selection, were employed to identify the optimal hyperparameters for Convolutional Neural Networks (CNNs). The objective was to enhance the performance of the model, namely in the classification of photographs into two categories: those with face masks and those without face masks. The convolutional neural network (CNN) model, which was enhanced by the utilization of hyperparameters adjusted by a genetic algorithm (GA), demonstrated a commendable accuracy rate of 94.82% following rigorous testing and validation procedures. The observed outcome exhibited a 2.04% improvement compared to models that employed a trial and error approach for hyperparameter tuning. Our research exhibits exceptional quality in the domain of investigations utilizing Convolutional Neural Networks (CNNs). Our research integrates the resilience of Genetic Algorithms (GA), in contrast to previous studies that employed Convolutional Neural Networks (CNN) or conventional machine learning models without adjusting hyperparameters. This unique approach enhances the accuracy and methodology of hyperparameter tuning in Convolutional Neural Networks (CNNs).
27

Loukili, Manal. "Supervised Learning Algorithms for Predicting Customer Churn with Hyperparameter Optimization". International Journal of Advances in Soft Computing and its Applications 14, no. 3 (November 28, 2022): 50–63. http://dx.doi.org/10.15849/ijasca.221128.04.

Abstract
Churn risk is one of the most worrying issues in the telecommunications industry. Methods for predicting churn have been improved to a great extent by the remarkable developments in the world of artificial intelligence and machine learning. In this context, a comparative study of four machine learning models was conducted. The first phase consists of data preprocessing, followed by feature analysis; the third phase is feature selection. Then, the data is split into training and test sets. During the prediction phase, some commonly used predictive models were adopted, namely k-nearest neighbors, logistic regression, random forest, and support vector machine. Furthermore, we used cross-validation on the training set for hyperparameter adjustment and to avoid model overfitting, and the hyperparameters were adjusted to increase the models' performance. The results on the test set were evaluated using feature weights, the confusion matrix, accuracy, precision, recall, error rate, and F1 score. Finally, the support vector machine model outperformed the other prediction models with an accuracy of 96.92%. Keywords: Churn Prediction, Classification Algorithms, Hyperparameter Optimization, Machine Learning, Telecommunications.
28

Bergstra, James, Brent Komer, Chris Eliasmith, Dan Yamins, and David D. Cox. "Hyperopt: a Python library for model selection and hyperparameter optimization". Computational Science & Discovery 8, no. 1 (July 28, 2015): 014008. http://dx.doi.org/10.1088/1749-4699/8/1/014008.
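Typical Hyperopt usage, per the library's documented fmin/tpe/hp interface, looks like the following; the SVM search space and trial budget are illustrative choices, not ones prescribed by the paper.

from hyperopt import fmin, tpe, hp, Trials
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

space = {
    "C": hp.loguniform("C", -3, 3),          # samples exp(-3)..exp(3)
    "gamma": hp.loguniform("gamma", -6, 0),
}

def objective(params):
    # fmin minimizes, so return the negated CV accuracy
    return -cross_val_score(SVC(**params), X, y, cv=3).mean()

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=30, trials=trials)
print("best hyperparameters:", best)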
29

Zhang, Xuan, and Kevin Duh. "Reproducible and Efficient Benchmarks for Hyperparameter Optimization of Neural Machine Translation Systems". Transactions of the Association for Computational Linguistics 8 (July 2020): 393–408. http://dx.doi.org/10.1162/tacl_a_00322.

Abstract
Hyperparameter selection is a crucial part of building neural machine translation (NMT) systems across both academia and industry. Fine-grained adjustments to a model’s architecture or training recipe can mean the difference between a positive and negative research result or between a state-of-the-art and underperforming system. While recent literature has proposed methods for automatic hyperparameter optimization (HPO), there has been limited work on applying these methods to neural machine translation (NMT), due in part to the high costs associated with experiments that train large numbers of model variants. To facilitate research in this space, we introduce a lookup-based approach that uses a library of pre-trained models for fast, low cost HPO experimentation. Our contributions include (1) the release of a large collection of trained NMT models covering a wide range of hyperparameters, (2) the proposal of targeted metrics for evaluating HPO methods on NMT, and (3) a reproducible benchmark of several HPO methods against our model library, including novel graph-based and multiobjective methods.
30

Badriyah, Tessy, Dimas Bagus Santoso, Iwan Syarif, and Daisy Rahmania Syarif. "Improving stroke diagnosis accuracy using hyperparameter optimized deep learning". International Journal of Advances in Intelligent Informatics 5, no. 3 (November 17, 2019): 256. http://dx.doi.org/10.26555/ijain.v5i3.427.

Abstract
Stroke may cause death for anyone, including youngsters. One of the early stroke detection techniques is a Computerized Tomography (CT) scan. This research aimed to optimize deep learning hyperparameters, using Random Search and Bayesian Optimization to determine the right hyperparameters. The CT scan images were processed by scaling, grayscale conversion, smoothing, thresholding, and morphological operations, and image features were then extracted with the Gray Level Co-occurrence Matrix (GLCM). Feature selection was performed to retain relevant features and reduce computing expense, while hyperparameter-tuned deep learning was used for the classification process. The experimental results showed that Random Search had the best accuracy, while Bayesian Optimization excelled in optimization time.
31

Fidan, Sertuğ, and Ali Murat Tiryaki. "Hyperparameter Optimization in Convolutional Neural Networks for Maize Seed Classification". European Journal of Research and Development 3, no. 1 (March 28, 2023): 139–49. http://dx.doi.org/10.56038/ejrnd.v3i1.254.

Abstract
Corn farming is of great importance for the continuity of our society, because corn is a cheap and efficient food, especially for animal feeding. With the doubled-haploid technique, however, selecting the haploid seeds needed for this process to work efficiently poses a problem. Today, haploid seed selection is usually done by trained technicians. With the development of machine learning methods, parts of the work expected of technicians can be done by machines. In this study, a new model architecture based on a convolutional neural network (CNN) was produced to select haploid seeds, and the hyperparameters of this model were optimized with the tree-structured Parzen estimator algorithm. The new model achieved a 94.66% validation score, higher than the VGG-19 model, and proved relatively efficient.
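The tree-structured Parzen estimator used here is also the default sampler in Optuna, so the study's tuning loop can be approximated as below, with a small scikit-learn MLP standing in for the paper's CNN and an assumed search space.

import optuna
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

def objective(trial):
    # TPE proposes a configuration; we score it with cross-validation
    n_neurons = trial.suggest_int("n_neurons", 16, 128, log=True)
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    alpha = trial.suggest_float("alpha", 1e-6, 1e-2, log=True)
    model = MLPClassifier(hidden_layer_sizes=(n_neurons,),
                          learning_rate_init=lr, alpha=alpha,
                          max_iter=200, random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")  # TPE sampler by default
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)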
32

Qin, Chao, Yunfeng Zhang, Fangxun Bao, Caiming Zhang, Peide Liu, and Peipei Liu. "XGBoost Optimized by Adaptive Particle Swarm Optimization for Credit Scoring". Mathematical Problems in Engineering 2021 (March 23, 2021): 1–18. http://dx.doi.org/10.1155/2021/6655510.

Abstract
Personal credit scoring is a challenging issue. In recent years, research has shown that machine learning achieves satisfactory performance in credit scoring. Because of their capacity for feature combination and feature selection, decision trees can match credit data, which have high dimension and complex correlations. Decision trees nevertheless tend to overfit. eXtreme Gradient Boosting is an advanced gradient-boosted tree method that overcomes this shortcoming by ensembling tree models. The structure of the model is determined by hyperparameters; because manual tuning is time-consuming and laborious, an optimization method is employed for tuning. As particle swarm optimization describes the particle state and its motion law with continuous real numbers, the hyperparameters of eXtreme Gradient Boosting can be sought in a continuous search space. However, classical particle swarm optimization tends to fall into local optima. To solve this problem, this paper proposes an eXtreme Gradient Boosting credit scoring model based on adaptive particle swarm optimization. A swarm split based on the clustering idea, together with two learning strategies, is employed to guide the particles and improve the diversity of the subswarms, preventing the algorithm from falling into a local optimum. In the experiments, several traditional machine learning algorithms and popular ensemble learning classifiers, as well as four hyperparameter optimization methods (grid search, random search, tree-structured Parzen estimator, and particle swarm optimization), are compared. Experiments were performed with four credit datasets and seven KEEL benchmark datasets over five popular evaluation measures: accuracy, error rate (type I error and type II error), Brier score, and F1 score. The results demonstrate that the proposed model outperforms the other models on average, and that adaptive particle swarm optimization performs better than the other hyperparameter optimization strategies.
33

Soper, Daniel S. "Hyperparameter Optimization Using Successive Halving with Greedy Cross Validation". Algorithms 16, no. 1 (December 27, 2022): 17. http://dx.doi.org/10.3390/a16010017.

Abstract
Training and evaluating the performance of many competing Artificial Intelligence (AI)/Machine Learning (ML) models can be very time-consuming and expensive. Furthermore, the costs associated with this hyperparameter optimization task grow exponentially when cross validation is used during the model selection process. Finding ways of quickly identifying high-performing models when conducting hyperparameter optimization with cross validation is hence an important problem in AI/ML research. Among the proposed methods of accelerating hyperparameter optimization, successive halving has emerged as a popular, state-of-the-art early stopping algorithm. Concurrently, recent work on cross validation has yielded a greedy cross validation algorithm that prioritizes the most promising candidate AI/ML models during the early stages of the model selection process. The current paper proposes a greedy successive halving algorithm in which greedy cross validation is integrated into successive halving. An extensive series of experiments is then conducted to evaluate the comparative performance of the proposed greedy successive halving algorithm. The results show that the quality of the AI/ML models selected by the greedy successive halving algorithm is statistically identical to those selected by standard successive halving, but that greedy successive halving is typically more than 3.5 times faster than standard successive halving.
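Standard successive halving (without the paper's greedy cross-validation refinement) fits in a short loop: score all candidates on a small budget, keep the best half, and double the budget. The candidate space, budgets, and dataset below are assumptions.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

# 16 random candidate configurations
configs = [{"n_estimators": int(rng.integers(20, 400)),
            "max_depth": int(rng.integers(2, 20))} for _ in range(16)]

budget = len(X) // 8  # initial per-candidate training budget (in samples)
while len(configs) > 1:
    idx = rng.permutation(len(X))[:budget]
    scores = [cross_val_score(RandomForestClassifier(**c, random_state=0),
                              X[idx], y[idx], cv=3).mean() for c in configs]
    order = np.argsort(scores)[::-1]
    configs = [configs[i] for i in order[: len(configs) // 2]]  # keep top half
    budget = min(budget * 2, len(X))  # double the budget each round

print("winning configuration:", configs[0])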
34

Singhal, Rahul. "Enhancing Health Monitoring using Efficient Hyperparameter Optimization". Journal of Artificial Intelligence and Capsule Networks 4, no. 4 (November 29, 2022): 274–89. http://dx.doi.org/10.36548/jaicn.2022.4.004.

Abstract
Nowadays, healthcare problems among elders have been increasing at an unprecedented rate, and every year, more than a quarter of the elderly people face weakening injuries such as unexpected falls, etc. resulting in broken bones and serious injuries in some cases. Sometimes, these injuries may go unnoticed, and the resulting health consequences can have a considerable negative impact on their quality of life. Constant surveillance by trained professionals is impossible owing to the expense and effort. The detection of physical activities by different sensors and recognition processes is a key topic of research in wireless systems, smartphones and mobile computing. Sensors document and keep track of the patient's movements, to report immediately when any irregularity is found, thus saving a variety of resources. Multiple types of sensors and devices are needed for activity identification of a person's various behaviours that record or sense human actions. This work intends to gather relevant insights from data gathered from sensors and use it to categorize various human actions with machine learning using appropriate feature selection and hyperparameter tuning, and then compare the implemented models based on their performance. Understanding human behaviour is very useful in the healthcare industry, particularly in the areas of rehabilitation, elder care assistance, and cognitive impairment.
APA, Harvard, Vancouver, ISO, and other styles
35

Piccolo, Stephen R., Avery Mecham, Nathan P. Golightly, Jérémie L. Johnson and Dustin B. Miller. "The ability to classify patients based on gene-expression data varies by algorithm and performance metric". PLOS Computational Biology 18, no. 3 (11 March 2022): e1009926. http://dx.doi.org/10.1371/journal.pcbi.1009926.

Full text
Abstract
By classifying patients into subgroups, clinicians can provide more effective care than using a uniform approach for all patients. Such subgroups might include patients with a particular disease subtype, patients with a good (or poor) prognosis, or patients most (or least) likely to respond to a particular therapy. Transcriptomic measurements reflect the downstream effects of genomic and epigenomic variations. However, high-throughput technologies generate thousands of measurements per patient, and complex dependencies exist among genes, so it may be infeasible to classify patients using traditional statistical models. Machine-learning classification algorithms can help with this problem. However, hundreds of classification algorithms exist—and most support diverse hyperparameters—so it is difficult for researchers to know which are optimal for gene-expression biomarkers. We performed a benchmark comparison, applying 52 classification algorithms to 50 gene-expression datasets (143 class variables). We evaluated algorithms that represent diverse machine-learning methodologies and have been implemented in general-purpose, open-source, machine-learning libraries. When available, we combined clinical predictors with gene-expression data. Additionally, we evaluated the effects of performing hyperparameter optimization and feature selection using nested cross validation. Kernel- and ensemble-based algorithms consistently outperformed other types of classification algorithms; however, even the top-performing algorithms performed poorly in some cases. Hyperparameter optimization and feature selection typically improved predictive performance, and univariate feature-selection algorithms typically outperformed more sophisticated methods. Together, our findings illustrate that algorithm performance varies considerably when other factors are held constant and thus that algorithm selection is a critical step in biomarker studies.
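
The nested cross-validation protocol mentioned above can be sketched as follows; the estimator, grid, and synthetic stand-in data are illustrative assumptions. The inner loop tunes hyperparameters (and the number of selected features), while the outer loop estimates generalization performance without leakage:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Many features, few samples: a stand-in for gene-expression data.
X, y = make_classification(n_samples=300, n_features=200, n_informative=10,
                           random_state=0)
pipe = Pipeline([("select", SelectKBest(f_classif)), ("clf", SVC())])
grid = {"select__k": [10, 50, 100], "clf__C": [0.1, 1, 10]}
inner = GridSearchCV(pipe, grid, cv=3)             # hyperparameter selection
outer_scores = cross_val_score(inner, X, y, cv=5)  # unbiased outer estimate
print(outer_scores.mean())
```
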
APA, Harvard, Vancouver, ISO, and other styles
36

Mathew, Steve Koshy and Yu Zhang. "Acoustic-Based Engine Fault Diagnosis Using WPT, PCA and Bayesian Optimization". Applied Sciences 10, no. 19 (1 October 2020): 6890. http://dx.doi.org/10.3390/app10196890.

Full text
Abstract
Engine fault diagnosis aims to assist engineers in undertaking vehicle maintenance in an efficient manner. This paper presents an automatic model and hyperparameter selection scheme for engine combustion fault classification, using acoustic signals captured from the cylinder heads of the engine. Wavelet Packet Transform (WPT) is utilized for time–frequency analysis, and statistical features are extracted from both high- and low-level WPT coefficients. The extracted features are then used to compare three models: (i) a standard classification model; (ii) Bayesian optimization for automatic model and hyperparameter selection; and (iii) Principal Component Analysis (PCA) for feature-space dimensionality reduction combined with Bayesian optimization. The latter two models both demonstrated improved accuracy and other performance metrics compared to the standard model. Moreover, at a similar accuracy level, the PCA model with Bayesian optimization achieved around 20% less total evaluation time and 8–19% less testing time than the second model for all fault conditions, which makes it a promising solution for further development in real-time engine fault diagnosis.
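
The third setup can be approximated in spirit as below, using scikit-optimize's BayesSearchCV over a PCA-plus-SVM pipeline; the library choice, models, search ranges, and synthetic data are assumptions rather than the paper's implementation:

```python
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=64, random_state=0)
pipe = Pipeline([("pca", PCA()), ("clf", SVC())])
# Bayesian optimization proposes the next point from a surrogate model
# instead of exhaustively enumerating a grid.
space = {"pca__n_components": Integer(2, 32),
         "clf__C": Real(1e-2, 1e2, prior="log-uniform"),
         "clf__gamma": Real(1e-4, 1e0, prior="log-uniform")}
opt = BayesSearchCV(pipe, space, n_iter=25, cv=3, random_state=0)
opt.fit(X, y)
print(opt.best_params_, opt.best_score_)
```
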
APA, Harvard, Vancouver, ISO, and other styles
37

Maniezzo, Vittorio and Tingting Zhou. "Learning Individualized Hyperparameter Settings". Algorithms 16, no. 6 (26 May 2023): 267. http://dx.doi.org/10.3390/a16060267.

Full text
Abstract
The performance of optimization algorithms, and consequently of AI/machine learning solutions, is strongly influenced by the setting of their hyperparameters. Over the last decades, a rich literature has developed proposing methods to automatically determine the parameter setting for a problem of interest, aiming at either robust or instance-specific settings. Robust setting optimization is already a mature area of research, while instance-level setting is still in its infancy, with contributions mainly dealing with algorithm selection. The work reported in this paper belongs to the latter category, exploiting the learning and generalization capabilities of artificial neural networks to adapt a general setting generated by state-of-the-art automatic configurators. Our approach differs significantly from analogous ones in the literature, both because we rely on neural systems to suggest the settings, and because we propose a novel learning scheme in which different outputs are proposed for each input, in order to support generalization from examples. The approach was validated on two different algorithms that optimized instances of two different problems. We used an algorithm that is very sensitive to parameter settings, applied to generalized assignment problem instances, and a robust tabu search that is purportedly little sensitive to its settings, applied to quadratic assignment problem instances. The computational results in both cases attest to the effectiveness of the approach, especially when applied to instances that are structurally very different from those previously encountered.
APA, Harvard, Vancouver, ISO, and other styles
38

Sharipova, Saltanat and Akanova Akerke. "PREDICTION SYSTEM FOR THE INFLUENCE OF PHOSPHORUS ON WHEAT YIELD: OPTIMAL HYPERPARAMETER SELECTION". Вестник Алматинского университета энергетики и связи 4, no. 63 (30 December 2023): 87–95. http://dx.doi.org/10.51775/2790-0886_2023_63_4_87.

Full text
Abstract
Modern studies on the application of machine learning in agrotechnology have showcased significant advancements in accurate crop yield forecasting and agricultural production optimization. This study aimed to select optimal hyperparameters for a neural network designed to forecast the impact of phosphorus on the yield of spring wheat. Tasks to achieve this goal included the selection of optimal neural network hyperparameters and the deployment of the prediction model into a forecasting system, ensuring accessibility and user convenience. The input dataset consisted of year, region, climatic indicators (soil-surface temperature, precipitation, humidity), and applied phosphorus; the target variable was the yield of spring wheat. To select the optimal hyperparameters, GridSearchCV was utilized and integrated into the experiment, enabling more precise and impactful forecasting outcomes. Experiments on hyperparameter tuning identified an optimal network configuration (3 layers, 32 neurons, 300 epochs, batch size 32). Training results showed that this network configuration achieved the best performance, demonstrating the minimal mean squared error (MSE). These results introduce a novel scientific approach to training this neural network model for forecasting the yield of cereal crops. The prediction system's interface was developed with Streamlit, allowing the creation of an intuitive and user-friendly interface. The visualization derived from this model empowers users to evaluate crop yield and receive recommendations for optimal phosphorus application to enhance spring wheat yield. These results highlight the practical significance of this prediction system, based on a neural network with optimal hyperparameters, which is ready for practical implementation by system developers.
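
A hedged sketch of this kind of GridSearchCV sweep over depth, width, and training length is shown below; scikit-learn's MLPRegressor and a synthetic regression stand in for the paper's network and agronomic data, and the candidate values are assumptions:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)
grid = {"hidden_layer_sizes": [(32,), (32, 32), (32, 32, 32)],  # 1-3 layers
        "batch_size": [16, 32],
        "max_iter": [100, 300]}  # roughly, the number of training epochs
search = GridSearchCV(MLPRegressor(random_state=0), grid, cv=3,
                      scoring="neg_mean_squared_error")  # selects minimal MSE
search.fit(X, y)
print(search.best_params_)
```
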
APA, Harvard, Vancouver, ISO, and other styles
39

Lindawati, Lindawati, Mohammad Fadhli and Antoniy Sandi Wardana. "Optimasi Gaussian Naïve Bayes dengan Hyperparameter Tuning dan Univariate Feature Selection dalam Prediksi Cuaca" [Optimization of Gaussian Naïve Bayes with Hyperparameter Tuning and Univariate Feature Selection for Weather Prediction]. Edumatic: Jurnal Pendidikan Informatika 7, no. 2 (19 December 2023): 237–46. http://dx.doi.org/10.29408/edumatic.v7i2.21179.

Full text
Abstract
The importance of conducting weather prediction research stems from the significant influence of weather changes on daily life. The purpose of this study is to apply an optimal machine-learning classification method for weather prediction. The method used is the Gaussian Naïve Bayes model, optimized using Univariate Feature Selection (ANOVA F-test) and Hyperparameter Tuning with GridSearchCV. The data consist of 6454 daily weather records from Palembang City. Five tests were performed on the Gaussian Naïve Bayes model before and after optimization. The results show that optimizing the model successfully improves its performance in weather prediction. The highest accuracy after optimization reaches 98.33% on 644 test records, an improvement from the pre-optimization accuracy of only 96.95%. Before optimization, the predictions for weather conditions such as sunny, cloudy/rainy, light rain, and heavy rain matched the actual data; however, there were 20 prediction errors on data that should represent very heavy rain conditions. After optimization, the number of prediction errors on the very heavy rain data was reduced to seven. The optimization approach used in this research helps find the most suitable parameter combinations and eliminates irrelevant features, allowing the model to consider only significant features in weather prediction.
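
The described optimization pipeline (ANOVA F-test feature selection plus GridSearchCV over Gaussian Naive Bayes) can be sketched as follows, with synthetic data standing in for the Palembang weather records and the candidate values chosen for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline

# Five classes, standing in for the five weather conditions in the study.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
pipe = Pipeline([("select", SelectKBest(f_classif)),  # ANOVA F-test selection
                 ("gnb", GaussianNB())])
grid = {"select__k": [4, 6, 8, 12],
        "gnb__var_smoothing": np.logspace(-12, -3, 10)}
search = GridSearchCV(pipe, grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```
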
APA, Harvard, Vancouver, ISO, and other styles
40

Zeng, Shaoxiang, Mengfei Yu, Shanmin Chen and Mengfen Shen. "An Intelligent Multi-Ring Shield Movement Performance Prediction and Control Method". Applied Sciences 14, no. 10 (16 May 2024): 4223. http://dx.doi.org/10.3390/app14104223.

Full text
Abstract
Accurate control of the shield attitude can ensure precise tunnel excavation and minimize impact on the surrounding areas. However, neglecting the total thrust force may cause excessive disturbance to the strata, leading to collapse. This study proposes a Bayesian optimization-based temporal attention long short-term memory model (BOTA-LSTM) for multi-objective prediction and control of shield tunneling, including shield attitude and total thrust. The model can achieve multi-ring predictions of shield attitude and total thrust by allocating larger weights to significant moments through a temporal attention mechanism. The hyperparameters of the proposed model are automatically selected through Bayesian hyperparameter optimization, which can effectively address the issue of complex parameter selection and optimization difficulties in multi-ring, multi-objective tasks. Based on the predictive results of the optimal model, an intelligent control method that considers both shield attitude and total thrust is proposed. Compared to a method that solely predicts and corrects for the next ring, the proposed multi-ring correction method provides the opportunity for further adjustments, if the initial correction falls short of expectations. A shield tunneling project in Hangzhou is used to demonstrate the effectiveness of the proposed model. The results show that the BOTA-LSTM model outperforms the models without the integration of a temporal attention mechanism and Bayesian hyperparameter optimization. The proposed multi-ring intelligent correction method can adjust the shield attitude and total thrust to a reasonable range, providing references for practical engineering applications.
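
Bayesian hyperparameter selection of this kind might be sketched with an optimizer such as Optuna (a library assumption; the paper does not name its tooling), with a hypothetical training helper standing in for the temporal-attention LSTM:

```python
import optuna

def train_lstm_and_score(hidden_size, num_layers, lr):
    # Hypothetical stub: in a real setup this would train the LSTM and
    # return its validation error. A placeholder loss keeps this runnable.
    return (hidden_size - 96) ** 2 / 1e4 + (lr - 1e-3) ** 2 + 0.01 * num_layers

def objective(trial):
    hidden_size = trial.suggest_int("hidden_size", 32, 256)
    num_layers = trial.suggest_int("num_layers", 1, 3)
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    return train_lstm_and_score(hidden_size, num_layers, lr)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```
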
APA, Harvard, Vancouver, ISO, and other styles
41

Newcomer, Max W. and Randall J. Hunt. "NWTOPT – A hyperparameter optimization approach for selection of environmental model solver settings". Environmental Modelling & Software 147 (January 2022): 105250. http://dx.doi.org/10.1016/j.envsoft.2021.105250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Beck, Daniel, Trevor Cohn, Christian Hardmeier and Lucia Specia. "Learning Structural Kernels for Natural Language Processing". Transactions of the Association for Computational Linguistics 3 (December 2015): 461–73. http://dx.doi.org/10.1162/tacl_a_00151.

Full text
Abstract
Structural kernels are a flexible learning paradigm that has been widely used in Natural Language Processing. However, the problem of model selection in kernel-based methods is usually overlooked. Previous approaches mostly rely on setting default values for kernel hyperparameters or using grid search, which is slow and coarse-grained. In contrast, Bayesian methods allow efficient model selection by maximizing the evidence on the training data through gradient-based methods. In this paper we show how to perform this in the context of structural kernels by using Gaussian Processes. Experimental results on tree kernels show that this procedure results in better prediction performance compared to hyperparameter optimization via grid search. The framework proposed in this paper can be adapted to other structures besides trees, e.g., strings and graphs, thereby extending the utility of kernel-based methods.
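
In miniature, evidence maximization looks like the following: scikit-learn's GaussianProcessRegressor fits kernel hyperparameters by maximizing the log marginal likelihood during fit. The paper applies the same principle to structural (tree) kernels, which this sketch does not cover; the RBF kernel and toy data here are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

# The RBF length scale is tuned by gradient-based evidence maximization,
# not by a coarse grid search over candidate values.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
gp.fit(X, y)
print(gp.kernel_, gp.log_marginal_likelihood_value_)
```
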
APA, Harvard, Vancouver, ISO, and other styles
43

Agasiev, Taleh and Anatoly Karpenko. "Exploratory Landscape Validation for Bayesian Optimization Algorithms". Mathematics 12, no. 3 (28 January 2024): 426. http://dx.doi.org/10.3390/math12030426.

Full text
Abstract
Bayesian optimization algorithms are widely used for solving problems with a high computational complexity in terms of objective function evaluation. The efficiency of Bayesian optimization is strongly dependent on the quality of the surrogate models of an objective function, which are built and refined at each iteration. The quality of surrogate models, and hence the performance of an optimization algorithm, can be greatly improved by selecting the appropriate hyperparameter values of the approximation algorithm. The common approach to finding good hyperparameter values for each iteration of Bayesian optimization is to build surrogate models with different hyperparameter values and choose the best one based on some estimation of the approximation error, for example, a cross-validation score. Building multiple surrogate models for each iteration of Bayesian optimization is computationally demanding and significantly increases the time required to solve an optimization problem. This paper suggests a new approach, called exploratory landscape validation, to find good hyperparameter values with less computational effort. Exploratory landscape validation metrics can be used to predict the best hyperparameter values, which can improve both the quality of the solutions found by Bayesian optimization and the time needed to solve problems.
APA, Harvard, Vancouver, ISO, and other styles
44

AlGhamdi, Rayed. "Design of Network Intrusion Detection System Using Lion Optimization-Based Feature Selection with Deep Learning Model". Mathematics 11, no. 22 (10 November 2023): 4607. http://dx.doi.org/10.3390/math11224607.

Full text
Abstract
In the domain of network security, intrusion detection systems (IDSs) play a vital role in data security. While consumers' use of the internet increases on a daily basis, the importance of security and the privacy preservation of system alerts against malicious actions also increases. An IDS is a widely deployed system that protects computer networks from attacks. For the identification of unknown attacks and anomalies, several Machine Learning (ML) approaches such as Neural Networks (NNs) have been explored. However, in real-world applications, the classification performance of these approaches fluctuates across distinct databases, the major reason being the presence of ineffective or redundant features. The current study therefore proposes the Network Intrusion Detection System using a Lion Optimization Feature Selection with a Deep Learning (NIDS-LOFSDL) approach to remedy this issue. The NIDS-LOFSDL technique combines FS with a hyperparameter-tuned DL model for the recognition of intrusions. For FS, the NIDS-LOFSDL method uses the LOFS technique, which helps improve the classification results. Furthermore, an attention-based bi-directional long short-term memory (ABiLSTM) network is applied for intrusion detection. To enhance the intrusion detection performance of the ABiLSTM algorithm, the gorilla troops optimizer (GTO) is deployed for hyperparameter tuning; since trial-and-error manual hyperparameter tuning is a tedious process, the GTO-based tuning process demonstrates the novelty of the work. To validate the enhanced intrusion detection capability of the NIDS-LOFSDL system, a comprehensive range of experiments was performed. The simulation values confirm the promising results of the NIDS-LOFSDL system compared to existing DL methodologies, with maximum accuracies of 96.88% and 96.92% on the UNSW-NB15 and AWID datasets, respectively.
APA, Harvard, Vancouver, ISO, and other styles
45

Kishimoto, Akihiro, Djallel Bouneffouf, Radu Marinescu, Parikshit Ram, Ambrish Rawat, Martin Wistuba, Paulito Palmes and Adi Botea. "Bandit Limited Discrepancy Search and Application to Machine Learning Pipeline Optimization". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (28 June 2022): 10228–37. http://dx.doi.org/10.1609/aaai.v36i9.21263.

Full text
Abstract
Optimizing a machine learning (ML) pipeline has been an important topic of AI and ML. Despite recent progress, pipeline optimization remains a challenging problem, due to potentially many combinations to consider as well as slow training and validation. We present the BLDS algorithm for optimized algorithm selection (ML operations) in a fixed ML pipeline structure. BLDS performs multi-fidelity optimization for selecting ML algorithms trained with smaller computational overhead, while controlling its pipeline search based on multi-armed bandit and limited discrepancy search. Our experiments on well-known classification benchmarks show that BLDS is superior to competing algorithms. We also combine BLDS with hyperparameter optimization, empirically showing the advantage of BLDS.
APA, Harvard, Vancouver, ISO, and other styles
46

Fuentes-Ramos, Mirta, Eddy Sánchez-DelaCruz, Iván-Vladimir Meza-Ruiz and Cecilia-Irene Loeza-Mejía. "Neurodegenerative diseases categorization by applying the automatic model selection and hyperparameter optimization method". Journal of Intelligent & Fuzzy Systems 42, no. 5 (31 March 2022): 4759–67. http://dx.doi.org/10.3233/jifs-219263.

Full text
Abstract
Neurodegenerative diseases affect a large part of the population in the world, and also in Mexico, gradually deteriorating patients' quality of life. Therefore, it is important to diagnose them with a high degree of reliability. To address this, various computational methods have been applied to the analysis of human gait biomarkers. In this study, we propose employing the automatic model selection and hyperparameter optimization method, which has not been applied to this problem before. Our results showed highly competitive percentages of correctly classified instances when discriminating binary and multiclass sets of neurodegenerative diseases: Parkinson's disease, Huntington's disease, and spinocerebellar ataxias.
APA, Harvard, Vancouver, ISO, and other styles
47

Reddy, Karna Vishnu Vardhana, Irraivan Elamvazuthi, Azrina Abd Aziz, Sivajothi Paramasivam, Hui Na Chua and Satyamurthy Pranavanand. "An Efficient Prediction System for Coronary Heart Disease Risk Using Selected Principal Components and Hyperparameter Optimization". Applied Sciences 13, no. 1 (22 December 2022): 118. http://dx.doi.org/10.3390/app13010118.

Full text
Abstract
Medical science-related studies have reinforced that coronary heart disease, which affects the heart and blood vessels, has been the most significant cause of health loss and death globally. Recently, data mining and machine learning have been used to detect diseases based on the unique characteristics of a person. However, these techniques have often posed challenges owing to the complexity of understanding the objective of the datasets, the existence of too many factors to analyze, and a lack of performance accuracy. This research work is a two-fold effort: first, feature extraction and selection. This entails extracting the principal components and then applying the Correlation-based Feature Selection (CFS) method to select the finest principal components of the combined (Cleveland and Statlog) heart dataset. Second, by applying the datasets to three single and three ensemble classifiers, the best hyperparameters that yield the pre-eminent predictive outcomes were investigated. The experimental results reveal that hyperparameter optimization improved the accuracy of all the models. In the comparative studies, the proposed work outperformed related works with an accuracy of 97.91% and an AUC of 0.996, by employing six optimal principal components selected with the CFS method and optimizing the parameters of the Rotation Forest ensemble classifier.
APA, Harvard, Vancouver, ISO, and other styles
48

El-Hasnony, Ibrahim M., Omar M. Elzeki, Ali Alshehri and Hanaa Salem. "Multi-Label Active Learning-Based Machine Learning Model for Heart Disease Prediction". Sensors 22, no. 3 (4 February 2022): 1184. http://dx.doi.org/10.3390/s22031184.

Full text
Abstract
The rapid growth and adaptation of medical information to identify significant health trends and help with timely preventive care have been recent hallmarks of the modern healthcare data system. Heart disease is the deadliest condition in the developed world. Cardiovascular disease and its complications, including dementia, can be averted with early detection. Further research in this area is needed to prevent strokes and heart attacks. An optimal machine learning model can help achieve this goal with a wealth of healthcare data on heart disease. Heart disease can be predicted and diagnosed using machine-learning-based systems. Active learning (AL) methods improve classification quality by incorporating user-expert feedback with sparsely labelled data. In this paper, five selection strategies for multi-label active learning (MMC, Random, Adaptive, QUIRE, and AUDI) were applied to reduce labelling costs by iteratively selecting the most relevant data to query their labels. The selection methods were paired with a label-ranking classifier whose hyperparameters were optimized by a grid search to implement predictive modelling in each scenario for the heart disease dataset. The experimental evaluation includes accuracy and F-score with and without hyperparameter optimization. Results show that, in terms of accuracy, the optimized label-ranking model generalizes beyond the existing data better with some selection methods than with others; in terms of F-score, the advantage of the selection methods was most evident under the optimized settings.
APA, Harvard, Vancouver, ISO, and other styles
49

Yang, Eun-Suk, Jong Dae Kim, Chan-Young Park, Hye-Jeong Song and Yu-Seop Kim. "Hyperparameter tuning for hidden unit conditional random fields". Engineering Computations 34, no. 6 (7 August 2017): 2054–62. http://dx.doi.org/10.1108/ec-11-2015-0350.

Full text
Abstract
Purpose: This paper examines the problem of a nonlinear model, specifically the hidden unit conditional random fields (HUCRFs) model, which has binary stochastic hidden units between the data and the labels, exhibiting unstable performance depending on the hyperparameters under consideration. Design/methodology/approach: There are three main search methods for hyperparameter tuning: manual search, grid search, and random search. This study shows that the unstable performance of HUCRFs depends on the hyperparameter values used, and evaluates performance based on tuning that draws on grid and random searches. All experiments used n-gram features, specifically unigrams, bigrams, and trigrams. Findings: Naturally, selecting a list of hyperparameter values based on a researcher's experience, in order to find a set with the best performance, is better than drawing values from a probability distribution; realistically, however, it is impossible to evaluate every combination of parameters. The present research indicates that the random search method performs better than the grid search method while requiring shorter computation time and a reduced cost. Originality/value: This paper examines the issues affecting the performance of HUCRF, a nonlinear model whose performance varies depending on the hyperparameters but which performs better than CRF.
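
The grid-versus-random comparison can be reproduced generically as below; a logistic regression stands in for the HUCRF model, which has no scikit-learn implementation, and the search ranges are assumptions:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=500, random_state=0)
# Grid search enumerates a fixed list; random search draws the same number
# of candidates from a continuous distribution over the range.
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    {"C": [0.001, 0.01, 0.1, 1, 10, 100]}, cv=5)
rand = RandomizedSearchCV(LogisticRegression(max_iter=1000),
                          {"C": loguniform(1e-3, 1e2)}, n_iter=6, cv=5,
                          random_state=0)
grid.fit(X, y)
rand.fit(X, y)
print(grid.best_score_, rand.best_score_)  # typically comparable quality
```
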
APA, Harvard, Vancouver, ISO, and other styles
50

Soper, Daniel S. "Greed Is Good: Rapid Hyperparameter Optimization and Model Selection Using Greedy k-Fold Cross Validation". Electronics 10, no. 16 (16 August 2021): 1973. http://dx.doi.org/10.3390/electronics10161973.

Full text
Abstract
Selecting a final machine learning (ML) model typically occurs after a process of hyperparameter optimization in which many candidate models with varying structural properties and algorithmic settings are evaluated and compared. Evaluating each candidate model commonly relies on k-fold cross validation, wherein the data are randomly subdivided into k folds, with each fold being iteratively used as a validation set for a model that has been trained using the remaining folds. While many research studies have sought to accelerate ML model selection by applying metaheuristic and other search methods to the hyperparameter space, no consideration has been given to the k-fold cross validation process itself as a means of rapidly identifying the best-performing model. The current study rectifies this oversight by introducing a greedy k-fold cross validation method and demonstrating that greedy k-fold cross validation can vastly reduce the average time required to identify the best-performing model when given a fixed computational budget and a set of candidate models. This improved search time is shown to hold across a variety of ML algorithms and real-world datasets. For scenarios without a computational budget, this paper also introduces an early stopping algorithm based on the greedy cross validation method. The greedy early stopping method is shown to outperform a competing, state-of-the-art early stopping method both in terms of search time and the quality of the ML models selected by the algorithm. Since hyperparameter optimization is among the most time-consuming, computationally intensive, and monetarily expensive tasks in the broader process of developing ML-based solutions, the ability to rapidly identify optimal machine learning models using greedy cross validation has obvious and substantial benefits to organizations and researchers alike.
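
A compact sketch of the greedy idea follows: under a fixed budget of fold evaluations, the next fold always goes to the candidate with the best running mean instead of finishing all k folds per candidate. The candidate models and budget here are illustrative assumptions, not the paper's experiment:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, random_state=0)
k = 10
folds = list(StratifiedKFold(n_splits=k, shuffle=True,
                             random_state=0).split(X, y))
cs = [0.01, 0.1, 1.0, 10.0, 100.0]       # candidate models (SVM C values)
scores = {c: [] for c in cs}             # per-candidate fold scores so far

def run_next_fold(c):
    train, test = folds[len(scores[c])]  # each candidate walks folds in order
    model = SVC(C=c).fit(X[train], y[train])
    scores[c].append(model.score(X[test], y[test]))

for c in cs:                             # seed every candidate with one fold
    run_next_fold(c)
for _ in range(15):                      # fixed budget of extra evaluations
    open_cs = [c for c in cs if len(scores[c]) < k]
    leader = max(open_cs, key=lambda c: float(np.mean(scores[c])))
    run_next_fold(leader)                # greedy: spend folds on the leader

best = max(cs, key=lambda c: float(np.mean(scores[c])))
print("best C:", best, "folds used:", {c: len(s) for c, s in scores.items()})
```

Per the abstract, Soper's method additionally builds an early-stopping rule on top of this allocation policy for settings without a fixed budget.
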
APA, Harvard, Vancouver, ISO, and other styles