Journal articles on the topic "FEATURE OPTIMIZATION METHODS"

To see the other types of publications on this topic, follow the link: FEATURE OPTIMIZATION METHODS.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "FEATURE OPTIMIZATION METHODS".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, whenever these details are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Hamad, Zana O. "REVIEW OF FEATURE SELECTION METHODS USING OPTIMIZATION ALGORITHM." Polytechnic Journal 12, no. 2 (March 15, 2023): 203–14. http://dx.doi.org/10.25156/ptj.v12n2y2022.pp203-214.

Abstract:
Much work has been done to reduce complexity in terms of time and memory space. Feature selection is one such strategy and can be defined as the process of selecting the most important features from the feature space: the most useful features are kept, and the less useful ones are eliminated. In fault classification and diagnosis, feature selection plays an important role in reducing dimensionality and can sometimes lead to a higher classification rate. This paper presents a comprehensive review of the feature selection process and how it can be carried out. Its primary goal is to examine the strategies that have been used for the selection process, including filter, wrapper, meta-heuristic, and embedded approaches. Particular attention is given to nature-inspired algorithms that have been used for feature selection, such as the particle swarm, grey wolf, bat, genetic, whale, and ant colony algorithms. The overall results confirm that feature selection is important for reducing the complexity of any model-based machine learning algorithm and may sometimes improve the performance of the resulting model.
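As a minimal illustration of the filter/wrapper distinction this review discusses, the sketch below compares the two routes with scikit-learn. The dataset, the mutual-information filter, and the k-NN wrapper are illustrative assumptions, not taken from the paper.

```python
# Minimal filter vs. wrapper feature-selection sketch (illustrative, not the paper's method).
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif, SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Filter: score each feature independently of any classifier, keep the top k.
filter_idx = SelectKBest(mutual_info_classif, k=10).fit(X, y).get_support(indices=True)

# Wrapper: search feature subsets by repeatedly training the target classifier.
knn = KNeighborsClassifier(n_neighbors=5)
wrapper = SequentialFeatureSelector(knn, n_features_to_select=10, direction="forward").fit(X, y)
wrapper_idx = wrapper.get_support(indices=True)

for name, idx in [("filter", filter_idx), ("wrapper", wrapper_idx)]:
    acc = cross_val_score(knn, X[:, idx], y, cv=5).mean()
    print(f"{name}: features={list(idx)}, 5-fold accuracy={acc:.3f}")
```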
2

Zhang, Yang, Emil Tochev, Svetan Ratchev, and Carl German. "Production process optimization using feature selection methods." Procedia CIRP 88 (2020): 554–59. http://dx.doi.org/10.1016/j.procir.2020.05.096.

3

Goodarzi, Mohammad, Bieke Dejaegher, and Yvan Vander Heyden. "Feature Selection Methods in QSAR Studies." Journal of AOAC INTERNATIONAL 95, no. 3 (May 1, 2012): 636–51. http://dx.doi.org/10.5740/jaoacint.sge_goodarzi.

Abstract:
Abstract A quantitative structure-activity relationship (QSAR) relates quantitative chemical structure attributes (molecular descriptors) to a biological activity. QSAR studies have now become attractive in drug discovery and development because their application can save substantial time and human resources. Several parameters are important in the prediction ability of a QSAR model. On the one hand, different statistical methods may be applied to check the linear or nonlinear behavior of a data set. On the other hand, feature selection techniques are applied to decrease the model complexity, to decrease the overfitting/overtraining risk, and to select the most important descriptors from the often more than 1000 calculated. The selected descriptors are then linked to a biological activity of the corresponding compound by means of a mathematical model. Different modeling techniques can be applied, some of which explicitly require a feature selection. A QSAR model can be useful in the design of new compounds with improved potency in the class under study. Only molecules with a predicted interesting activity will be synthesized. In the feature selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus attention, while ignoring the rest. Up to now, many feature selection techniques, such as genetic algorithms, forward selection, backward elimination, stepwise regression, and simulated annealing have been used extensively. Swarm intelligence optimizations, such as ant colony optimization and partial swarm optimization, which are feature selection techniques usually simulated based on animal and insect life behavior to find the shortest path between a food source and their nests, recently are also involved in QSAR studies. This review paper provides an overview of different feature selection techniques applied in QSAR modeling.
4

Jameel, Noor, and Hasanen S. Abdullah. "Intelligent Feature Selection Methods: A Survey." Engineering and Technology Journal 39, no. 1B (March 25, 2021): 175–83. http://dx.doi.org/10.30684/etj.v39i1b.1623.

Abstract:
Feature selection is central to intelligent algorithms and machine learning: it selects the subset of data that helps reach an optimal solution. Feature selection extracts the relevant parts of the data and discards the irrelevant parts, speeding up processing and reducing the dimensionality of the dataset. Traditional methods were used in the past, but they are slow and limited in accuracy. More recently, intelligent methods have been preferred, such as the genetic algorithm and swarm optimization methods (ant colony, bee colony, cuckoo search, particle swarm optimization, fish algorithm, cat algorithm, etc.), because they are faster, more accurate, and easier to use. This survey covers several swarm intelligence methods for feature selection: ant colony, bee colony, cuckoo search, particle swarm optimization, and the genetic algorithm (GA). For each algorithm, related work is reviewed in terms of the underlying idea, the dataset used, and the reported accuracy; the results are then compared in a table, the best-performing algorithm is identified and discussed, and the advantages and disadvantages of each swarm intelligence algorithm for feature selection are summarized.
5

Wu, Shaohua, Yong Hu, Wei Wang, Xinyong Feng, and Wanneng Shu. "Application of Global Optimization Methods for Feature Selection and Machine Learning." Mathematical Problems in Engineering 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/241517.

Abstract:
The feature selection process constitutes a commonly encountered problem of global combinatorial optimization. The process reduces the number of features by removing irrelevant and redundant data. This paper proposed a novel immune clonal genetic algorithm based on immune clonal algorithm designed to solve the feature selection problem. The proposed algorithm has more exploration and exploitation abilities due to the clonal selection theory, and each antibody in the search space specifies a subset of the possible features. Experimental results show that the proposed algorithm simplifies the feature selection process effectively and obtains higher classification accuracy than other feature selection algorithms.
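For readers unfamiliar with how evolutionary wrappers encode feature subsets as bit strings, here is a hedged sketch of a plain binary genetic algorithm; it is not the immune clonal variant proposed in the paper, and the population size, rates, dataset, and k-NN evaluator are illustrative assumptions.

```python
# Hedged sketch: a plain binary GA wrapper for feature selection, where each bit
# marks whether a feature is kept. NOT the paper's immune clonal algorithm.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_wine(return_X_y=True)
n_feat, pop_size, n_gen = X.shape[1], 20, 30

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(pop_size, n_feat))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-(pop_size // 2):]]           # truncation selection
    cut = rng.integers(1, n_feat, size=pop_size // 2)
    kids = np.array([np.r_[parents[i % len(parents)][:c], parents[(i + 1) % len(parents)][c:]]
                     for i, c in enumerate(cut)])                  # one-point crossover
    flip = rng.random(kids.shape) < 0.05                           # bit-flip mutation
    kids = np.where(flip, 1 - kids, kids)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "fitness:", fitness(best))
```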
6

Wein, Fabian, Peter D. Dunning, and Julián A. Norato. "A review on feature-mapping methods for structural optimization." Structural and Multidisciplinary Optimization 62, no. 4 (August 3, 2020): 1597–638. http://dx.doi.org/10.1007/s00158-020-02649-6.

7

Larabi-Marie-Sainte, Souad. "Outlier Detection Based Feature Selection Exploiting Bio-Inspired Optimization Algorithms." Applied Sciences 11, no. 15 (July 23, 2021): 6769. http://dx.doi.org/10.3390/app11156769.

Abstract:
The curse of dimensionality problem occurs when the data are high-dimensional. It affects the learning process and reduces the accuracy. Feature selection is one of the dimensionality reduction approaches that mainly contribute to solving the curse of the dimensionality problem by selecting the relevant features. Irrelevant features are the dependent and redundant features that cause noise in the data and then reduce its quality. The main well-known feature-selection methods are wrapper and filter techniques. However, wrapper feature selection techniques are computationally expensive, whereas filter feature selection methods suffer from multicollinearity. In this research study, four new feature selection methods based on outlier detection using the Projection Pursuit method are proposed. Outlier detection involves identifying abnormal data (irrelevant features of the transpose matrix obtained from the original dataset matrix). The concept of outlier detection using projection pursuit has proved its efficiency in many applications but has not yet been used as a feature selection approach. To the author’s knowledge, this study is the first of its kind. Experimental results on nineteen real datasets using three classifiers (k-NN, SVM, and Random Forest) indicated that the suggested methods enhanced the classification accuracy rate by an average of 6.64% when compared to the classification accuracy without applying feature selection. It also outperformed the state-of-the-art methods on most of the used datasets with an improvement rate ranging between 0.76% and 30.64%. Statistical analysis showed that the results of the proposed methods are statistically significant.
8

Boubezoul, Abderrahmane, and Sébastien Paris. "Application of global optimization methods to model and feature selection." Pattern Recognition 45, no. 10 (October 2012): 3676–86. http://dx.doi.org/10.1016/j.patcog.2012.04.015.

9

Uzun, Mehmet Zahit, Yuksel Celik, and Erdal Basaran. "Micro-Expression Recognition by Using CNN Features with PSO Algorithm and SVM Methods." Traitement du Signal 39, no. 5 (November 30, 2022): 1685–93. http://dx.doi.org/10.18280/ts.390526.

Abstract:
This study proposes a framework for defining ME expressions, in which preprocessing, feature extraction with deep learning, feature selection with an optimization algorithm, and classification methods are used. CASME-II, SMIC-HS, and SAMM, which are among the most used ME datasets in the literature, were combined to overcome the under-sampling problem caused by the datasets. In the preprocessing stage, onset, and apex frames in each video clip in datasets were detected, and optical flow images were obtained from the frames using the FarneBack method. The features of these obtained images were extracted by applying AlexNet, VGG16, MobilenetV2, EfficientNet, Squeezenet from CNN models. Then, combining the image features obtained from all CNN models. And then, the ones which are the most distinctive features were selected with the Particle Swarm Optimization (PSO) algorithm. The new feature set obtained was divided into classes positive, negative, and surprise using SVM. As a result, its success has been demonstrated with an accuracy rate of 0.8784 obtained in our proposed ME framework.
10

Liu, Yong Xia, Ru Shu Peng, Ai Hong Hou, and De Wen Tang. "Methods of Cam Structure Optimization Based on Behavioral Modeling." Advanced Materials Research 139-141 (October 2010): 1245–48. http://dx.doi.org/10.4028/www.scientific.net/amr.139-141.1245.

Abstract:
By defining analysis features and using the analysis results to drive the parametric model, behavioral modeling sets the model's feature parameters automatically to meet design objectives and makes the modeling technology intelligent; in other words, the result can be optimized automatically. This paper investigates the application of PRO/E parametric modeling technologies to the design of the cam profile curve. To optimize the dynamic balance of the cam, methods for defining analysis features and for sensitivity/optimization analysis are proposed using PRO/E behavioral modeling. These technologies can enhance the efficiency and quality of cam design and provide a practical method for 3D modeling of this kind of cam. Behavioral modeling, the fifth generation of CAD modeling technology, provides a flexible and intelligent way to solve practical engineering problems.
11

Yang, Jian, Zixin Tang, Zhenkai Guan, Wenjia Hua, Mingyu Wei, Chunjie Wang, and Chenglong Gu. "Automatic Feature Engineering-Based Optimization Method for Car Loan Fraud Detection." Discrete Dynamics in Nature and Society 2021 (December 7, 2021): 1–10. http://dx.doi.org/10.1155/2021/6077540.

Abstract:
Fraud detection is one of the core issues of loan risk control, which aims to detect fraudulent loan applications and safeguard the property of both individuals and organizations. Because of its close relevance to the security of financial operations, fraud detection has received widespread attention from industry. In recent years, with the rapid development of artificial intelligence technology, an automatic feature engineering method that can help to generate features has been applied to fraud detection with good results. However, in car loan fraud detection, the existing methods do not satisfy the requirements because of overreliance on behavioral features. To tackle this issue, this paper proposed an optimized deep feature synthesis (DFS) method in the automatic feature engineering scheme to improve the car loan fraud detection. Problems like feature dimension explosion, low interpretability, long training time, and low detection accuracy are solved by compressing abstract and uninterpretable features to limit the depth of DFS algorithm. Experiments are developed based on actual car loan credit database to evaluate the performance of the proposed scheme. Compared with traditional automatic feature engineering methods, the number of features and training time are reduced by 92.5% and 54.3%, respectively, whereas accuracy is improved by 23%. The experiment demonstrates that our scheme effectively improved the existing automatic feature engineering car loan fraud detection methods.
12

Lee, Jaesung, Jaegyun Park, Hae-Cheon Kim, and Dae-Won Kim. "Competitive Particle Swarm Optimization for Multi-Category Text Feature Selection." Entropy 21, no. 6 (June 18, 2019): 602. http://dx.doi.org/10.3390/e21060602.

Abstract:
Multi-label feature selection is an important task for text categorization. This is because it enables learning algorithms to focus on essential features that foreshadow relevant categories, thereby improving the accuracy of text categorization. Recent studies have considered the hybridization of evolutionary feature wrappers and filters to enhance the evolutionary search process. However, the relative effectiveness of feature subset searches of evolutionary and feature filter operators has not been considered. This results in degenerated final feature subsets. In this paper, we propose a novel hybridization approach based on competition between the operators. This enables the proposed algorithm to apply each operator selectively and modify the feature subset according to its relative effectiveness, unlike conventional methods. The experimental results on 16 text datasets verify that the proposed method is superior to conventional methods.
13

Singh, Chandrabhan, Mohit Gangwar, and Upendra Kumar. "Analysis of Meta-Heuristic Feature Selection Techniques on classifier performance with specific reference to psychiatric disorder." International Journal of Experimental Research and Review 31, Spl Volume (July 30, 2023): 51–60. http://dx.doi.org/10.52756/10.52756/ijerr.2023.v31spl.006.

Abstract:
Optimization plays an important role in solving complex computational problems. Meta-Heuristic approaches work as an optimization technique. In any search space, these approaches play an excellent role in local as well as global search. Nature-inspired approaches, especially population-based ones, play a role in solving the problem. In the past decade, many nature-inspired population-based methods have been explored by researchers to facilitate computational intelligence. These methods are based on insects, birds, animals, sea creatures, etc. This research focuses on the use of Meta-Heuristic methods for the feature selection. A better optimization approach must be introduced to reduce the computational load, depending on the problem size and complexity. The correct feature set must be chosen for the diagnostic system to operate effectively. Here, population-based Meta-Heuristic optimization strategies have been used to pick the features. By choosing the best feature set, the Butterfly Optimization Algorithm (BOA) with the Enhanced Lion Optimization Algorithm (ELOA) approach would reduce classifier overhead. The results clearly demonstrate that the combined strategy has higher performance outcomes when compared to other optimization strategies.
14

Zheng, Fanchen. "Facial Expression Recognition Based on LDA Feature Space Optimization." Computational Intelligence and Neuroscience 2022 (August 29, 2022): 1–11. http://dx.doi.org/10.1155/2022/9521329.

Abstract:
With the development of artificial intelligence, facial expression recognition has become an important part of the current research due to its wide application potential. However, the qualities of the face features will directly affect the accuracy of the model. Based on the KDEF face public dataset, the author conducts a comprehensive analysis of the effect of linear discriminant analysis (LDA) dimensionality reduction on facial expression recognition. First, the features of face images are extracted respectively by manual method and deep learning method, which constitute 35-dimensional artificial features, 128-dimensional deep features, and the hybrid features. Second, LDA is used to reduce the dimensionality of the three feature sets. Then, machine learning models, such as Naive Bayes and decision tree, are used to analyze the results of facial expression recognition before and after LDA feature dimensionality reduction. Finally, the effects of several classical feature reduction methods on the effectiveness of facial expression recognition are evaluated. The results show that after the LDA feature dimensionality reduction being used, the facial expression recognition based on these three feature sets is improved to a certain extent, which indicates the good effect of LDA in reducing feature redundancy.
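A minimal sketch of the LDA-then-classify comparison described above, assuming scikit-learn and synthetic data in place of the KDEF features; the feature counts and classifiers below are illustrative only.

```python
# Hedged sketch: project features with LDA, then compare classifiers before/after
# the reduction. Synthetic data stands in for the KDEF face features used in the paper.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=128, n_informative=30,
                           n_classes=7, n_clusters_per_class=1, random_state=0)

# LDA keeps at most (n_classes - 1) discriminant directions, here 6.
X_lda = LinearDiscriminantAnalysis(n_components=6).fit_transform(X, y)

for name, clf in [("Naive Bayes", GaussianNB()), ("Decision tree", DecisionTreeClassifier())]:
    before = cross_val_score(clf, X, y, cv=5).mean()
    after = cross_val_score(clf, X_lda, y, cv=5).mean()
    print(f"{name}: raw={before:.3f}  LDA-reduced={after:.3f}")
```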
15

Chen, Guoming. "On Optimal Feature Selection Using Intelligent Optimization Methods for Image Steganalysis." Journal of Information and Computational Science 10, no. 13 (September 1, 2013): 4145–55. http://dx.doi.org/10.12733/jics20102403.

16

ÖZALTIN, Öznur, and Özgür YENİAY. "Detection of monkeypox disease from skin lesion images using Mobilenetv2 architecture." Communications Faculty Of Science University of Ankara Series A1 Mathematics and Statistics 72, no. 2 (June 23, 2023): 482–99. http://dx.doi.org/10.31801/cfsuasmas.1202806.

Abstract:
Monkeypox has recently become an endemic disease that threatens the whole world. The most distinctive feature of this disease is occurring skin lesions. However, in other types of diseases such as chickenpox, measles, and smallpox skin lesions can also be seen. The main aim of this study was to quickly detect monkeypox disease from others through deep learning approaches based on skin images. In this study, MobileNetv2 was used to determine in images whether it was monkeypox or non-monkeypox. To find splitting methods and optimization methods, a comprehensive analysis was performed. The splitting methods included training and testing (70:30 and 80:20) and 10 fold cross validation. The optimization methods as adaptive moment estimation (adam), root mean square propagation (rmsprop), and stochastic gradient descent momentum (sgdm) were used. Then, MobileNetv2 was tasked as a deep feature extractor and features were obtained from the global pooling layer. The Chi-Square feature selection method was used to reduce feature dimensions. Finally, selected features were classified using the Support Vector Machine (SVM) with different kernel functions. In this study, 10 fold cross validation and adam were seen as the best splitting and optimization methods, respectively, with an accuracy of 98.59%. Then, significant features were selected via the Chi-Square method and while classifying 500 features with SVM, an accuracy of 99.69% was observed.
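The selection-and-classification stage described above can be sketched as follows; this is a hedged illustration only, with a random non-negative matrix standing in for the pooled MobileNetV2 features (chi-square scoring requires non-negative inputs, which post-ReLU pooled activations satisfy).

```python
# Hedged sketch of the chi-square selection + SVM stage only (feature extraction omitted).
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
deep_features = rng.random((400, 1280))     # stand-in for global-pooling outputs, all >= 0
labels = rng.integers(0, 2, size=400)       # monkeypox vs. non-monkeypox stand-in

model = make_pipeline(SelectKBest(chi2, k=500), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(model, deep_features, labels, cv=5).mean())
```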
17

Prabhakar, Sunil Kumar, Harikumar Rajaguru, and Sun-Hee Kim. "An Amalgamated Approach to Bilevel Feature Selection Techniques Utilizing Soft Computing Methods for Classifying Colon Cancer." BioMed Research International 2020 (October 13, 2020): 1–13. http://dx.doi.org/10.1155/2020/8427574.

Abstract:
One of the deadliest diseases which affects the large intestine is colon cancer. Older adults are typically affected by colon cancer though it can happen at any age. It generally starts as small benign growth of cells that forms on the inside of the colon, and later, it develops into cancer. Due to the propagation of somatic alterations that affects the gene expression, colon cancer is caused. A standardized format for assessing the expression levels of thousands of genes is provided by the DNA microarray technology. The tumors of various anatomical regions can be distinguished by the patterns of gene expression in microarray technology. As the microarray data is too huge to process due to the curse of dimensionality problem, an amalgamated approach of utilizing bilevel feature selection techniques is proposed in this paper. In the first level, the genes or the features are dimensionally reduced with the help of Multivariate Minimum Redundancy–Maximum Relevance (MRMR) technique. Then, in the second level, six optimization techniques are utilized in this work for selecting the best genes or features before proceeding to classification process. The optimization techniques considered in this work are Invasive Weed Optimization (IWO), Teaching Learning-Based Optimization (TLBO), League Championship Optimization (LCO), Beetle Antennae Search Optimization (BASO), Crow Search Optimization (CSO), and Fruit Fly Optimization (FFO). Finally, it is classified with five suitable classifiers, and the best results show when IWO is utilized with MRMR, and then classified with Quadratic Discriminant Analysis (QDA), a classification accuracy of 99.16% is obtained.
18

Majidov, Ikhtiyor, and Taegkeun Whangbo. "Efficient Classification of Motor Imagery Electroencephalography Signals Using Deep Learning Methods." Sensors 19, no. 7 (April 11, 2019): 1736. http://dx.doi.org/10.3390/s19071736.

Abstract:
Single-trial motor imagery classification is a crucial aspect of brain–computer applications. Therefore, it is necessary to extract and discriminate signal features involving motor imagery movements. Riemannian geometry-based feature extraction methods are effective when designing these types of motor-imagery-based brain–computer interface applications. In the field of information theory, Riemannian geometry is mainly used with covariance matrices. Accordingly, investigations showed that if the method is used after the execution of the filterbank approach, the covariance matrix preserves the frequency and spatial information of the signal. Deep-learning methods are superior when the data availability is abundant and while there is a large number of features. The purpose of this study is to a) show how to use a single deep-learning-based classifier in conjunction with BCI (brain–computer interface) applications with the CSP (common spatial features) and the Riemannian geometry feature extraction methods in BCI applications and to b) describe one of the wrapper feature-selection algorithms, referred to as the particle swarm optimization, in combination with a decision tree algorithm. In this work, the CSP method was used for a multiclass case by using only one classifier. Additionally, a combination of power spectrum density features with covariance matrices mapped onto the tangent space of a Riemannian manifold was used. Furthermore, the particle swarm optimization method was implied to ease the training by penalizing bad features, and the moving windows method was used for augmentation. After empirical study, the convolutional neural network was adopted to classify the pre-processed data. Our proposed method improved the classification accuracy for several subjects that comprised the well-known BCI competition IV 2a dataset.
19

Ding, Enjie, Xu Chu, Zhongyu Liu, Kai Zhang, and Qiankun Yu. "A Novel Hierarchical Adaptive Feature Fusion Method for Meta-Learning." Applied Sciences 12, no. 11 (May 27, 2022): 5458. http://dx.doi.org/10.3390/app12115458.

Abstract:
Meta-learning aims to teach the machine how to learn. Embedding model-based meta-learning performs well in solving the few-shot problem. The methods use an embedding model, usually a convolutional neural network, to extract features from samples and use a classifier to measure the features extracted from a particular stage of the embedding model. However, the feature of the embedding model at the low stage contains richer visual information, while the feature at the high stage contains richer semantic information. Existing methods fail to consider the impact of the information carried by the features at different stages on the performance of the classifier. Therefore, we propose a meta-learning method based on adaptive feature fusion and weight optimization. The main innovations of the method are as follows: firstly, a feature fusion strategy is used to fuse the feature of each stage of the embedding model based on certain weights, effectively utilizing the information carried by different stage features. Secondly, the particle swarm optimization algorithm was used to optimize the weight of feature fusion, and determine each stage feature’s weight in the process of feature fusion. Compared to current mainstream baseline methods on multiple few-shot image recognition benchmarks, the method performs better.
20

Hao, Lijun, Min Zhang, and Gang Huang. "Feature Optimization of Exhaled Breath Signals Based on Pearson-BPSO." Mobile Information Systems 2021 (December 3, 2021): 1–9. http://dx.doi.org/10.1155/2021/1478384.

Abstract:
Feature optimization, the theme of this paper, is the selective choice of input variables when building a predictive model. An improved feature optimization algorithm for breath signals based on Pearson-BPSO is proposed and applied to distinguishing hepatocellular carcinoma with an electronic nose (eNose). First, multidimensional features of the breath curves of hepatocellular carcinoma patients and healthy controls in the training samples were extracted; then, features with little relevance to the classification were removed according to the Pearson correlation coefficient; next, a fitness function was constructed from the K-Nearest Neighbor (KNN) classification error and the feature dimension, and a feature optimization transformation matrix was obtained with BPSO. The transformation matrix was then applied to optimize the test samples' features. Finally, the performance of the optimization algorithm was evaluated with the classifiers. The experimental results show that the Pearson-BPSO algorithm effectively improves classification performance compared with BPSO and PCA optimization. The accuracy of the SVM and RF classifiers was 86.03% and 90%, respectively, and the sensitivity and specificity were about 90% and 80%. Consequently, the Pearson-BPSO feature optimization algorithm can help improve the accuracy of hepatocellular carcinoma detection by eNose and promote the clinical application of intelligent detection.
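The Pearson filtering step described above can be sketched as follows; the dataset and the 0.1 threshold are illustrative assumptions, and the BPSO stage is not reproduced.

```python
# Hedged sketch of the Pearson filtering step only: drop features whose absolute
# correlation with the class label falls below a threshold.
import numpy as np
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)

def pearson_filter(X, y, threshold=0.1):
    y_centered = y - y.mean()
    X_centered = X - X.mean(axis=0)
    r = (X_centered * y_centered[:, None]).sum(axis=0) / (
        np.linalg.norm(X_centered, axis=0) * np.linalg.norm(y_centered) + 1e-12)
    keep = np.abs(r) >= threshold
    return X[:, keep], np.flatnonzero(keep)

X_kept, kept_idx = pearson_filter(X, y)
print(f"kept {len(kept_idx)} of {X.shape[1]} features:", kept_idx)
```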
21

Ali, Shereen H. "A Novel Intrusion Detection Framework (IDF) using Machine Learning Methods." Journal of Cybersecurity and Information Management 10, no. 01 (2023): 43–54. http://dx.doi.org/10.54216/jcim.100103.

Abstract:
An intrusion detection system is a critical security feature that analyses network traffic in order to avoid serious unauthorized access to network resources. For securing networks against potential breaches, effective intrusion detection is critical. In this paper, a novel Intrusion Detection Framework (IDF) is proposed. The three modules that comprise the suggested IDF are: (i) Data Pre-processing Module (DPM), (ii) Feature Selection Module (FSM), and Classification Module (CM). DPM collects and processes network traffic in order to prepare data for training and testing. The FSM seeks to identify the key elements for recognizing DPM intrusion attempts. An Improved Particle Swarm Optimization is used (IPSO). IPSO is a hybrid method that uses both filter and wrapper approaches to generate accurate and relevant information for the classification step that follows. Primary Selection Phase (PSP) and Completed Selection Phase (CSP) are the two consecutive feature selection phases in IPSO. PSP employs a filtering approaches to quickly identify the most significant features for detecting intrusion threats while eliminating those that are redundant or ineffective. In CSP, the next level of IPSO, this behavior reduces the computing cost. For accurate feature selection, CSP uses Binary Particle Swarm Optimization (Bi-PSO) as a wrapper approach. Based on the most effective features identified by FSM, The CM aims to identify intrusion attempts with the minimal processing time. Therefore, a K-Nearest Neighbor KNN classifier has been deployed. As a result, based on the significant features identified by the IPSO technique, KNN can accurately detect intrusion attacks with the least amount of processing time. The experimental results have shown that the proposed IDF outperforms other recent techniques using UNSW_NB-15 dataset. The accuracy, precision, recall, F1score, and processing time of the experimental outcomes of our findings were assessed. Our results were competitive with an accuracy of 99.8%, precision of 99.94%, recall of 99.85%, F1-score of 99.89%, and excursion time of 59.15s when compared to the findings of the current works.
22

Chen, Bingsheng, Huijie Chen, and Mengshan Li. "Feature Selection Based on BP Neural Network and Adaptive Particle Swarm Algorithm." Mobile Information Systems 2021 (May 31, 2021): 1–11. http://dx.doi.org/10.1155/2021/6715564.

Abstract:
Feature selection can handle data containing irrelevant features and improve the accuracy of data classification in pattern recognition. At present, back propagation (BP) neural networks and the particle swarm optimization algorithm combine well with feature selection. Building on this, this paper adds interference factors to the BP neural network and the particle swarm optimization algorithm to improve the accuracy and practicability of feature selection. The paper summarizes the basic methods and requirements of feature selection and combines the global-optimization strengths of particle swarm optimization with the feedback mechanism of BP neural networks into a feature selection approach based on back propagation and particle swarm optimization (BP-PSO). First, a chaotic model is introduced to increase the diversity of particles during initialization of the particle swarm, and an adaptive factor is introduced to enhance the global search ability of the algorithm. Then, the number of features is reduced while the accuracy of feature selection is preserved. Finally, different datasets are used to test the accuracy of feature selection, and wrapper-mode and filter-mode evaluation mechanisms are used to verify the practicability of the model. The results show that the average accuracy of BP-PSO is 8.65% higher than that of the suboptimal NDFs model across different datasets, and the performance of BP-PSO is 2.31% to 18.62% higher than the benchmark methods on all datasets. This shows that BP-PSO can select more discriminative feature subsets, which verifies the accuracy and practicability of the model.
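For orientation, here is a hedged sketch of a basic binary PSO feature-selection loop with a sigmoid transfer function; the paper's chaotic initialization, adaptive factor, and BP-network fitness are not reproduced, and a k-NN cross-validation score stands in as the fitness.

```python
# Hedged sketch of a plain binary PSO wrapper (not BP-PSO itself).
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = load_wine(return_X_y=True)
n_particles, n_feat, n_iter = 15, X.shape[1], 25
w, c1, c2 = 0.7, 1.5, 1.5

def fitness(mask):
    m = mask.astype(bool)
    if not m.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, m], y, cv=3).mean()

pos = rng.integers(0, 2, (n_particles, n_feat))
vel = rng.normal(0.0, 1.0, (n_particles, n_feat))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_feat))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = (rng.random((n_particles, n_feat)) < 1.0 / (1.0 + np.exp(-vel))).astype(int)  # sigmoid transfer
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest), "best fitness:", pbest_fit.max())
```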
23

Ali, Mona A. S., Fathimathul Rajeena P. P., and Diaa Salama Abd Elminaam. "An Efficient Heap Based Optimizer Algorithm for Feature Selection." Mathematics 10, no. 14 (July 8, 2022): 2396. http://dx.doi.org/10.3390/math10142396.

Abstract:
The heap-based optimizer (HBO) is an innovative meta-heuristic inspired by human social behavior. In this research, binary adaptations of the heap-based optimizer (B_HBO) are presented and used to determine the optimal features for classification in wrapper form. In addition, HBO balances exploration and exploitation by employing self-adaptive parameters that can adaptively search the solution domain for the optimal solution. In the feature selection domain, the presented binary heap-based optimizer B_HBO algorithms are used to find feature subsets that maximize classification performance while lowering the number of selected features. The k-nearest neighbor (k-NN) classifier ensures that the selected features are significant. The new binary methods are compared to eight common optimization methods recently employed in this field, including Ant Lion Optimization (ALO), Archimedes Optimization Algorithm (AOA), Backtracking Search Algorithm (BSA), Crow Search Algorithm (CSA), Levy flight distribution (LFD), Particle Swarm Optimization (PSO), Slime Mold Algorithm (SMA), and Tree Seed Algorithm (TSA) in terms of fitness, accuracy, precision, sensitivity, F-score, the number of selected features, and statistical tests. Twenty datasets from the UCI repository are evaluated and compared using a set of evaluation indicators. The non-parametric Wilcoxon rank-sum test was used to determine whether the proposed algorithms' results varied statistically significantly from those of the other compared methods. The comparison analysis demonstrates that B_HBO is superior or equivalent to the other algorithms used in the literature.
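Binary metaheuristics of this kind typically score a candidate subset with a fitness that trades classification error against subset size. The sketch below shows one common form of such a fitness; the 0.99/0.01 weighting and the k-NN evaluator are conventional assumptions, not values quoted from the paper.

```python
# Hedged sketch of a typical wrapper fitness: minimise alpha*error + (1-alpha)*feature_ratio.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def subset_fitness(mask, X, y, alpha=0.99):
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():                       # empty subsets are invalid
        return np.inf
    error = 1.0 - cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=5).mean()
    ratio = mask.sum() / mask.size           # penalise large subsets
    return alpha * error + (1 - alpha) * ratio

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
print("all features :", subset_fitness(np.ones(X.shape[1]), X, y))
print("random subset:", subset_fitness(rng.integers(0, 2, X.shape[1]), X, y))
```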
24

Saleem, Sobia, Marcus Gallagher, and Ian Wood. "Direct Feature Evaluation in Black-Box Optimization Using Problem Transformations." Evolutionary Computation 27, no. 1 (March 2019): 75–98. http://dx.doi.org/10.1162/evco_a_00247.

Abstract:
Exploratory Landscape Analysis provides sample-based methods to calculate features of black-box optimization problems in a quantitative and measurable way. Many problem features have been proposed in the literature in an attempt to provide insights into the structure of problem landscapes and to use in selecting an effective algorithm for a given optimization problem. While there has been some success, evaluating the utility of problem features in practice presents some significant challenges. Machine learning models have been employed as part of the evaluation process, but they may require additional information about the problems as well as having their own hyper-parameters, biases and experimental variability. As a result, extra layers of uncertainty and complexity are added into the experimental evaluation process, making it difficult to clearly assess the effect of the problem features. In this article, we propose a novel method for the evaluation of problem features which can be applied directly to individual or groups of features and does not require additional machine learning techniques or confounding experimental factors. The method is based on the feature's ability to detect a prior ranking of similarity in a set of problems. Analysis of Variance (ANOVA) significance tests are used to determine if the feature has successfully distinguished the successive problems in the set. Based on ANOVA test results, a percentage score is assigned to each feature for different landscape characteristics. Experimental results for twelve different features on four problem transformations demonstrate the method and provide quantitative evidence about the ability of different problem features to detect specific properties of problem landscapes.
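The statistical building block described above, a one-way ANOVA test of whether a landscape feature separates groups of problems, can be sketched briefly; the three synthetic groups below are illustrative, not the paper's problem transformations.

```python
# Hedged sketch: does a sampled landscape feature differ significantly across problem groups?
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
group_a = rng.normal(0.20, 0.05, size=30)   # feature values on group-A problem instances
group_b = rng.normal(0.35, 0.05, size=30)
group_c = rng.normal(0.55, 0.05, size=30)

stat, p_value = f_oneway(group_a, group_b, group_c)
verdict = "feature distinguishes the groups" if p_value < 0.05 else "no significant difference"
print(f"F = {stat:.2f}, p = {p_value:.3g} -> {verdict}")
```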
25

Kanisha, Bright, and Ganesan Balarishnanan. "Speech Recognition with Advanced Feature Extraction Methods Using Adaptive Particle Swarm Optimization." International Journal of Intelligent Engineering and Systems 9, no. 4 (December 31, 2016): 21–30. http://dx.doi.org/10.22266/ijies2016.1231.03.

26

БОДЯНСЬКИЙ, Є. В., І. Г. ПЕРОВА, and Г. В. СТОЙКА. "OPTIMIZATION OF EVALUATION OF THE INFORMATIVITY OF MEDICAL INDICATORS ON THE BASIS OF THE HYBRID APPROACH." Transport development, no. 1(1) (September 27, 2017): 108–15. http://dx.doi.org/10.33082/td.2017.1-1.11.

Abstract:
The feature selection task is one of the most complicated and topical problems in the data mining area. Existing approaches to solving it rest on informal, non-mathematical hypotheses. A new approach to evaluating the information content of medical features is proposed, based on an optimal combination of feature selection and feature extraction methods. This approach produces an optimally reduced number of features, each with a linguistic interpretation. A hybrid feature selection/extraction system is proposed. The system is numerically simple and can perform feature selection/extraction for any number of features using the standard method of principal component analysis and the distance between the first principal component and each of the medical features.
27

Fahad, Labiba Gillani, Syed Fahad Tahir, Waseem Shahzad, Mehdi Hassan, Hani Alquhayz, and Rabia Hassan. "Ant Colony Optimization-Based Streaming Feature Selection: An Application to the Medical Image Diagnosis." Scientific Programming 2020 (October 7, 2020): 1–10. http://dx.doi.org/10.1155/2020/1064934.

Abstract:
Irrelevant and redundant features increase the computation and storage requirements, and the extraction of required information becomes challenging. Feature selection enables us to extract the useful information from the given data. Streaming feature selection is an emerging field for the processing of high-dimensional data, where the total number of attributes may be infinite or unknown while the number of data instances is fixed. We propose a hybrid feature selection approach for streaming features using ant colony optimization with symmetric uncertainty (ACO-SU). The proposed approach tests the usefulness of the incoming features and removes the redundant features. The algorithm updates the obtained feature set when a new feature arrives. We evaluate our approach on fourteen datasets from the UCI repository. The results show that our approach achieves better accuracy with a minimal number of features compared with the existing methods.
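The symmetric-uncertainty measure that this approach uses to judge redundancy, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), can be sketched as below; the streaming ant-colony search itself is not reproduced, and the inputs are assumed to be discrete (e.g., binned) feature columns.

```python
# Hedged sketch of symmetric uncertainty between two discrete variables.
import numpy as np
from sklearn.metrics import mutual_info_score

def entropy(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

def symmetric_uncertainty(x, y):
    denom = entropy(x) + entropy(y)
    return 0.0 if denom == 0 else 2.0 * mutual_info_score(x, y) / denom

rng = np.random.default_rng(0)
label = rng.integers(0, 2, 500)
relevant = label ^ (rng.random(500) < 0.1)      # mostly copies the label
noise = rng.integers(0, 4, 500)                 # unrelated feature

print("SU(relevant, label):", round(symmetric_uncertainty(relevant, label), 3))
print("SU(noise, label)   :", round(symmetric_uncertainty(noise, label), 3))
```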
28

Puyalnithi, Thendral, and Madhuviswanatham Vankadara. "A Unified Feature Selection Model for High Dimensional Clinical Data Using Mutated Binary Particle Swarm Optimization and Genetic Algorithm." International Journal of Healthcare Information Systems and Informatics 13, no. 4 (October 2018): 1–14. http://dx.doi.org/10.4018/ijhisi.2018100101.

Abstract:
This article contends that feature selection is an important pre-processing step when a dataset is large and has many features. With many features, the probability that noisy features exist is high, which can degrade the efficiency of the classifiers built from the data. Since clinical datasets naturally contain very large numbers of features, reducing them is essential for good classifier accuracy. The use of evolutionary algorithms for optimization in feature selection has grown in recent years owing to their high success rate. This article proposes a hybrid algorithm that combines a modified binary particle swarm optimization, called mutated binary particle swarm optimization, with a binary genetic algorithm, enhancing both exploration and exploitation. The approach is verified with a proposed parameter called the trade-off factor, through which the proposed method is compared with other methods; the results show that the proposed method is more efficient than the alternatives.
29

Too, Jingwei, Abdul Abdullah, Norhashimah Mohd Saad, and Weihown Tee. "EMG Feature Selection and Classification Using a Pbest-Guide Binary Particle Swarm Optimization." Computation 7, no. 1 (February 22, 2019): 12. http://dx.doi.org/10.3390/computation7010012.

Abstract:
Due to the increment in hand motion types, electromyography (EMG) features are increasingly required for accurate EMG signals classification. However, increasing in the number of EMG features not only degrades classification performance, but also increases the complexity of the classifier. Feature selection is an effective process for eliminating redundant and irrelevant features. In this paper, we propose a new personal best (Pbest) guide binary particle swarm optimization (PBPSO) to solve the feature selection problem for EMG signal classification. First, the discrete wavelet transform (DWT) decomposes the signal into multiresolution coefficients. The features are then extracted from each coefficient to form the feature vector. After which pbest-guide binary particle swarm optimization (PBPSO) is used to evaluate the most informative features from the original feature set. In order to measure the effectiveness of PBPSO, binary particle swarm optimization (BPSO), genetic algorithm (GA), modified binary tree growth algorithm (MBTGA), and binary differential evolution (BDE) were used for performance comparison. Our experimental results show the superiority of PBPSO over other methods, especially in feature reduction; where it can reduce more than 90% of features while keeping a very high classification accuracy. Hence, PBPSO is more appropriate for application in clinical and rehabilitation applications.
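The wavelet feature-extraction stage described above can be sketched with PyWavelets; the 'db4' wavelet, the four decomposition levels, and the statistics computed per coefficient band are illustrative assumptions, and the PBPSO selection stage is omitted.

```python
# Hedged sketch of DWT-based feature extraction from one EMG window (selection step omitted).
import numpy as np
import pywt

def dwt_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA4, cD4, cD3, cD2, cD1]
    feats = []
    for band in coeffs:
        feats += [np.mean(np.abs(band)),                   # mean absolute value
                  np.std(band),                            # standard deviation
                  np.sum(band ** 2)]                        # band energy
    return np.array(feats)

rng = np.random.default_rng(0)
emg_window = rng.normal(0, 1, 1024)                        # stand-in for one EMG window
print("feature vector length:", dwt_features(emg_window).shape[0])
```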
30

Başeğmez, Hülya, Emrah Sezer, and Çiğdem Selçukcan Erol. "Optimization for Gene Selection and Cancer Classification." Proceedings 74, no. 1 (March 16, 2021): 21. http://dx.doi.org/10.3390/proceedings2021074021.

Abstract:
Recently, gene selection has played an important role in cancer diagnosis and classification. In this study, it was studied to select high descriptive genes for use in cancer diagnosis in order to develop a classification analysis for cancer diagnosis using microarray data. For this purpose, comparative analysis and intersections of six different methods obtained by using two feature selection algorithms and three search algorithms are presented. As a result of the six different feature subset selection methods applied, it was seen that instead of 15,155 genes, 24 genes should be focused. In this case, cancer diagnosis may be possible using 24 candidate genes that have been reduced, rather than similar studies involving larger features. However, in order to see the diagnostic success of diagnoses made using these candidate genes, they should be examined in a wet laboratory.
31

Jia, Guangfei, and Yanchao Meng. "Study on Fault Feature Extraction of Rolling Bearing Based on Improved WOA-FMD Algorithm." Shock and Vibration 2023 (June 14, 2023): 1–19. http://dx.doi.org/10.1155/2023/5097144.

Abstract:
The vibration signal of rolling bearing fault is nonlinear and nonstationary under the interference of background noise, and it is difficult to extract fault features from it. When feature mode decomposition is used to analyze signals, prior parameter settings can easily affect the decomposition results. Therefore, a fault feature extraction method based on improved whale optimization algorithm is proposed to optimize feature modal decomposition parameters. The improved WOA integrates Lévy flight and adaptive weight, and envelope entropy is used as fitness function to optimize feature modal decomposition parameters. The feature mode decomposition of the original signal is performed using the optimal combination of parameters to obtain multiple IMF components. The optimal IMF component envelope demodulation analysis is selected according to the kurtosis value, and the fault feature is extracted through the envelope spectrum. Comparing the LMWOA method with PSO and WOA methods by simulated and experimental signals, the results show that the optimization speed of LMWOA is faster than that of other methods. Compared with CEEMD, VMD, and FMD methods, the improved WOA-FMD method has higher fault feature ratio and can accurately extract fault features under noise interference. This method can effectively solve the parameter adaptive ability and improve the accuracy of fault diagnosis, which has practical significance.
32

Aghdam, Mehdi Hosseinzadeh, and Setareh Heidari. "Feature Selection Using Particle Swarm Optimization in Text Categorization." Journal of Artificial Intelligence and Soft Computing Research 5, no. 4 (October 1, 2015): 231–38. http://dx.doi.org/10.1515/jaiscr-2015-0031.

Abstract:
Abstract Feature selection is the main step in classification systems, a procedure that selects a subset from original features. Feature selection is one of major challenges in text categorization. The high dimensionality of feature space increases the complexity of text categorization process, because it plays a key role in this process. This paper presents a novel feature selection method based on particle swarm optimization to improve the performance of text categorization. Particle swarm optimization inspired by social behavior of fish schooling or bird flocking. The complexity of the proposed method is very low due to application of a simple classifier. The performance of the proposed method is compared with performance of other methods on the Reuters-21578 data set. Experimental results display the superiority of the proposed method.
33

Chen, Chao, and Hao Dong Zhu. "Feature Selection Method Based on Parallel Binary Immune Quantum-Behaved Particle Swarm Optimization." Advanced Materials Research 546-547 (July 2012): 1538–43. http://dx.doi.org/10.4028/www.scientific.net/amr.546-547.1538.

Abstract:
Feature selection algorithms must be used to increase processing speed, reduce the occupied memory space, and filter out irrelevant or low-value features. However, most existing feature selection methods are serial and too slow to be applied to massive text datasets, so improving the efficiency of feature selection through parallelization is an active research topic. This paper presents a feature selection method based on Parallel Binary Immune Quantum-Behaved Particle Swarm Optimization (PBIQPSO). The method uses binary immune quantum-behaved particle swarm optimization to select the feature subset and takes advantage of multiple computing nodes to improve time efficiency, so it can quickly acquire feature subsets that are more representative. Experimental results show that the method is effective.
34

Naz, Mehreen, Kashif Zafar, and Ayesha Khan. "Ensemble Based Classification of Sentiments Using Forest Optimization Algorithm." Data 4, no. 2 (May 23, 2019): 76. http://dx.doi.org/10.3390/data4020076.

Abstract:
Feature subset selection is a process to choose a set of relevant features from a high dimensionality dataset to improve the performance of classifiers. The meaningful words extracted from data forms a set of features for sentiment analysis. Many evolutionary algorithms, like the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), have been applied to feature subset selection problem and computational performance can still be improved. This research presents a solution to feature subset selection problem for classification of sentiments using ensemble-based classifiers. It consists of a hybrid technique of minimum redundancy and maximum relevance (mRMR) and Forest Optimization Algorithm (FOA)-based feature selection. Ensemble-based classification is implemented to optimize the results of individual classifiers. The Forest Optimization Algorithm as a feature selection technique has been applied to various classification datasets from the UCI machine learning repository. The classifiers used for ensemble methods for UCI repository datasets are the k-Nearest Neighbor (k-NN) and Naïve Bayes (NB). For the classification of sentiments, 15–20% improvement has been recorded. The dataset used for classification of sentiments is Blitzer’s dataset consisting of reviews of electronic products. The results are further improved by ensemble of k-NN, NB, and Support Vector Machine (SVM) with an accuracy of 95% for the classification of sentiment tasks.
35

Li, Chen, Ziyuan Liu, Jiawei Ren, Wenchao Wang, and Ji Xu. "A Feature Optimization Approach Based on Inter-Class and Intra-Class Distance for Ship Type Classification." Sensors 20, no. 18 (September 22, 2020): 5429. http://dx.doi.org/10.3390/s20185429.

Abstract:
Deep learning based methods have achieved state-of-the-art results on the task of ship type classification. However, most existing ship type classification algorithms take time–frequency (TF) features as input, the underlying discriminative information of these features has not been explored thoroughly. This paper proposes a novel feature optimization method which is designed to minimize an objective function aimed at increasing inter-class and reducing intra-class feature distance for ship type classification. The objective function we design is able to learn a center for each class and make samples from the same class closer to the corresponding center. This ensures that the features maximize underlying discriminative information involved in the data, particularly for some targets that usually confused by the conventional manual designed feature. Results on the dataset from a real environment show that the proposed feature optimization approach outperforms traditional TF features.
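The criterion the abstract describes, per-class centers with intra-class compactness traded against inter-class separation, can be illustrated with a simple measurement; the actual learned objective and its optimization are not reproduced here.

```python
# Hedged sketch: measure intra-class spread around class centers vs. inter-class center separation.
import numpy as np

def intra_inter_score(features, labels):
    classes = np.unique(labels)
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])
    intra = np.mean([np.linalg.norm(features[labels == c] - centers[i], axis=1).mean()
                     for i, c in enumerate(classes)])
    pair_dists = [np.linalg.norm(centers[i] - centers[j])
                  for i in range(len(classes)) for j in range(i + 1, len(classes))]
    inter = np.mean(pair_dists)
    return intra, inter, intra / inter      # lower ratio = more discriminative features

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (100, 16)), rng.normal(2, 1, (100, 16))])
labs = np.array([0] * 100 + [1] * 100)
print("intra, inter, ratio:", intra_inter_score(feats, labs))
```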
36

Dionela, Matthew C., Carey Louise B. Arroyo, Mhica S. Torres, Miguel P. Alaan, Sandy C. Lauguico, Ryan Rhay Vicerra, and Ronnie Concepcion II. "MACHINE LEARNING METHODS FOR EARLY-STAGE DIAGNOSIS OF PARKINSON'S DISEASE THROUGH HANDWRITING DATA." ASEAN Engineering Journal 13, no. 3 (August 30, 2023): 15–28. http://dx.doi.org/10.11113/aej.v13.18777.

Abstract:
Parkinson's disease (PD) deteriorates human cognitive and motor functions, causing slowness of movements and postural shakiness. PD is currently incurable, and managing symptoms in its late stages is difficult. PD diagnosis also has gaps in accuracy due to several clinical challenges. Thus, early-stage detection of PD through its symptoms, such as handwriting abnormality, has become a popular research area using machine learning. Since most related studies focus on advanced algorithms, this study aims to determine the classification accuracies of simpler classical models using the NewHandPD-NewMeander dataset. This study used the 9 features extracted from the meanders drawn by healthy participants and participants diagnosed with Parkinson’s disease and 3 features about the individual. The same features were reduced to the 8 best according to univariate selection and recursive feature elimination. The machine learning algorithms used for the models in this study are Logistic regression, Multilayer perceptron, and Naive Bayes. Additionally, hyperparameter optimization was done. Results have shown that feature selection improved the performances of the default model, while optimization had varying effects depending on the feature selection method used. Among 15 models built, Multilayer perceptron, which utilized top 8 features from univariate selection with default hyperparameters (MLPU8), performed best. It yielded an accuracy of 84.4% in cross-validation, 87.5% in holdout validation, and an F1-score of 87.5%. Remaining models had accuracies ranging from 81.4% - 84.4% in cross-validations and 82.5% - 85.0% in holdout validations. Other studies done on diagnosing PD using similar handwritten datasets resulted in lower accuracies of 87.14% and 77.38% despite utilizing complex algorithms for its models. This proved that the 15 models built using simple architecture can outperform complex classification methods. The 15 models built accurately classify meander data and can be used as an early assessment tool for detecting PD.
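The two feature-ranking routes mentioned above, univariate selection and recursive feature elimination, can be sketched with scikit-learn; synthetic data stands in for the twelve handwriting-derived features, and logistic regression is used as the RFE estimator because RFE needs coefficient-based importances.

```python
# Hedged sketch of univariate selection vs. recursive feature elimination (RFE).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=12, n_informative=6, random_state=0)

top8_univariate = SelectKBest(f_classif, k=8).fit(X, y).get_support(indices=True)
top8_rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8).fit(X, y).get_support(indices=True)

print("univariate top-8:", top8_univariate)
print("RFE top-8       :", top8_rfe)
```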
37

Agasiev, Taleh. "Characteristic feature analysis of continuous optimization problems based on Variability Map of objective function for optimization algorithm configuration." Open Computer Science 10, no. 1 (May 13, 2020): 97–111. http://dx.doi.org/10.1515/comp-2020-0114.

Abstract:
Advanced optimization algorithms with a variety of configurable parameters become increasingly difficult to apply effectively to solving optimization problems. Appropriate algorithm configuration becomes highly relevant, still remaining a computationally expensive operation. Development of machine learning methods allows to model and predict the efficiency of different solving strategies and algorithm configurations depending on properties of the optimization problem to be solved. The paper suggests the Dependency Decomposition approach to reduce computational complexity of modeling the efficiency of optimization algorithm, also considering the amount of computational resources available for optimization problem solving. The approach requires development of explicit Exploratory Landscape Analysis methods to assess a variety of significant characteristic features of optimization problems. The results of feature assessment depend on the number of sample points analyzed and their location in the design space; on top of that, some of the methods require additional evaluations of the objective function. The paper proposes new landscape analysis methods based on given points without the need of any additional objective function evaluations. An algorithm of building a so-called Full Variability Map is suggested based on informativeness criteria formulated for groups of sample points. The paper suggests the Generalized Information Content method for analysis of the Full Variability Map, which allows to get accurate and stable estimations of objective function features. The Sectorization method of Variability Map analysis is proposed to assess characteristic features reflecting such properties of the objective function that are critical for optimization algorithm efficiency. The resulting features are invariant to the scale of objective function gradients, which positively affects the generalizing ability of the problem classification algorithm. The procedure of the comparative study of effectiveness of landscape analysis algorithms is introduced. The results of computational experiments indicate reliability of applying the suggested landscape analysis methods to optimization problem characterization and classification.
38

Das, Himansu, Sanjay Prajapati, Mahendra Kumar Gourisaria, Radha Mohan Pattanayak, Abdalla Alameen, and Manjur Kolhar. "Feature Selection Using Golden Jackal Optimization for Software Fault Prediction." Mathematics 11, no. 11 (May 25, 2023): 2438. http://dx.doi.org/10.3390/math11112438.

Full text of the source
Abstract:
A program’s bug, fault, or mistake that results in unintended behavior is known as a software defect or fault. Software flaws are programming errors due to mistakes in the requirements, architecture, or source code. Finding and fixing bugs as soon as they arise is a crucial goal of software development that can be achieved in various ways, and selecting a small, optimal subset of features from a dataset is a prime approach: classification performance can be improved indirectly through feature selection. A novel approach to feature selection (FS) has been developed that incorporates the Golden Jackal Optimization (GJO) algorithm, a meta-heuristic optimization technique that draws on the hunting tactics of golden jackals. Combining this algorithm with four classifiers, namely K-Nearest Neighbor, Decision Tree, Quadratic Discriminant Analysis, and Naive Bayes, aids in selecting a subset of relevant features from software fault prediction datasets. To evaluate the accuracy of this algorithm, its performance is compared with other feature selection methods such as FSDE (Differential Evolution), FSPSO (Particle Swarm Optimization), FSGA (Genetic Algorithm), and FSACO (Ant Colony Optimization). FSGJO performs well in almost all cases and gives higher classification accuracy for many of the results. Using the Friedman and Holm tests to determine statistical significance, the suggested strategy has been verified and found to be superior to prior methods in selecting an optimal set of attributes.
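A rough sketch of the wrapper evaluation that metaheuristic feature selectors of this kind typically optimize is given below. It shows only the fitness of one binary feature mask scored with a K-Nearest Neighbor classifier; the GJO position-update equations are omitted, and the penalty weight and classifier choice are assumptions rather than the paper's settings.

```python
# Wrapper fitness: cross-validated accuracy on the selected columns, lightly
# penalized by the size of the subset. A metaheuristic such as GJO would repeatedly
# propose candidate masks (e.g. by thresholding continuous positions) and keep the best.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def wrapper_fitness(mask, X, y, alpha=0.99):
    """mask: binary vector of length n_features; returns a value to maximize."""
    if mask.sum() == 0:                       # an empty subset is invalid
        return 0.0
    X_sub = X[:, mask.astype(bool)]
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X_sub, y, cv=5).mean()
    size_ratio = mask.sum() / mask.size
    return alpha * acc + (1 - alpha) * (1 - size_ratio)
```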
Styles: APA, Harvard, Vancouver, ISO, and others
39

Mohammed Majeed, Nadia, and Fawziya Mahmood Ramo. "Implementation of Features Selection Based on Dragonfly Optimization Algorithm." Technium: Romanian Journal of Applied Sciences and Technology 4, no. 10 (November 14, 2022): 44–52. http://dx.doi.org/10.47577/technium.v4i10.7203.

Full text of the source
Abstract:
Nowadays, the increasing dimensionality of data produces several issues in machine learning. It is therefore necessary to decrease the number of features by keeping only the most important ones, eliminating duplicate features, and reducing the set of features fed to the model. For this purpose, many methodologies known as feature selection are applied. In this study, a feature selection approach is proposed based on swarm intelligence methods, which search for the best points in the search space to achieve optimization. Specifically, a wrapper feature selection technique based on the Dragonfly algorithm is proposed; the dragonfly optimization technique is used to find the optimal subset of features that can accurately classify breast cancer as benign or malignant. The fitness function is often defined as classification accuracy; in this study, a hard-voting ensemble is the model developed to evaluate the chosen feature subsets, and it serves as the evaluation (fitness) function for each dragonfly in the population. The proposed ensemble hard-voting classifier combines five machine-learning algorithms to produce a binary classification for feature selection: Support Vector Machine (SVM), K-Nearest Neighbors (K-NN), Naive Bayes (NB), Decision Tree (DT), and Random Forest (RF). According to the experimental results, the voting ensemble classifier has the highest accuracy among the single classifiers. The proposed method shows that, when training on the selected feature subset, the voting classifier reaches an accuracy of 98.24%, whereas training on all features achieves 96.49%. The approach uses the UCI repository's Wisconsin Diagnostic Breast Cancer (WDBC) dataset, which consists of 569 instances and 30 features.
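The hard-voting fitness function described here can be sketched directly with scikit-learn, which ships the same 569-instance, 30-feature WDBC data. Only the evaluation of one candidate feature subset is shown; the dragonfly update step and the exact classifier hyperparameters are outside this sketch.

```python
# Hard-voting ensemble of the five named classifiers used as a wrapper fitness
# on the WDBC dataset; a dragonfly's fitness is the mean CV accuracy of the voter
# on the columns switched on by its binary mask.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)            # 569 instances, 30 features

voter = VotingClassifier(
    estimators=[
        ("svm", SVC()),
        ("knn", KNeighborsClassifier()),
        ("nb", GaussianNB()),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="hard",
)

def subset_accuracy(mask):
    """Fitness of one dragonfly: mean CV accuracy of the voter on the selected columns."""
    return cross_val_score(voter, X[:, mask.astype(bool)], y, cv=5).mean()

example_mask = np.random.default_rng(0).integers(0, 2, size=X.shape[1])
print(subset_accuracy(example_mask))
```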
Styles: APA, Harvard, Vancouver, ISO, and others
40

Ye, Zhiwei, Ruihan Li, Wen Zhou, Mingwei Wang, Mengqing Mei, Zhe Shu, and Jun Shen. "High-Dimensional Feature Selection Based on Improved Binary Ant Colony Optimization Combined with Hybrid Rice Optimization Algorithm." International Journal of Intelligent Systems 2023 (July 10, 2023): 1–27. http://dx.doi.org/10.1155/2023/1444938.

Full text of the source
Abstract:
In the realm of high-dimensional data analysis, numerous fields stand to benefit from its applications, including the biological and medical sectors that are crucial for computer-aided disease diagnosis and prediction systems. However, the presence of a significant number of redundant or irrelevant features can adversely affect system accuracy and real-time diagnosis efficiency. To mitigate this issue, this paper proposes two innovative wrapper feature selection (FS) methods that integrate the ant colony optimization (ACO) algorithm and hybrid rice optimization (HRO). HRO is a recently developed metaheuristic that mimics the breeding process of the three-line hybrid rice, which is yet to be thoroughly explored in the context of solving high-dimensional FS problems. In the first hybridization, ACO is embedded as an evolutionary operator within HRO and updated alternately with it. In the second form of hybridization, two subpopulations evolve independently, sharing the local search results to assist individual updating. In the initial stage preceding hybridization, a problem-oriented heuristic factor assignment strategy based on the importance of the knee point feature is introduced to enhance the global search capability of ACO in identifying the smallest and most representative features. The performance of the proposed algorithms is evaluated on fourteen high-dimensional biomedical datasets and compared with other recently advanced FS methods. Experimental results suggest that the proposed methods are efficient and computationally robust, exhibiting superior performance compared to the other algorithms involved in this study.
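One way to read the "knee point"-based heuristic factor assignment mentioned above is sketched below. The choice of mutual information as the importance measure and the distance-to-chord knee detector are assumptions for illustration; the paper's exact formulation may differ.

```python
# Assign ACO heuristic factors from ranked feature importances: features above the
# knee of the sorted-importance curve keep their full score, the rest are damped.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def knee_index(sorted_scores):
    """Index of the point farthest from the line joining the first and last scores."""
    x = np.arange(len(sorted_scores), dtype=float)
    y = np.asarray(sorted_scores, dtype=float)
    dx, dy = x[-1] - x[0], y[-1] - y[0]
    dist = np.abs(dy * (x - x[0]) - dx * (y - y[0])) / np.hypot(dx, dy)
    return int(np.argmax(dist))

def heuristic_factors(X, y):
    scores = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(scores)[::-1]                  # most informative first
    threshold = scores[order][knee_index(scores[order])]
    eta = np.where(scores >= threshold, scores, scores * 0.1)
    return eta / eta.sum()                            # normalized heuristic values for ACO
```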
Styles: APA, Harvard, Vancouver, ISO, and others
41

Radojicic, Dragana, Nina Radojicic, and Simeon Kredatus. "A multicriteria optimization approach for the stock market feature selection." Computer Science and Information Systems, no. 00 (2020): 44. http://dx.doi.org/10.2298/csis200326044r.

Full text of the source
Abstract:
This paper studies the informativeness of features extracted from limit order book data for classifying a market data vector into a label (buy/idle) using a Long Short-Term Memory (LSTM) network. New technical indicators based on support/resistance zones are introduced to enrich the set of features. We evaluate whether the performance of the LSTM network model improves when features are selected with the newly proposed methods. Moreover, we employ multicriteria optimization to perform adequate feature selection among the proposed approaches with respect to precision, recall, and F-score. Seven variants of feature selection approaches are proposed, and the best one is selected by means of multicriteria optimization.
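The multicriteria comparison described here amounts to keeping the Pareto-nondominated feature-selection variants with respect to precision, recall, and F-score. A minimal sketch is given below; the metric values are illustrative placeholders, not results from the paper.

```python
# Keep the variants that no other variant dominates on all three criteria.
import numpy as np

def nondominated(points):
    """points: (n_variants, n_criteria) array, all criteria to be maximized."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q >= p) and np.any(q > p) for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

variants = np.array([
    # precision, recall, F-score for seven hypothetical feature-set variants
    [0.61, 0.55, 0.58], [0.64, 0.52, 0.57], [0.59, 0.60, 0.59],
    [0.66, 0.49, 0.56], [0.58, 0.58, 0.58], [0.63, 0.56, 0.59], [0.60, 0.54, 0.57],
])
print(nondominated(variants))   # indices of the variants worth keeping
```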
Styles: APA, Harvard, Vancouver, ISO, and others
42

Zhang, Dehuan, Wei Cao, Jingchun Zhou, Yan-Tsung Peng, Weishi Zhang, and Zifan Lin. "Two-Branch Underwater Image Enhancement and Original Resolution Information Optimization Strategy in Ocean Observation." Journal of Marine Science and Engineering 11, no. 7 (June 25, 2023): 1285. http://dx.doi.org/10.3390/jmse11071285.

Full text of the source
Abstract:
In complex marine environments, underwater images often suffer from color distortion, blur, and poor visibility. Existing underwater image enhancement methods predominantly rely on the U-net structure, which assigns the same weight to different resolution information. However, this approach lacks the ability to extract sufficient detailed information, resulting in problems such as blurred details and color distortion. We propose a two-branch underwater image enhancement method with an optimized original resolution information strategy to address this limitation. Our method comprises a feature enhancement subnetwork (FEnet) and an original resolution subnetwork (ORSnet). FEnet extracts multi-resolution information and utilizes an adaptive feature selection module to enhance global features in different dimensions. The enhanced features are then fed into ORSnet as complementary features, which extract local enhancement features at the original image scale to achieve semantically consistent and visually superior enhancement effects. Experimental results on the UIEB dataset demonstrate that our method achieves the best performance compared to the state-of-the-art methods. Furthermore, through comprehensive application testing, we have validated the superiority of our proposed method in feature extraction and enhancement compared to other end-to-end underwater image enhancement methods.
Styles: APA, Harvard, Vancouver, ISO, and others
43

Zhang, Shaorong, Zhibin Zhu, Benxin Zhang, Bao Feng, Tianyou Yu, and Zhi Li. "The CSP-Based New Features Plus Non-Convex Log Sparse Feature Selection for Motor Imagery EEG Classification." Sensors 20, no. 17 (August 22, 2020): 4749. http://dx.doi.org/10.3390/s20174749.

Full text of the source
Abstract:
The common spatial pattern (CSP) is a very effective feature extraction method in motor imagery-based brain-computer interfaces (BCI), but its performance depends on the selection of the optimal frequency band. Although many works have been proposed to improve CSP, most of them suffer from large computation costs and long feature extraction times. To this end, three new feature extraction methods based on CSP and a new feature selection method based on non-convex log regularization are proposed in this paper. First, EEG signals are spatially filtered by CSP, and then three new feature extraction methods are proposed, called CSP-Wavelet, CSP-WPD, and CSP-FB, respectively. For CSP-Wavelet and CSP-WPD, the discrete wavelet transform (DWT) or wavelet packet decomposition (WPD) is used to decompose the spatially filtered signals, and the energy and standard deviation of the wavelet coefficients are extracted as features. For CSP-FB, the spatially filtered signals are filtered into multiple bands by a filter bank (FB), and the logarithms of the variances of each band are extracted as features. Second, a sparse optimization method regularized with a non-convex log function, called LOG, is proposed for feature selection, and an optimization algorithm for LOG is given. Finally, ensemble learning is used for secondary feature selection and classification model construction. Combining the feature extraction and feature selection methods, a total of three new EEG decoding methods are obtained, namely CSP-Wavelet+LOG, CSP-WPD+LOG, and CSP-FB+LOG. Four public motor imagery datasets are used to verify the performance of the proposed methods. Compared to existing methods, the proposed methods achieved the highest average classification accuracies of 88.86, 83.40, 81.53, and 80.83 on datasets 1–4, respectively, and the feature extraction time of CSP-FB is the shortest. The experimental results show that the proposed methods can effectively improve classification accuracy and reduce feature extraction time. Taking both classification accuracy and feature extraction time into account, CSP-FB+LOG has the best performance and can be used in real-time BCI systems.
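The CSP-FB feature extraction step can be sketched compactly: after CSP spatial filtering, the signals are passed through a bank of band-pass filters and the log-variance in each band is taken as a feature. The band edges, filter order, sampling rate, and variance normalization below are assumptions, and the CSP step itself is omitted.

```python
# Filter-bank log-variance features from CSP-filtered EEG, per the CSP-FB idea.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250                                             # sampling rate in Hz (assumed)
BANDS = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32), (32, 36)]

def csp_fb_features(Z):
    """Z: CSP-filtered signals, shape (n_components, n_samples). Returns a feature vector."""
    feats = []
    for lo, hi in BANDS:
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="bandpass")
        Zb = filtfilt(b, a, Z, axis=1)
        var = Zb.var(axis=1)
        feats.append(np.log(var / var.sum()))        # normalized log-variance per component
    return np.concatenate(feats)
```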
Styles: APA, Harvard, Vancouver, ISO, and others
44

Vaskevich, V. P. "Optimization of Relations Regulation Methods in Professional Sports." Lex Russica, no. 5 (May 26, 2022): 137–50. http://dx.doi.org/10.17803/1729-5920.2022.186.5.137-150.

Full text of the source
Abstract:
The paper is devoted to approaches to the legal organization of the professional activity of athletes. In the field of professional sports, almost all known methods of legal influence can be found: from prohibitions, obligations, and restrictions to encouragement, stimulation, and permission. But the method that has the greatest impact on relations involving professional athletes is based on the fact that the personal will (decisions) of athletes determines this activity in its basic legal facts and features. The legal regulation of the professional activity of athletes develops primarily under the influence of dispositive norms, supplemented in appropriate cases by mandatory norms. To characterize the methods of regulating relations in sports, a number of circumstances atypical for ordinary legal regulation must be taken into account (for example, a feature of this sphere is a layer of relatively soft methods of influence, i.e., explanations and recommendations). Among the special methods of regulating sports activities was, and remains, one or another way of combining acts of law-making formed both in the field of state legal influence and in the field of corporate law-making. An intersectoral approach is essential in developing the regulatory impact on relations in professional sports; it allows the influence of various rules on particular relationships to be taken into account, which will potentially allow for a more reasonable distribution of rights and obligations, achievement of the goals of legislative and other regulation, and effective protection of the subjects of rights. When constructing and organizing legal material containing complex (in-industry) regulation of relations in sports, it would be correct to use the techniques of private international law based on various conflict-of-laws bindings. The author concludes that it is necessary to continue work on optimizing the methods of regulating relations in professional sports.
Styles: APA, Harvard, Vancouver, ISO, and others
45

Yan, Yan, Hongzhong Ma, Dongdong Song, Yang Feng, and Dawei Duan. "OLTC Fault Diagnosis Method Based on Time Domain Analysis and Kernel Extreme Learning Machine." 電腦學刊 33, no. 6 (December 2022): 091–106. http://dx.doi.org/10.53106/199115992022123306008.

Full text of the source
Abstract:
Aiming at the problems of limited feature information and low diagnosis accuracy in traditional on-load tap changer (OLTC) diagnosis, an OLTC fault diagnosis method based on time-domain analysis and a kernel extreme learning machine (KELM) is proposed in this paper. First, the time-frequency analysis method is used to analyze the collected OLTC vibration signal, extract the feature information, and form the feature matrix. Then, the PCA algorithm is used to select effective features to build the initial optimal feature matrix. Finally, a kernel extreme learning machine optimized by an improved grasshopper optimization algorithm (IGOA) is used to process the optimal feature matrix and classify fault patterns. Evaluation of algorithm performance in comparison with other existing methods indicates that the proposed method can improve diagnostic accuracy by at least 7%.
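For readers unfamiliar with KELM, a minimal sketch of the classifier that the IGOA would tune (for example, the regularization constant C and kernel width gamma) is given below. The RBF kernel, closed-form solution, and default hyperparameter values are standard KELM choices assumed for illustration, not the paper's settings.

```python
# Kernel extreme learning machine: beta = (K + I/C)^(-1) T, prediction = argmax K(x, X) beta.
import numpy as np

class KELM:
    def __init__(self, C=100.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def _kernel(self, A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # squared distances
        return np.exp(-self.gamma * d)                        # RBF kernel matrix

    def fit(self, X, y):
        """X: (n, d) features; y: integer fault labels 0..k-1."""
        self.X_ = X
        T = np.eye(int(y.max()) + 1)[y]                       # one-hot targets
        K = self._kernel(X, X)
        self.beta_ = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, X):
        scores = self._kernel(X, self.X_) @ self.beta_
        return np.argmax(scores, axis=1)                      # predicted fault class
```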
Styles: APA, Harvard, Vancouver, ISO, and others
46

Sabeena, B., S. Sivakumari, and Dawit Mamru Teressa. "Optimization-Based Ensemble Feature Selection Algorithm and Deep Learning Classifier for Parkinson’s Disease." Journal of Healthcare Engineering 2022 (April 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/1487212.

Full text of the source
Abstract:
PD (Parkinson’s disease) is a severe, painful, and incurable malady affecting older people. Identifying PD early and precisely is critical for lengthening patients' survival, and DMTs (data mining techniques) and MLTs (machine learning techniques) can be advantageous here. Studies have examined DMTs for their accuracy using Parkinson's datasets and analyzed feature relevance. Recent studies have used FMBOAs for feature selection and relevance analysis, where feature selection aims to find the optimal subset of features for classification tasks and combines the learning of FMBOAs. EFSs (ensemble feature selections) are viable solutions for combining the benefits of multiple algorithms while balancing their drawbacks. This work uses OBEFSs (optimization-based ensemble feature selections) to select appropriate features based on agreements. Ensembles can combine results from multiple feature selection approaches, including FMBOAs, LFCSAs (Lévy flight cuckoo search algorithms), and AFAs (adaptive firefly algorithms). These approaches select optimized feature subsets, resulting in three feature subsets that are subsequently matched for correlations by the ensembles. The optimum features generated by the OBEFSs are then used to train FCBi-LSTMs (fuzzy convolution bi-directional long short-term memories) for classification. The suggested model uses the UCI (University of California-Irvine) machine learning repository, and the methods are evaluated using LOPO-CVs (leave-one-person-out cross-validations) in terms of accuracies, F-measure values, and MCCs (Matthews correlation coefficients).
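The agreement step of an ensemble feature selection of this kind can be sketched in a few lines: each of the three optimizers returns a feature subset, and a feature is kept when enough of them agree on it. The agreement threshold and the toy subsets below are assumptions for illustration, not the paper's results.

```python
# Combine three feature subsets by voting: keep features chosen by at least two selectors.
from collections import Counter

def ensemble_agreement(subsets, min_votes=2):
    votes = Counter(f for subset in subsets for f in set(subset))
    return sorted(f for f, v in votes.items() if v >= min_votes)

fmboa_subset = [0, 2, 3, 7, 9]     # hypothetical outputs of the three optimizers
lfcsa_subset = [0, 1, 3, 7, 8]
afa_subset   = [0, 3, 5, 7, 9]
print(ensemble_agreement([fmboa_subset, lfcsa_subset, afa_subset]))   # -> [0, 3, 7, 9]
```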
Styles: APA, Harvard, Vancouver, ISO, and others
47

Zeng, Xiaohua, Jieping Cai, Changzhou Liang, and Chiping Yuan. "Prediction of stock price movement using an improved NSGA-II-RF algorithm with a three-stage feature engineering process." PLOS ONE 18, no. 6 (June 28, 2023): e0287754. http://dx.doi.org/10.1371/journal.pone.0287754.

Full text of the source
Abstract:
Prediction of stock prices has been a hot topic in the artificial intelligence field, and computational intelligence methods such as machine learning or deep learning have been explored in prediction systems in recent years. However, making accurate predictions of stock price direction is still a big challenge because stock prices are affected by nonlinear, nonstationary, and high-dimensional features. In previous works, feature engineering was overlooked, and selecting the optimal feature sets that affect stock price is a prominent remedy. Hence, our motivation for this article is to propose an improved many-objective optimization algorithm integrating random forest (I-NSGA-II-RF) with a three-stage feature engineering process in order to decrease computational complexity and improve the accuracy of the prediction system. Maximizing accuracy and minimizing the optimal solution set are the optimization directions of the model in this study. The population is initialized with information integrated from two filter feature selection methods to seed the I-NSGA-II algorithm, and a hybrid multi-chromosome encoding is used to select features and optimize model parameters simultaneously. Finally, the selected feature subset and parameters are input to the RF for training, prediction, and iterative optimization. Experimental results show that the I-NSGA-II-RF algorithm has the highest average accuracy, the smallest optimal solution set, and the shortest running time compared to the unmodified multi-objective feature selection algorithm and the single-objective feature selection algorithm. Compared to a deep learning model, this model offers interpretability, higher accuracy, and a shorter running time.
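A sketch of how a hybrid chromosome could be decoded into a feature mask plus random forest hyperparameters, and scored with the two objectives a NSGA-II-style sorter would use, is shown below. The encoding layout, parameter ranges, and fold count are assumptions for illustration, not the paper's exact design.

```python
# Decode one chromosome into (feature subset, RF parameters) and return two objectives
# to minimize: error rate and number of selected features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evaluate(chromosome, X, y):
    n_feat = X.shape[1]
    mask = chromosome[:n_feat] > 0.5                      # binary part: feature subset
    n_estimators = int(50 + chromosome[n_feat] * 450)     # real-coded part: RF parameters
    max_depth = int(2 + chromosome[n_feat + 1] * 18)
    if mask.sum() == 0:
        return 1.0, n_feat                                # worst case for an empty subset
    rf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth,
                                random_state=0)
    acc = cross_val_score(rf, X[:, mask], y, cv=5).mean()
    return 1.0 - acc, int(mask.sum())                     # objectives for nondominated sorting
```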
Styles: APA, Harvard, Vancouver, ISO, and others
48

Lou, Zhi. "Discussion on the Optimization Design of Feature Extraction Methods of the EMG Signal." Applied Mechanics and Materials 155-156 (February 2012): 674–77. http://dx.doi.org/10.4028/www.scientific.net/amm.155-156.674.

Full text of the source
Abstract:
This paper introduces a digital signal processor (DSP)-based system design in which an individual's EMG (electromyographic) signal data are acquired by a computer to monitor the dynamic activity of muscles, and the power spectrum of the acquired data is estimated for normal and pathological conditions. In the acquisition system, the input signal is preprocessed by two differential amplifiers to obtain the linear features of the EMG signal. The acquired EMG signal is then digitized and transmitted to the computer for analysis. The optimized system is of practical significance for detection and analysis in the clinical diagnosis of human functional status and rehabilitation.
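The power-spectrum estimation step on the computer side can be sketched with Welch's method; the sampling rate, window length, and spectral descriptors below are assumptions for illustration, and the DSP acquisition front end is outside the scope of the snippet.

```python
# Welch power spectral density of a digitized EMG record, plus two common descriptors.
import numpy as np
from scipy.signal import welch

FS = 1000                                                # assumed sampling rate in Hz
emg = np.random.default_rng(0).normal(size=5 * FS)       # placeholder for a digitized EMG record

freqs, psd = welch(emg, fs=FS, nperseg=512)
mean_freq = (freqs * psd).sum() / psd.sum()              # mean frequency of the spectrum
median_freq = freqs[np.searchsorted(np.cumsum(psd), psd.sum() / 2)]
print(mean_freq, median_freq)
```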
Styles: APA, Harvard, Vancouver, ISO, and others
49

Manjunatha Swamy, C., S. Meenakshi Sundaram, and M. R. Lokesh. "Performance analysis of feature selection and classification in Big Data Information extraction." Saudi Journal of Engineering and Technology 8, no. 03 (March 19, 2023): 62–70. http://dx.doi.org/10.36348/sjet.2023.v08i03.002.

Full text of the source
Abstract:
Purpose: Information extraction from big data is improved either by reducing the number of features in a data set or by selecting features using intelligent data analysis. Generally, big data sets are complex to process using traditional approaches. Feature selection is essential in big data information extraction because it chooses the subset of features that influence the final classification; reducing the number of selected features leads to enhanced accuracy and efficiency of data extraction together with the other attributes used in the mathematical model. This work aims to improve classifier performance using an effective feature selection model based on an enhanced binary bat algorithm (EBBA), which uses local and global optimization factors to improve optimization performance. Experiments were carried out on different datasets selected to test the performance of the proposed algorithm, and its performance was demonstrated to be better than that of other algorithms. Design: The purpose of this paper is to provide an effective feature selection model for big data information extraction. An enhanced binary bat algorithm is proposed to improve attribute selection using local and global optimization methods, and multisource data are classified with the selected features using a labeled approach. The particular information extraction for multi-view multi-label (PIMM) approach is compared with the EBBA algorithm. Further, to enhance the effectiveness of shared and specific information in big data [3], the delta and omega factors are set to fuse information from different viewpoints, and online analysis of relevance with redundancy analysis has also been incorporated. Findings: All experiments were carried out on different datasets, examining the number of iterations and the fitness of the attributes, to validate the performance of the proposed algorithm. Experimental results and graphs show that the proposed methodology improves the overall performance of optimization using PIMM models. Originality: A feature selection model based on the binary bat algorithm is the focus of this paper. Subset selection and feature ranking are the two important methods used in this approach. Experiments were conducted on datasets to analyze the patterns in the number of iterations and the fitness of the attributes over selection. The improvement in feature selection leads to better classification accuracy of the proposed model compared to other nature-inspired techniques.
Styles: APA, Harvard, Vancouver, ISO, and others
50

Wu, Di, Wanying Zhang, Heming Jia, and Xin Leng. "Simultaneous Feature Selection and Support Vector Machine Optimization Using an Enhanced Chimp Optimization Algorithm." Algorithms 14, no. 10 (September 28, 2021): 282. http://dx.doi.org/10.3390/a14100282.

Full text of the source
Abstract:
The Chimp Optimization Algorithm (ChOA), a novel meta-heuristic algorithm, has been proposed in recent years. It divides the population into four different levels for the purpose of hunting. However, there are still some defects that lead the algorithm to fall into local optima. To overcome these defects, an Enhanced Chimp Optimization Algorithm (EChOA) is developed in this paper. Highly Disruptive Polynomial Mutation (HDPM) is introduced to further explore the population space and increase population diversity. Then, the Spearman's rank correlation coefficient between the chimps with the highest fitness and the lowest fitness is calculated. To avoid local optima, the chimps with low fitness values are equipped with the Beetle Antenna Search algorithm (BAS) to obtain visual ability. Through the introduction of these three strategies, the population's exploration and exploitation abilities are enhanced. On this basis, this paper proposes an EChOA-SVM model, which can optimize parameters while selecting features, so that the maximum classification accuracy can be achieved with as few features as possible. To verify the effectiveness of the proposed method, it is compared with seven common methods, including the original algorithm. Seventeen benchmark datasets from the UCI machine learning repository are used to evaluate the accuracy, number of features, and fitness of these methods. Experimental results show that the classification accuracy of the proposed method is better than the other methods on most datasets, and the number of features required by the proposed method is also smaller than that of the other algorithms.
Styles: APA, Harvard, Vancouver, ISO, and others