Journal articles on the topic 'Feature extraction'

To see the other types of publications on this topic, follow the link: Feature extraction.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Feature extraction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Xia, Liegang, Shulin Mi, Junxia Zhang, Jiancheng Luo, Zhanfeng Shen, and Yubin Cheng. "Dual-Stream Feature Extraction Network Based on CNN and Transformer for Building Extraction." Remote Sensing 15, no. 10 (May 22, 2023): 2689. http://dx.doi.org/10.3390/rs15102689.

Full text
Abstract:
Automatically extracting 2D buildings from high-resolution remote sensing images is among the most popular research directions in remote sensing information extraction. Semantic segmentation based on a CNN or transformer has greatly improved building extraction accuracy. A CNN is good at local feature extraction, but its ability to acquire global features is poor, which can lead to incorrect and missed detection of buildings. The advantage of transformer models lies in their global receptive field, but they do not perform well in extracting local features, resulting in poor local detail in building extraction. In this paper, we propose a dual-stream feature extraction network (DSFENet) based on a CNN and a transformer for accurate building extraction. In the encoder, convolution extracts the local features of buildings, and the transformer realizes their global representation. The effective combination of local and global features greatly enhances the network's feature extraction ability. We validated DSFENet on the Google Image dataset and the ISPRS Vaihingen dataset, where it achieved the best accuracy compared to other state-of-the-art models.
APA, Harvard, Vancouver, ISO, and other styles
2

He, Haiqing, Yan Wei, Fuyang Zhou, and Hai Zhang. "A Deep Neural Network for Road Extraction with the Capability to Remove Foreign Objects with Similar Spectra." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1-2024 (May 10, 2024): 193–99. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-2024-193-2024.

Full text
Abstract:
Abstract. Existing road extraction methods based on deep learning often struggle to distinguish ground objects that share similar spectral information, such as roads and buildings. Consequently, this study proposes a dual encoder-decoder deep neural network to address road extraction in complex backgrounds. In the feature extraction stage, the first encoder-decoder is designed to extract road features, and the second to extract building features. During the feature fusion stage, the road feature maps and building feature maps are first input into a convolutional block attention module, which amplifies the features of different channels and extracts key information from diverse spatial positions; feature fusion is then performed by element-by-element subtraction. The resulting road features, constrained by building features, preserve more precise road feature information. Experimental results demonstrate that the model successfully learns both road and building features concurrently. It effectively distinguishes between easily confused roads and buildings with similar spectral information, ultimately enhancing the accuracy of road extraction.
APA, Harvard, Vancouver, ISO, and other styles
3

V., Dr Sellam. "Text Analysis Via Composite Feature Extraction." Journal of Advanced Research in Dynamical and Control Systems 12, no. 4 (March 31, 2020): 310–20. http://dx.doi.org/10.5373/jardcs/v12i4/20201445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Park, Sun-Bae, and Do-Sik Yoo. "Priority feature extraction using projective transform feature extraction technique." Journal of Korean Institute of Intelligent Systems 34, no. 2 (April 30, 2024): 110–16. http://dx.doi.org/10.5391/jkiis.2024.34.2.110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ohl, Frank W., and Henning Scheich. "Feature extraction and feature interaction." Behavioral and Brain Sciences 21, no. 2 (April 1998): 278. http://dx.doi.org/10.1017/s0140525x98431170.

Full text
Abstract:
The idea of the orderly output constraint is compared with recent findings about the representation of vowels in the auditory cortex of an animal model for human speech sound processing (Ohl & Scheich 1997). The comparison allows a critical consideration of the idea of neuronal “feature extractors,” which is of relevance to the noninvariance problem in speech perception.
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Ziyan. "Feature Extraction and Identification of Calligraphy Style Based on Dual Channel Convolution Network." Security and Communication Networks 2022 (May 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/4187797.

Full text
Abstract:
To improve calligraphy style feature extraction and identification, this study proposes a technique based on a two-channel convolutional neural network and constructs an intelligent calligraphy style feature extraction and identification system. Moreover, this paper improves the C3D network model and retains two fully connected layers. In addition, by extracting the outline skeleton and stroke features of calligraphy characters, this paper calculates the feature weights and an authenticity determination function and constructs an authenticity identification system. The experimental study shows that the proposed system based on the dual-channel convolutional neural network performs well in calligraphy style feature extraction and identification.
APA, Harvard, Vancouver, ISO, and other styles
7

Zheng, Jian, Hongchun Qu, Zhaoni Li, Lin Li, Xiaoming Tang, and Fei Guo. "A novel autoencoder approach to feature extraction with linear separability for high-dimensional data." PeerJ Computer Science 8 (August 11, 2022): e1061. http://dx.doi.org/10.7717/peerj-cs.1061.

Full text
Abstract:
Feature extraction often relies on sufficient information in the input data; however, the distribution of data in a high-dimensional space is too sparse to provide that information. High dimensionality also hampers the search for features scattered across subspaces, so feature extraction from high-dimensional data is a difficult task. To address this issue, this article proposes a novel autoencoder method using a Mahalanobis distance metric of rescaling transformation. The key idea is that by implementing the Mahalanobis distance metric of the rescaling transformation, the difference between the reconstructed distribution and the original distribution can be reduced, improving the autoencoder's feature extraction ability. Results show that the proposed approach outperforms state-of-the-art methods in terms of both the accuracy of feature extraction and the linear separability of the extracted features. We argue that distance metric-based methods are better suited than feature selection-based methods for extracting linearly separable features from high-dimensional data: in a high-dimensional space, evaluating feature similarity is easier than evaluating feature importance, so distance metric methods, which evaluate feature similarity, gain an advantage over feature selection methods, which assess feature importance, even though evaluating feature importance is more computationally efficient than evaluating feature similarity.
APA, Harvard, Vancouver, ISO, and other styles
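The rescaling idea above, replacing the usual Euclidean reconstruction error with a Mahalanobis distance, can be sketched in a few lines (a minimal numpy illustration of the general metric, not the authors' implementation; the function name and the regularization constant are our assumptions):

```python
import numpy as np

def mahalanobis_recon_error(x, x_hat, cov):
    """Mahalanobis distance between an input vector and its reconstruction.

    Weighting the residual by the inverse covariance rescales every direction
    by how strongly the data varies along it, so reconstruction errors in
    low-variance directions are penalized more heavily.
    """
    diff = x - x_hat
    inv_cov = np.linalg.inv(cov + 1e-9 * np.eye(cov.shape[0]))  # regularized inverse
    return float(np.sqrt(diff @ inv_cov @ diff))
```

With an identity covariance this reduces to ordinary Euclidean distance; a covariance estimated from the training data rescales each direction by its actual spread.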
8

Taha, Mohammed A., Hanaa M. Ahmed, and Saif O. Husain. "Iris Features Extraction and Recognition based on the Scale Invariant Feature Transform (SIFT)." Webology 19, no. 1 (January 20, 2022): 171–84. http://dx.doi.org/10.14704/web/v19i1/web19013.

Full text
Abstract:
Iris biometric authentication is considered one of the most dependable biometric modalities for identifying persons: iris patterns have invariant, stable, and distinguishing properties for personal identification, and iris recognition has received growing attention due to its excellent dependability. Current iris recognition methods give good results, especially when NIR imaging and specific capture conditions are used in collaboration with the user. On the other hand, images captured in the visible wavelength (VW) are affected by noise such as blur, eye skin, occlusion, and reflection, which negatively affects the overall performance of recognition systems. For both NIR and visible-spectrum iris images, this article presents an effective iris feature extraction strategy based on the scale-invariant feature transform (SIFT) algorithm. The proposed method was tested on different databases: CASIA v1 and ITTD v1 as NIR images, and UBIRIS v1 as visible-light color images. The proposed system gave good accuracy rates compared to existing systems, achieving 96.2% on CASIA v1 and 96.4% on ITTD v1, while accuracy dropped to 84.0% on UBIRIS v1.
APA, Harvard, Vancouver, ISO, and other styles
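SIFT descriptors are built from histograms of gradient orientations around keypoints. The function below is a toy numpy version of that single ingredient (our simplification for illustration: no scale space, no keypoint detection, and not the authors' code):

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    """L2-normalized histogram of gradient orientations over an image patch."""
    gy, gx = np.gradient(patch.astype(float))          # per-pixel gradients
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), 2 * np.pi)      # orientation in [0, 2*pi)
    idx = np.minimum((angle / (2 * np.pi) * bins).astype(int), bins - 1)
    hist = np.bincount(idx.ravel(), weights=magnitude.ravel(), minlength=bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm else hist
```

A horizontal intensity ramp, for example, puts all of its gradient energy into the bin for angle 0, while the transposed patch shifts that energy to the bin for a 90-degree orientation.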
9

Soechting, John F., Weilai Song, and Martha Flanders. "Haptic Feature Extraction." Cerebral Cortex 16, no. 8 (October 12, 2005): 1168–80. http://dx.doi.org/10.1093/cercor/bhj058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

He, Dong-Chen, Li Wang, and Jean Guibert. "Texture feature extraction." Pattern Recognition Letters 6, no. 4 (September 1987): 269–73. http://dx.doi.org/10.1016/0167-8655(87)90087-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

He, Chao, and Gang Ma. "Cooperative Cloud-Edge Feature Extraction Architecture for Mobile Image Retrieval." Complexity 2021 (November 1, 2021): 1–7. http://dx.doi.org/10.1155/2021/7937922.

Full text
Abstract:
Mobile image retrieval greatly facilitates our lives and work by providing various retrieval services. The existing mobile image retrieval scheme is based on a mobile cloud-edge computing architecture: user equipment captures images and uploads the image data to the edge server, which preprocesses the data, extracts features, and uploads the extracted features to the cloud server. However, feature extraction on the cloud server does not cooperate with feature extraction on the edge server, so features cannot be extracted effectively and image retrieval accuracy is low. We therefore propose a collaborative cloud-edge feature extraction architecture for mobile image retrieval. The cloud server generates a projection matrix from the image dataset with a feature extraction algorithm, and the edge server extracts features from each uploaded image with that projection matrix; in effect, the cloud server guides the edge server's feature extraction. This architecture can effectively extract features from image data on the edge server, reduce network load, and save bandwidth. The experimental results indicate that this scheme can upload only a few features yet achieve high retrieval accuracy, reducing feature matching time by about 69.5% with similar retrieval accuracy.
APA, Harvard, Vancouver, ISO, and other styles
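The abstract does not name the projection algorithm; assuming something PCA-like, the division of labor between cloud and edge could be sketched as follows (our illustration only; the function names and the SVD-based fit are assumptions, not the paper's code):

```python
import numpy as np

def cloud_fit_projection(X, k):
    """'Cloud side': learn a k-dimensional projection matrix from the dataset."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)  # principal directions
    return vt[:k].T, mu          # projection matrix (d x k) and the data mean

def edge_extract(x, P, mu):
    """'Edge side': compress one image vector using the downloaded matrix."""
    return (x - mu) @ P
```

The cloud fits the matrix once from the whole dataset; the edge only needs the small matrix `P` and mean `mu` to compress every captured image before upload, which is where the bandwidth saving comes from.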
12

Im, Heeju, and Yong-Suk Choi. "UAT: Universal Attention Transformer for Video Captioning." Sensors 22, no. 13 (June 25, 2022): 4817. http://dx.doi.org/10.3390/s22134817.

Full text
Abstract:
Video captioning via encoder–decoder structures is a successful sentence generation method, and using several feature extraction networks to obtain multiple kinds of visual features in the encoding process is a standard way to improve model performance. Such feature extraction networks are typically convolutional neural networks (CNNs) used with frozen weights, which causes some problems. First, when the feature extraction model is frozen, it cannot learn further by exploiting the backpropagation of the loss obtained from video captioning training; in particular, this blocks the feature extraction models from learning more about spatial information. Second, model complexity increases further when multiple CNNs are used. Additionally, the authors of the Vision Transformer (ViT) pointed out an inductive bias of CNNs, the local receptive field. Therefore, we propose a full transformer structure with end-to-end learning for video captioning to overcome these problems. As the feature extraction model we use a vision transformer (ViT), and we propose feature extraction gates (FEGs) to enrich the input of the captioning model through that extraction model. Additionally, we design a universal encoder attraction (UEA) that uses all encoder layer outputs and performs self-attention on them. The UEA addresses the lack of information about the video's temporal relationships, because our method uses only the appearance feature. We evaluate our model against several recent models on the MSRVTT and MSVD benchmark datasets and show competitive performance: the proposed model performs captioning with only a single feature, yet in some cases it outperforms models that use several features.
APA, Harvard, Vancouver, ISO, and other styles
13

Suhaidi, Mustazzihim, Rabiah Abdul Kadir, and Sabrina Tiun. "A REVIEW OF FEATURE EXTRACTION METHODS ON MACHINE LEARNING." Journal of Information System and Technology Management 6, no. 22 (September 1, 2021): 51–59. http://dx.doi.org/10.35631/jistm.622005.

Full text
Abstract:
Extracting features from input data is vital for successful classification and machine learning tasks. Classification is the process of assigning an object to one of several predefined categories. Many different feature selection and feature extraction methods exist and are widely used. Feature extraction is a transformation of large input data into a low-dimensional feature vector, which serves as input to a classification or machine learning algorithm. Feature extraction poses major challenges, which are discussed in this paper; chief among them is learning and extracting knowledge from text datasets in order to make correct decisions. The objective of this paper is to give an overview of methods used in feature extraction for various applications, using a dataset containing a collection of texts taken from social media.
APA, Harvard, Vancouver, ISO, and other styles
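As a concrete instance of the "large input data to low-dimensional feature vector" transformation the review describes, here is a minimal bag-of-words extractor for short social media texts (our generic sketch of the classic technique, not a method taken from the paper):

```python
from collections import Counter

def build_vocab(texts, max_features=1000):
    """Keep the most frequent tokens across the corpus as the feature space."""
    counts = Counter(tok for t in texts for tok in t.lower().split())
    return [w for w, _ in counts.most_common(max_features)]

def extract_features(text, vocab):
    """Map a text to a fixed-length term-frequency vector over the vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]
```

Every text, regardless of length, becomes a vector of the same dimension, which is exactly what downstream classifiers require; unseen words are simply dropped.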
14

Krishna, Nanditha, and K. Nagamani. "Understanding and Visualization of Different Feature Extraction Processes in Glaucoma Detection." Journal of Physics: Conference Series 2327, no. 1 (August 1, 2022): 012023. http://dx.doi.org/10.1088/1742-6596/2327/1/012023.

Full text
Abstract:
Abstract In recent years mobile phone usage has increased, and it is a major contributor to vision loss in many people: continuous use increases the pressure inside the optic nerve head, which can lead to glaucoma, although glaucoma also has many other causes. The purpose of this paper is to determine the importance of the feature extraction process in glaucoma detection and to implement different techniques for extracting suitable features for training a machine learning model using pre-processed OCT (Optical Coherence Tomography) images. The two major feature extraction techniques described in this paper are convolutional neural network (CNN)-based feature extraction and image processing-based feature extraction. A performance analysis was conducted to find the better technique, and both performed well.
APA, Harvard, Vancouver, ISO, and other styles
15

Sun, Da Chun. "Investigation of Local Feature Extraction." Applied Mechanics and Materials 644-650 (September 2014): 4653–56. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.4653.

Full text
Abstract:
Feature extraction is an important subject in image analysis, pattern recognition, computer vision, and related fields, and it is fundamental to solving many image problems. Because local features remain invariant under image translation and rotation, and under changes in zoom, illumination, or viewpoint, they have been widely applied to image registration, image mosaicing, object identification, target tracking, digital watermarking, and image retrieval, and extracting stable image features has attracted considerable interest. In this paper, we give a definition of local features and the steps for extracting them; the difficulties and trends of this technology are also briefly discussed.
APA, Harvard, Vancouver, ISO, and other styles
16

Zou, Ji, Chao Zhang, Zhongjing Ma, Lei Yu, Kaiwen Sun, and Tengfei Liu. "Image Feature Analysis and Dynamic Measurement of Plantar Pressure Based on Fusion Feature Extraction." Traitement du Signal 38, no. 6 (December 31, 2021): 1829–35. http://dx.doi.org/10.18280/ts.380627.

Full text
Abstract:
Footprint recognition and parameter measurement are widely used in fields like medicine, sports, and criminal investigation. Some results have been achieved in the analysis of plantar pressure image features based on image processing. But the common algorithms of image feature extraction often depend on computer processing power and massive datasets. Focusing on the auxiliary diagnosis and treatment of foot rehabilitation of foot laceration patients, this paper explores the image feature analysis and dynamic measurement of plantar pressure based on fusion feature extraction. Firstly, the authors detailed the idea of extracting image features with a fusion algorithm, which integrates wavelet transform and histogram of oriented gradients (HOG) descriptor. Next, the plantar parameters were calculated based on plantar pressure images, and the measurement steps of plantar parameters were given. Finally, the feature extraction effect of the proposed algorithm was verified, and the measured results on plantar parameters were obtained through experiments.
APA, Harvard, Vancouver, ISO, and other styles
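The paper's fusion of a wavelet transform with a HOG descriptor can be suggested in a much-simplified numpy form: a one-level Haar approximation (LL band) followed by a single global orientation histogram, with the two kinds of information concatenated into one vector. This is our sketch of the fusion idea under those assumptions, not the authors' algorithm:

```python
import numpy as np

def haar_approx(img):
    """One-level 2-D Haar approximation (LL band): 2x2 block averages."""
    img = img.astype(float)
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]                                  # crop to even dimensions
    return (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def hog_like(img, bins=9):
    """Global histogram of unsigned gradient orientations, L2-normalized."""
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    idx = np.minimum((angle / np.pi * bins).astype(int), bins - 1)
    hist = np.bincount(idx.ravel(), weights=magnitude.ravel(), minlength=bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm else hist

def fused_features(img):
    """Concatenate the HOG-like histogram with wavelet-domain statistics."""
    ll = haar_approx(img)
    return np.concatenate([hog_like(ll), [ll.mean(), ll.std()]])
```

Computing the orientation histogram on the LL band rather than the raw image is the fusion step: the wavelet stage denoises and downsamples before the gradient statistics are taken.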
17

He, Wangpeng, Peipei Zhang, Xuan Liu, Binqiang Chen, and Baolong Guo. "Group-Sparse Feature Extraction via Ensemble Generalized Minimax-Concave Penalty for Wind-Turbine-Fault Diagnosis." Sustainability 14, no. 24 (December 14, 2022): 16793. http://dx.doi.org/10.3390/su142416793.

Full text
Abstract:
Extracting weak fault features from noisy measured signals is critical for the diagnosis of wind turbine faults. In this paper, a novel group-sparse feature extraction method via an ensemble generalized minimax-concave (GMC) penalty is proposed for machinery health monitoring. Specifically, the proposed method tackles the problem of formulating large useful magnitude values as isolated features in the original GMC-based sparse feature extraction method. To accurately estimate group-sparse fault features, the proposed method formulates an effective unconstrained optimization problem wherein the group-sparse structure is incorporated into non-convex regularization. Moreover, the convex condition is proved to maintain the convexity of the whole formulated cost function. In addition, the setting criteria of the regularization parameter are investigated. A simulated signal is presented to verify the performance of the proposed method for group-sparse feature extraction. Finally, the effectiveness of the proposed group-sparse feature extraction method is further validated by experimental fault diagnosis cases.
APA, Harvard, Vancouver, ISO, and other styles
18

SONG, JEONG-JUN, and FOROUZAN GOLSHANI. "3D OBJECT FEATURE EXTRACTION BASED ON SHAPE SIMILARITY." International Journal on Artificial Intelligence Tools 12, no. 01 (March 2003): 37–56. http://dx.doi.org/10.1142/s0218213003001101.

Full text
Abstract:
We introduce two complementary feature extraction methods for shape similarity based retrieval of 3D object models. The proposed methods achieve effectiveness and robustness in searching for similar 3D models and support two essential query modes: query by 3D model and query by 2D image. Our feature extraction scheme is inspired by observing human behavior in recognizing 3D objects: extracting the spatial arrangement of a 3D object can be considered as using human tactile sensation without visual information, while extracting 2D features from multiple views can be considered as examining an object by moving the viewing points (or camera positions). We propose a hybrid method of 3D model identification by object-centered feature extraction, which utilizes the Extended Gaussian Image (EGI) surface normal distribution and the distribution of distances between object surface points and the origin. Another method, needed in parallel, is a hybrid method using view-centered features, which adopts simple geometric attributes such as circularity, rectangularity, and eccentricity. To generate a signature for view-centered features, we measure the distances of a feature between different views and construct a histogram of those distances. We also address the fundamental problem of obtaining sample points on an object surface, which is important for extracting reliable features from the object model.
APA, Harvard, Vancouver, ISO, and other styles
19

Maddumala, Venkata Rao, and Arunkumar R. "A Weight Based Feature Extraction Model on Multifaceted Multimedia Bigdata Using Convolutional Neural Network." Ingénierie des systèmes d information 25, no. 6 (December 31, 2020): 729–35. http://dx.doi.org/10.18280/isi.250603.

Full text
Abstract:
This paper presents a principal technique for feature extraction on multifaceted multimedia, a challenging task in handling big data. Analyzing and extracting valuable features from high-dimensional datasets pushes the limits of statistical methods and strategies, and conventional techniques generally perform poorly on such datasets. Small sample size has always been a problem in statistical tests, and it is aggravated in high-dimensional data, where the feature dimension equals or exceeds the number of samples. The power of any statistical test is directly proportional to its ability to reject an invalid hypothesis, and sample size is a significant factor in the error probabilities on which valid conclusions rest. Thus, one effective way of handling high-dimensional datasets is to reduce their dimensionality through feature selection and extraction, so that valid, accurate analysis becomes practical. Clustering is the act of finding hidden or similar structure in data; it is one of the most widely used techniques for discovering useful features, assigning a weight to each feature without predefining the classes. In any feature selection and extraction procedure, the three main concerns are statistical accuracy, model interpretability, and computational complexity, and for any classification model it is important that none of these three is undermined. In this manuscript, a Weight Based Feature Extraction Model on Multifaceted Multimedia Big Data (WbFEM-MMB) is proposed, which extracts useful features from videos. The feature extraction strategy uses features from discrete cosine methods, extracted with a pre-trained Convolutional Neural Network (CNN). The proposed method is compared with traditional methods, and the results show that it exhibits better performance and accuracy in extracting features from multifaceted multimedia data.
APA, Harvard, Vancouver, ISO, and other styles
20

Gao, Zhangchi, and Shoubin Li. "Joint Information Extraction Model Based on Feature Sharing." International Journal of Emerging Technologies and Advanced Applications 1, no. 2 (March 26, 2024): 16–18. http://dx.doi.org/10.62677/ijetaa.2402107.

Full text
Abstract:
To address the challenge of efficiently and accurately extracting entities, relationships, and events from unstructured text, a joint information extraction model based on feature sharing is proposed. This model utilizes the contextual information of entities, relationships, and events, and integrates entity extraction, relationship extraction, and event extraction tasks through a multi-feature cascade encoder to achieve joint extraction. To validate the effectiveness of the model, comparative analysis was conducted on military news datasets, comparing against two typical information extraction models. Results demonstrated superiority over current state-of-the-art baselines.
APA, Harvard, Vancouver, ISO, and other styles
21

Wang, Yuanzhi, Qingzhan Zhao, Yuzhen Wu, Wenzhong Tian, and Guoshun Zhang. "SCA-Net: Multiscale Contextual Information Network for Building Extraction Based on High-Resolution Remote Sensing Images." Remote Sensing 15, no. 18 (September 11, 2023): 4466. http://dx.doi.org/10.3390/rs15184466.

Full text
Abstract:
Accurately extracting buildings is essential for urbanization rate statistics, urban planning, resource allocation, etc. High-resolution remote sensing images contain rich building information and thus provide an important data source for building extraction. However, building types are extremely varied, with large differences in size, and background environments are extremely complex, so accurately extracting the spatial details of multi-scale buildings remains a difficult problem worth studying. To this end, this study selects the representative Xinjiang Tumxuk urban area as the study area. A building extraction network (SCA-Net) with feature highlighting, multi-scale sensing, and multi-level feature fusion is proposed, which includes Selective kernel spatial Feature Extraction (SFE), Contextual Information Aggregation (CIA), and Attentional Feature Fusion (AFF) modules. First, Selective kernel spatial Feature Extraction modules are composed in cascade, highlighting the information representation of features and improving feature extraction capability. Adding a Contextual Information Aggregation module enables the acquisition of multi-scale contextual information. The Attentional Feature Fusion module bridges the semantic gap between high-level and low-level features to achieve effective fusion across levels. The classical U-Net, Segnet, Deeplab v3+, and HRNet v2 semantic segmentation models are compared on the self-built Tmsk and WHU building datasets. The experimental results show that the proposed algorithm can effectively extract multi-scale buildings in complex backgrounds, with IoUs of 85.98% and 89.90% on the two datasets, respectively. SCA-Net is a suitable method for building extraction from high-resolution remote sensing images, with good usability and generalization.
APA, Harvard, Vancouver, ISO, and other styles
22

Ge, Zixian, Guo Cao, Hao Shi, Youqiang Zhang, Xuesong Li, and Peng Fu. "Compound Multiscale Weak Dense Network with Hybrid Attention for Hyperspectral Image Classification." Remote Sensing 13, no. 16 (August 20, 2021): 3305. http://dx.doi.org/10.3390/rs13163305.

Full text
Abstract:
Recently, hyperspectral image (HSI) classification has become a popular research direction in remote sensing. The emergence of convolutional neural networks (CNNs) has greatly promoted the development of this field and demonstrated excellent classification performance. However, due to the particularity of HSIs, redundant information and limited samples pose huge challenges for extracting strong discriminative features. In addition, addressing how to fully mine the internal correlation of the data or features based on the existing model is also crucial in improving classification performance. To overcome the above limitations, this work presents a strong feature extraction neural network with an attention mechanism. Firstly, the original HSI is weighted by means of the hybrid spectral–spatial attention mechanism. Then, the data are input into a spectral feature extraction branch and a spatial feature extraction branch, composed of multiscale feature extraction modules and weak dense feature extraction modules, to extract high-level semantic features. These two features are compressed and fused using the global average pooling and concat approaches. Finally, the classification results are obtained by using two fully connected layers and one Softmax layer. A performance comparison shows the enhanced classification performance of the proposed model compared to the current state of the art on three public datasets.
APA, Harvard, Vancouver, ISO, and other styles
23

Long, Gang, and Zhaoxin Zhang. "Deep Encrypted Traffic Detection: An Anomaly Detection Framework for Encryption Traffic Based on Parallel Automatic Feature Extraction." Computational Intelligence and Neuroscience 2023 (March 10, 2023): 1–12. http://dx.doi.org/10.1155/2023/3316642.

Full text
Abstract:
With an increasing number of network attacks using encrypted communication, anomaly detection for encrypted traffic is of great importance to ensure reliable network operation. However, existing feature extraction methods for encrypted traffic anomaly detection have difficulty extracting features, resulting in low efficiency. In this paper, we propose a framework for encrypted traffic anomaly detection based on parallel automatic feature extraction, called deep encrypted traffic detection (DETD). DETD uses a parallel small-scale multilayer stacked autoencoder to extract local traffic features from encrypted traffic and then adopts an L1 regularization-based feature selection algorithm to select the most representative feature set for the final anomaly detection task. The experimental results show that DETD is robust in feature extraction: its feature extraction efficiency is 66% higher than that of a conventional stacked autoencoder, and its anomaly detection performance is as high as 99.998%, outperforming the deep full-range framework and other neural network anomaly detection algorithms.
APA, Harvard, Vancouver, ISO, and other styles
24

Ji, Guanni, Yu Wang, and Fei Wang. "Comparative Study on Feature Extraction of Marine Background Noise Based on Nonlinear Dynamic Features." Entropy 25, no. 6 (May 25, 2023): 845. http://dx.doi.org/10.3390/e25060845.

Full text
Abstract:
Marine background noise (MBN) is the background noise of the marine environment, which can be used to invert the parameters of the marine environment. However, due to the complexity of the marine environment, it is difficult to extract the features of the MBN. In this paper, we study the feature extraction method of MBN based on nonlinear dynamics features, where the nonlinear dynamical features include two main categories: entropy and Lempel–Ziv complexity (LZC). We have performed single feature and multiple feature comparative experiments on feature extraction based on entropy and LZC, respectively: for entropy-based feature extraction experiments, we compared feature extraction methods based on dispersion entropy (DE), permutation entropy (PE), fuzzy entropy (FE), and sample entropy (SE); for LZC-based feature extraction experiments, we compared feature extraction methods based on LZC, dispersion LZC (DLZC) and permutation LZC (PLZC), and dispersion entropy-based LZC (DELZC). The simulation experiments prove that all kinds of nonlinear dynamics features can effectively detect the change of time series complexity, and the actual experimental results show that regardless of the entropy-based feature extraction method or LZC-based feature extraction method, they both present better feature extraction performance for MBN.
APA, Harvard, Vancouver, ISO, and other styles
25

D. Majeed, Hamsa, and Goran Saman Nariman. "Offline Handwritten English Alphabet Recognition (OHEAR)." UHD Journal of Science and Technology 6, no. 2 (August 20, 2022): 29–38. http://dx.doi.org/10.21928/uhdjst.v6n2y2022.pp29-38.

Full text
Abstract:
In most pattern recognition models, recognition accuracy plays a major role in the model's efficiency. The feature extraction phase aims to summarize the most informative, non-redundant details contained in the patterns in a form sufficient to feed the model's classifier and facilitate the subsequent learning process. This work proposes a highly accurate offline handwritten English alphabet recognition (OHEAR) model that efficiently extracts the most informative features from a constructed, self-collected dataset through three main phases: pre-processing, feature extraction, and classification. Feature extraction is the core phase of OHEAR and combines both statistical and structural features of the alphabet sample image. Four feature extraction portions are utilized: tracking adjoin pixels, chain of redundancy, scaled-occupancy-rate chain, and the density feature. The resulting feature set of 27 elements is provided to a multi-class support vector machine (MSVM) for classification. OHEAR achieved a recognition accuracy of 98.4%.
APA, Harvard, Vancouver, ISO, and other styles
26

Xu, Zeyu, Cheng Su, Shirou Wang, and Xiaocan Zhang. "Local and Global Spectral Features for Hyperspectral Image Classification." Remote Sensing 15, no. 7 (March 28, 2023): 1803. http://dx.doi.org/10.3390/rs15071803.

Full text
Abstract:
Hyperspectral images (HSI) contain powerful spectral characterization capabilities and are widely used especially for classification applications. However, the rich spectrum contained in HSI also increases the difficulty of extracting useful information, which makes the feature extraction method significant as it enables effective expression and utilization of the spectrum. Traditional HSI feature extraction methods design spectral features manually, which is likely to be limited by the complex spectral information within HSI. Recently, data-driven methods, especially the use of convolutional neural networks (CNNs), have shown great improvements in performance when processing image data owing to their powerful automatic feature learning and extraction abilities and are also widely used for HSI feature extraction and classification. The CNN extracts features based on the convolution operation. Nevertheless, the local perception of the convolution operation makes CNN focus on the local spectral features (LSF) and weakens the description of features between long-distance spectral ranges, which will be referred to as global spectral features (GSF) in this study. LSF and GSF describe the spectral features from two different perspectives and are both essential for determining the spectrum. Thus, in this study, a local-global spectral feature (LGSF) extraction and optimization method is proposed to jointly consider the LSF and GSF for HSI classification. To increase the relationship between spectra and the possibility to obtain features with more forms, we first transformed the 1D spectral vector into a 2D spectral image. Based on the spectral image, the local spectral feature extraction module (LSFEM) and the global spectral feature extraction module (GSFEM) are proposed to automatically extract the LGSF. The loss function for spectral feature optimization is proposed to optimize the LGSF and obtain improved class separability inspired by contrastive learning. 
We further enhanced the LGSF by introducing spatial relation and designed a CNN constructed using dilated convolution for classification. The proposed method was evaluated on four widely used HSI datasets, and the results highlighted its comprehensive utilization of spectral information as well as its effectiveness in HSI classification.
APA, Harvard, Vancouver, ISO, and other styles
27

Santhiran, Rajeswary, Kasturi Dewi Varathan, and Yin Kia Chiam. "Feature extraction from customer reviews using enhanced rules." PeerJ Computer Science 10 (January 31, 2024): e1821. http://dx.doi.org/10.7717/peerj-cs.1821.

Full text
Abstract:
Opinion mining is gaining significant research interest, as it directly and indirectly provides a better avenue for understanding customers, their sentiments toward a service or product, and their purchasing decisions. However, extracting every opinion feature from unstructured customer review documents is challenging, especially since these reviews are often written in native languages and contain grammatical and spelling errors. Moreover, existing pattern rules frequently exclude features and opinion words that are not strictly nouns or adjectives. Thus, selecting suitable features when analyzing customer reviews is the key to uncovering their actual expectations. This study aims to enhance the performance of explicit feature extraction from product review documents. To achieve this, an approach that employs sequential pattern rules is proposed to identify and extract features with associated opinions. The improved rule set totals 41 pattern rules, including 16 new rules introduced in this study and 25 existing pattern rules from previous research. Averaged over the testing results of five datasets, incorporating this study's 16 new rules improved feature extraction precision by 6%, recall by 6%, and F-measure by 5% compared to the contemporary approach. The new rules have proven effective in extracting features that were previously overlooked, thus addressing gaps in existing rules. Overall, this study enhanced feature extraction results, yielding an average precision of 0.91, an average recall of 0.88, and an average F-measure of 0.89.
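The flavor of such sequential pattern rules can be sketched in plain Python; the three rules and the part-of-speech tags below are simplified illustrations, not the paper's 41 rules:

```python
# Simplified POS-pattern rules for (feature, opinion) extraction.
RULES = [
    ("NN", "JJ"),         # e.g. "battery good"
    ("JJ", "NN"),         # e.g. "great screen"
    ("NN", "VBZ", "JJ"),  # e.g. "battery is weak"
]

def extract_pairs(tagged):
    """Scan a (word, POS-tag) sequence and emit (feature, opinion) pairs."""
    pairs = []
    for i in range(len(tagged)):
        for rule in RULES:
            window = tagged[i:i + len(rule)]
            if len(window) == len(rule) and all(w[1] == t for w, t in zip(window, rule)):
                feature = window[rule.index("NN")][0]   # the noun is the feature
                opinion = window[rule.index("JJ")][0]   # the adjective is the opinion
                pairs.append((feature, opinion))
    return pairs

review = [("great", "JJ"), ("screen", "NN"), ("but", "CC"),
          ("battery", "NN"), ("is", "VBZ"), ("weak", "JJ")]
pairs = extract_pairs(review)
print(pairs)  # [('screen', 'great'), ('battery', 'weak')]
```

A real system would first POS-tag the raw review text (e.g. with a standard tagger) before applying the rules.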
APA, Harvard, Vancouver, ISO, and other styles
28

Li, Yuxing, Bingzhao Tang, and Shangbin Jiao. "Optimized Ship-Radiated Noise Feature Extraction Approaches Based on CEEMDAN and Slope Entropy." Entropy 24, no. 9 (September 8, 2022): 1265. http://dx.doi.org/10.3390/e24091265.

Full text
Abstract:
Slope entropy (Slopen) has been demonstrated to be an excellent approach to extracting ship-radiated noise signals (S-NSs) features by analyzing the complexity of the signals; however, its recognition ability is limited because it extracts the features of undecomposed S-NSs. To solve this problem, in this study, we combined complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) to explore the differences of Slopen between the intrinsic mode components (IMFs) of the S-NSs and proposed a single-IMF optimized feature extraction approach. Aiming to further enhance its performance, the optimized combination of dual-IMFs was selected, and a dual-IMF optimized feature extraction approach was also proposed. We conducted three experiments to demonstrate the effectiveness of CEEMDAN, Slopen, and the proposed approaches. The experimental and comparative results revealed both of the proposed single- and dual-IMF optimized feature extraction approaches based on Slopen and CEEMDAN to be more effective than the original ship signal-based and IMF-based feature extraction approaches.
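Slope entropy itself can be sketched in a few lines (threshold values and normalization vary across the literature; the bands below follow one common formulation, and the CEEMDAN stage, which requires a dedicated EMD package, is omitted):

```python
import numpy as np

def slope_entropy(x, m=3, gamma=1.0, delta=1e-3):
    """Shannon entropy of length-(m-1) slope-symbol patterns (one common form)."""
    d = np.diff(x)
    # map each slope to one of five symbols by threshold bands
    sym = np.select([d > gamma, d > delta, d >= -delta, d >= -gamma],
                    [2, 1, 0, -1], default=-2)
    patterns = [tuple(sym[i:i + m - 1]) for i in range(len(sym) - m + 2)]
    _, counts = np.unique(np.array(patterns), axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
se_noise = slope_entropy(rng.normal(size=2000))        # irregular signal
se_ramp = slope_entropy(np.linspace(0.0, 1.0, 2000))   # perfectly regular signal
```

In the paper's setting, this measure would be applied to individual IMFs produced by CEEMDAN rather than to the raw signal.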
APA, Harvard, Vancouver, ISO, and other styles
29

Xie, Shengkun. "Wavelet Power Spectral Domain Functional Principal Component Analysis for Feature Extraction of Epileptic EEGs." Computation 9, no. 7 (July 7, 2021): 78. http://dx.doi.org/10.3390/computation9070078.

Full text
Abstract:
Feature extraction plays an important role in machine learning for signal processing, particularly for low-dimensional data visualization and predictive analytics. Data from real-world complex systems are often high-dimensional, multi-scale, and non-stationary. Extracting key features of this type of data is challenging. This work proposes a novel approach to analyze Epileptic EEG signals using both wavelet power spectra and functional principal component analysis. We focus on how the feature extraction method can help improve the separation of signals in a low-dimensional feature subspace. By transforming EEG signals into wavelet power spectra, the functionality of signals is significantly enhanced. Furthermore, the power spectra transformation makes functional principal component analysis suitable for extracting key signal features. Therefore, we refer to this approach as a double feature extraction method since both wavelet transform and functional PCA are feature extractors. To demonstrate the applicability of the proposed method, we have tested it using a set of publicly available epileptic EEGs and patient-specific, multi-channel EEG signals, for both ictal signals and pre-ictal signals. The obtained results demonstrate that combining wavelet power spectra and functional principal component analysis is promising for feature extraction of epileptic EEGs. Therefore, they can be useful in computer-based medical systems for epilepsy diagnosis and epileptic seizure detection problems.
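The double feature extraction idea can be sketched with a hand-rolled Morlet wavelet power spectrum followed by ordinary PCA, used here as a crude stand-in for functional PCA; the toy two-class "EEG" data, the scales, and the wavelet parameters are all assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

def morlet_power(x, scales, w0=6.0):
    """Wavelet power per scale via convolution with complex Morlet wavelets."""
    rows = []
    for s in scales:
        t = np.arange(-3 * s, 3 * s + 1)
        psi = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2) / np.sqrt(s)
        rows.append(np.abs(np.convolve(x, psi, mode="same")) ** 2)
    return np.array(rows)                       # shape: (n_scales, n_samples)

rng = np.random.default_rng(0)
t = np.arange(256)
# 20 slow-wave and 20 fast-wave toy epochs, loosely mimicking two EEG states
epochs = [np.sin(2 * np.pi * f * t / 256) + rng.normal(scale=0.3, size=256)
          for f in [5] * 20 + [40] * 20]
scales = np.array([2, 4, 8, 16, 32])
# averaging power over time discretizes each epoch's spectral "curve"
X = np.array([morlet_power(e, scales).mean(axis=1) for e in epochs])
scores = PCA(n_components=2).fit_transform(X)   # low-dimensional feature subspace
```

The two toy classes separate in `scores` because their wavelet power concentrates at different scales, which is the separability effect the abstract describes.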
APA, Harvard, Vancouver, ISO, and other styles
30

Daniel, Jackson, S. Irin Sherly, Veeralakshmi Ponnuramu, Devesh Pratap Singh, S. N. Netra, Wadi B. Alonazi, Khalid M. A. Almutairi, K. S. A. Priyan, and Yared Abera. "Recurrent Neural Networks for Feature Extraction from Dengue Fever." Evidence-Based Complementary and Alternative Medicine 2022 (June 9, 2022): 1–9. http://dx.doi.org/10.1155/2022/5669580.

Full text
Abstract:
Dengue fever modelling in endemic locations is critical to reducing outbreaks and improving vector-borne illness control. Early projections of dengue are a crucial tool for disease control because of the unavailability of treatments and universal vaccination. Neural networks have made significant contributions to public health in a variety of ways. In this paper, we develop a deep learning model using random forest (RF) that helps extract the features of dengue fever from text datasets. The proposed model involves data collection, preprocessing of the input texts, and feature extraction. The extracted features are studied to test how effective RF-based feature extraction is on dengue datasets. The simulation results show that the proposed method improves feature extraction accuracy by more than 12% over existing feature extraction methods. Further, the study reduces the errors associated with feature extraction by 10% compared with other existing methods, which shows the efficacy of the model.
APA, Harvard, Vancouver, ISO, and other styles
31

Wang, Yitong, Shumin Wang, and Aixia Dou. "A Dual-Branch Fusion Network Based on Reconstructed Transformer for Building Extraction in Remote Sensing Imagery." Sensors 24, no. 2 (January 7, 2024): 365. http://dx.doi.org/10.3390/s24020365.

Full text
Abstract:
Automatic extraction of building contours from high-resolution images is of great significance in the fields of urban planning, demographics, and disaster assessment. Network models based on convolutional neural network (CNN) and transformer technology have been widely used for semantic segmentation of buildings from high-resolution remote sensing images (HRSI). However, the fixed geometric structure and local receptive field of the convolutional kernel are not good at global feature extraction, while the transformer's self-attention mechanism introduces computational redundancy and extracts local feature details poorly when modeling global contextual information. In this paper, a dual-branch fused reconstructive transformer network, DFRTNet, is proposed for efficient and accurate building extraction. In the encoder, the traditional transformer is reconfigured by designing the local and global feature extraction module (LGFE); the global feature extraction (GFE) branch performs dynamic range attention (DRA) based on the idea of top-k attention for extracting global features, while the local feature extraction (LFE) branch obtains fine-grained features. A multilayer perceptron (MLP) is employed to efficiently fuse the local and global features. In the decoder, a simple channel attention module (CAM) is used in the up-sampling part to enhance channel-dimension features. Our network achieved the best segmentation accuracy on both the WHU and Massachusetts building datasets when compared to other mainstream and state-of-the-art methods.
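The top-k attention idea behind DRA can be sketched in NumPy (shapes and k are arbitrary; this shows the generic mechanism, not the paper's exact DRA design):

```python
import numpy as np

def topk_attention(q, k, v, keep=2):
    """Scaled dot-product attention where each query keeps only its top-`keep` keys."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    kth = np.sort(scores, axis=-1)[:, -keep][:, None]   # per-row k-th largest score
    masked = np.where(scores >= kth, scores, -np.inf)   # drop everything below it
    w = np.exp(masked - masked.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                  # softmax over surviving keys
    return w @ v, w

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8)))
out, attn = topk_attention(q, k, v)
```

Restricting each query to its strongest keys is what reduces the computational redundancy of full self-attention that the abstract mentions.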
APA, Harvard, Vancouver, ISO, and other styles
32

Lee, Haneum, Cheonghwan Hur, Bunyodbek Ibrokhimov, and Sanggil Kang. "Interactive Guiding Sparse Auto-Encoder with Wasserstein Regularization for Efficient Classification." Applied Sciences 13, no. 12 (June 12, 2023): 7055. http://dx.doi.org/10.3390/app13127055.

Full text
Abstract:
In the era of big data, feature engineering has proved its efficiency and importance in dimensionality reduction and useful information extraction from original features. Feature engineering can be expressed as dimensionality reduction and is divided into two types of methods, namely, feature selection and feature extraction. Each method has its pros and cons, and many studies combine them. The sparse autoencoder (SAE) is a representative deep feature learning method that combines feature selection with feature extraction. However, existing SAEs do not consider feature importance during training, which causes irrelevant information to be extracted. In this paper, we propose an interactive guiding sparse autoencoder (IGSAE) that guides the information through two interactive guiding layers and sparsity constraints. The interactive guiding layers preserve the main distribution using the Wasserstein distance, a metric of distribution difference, and suppress the leverage of guiding features to prevent overfitting. We perform our experiments using four datasets that have different dimensionalities and numbers of samples. The proposed IGSAE method produces better classification performance than other dimensionality reduction methods.
APA, Harvard, Vancouver, ISO, and other styles
33

Liu, Yangtian, Xiaopeng Yan, Xinhong Hao, Guanghua Yi, and Dingkun Huang. "Automatic Modulation Recognition of Radiation Source Signals Based on Data Rearrangement and the 2D FFT." Remote Sensing 15, no. 2 (January 15, 2023): 518. http://dx.doi.org/10.3390/rs15020518.

Full text
Abstract:
It is a challenge for automatic modulation recognition (AMR) methods for radiation source signals to work in environments with low signal-to-noise ratios (SNRs). This paper proposes a modulation feature extraction method based on data rearrangement and the 2D fast Fourier transform (FFT) (DR2D), and a DenseNet feature extraction network with early fusion is constructed to recognize the extracted modulation features. First, the input signal is preprocessed by DR2D to obtain three types of joint frequency feature bins with multiple time scales. Second, the feature fusion operation is performed on the inputs of the different layers of the proposed network. Finally, feature recognition is completed in the subsequent layers. The theoretical analysis and simulation results show that DR2D is a fast and robust preprocessing method for extracting the features of modulated radiation source signals with less computational complexity. The proposed DenseNet feature extraction network with early fusion can identify the extracted modulation features with less spatial complexity than other types of convolutional neural networks (CNNs) and performs well in low-SNR environments.
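The core preprocessing idea — rearrange a 1-D signal into 2-D and take a 2-D FFT — can be sketched with NumPy (the fold shape and the toy LFM-like pulse are assumptions; the paper's DR2D rearrangement is more elaborate and multi-scale):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
# toy linear-frequency-modulated pulse buried in noise
x = np.cos(2 * np.pi * (0.01 * t + 1e-4 * t ** 2)) + rng.normal(scale=0.5, size=n)

# "data rearrangement": fold the signal into rows so the 2-D FFT exposes
# joint structure across fast time (columns) and slow time (rows)
frame = x.reshape(32, 32)
feature_map = np.abs(np.fft.fftshift(np.fft.fft2(frame)))
```

A CNN such as the DenseNet variant described above would then consume `feature_map` (or several such maps built at different time scales).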
APA, Harvard, Vancouver, ISO, and other styles
34

Gao, Zhenyi, Jiayang Sun, Haotian Yang, Jiarui Tan, Bin Zhou, Qi Wei, and Rong Zhang. "Exploration and Research of Human Identification Scheme Based on Inertial Data." Sensors 20, no. 12 (June 18, 2020): 3444. http://dx.doi.org/10.3390/s20123444.

Full text
Abstract:
Identification based on inertial data is not limited by space and offers high flexibility and concealment. Previous research has shown that inertial data contain information related to behavior categories. This article discusses whether inertial data also contain information related to human identity. A classification experiment based on the neural network's feature fitting function achieves 98.17% accuracy on the test set, confirming that inertial data can be used for human identification. The accuracy of classification without feature extraction on the test set is only 63.84%, which further indicates the need to extract identity-related features from changes in the inertial data. In addition, a study of classification accuracy based on statistical features discusses the effect of different feature extraction functions on the results. The article also presents dimensionality reduction and visualization of the collected data and the extracted features, which helps to intuitively assess the existence of features and the quality of different feature extraction approaches.
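A typical statistical-feature pipeline for inertial windows can be sketched as follows (the window length and the particular statistics are illustrative choices, not the article's):

```python
import numpy as np

def window_features(acc, win=128):
    """Per-window statistical features from a 1-D accelerometer trace."""
    feats = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        feats.append([w.mean(), w.std(),
                      np.sqrt(np.mean(w ** 2)),        # RMS
                      w.max() - w.min(),               # peak-to-peak range
                      np.mean(np.abs(np.diff(w)))])    # mean absolute "jerk"
    return np.array(feats)

rng = np.random.default_rng(0)
# two toy subjects with different gait amplitude, cadence, and noise
subject_a = np.sin(np.linspace(0, 60, 1024)) + rng.normal(scale=0.1, size=1024)
subject_b = 0.5 * np.sin(np.linspace(0, 120, 1024)) + rng.normal(scale=0.3, size=1024)
Fa, Fb = window_features(subject_a), window_features(subject_b)
```

Vectors like these are what a classifier (or the neural feature-fitting network in the article) would consume per window.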
APA, Harvard, Vancouver, ISO, and other styles
35

Demircan, S., and H. Kahramanlı. "Feature Extraction from Speech Data for Emotion Recognition." Journal of Advances in Computer Networks 2, no. 1 (2014): 28–30. http://dx.doi.org/10.7763/jacn.2014.v2.76.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Begum, Asma, and Afshaan Kaleem. "Feature Extraction and Enhanced Classification of Urban Sounds." International Journal of Science and Research (IJSR) 12, no. 9 (September 5, 2023): 1461–64. http://dx.doi.org/10.21275/sr23918200525.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Tian, Yang Meng, Yu Duo Zheng, Wei Jin, and Gai Hong Du. "Face Image Feature Extraction and Feature Selection." Applied Mechanics and Materials 432 (September 2013): 587–91. http://dx.doi.org/10.4028/www.scientific.net/amm.432.587.

Full text
Abstract:
In order to solve the problem of face recognition, a method of feature extraction and feature selection is presented in this paper. First, Gabor filters are convolved with the face image to extract Gabor feature vectors, which are uniformly sampled; then the PCA + LDA method reduces the dimension of the high-dimensional Gabor feature vectors; finally, a nearest neighbor classifier determines the identity of a face image. The result is that sampled Gabor features in high-dimensional space can be projected onto a low-dimensional space through feature selection and compression. The novelty of this paper is that the PCA + LDA method overcomes the within-class scatter matrix singularity and the excessively large matrices caused by applying LDA directly.
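A minimal PCA + LDA + nearest-neighbor pipeline can be written with scikit-learn; random vectors stand in for the sampled Gabor features here, and the dimensions and class layout are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# 3 "identities", 30 samples each, 40-dimensional stand-in Gabor features
X = np.vstack([rng.normal(loc=c, size=(30, 40)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)

# PCA first, so LDA's within-class scatter matrix is no longer singular
model = make_pipeline(PCA(n_components=20),
                      LinearDiscriminantAnalysis(n_components=2),
                      KNeighborsClassifier(n_neighbors=1))
score = model.fit(X, y).score(X, y)
print(score)
```

Running PCA before LDA is precisely the singularity workaround the abstract highlights: LDA then operates on a reduced space where the within-class scatter matrix is invertible.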
APA, Harvard, Vancouver, ISO, and other styles
38

Rana, M., and S. Kharel. "FEATURE EXTRACTION FOR URBAN AND AGRICULTURAL DOMAINS USING ECOGNITION DEVELOPER." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W6 (July 26, 2019): 609–15. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w6-609-2019.

Full text
Abstract:
Feature extraction has always been a challenging task in geo-spatial studies, both in urban areas and in agricultural areas. Since the evolution of eCognition Developer, different segmentation techniques and classification algorithms which help in automating feature extraction have been developed in recent years, which has been a boon for scientists and people conducting research in the field of geomatics. This research depicts the potential of eCognition Developer in extracting features in agricultural as well as urban areas using various classification techniques. Rule-based and SVM classification techniques were used for feature extraction in urban areas, whereas Feature Space Optimization and K-Nearest Neighbor were used for classifying agricultural features. Results reflect that rule-based classification yields more accurate results for urban areas, whereas Feature Space Optimization along with object-based classification gave more accuracy in the case of agricultural areas.
APA, Harvard, Vancouver, ISO, and other styles
39

UNO, Kohei. "Clustering Meets Feature Extraction." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 33, no. 2 (May 15, 2021): 57–63. http://dx.doi.org/10.3156/jsoft.33.2_57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

PRADEEP, N., H. GIRISHA, B. SREEPATHI, and K. KARIBASAPPA. "FEATURE EXTRACTION OF MAMMOGRAMS." International Journal of Bioinformatics Research 4, no. 1 (March 15, 2012): 241–44. http://dx.doi.org/10.9735/0975-3087.4.1.241-244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Hochreiter, Sepp, and Jürgen Schmidhuber. "Feature Extraction Through LOCOCODE." Neural Computation 11, no. 3 (April 1, 1999): 679–714. http://dx.doi.org/10.1162/089976699300016629.

Full text
Abstract:
Low-complexity coding and decoding (LOCOCODE) is a novel approach to sensory coding and unsupervised learning. Unlike previous methods, it explicitly takes into account the information-theoretic complexity of the code generator. It computes lococodes that convey information about the input data and can be computed and decoded by low-complexity mappings. We implement LOCOCODE by training autoassociators with flat minimum search, a recent, general method for discovering low-complexity neural nets. It turns out that this approach can unmix an unknown number of independent data sources by extracting a minimal number of low-complexity features necessary for representing the data. Experiments show that unlike codes obtained with standard autoencoders, lococodes are based on feature detectors, never unstructured, usually sparse, and sometimes factorial or local (depending on statistical properties of the data). Although LOCOCODE is not explicitly designed to enforce sparse or factorial codes, it extracts optimal codes for difficult versions of the “bars” benchmark problem, whereas independent component analysis (ICA) and principal component analysis (PCA) do not. It produces familiar, biologically plausible feature detectors when applied to real-world images and codes with fewer bits per pixel than ICA and PCA. Unlike ICA, it does not need to know the number of independent sources. As a preprocessor for a vowel recognition benchmark problem, it sets the stage for excellent classification performance. Our results reveal an interesting, previously ignored connection between two important fields: regularizer research and ICA-related research. They may represent a first step toward unification of regularization and unsupervised learning.
APA, Harvard, Vancouver, ISO, and other styles
42

Osia, Seyed Ali, Ali Taheri, Ali Shahin Shamsabadi, Kleomenis Katevas, Hamed Haddadi, and Hamid R. Rabiee. "Deep Private-Feature Extraction." IEEE Transactions on Knowledge and Data Engineering 32, no. 1 (January 1, 2020): 54–66. http://dx.doi.org/10.1109/tkde.2018.2878698.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Zhao, JieYu. "Live facial feature extraction." Science in China Series F: Information Sciences 51, no. 5 (May 2008): 489–98. http://dx.doi.org/10.1007/s11432-008-0049-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Andreassen, S. "Feature extraction from EMG." Electroencephalography and Clinical Neurophysiology 61, no. 3 (September 1985): S222. http://dx.doi.org/10.1016/0013-4694(85)90842-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Gupta, Shikha, Jafreezal Jaafar, Wan Fatimah wan Ahmad, and Arpit Bansal. "Feature Extraction Using Mfcc." Signal & Image Processing : An International Journal 4, no. 4 (August 31, 2013): 101–8. http://dx.doi.org/10.5121/sipij.2013.4408.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Kiuchi, Shingo. "Voice feature extraction device." Journal of the Acoustical Society of America 119, no. 5 (2006): 2565. http://dx.doi.org/10.1121/1.2203557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Haiyan, Dujin Liu, and Guolin Pu. "Nuclear reconstructive feature extraction." Neural Computing and Applications 31, no. 7 (October 22, 2017): 2649–59. http://dx.doi.org/10.1007/s00521-017-3220-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Esmaiel, Hamada, Dongri Xie, Zeyad A. H. Qasem, Haixin Sun, Jie Qi, and Junfeng Wang. "Multi-Stage Feature Extraction and Classification for Ship-Radiated Noise." Sensors 22, no. 1 (December 24, 2021): 112. http://dx.doi.org/10.3390/s22010112.

Full text
Abstract:
Due to the complexity and unique features of the hydroacoustic channel, ship-radiated noise (SRN) detected using a passive sonar tends to be distorted. SRN feature extraction has been proposed to improve the detected passive sonar signal. Unfortunately, current SRN feature extraction methods have many shortcomings. Considering this, in this paper we propose a new multi-stage feature extraction approach to enhance current SRN feature extraction based on enhanced variational mode decomposition (EVMD), weighted permutation entropy (WPE), local tangent space alignment (LTSA), and a particle swarm optimization-based support vector machine (PSO-SVM). In the proposed method, we first enhance the decomposition operation of the conventional VMD by decomposing the SRN signal into a finite group of intrinsic mode functions (IMFs) and then calculate the WPE of each IMF. The high-dimensional features obtained are then reduced to two-dimensional ones using the LTSA method. Finally, the feature vectors are fed into the PSO-SVM multi-class classifier to classify the different types of SRN samples. The simulation and experimental results demonstrate that the recognition rate of the proposed method exceeds that of conventional SRN feature extraction methods, reaching up to 96.6667%.
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Cheng, Wei Zhang, Hao Hao, and Huiling Shi. "Network Traffic Classification Model Based on Spatio-Temporal Feature Extraction." Electronics 13, no. 7 (March 27, 2024): 1236. http://dx.doi.org/10.3390/electronics13071236.

Full text
Abstract:
The demand for encrypted communication is increasing with the continuous development of secure and trustworthy networks. In edge computing scenarios, the requirement for data processing security is becoming increasingly high. Therefore, the accurate identification of encrypted traffic has become a prerequisite to ensure edge intelligent device security. Currently, encrypted network traffic classification relies on single-feature extraction methods. These methods extract only simple features, making it challenging to distinguish encrypted network data flows and to design effective manual features; this leads to low accuracy in multi-classification tasks involving encrypted network traffic. This paper proposes a hybrid deep learning model for multi-classification tasks to address this issue based on the synergy of dilated convolution and gating unit mechanisms. The model comprises a Gated Dilated Convolution (GDC) module and a CA-LSTM module. The GDC module completes the spatial feature extraction of encrypted network traffic through dilated convolution and gating unit mechanisms, while the CA-LSTM module focuses on extracting temporal network traffic features. By employing a collaborative approach to extract spatio-temporal features, the model ensures feature extraction diversity, guarantees robustness, and effectively enhances the feature extraction rate. We evaluate our multi-classification model using the ISCX VPN-nonVPN public dataset. Experimental results show that the proposed method achieves an accuracy rate of over 95% and a recall rate of over 90%, significantly outperforming existing methods.
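The gated dilated convolution at the heart of the GDC module can be sketched in NumPy for a single channel (the real module uses many learned kernels inside a deep network; the kernel values here are random stand-ins):

```python
import numpy as np

def gated_dilated_conv1d(x, w_filter, w_gate, dilation=2):
    """tanh(filter conv) * sigmoid(gate conv) with dilated taps, single channel."""
    span = (len(w_filter) - 1) * dilation
    out = np.empty(len(x) - span)
    for i in range(len(out)):
        taps = x[i:i + span + 1:dilation]           # dilated receptive field
        f = np.tanh(taps @ w_filter)                # filter path
        g = 1.0 / (1.0 + np.exp(-(taps @ w_gate)))  # gate path
        out[i] = f * g
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=64)                 # stand-in for one traffic byte sequence
y = gated_dilated_conv1d(x, rng.normal(size=3), rng.normal(size=3), dilation=4)
```

With kernel size 3 and dilation 4, the receptive field spans 9 samples while using only 3 weights per path, which is the efficiency argument behind dilated convolution.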
APA, Harvard, Vancouver, ISO, and other styles
50

Bao, Forrest Sheng, Xin Liu, and Christina Zhang. "PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction." Computational Intelligence and Neuroscience 2011 (2011): 1–7. http://dx.doi.org/10.1155/2011/406391.

Full text
Abstract:
Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in recent years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
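As an illustration of the kind of feature such a module computes (this sketch is a generic implementation, not PyEEG's own API), here are the classic Hjorth parameters in NumPy:

```python
import numpy as np

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)                            # signal power
    mobility = np.sqrt(np.var(dx) / np.var(x))      # mean-frequency proxy
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

rng = np.random.default_rng(0)
eeg_like = np.sin(np.linspace(0, 8 * np.pi, 512)) + rng.normal(scale=0.05, size=512)
act, mob, comp = hjorth(eeg_like)
```

Feature functions of this shape — a 1-D array in, a few scalars out — are the building blocks a module like PyEEG packages up for reuse.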
APA, Harvard, Vancouver, ISO, and other styles