Journal articles on the topic 'Feature Recognition Methods'

Consult the top 50 journal articles for your research on the topic 'Feature Recognition Methods.'


1

Chatterji, B. N. "Feature Extraction Methods for Character Recognition." IETE Technical Review 3, no. 1 (January 1986): 9–22. http://dx.doi.org/10.1080/02564602.1986.11437879.

2

Chaudhary, Gopal, Smriti Srivastava, and Saurabh Bhardwaj. "Feature Extraction Methods for Speaker Recognition: A Review." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 12 (September 17, 2017): 1750041. http://dx.doi.org/10.1142/s0218001417500410.

Abstract:
This paper presents the main paradigms of research on feature extraction methods, with the aim of further advancing the state of the art in speaker recognition (SR), which is used extensively in person identification for security and protection applications. Speaker recognition systems (SRS) have been a widely researched topic for decades. The basic concept of feature extraction methods is derived from the biological model of the human auditory/vocal tract system. This work provides a classification-oriented review of feature extraction methods for SR over the last 55 years that have proven successful and have become stepping stones for further research. Broadly, the review is dichotomized into feature extraction methods with and without noise compensation techniques. Feature extraction methods without noise compensation are divided into the following categories: high/low level of feature extraction; type of transform; speech production/auditory system; type of feature extraction technique; time variability; and speech processing technique. Feature extraction methods with noise compensation are classified into noise-screened features, feature normalization methods, and feature compensation methods. This classification-oriented review gives readers a clear view when choosing among different techniques and should be helpful for future research in this field.
3

Long, Yi, Fu Rong Liu, and Guo Qing Qiu. "Research of Face Recognition Methods Based on Binding Feature Extraction." Applied Mechanics and Materials 568-570 (June 2014): 668–71. http://dx.doi.org/10.4028/www.scientific.net/amm.568-570.668.

Abstract:
To address the problems that the dimension of the feature vector extracted by Local Binary Pattern (LBP) for face recognition is too high, and that the features extracted by Principal Component Analysis (PCA) are not the best features for classification, an efficient feature extraction method combining LBP, PCA and Maximum Scatter Difference (MSD) is introduced in this paper. The original face image is first divided into sub-images, then the LBP operator is applied to extract histogram features, and the feature dimensions are further reduced using PCA. Finally, MSD is performed on the reduced PCA-based features. Experimental results on the ORL and Yale databases demonstrate that the proposed method classifies more effectively and achieves a higher recognition rate than traditional recognition methods.
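The block-wise LBP histogram step this abstract describes can be sketched generically. This is a minimal illustration of the standard 8-neighbour LBP operator, not the authors' exact pipeline; the function names and the list-of-lists image representation are assumptions.

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code for pixel (r, c).
    A bit is set when the neighbour is at least as bright as the centre."""
    center = img[r][c]
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the interior pixels of one
    (sub-)image; concatenating such histograms gives the raw feature
    vector that PCA would then reduce."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

In the paper's pipeline, one such histogram per sub-image would be concatenated before the PCA and MSD stages.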
4

Zhang, Jiefei. "MASFF: Multiscale Adaptive Spatial Feature Fusion Method for Vehicle Recognition." 電腦學刊 33, no. 1 (February 2022): 001–11. http://dx.doi.org/10.53106/199115992022023301001.

Abstract:
Traditional vehicle recognition methods suffer from disadvantages such as low efficiency and long processing times due to complex backgrounds and overlapping objects. In this paper, we propose a multiscale adaptive spatial feature fusion (ASFF) method for vehicle recognition. First, it calculates the difference hash values of images. The hash value is then used to judge the similarity between the current frame and the previous frame. When the similarity is less than the threshold value, the frame is input to a ResNet18 model for detection. Using ResNet18 as the base network reduces the number of network parameters. Then, to address the problem that the detection effect of ASFF for vehicle recognition is not ideal, the offset loss and width-height loss are replaced by the intersection-over-union loss. Meanwhile, the multiscale adaptive spatial feature fusion method is adopted to fuse the multi-level features of the network. The experimental results show that the average accuracy of the proposed method is increased by 2.1%. For the BDD100K and Pascal VOC datasets, the average accuracy of predicted borders is increased by 5.5% when the IoU is greater than 0.5. With a GTX1060Ti, the recognition speed reaches 149 frames per second. The multiscale ASFF in this paper significantly improves vehicle recognition accuracy.
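The difference-hash frame-skipping idea in this abstract can be sketched as follows. This is a generic dHash illustration under the assumption that frames have already been down-sampled to an 8x9 grey-scale grid; the function names and the threshold value are hypothetical, not taken from the paper.

```python
def dhash(img):
    """Difference hash: one bit per horizontal neighbour comparison.
    `img` is a grey-scale frame already down-sampled to 8 rows x 9 columns,
    giving 8 x 8 = 64 bits in total."""
    bits = 0
    for row in img:
        for c in range(len(row) - 1):
            bits = (bits << 1) | (1 if row[c] > row[c + 1] else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def is_new_frame(prev_hash, cur_hash, threshold=10):
    """Pass the frame to the detector only when it differs enough
    from the previous frame (distance above the threshold)."""
    return hamming(prev_hash, cur_hash) > threshold
```

Frames judged too similar to their predecessor are skipped, which is where the claimed efficiency gain over per-frame detection would come from.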
5

Taha, Mohammed A., Hanaa M. Ahmed, and Saif O. Husain. "Iris Features Extraction and Recognition based on the Scale Invariant Feature Transform (SIFT)." Webology 19, no. 1 (January 20, 2022): 171–84. http://dx.doi.org/10.14704/web/v19i1/web19013.

Abstract:
Iris biometric authentication is considered one of the most dependable biometric approaches for identifying persons. Iris patterns have invariant, stable, and distinguishing properties for personal identification, and iris recognition has received increasing attention due to its excellent dependability. Current iris recognition methods give good results, especially when NIR imaging and specific capture conditions are used in collaboration with the user. On the other hand, images captured in visible wavelengths (VW) are affected by noise such as blur, eyelid skin, occlusion, and reflection, which negatively affects the overall performance of recognition systems. For both NIR and visible-spectrum iris images, this article presents an effective iris feature extraction strategy based on the Scale-Invariant Feature Transform (SIFT) algorithm. The proposed method was tested on different databases: CASIA v1 and ITTD v1 as NIR images, and UBIRIS v1 as visible-light color images. The proposed system gave good accuracy rates compared with existing systems, achieving 96.2% on CASIA v1 and 96.4% on ITTD v1, while accuracy dropped to 84.0% on UBIRIS v1.
6

Hu, Gang, Kejun Wang, Yuan Peng, Mengran Qiu, Jianfei Shi, and Liangliang Liu. "Deep Learning Methods for Underwater Target Feature Extraction and Recognition." Computational Intelligence and Neuroscience 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/1214301.

Abstract:
The classification and recognition of underwater acoustic signals have always been important research topics in underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel frequency cepstral coefficients are used for underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a convolutional neural network (CNN) and an extreme learning machine (ELM) is proposed. An automatic feature extraction method for underwater acoustic signals using a deep convolutional network is proposed, and an underwater target recognition classifier is built on the extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification relies mainly on fully connected layers trained by gradient descent, whose generalization ability is limited and suboptimal, so an ELM is used in the classification stage. First, the CNN learns deep and robust features, and the fully connected layers are then removed. The ELM, fed with the CNN features, is then used as the classifier. Experiments on a real data set of civil ships achieved a 93.04% recognition rate; compared with traditional Mel frequency cepstral coefficient and Hilbert-Huang features, the recognition rate is greatly improved.
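The ELM classification stage described above has a very compact closed form: random, fixed hidden weights and output weights solved by a pseudo-inverse. The sketch below is a minimal generic ELM in NumPy, assuming pre-extracted feature vectors stand in for the CNN features; the class name and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

class ELM:
    """Minimal single-hidden-layer extreme learning machine.
    Hidden weights are random and fixed; only the output weights
    are solved in closed form via the Moore-Penrose pseudo-inverse."""

    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random nonlinear feature map
        T = np.eye(n_classes)[y]              # one-hot targets
        self.beta = np.linalg.pinv(H) @ T     # closed-form least-squares solve
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)
```

Because training is a single linear solve rather than gradient descent, the ELM stage is fast to retrain on new feature sets, which is the motivation the abstract gives for replacing the CNN's fully connected layers.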
7

Prasad, Binod Kumar, and Rajdeep Kundu. "SEVERAL METHODS OF FEATURE EXTRACTION TO HELP IN OPTICAL CHARACTER RECOGNITION." International Journal of Students' Research in Technology & Management 5, no. 4 (November 27, 2017): 52–57. http://dx.doi.org/10.18510/ijsrtm.2017.547.

Abstract:
An Optical Character Recognition (OCR) system consists of three main steps, namely preprocessing, feature extraction, and classification. Feature extraction methods yield the feature vectors on which classification of a test pattern is based. This paper proposes several feature extraction methods that may go a long way toward recognizing a Bengali numeral or character. The Pixel Ex-OR method presents a digital gating (Ex-OR) technique to extract the information in an image: two successive elements of a row in the image matrix are Ex-ORed, and the output is again Ex-ORed with the next element. Alphabetical coding encodes a binary character image by means of letters of the English alphabet. Directional features find gradient information using Sobel masks to make the position of a stroke clear in an image. The features are derived in eight standard directions, and these eight feature vectors are then merged into four sets of features to reduce system complexity, saving considerable processing time. These features will help develop a Bengali numeral recognition system.
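The Pixel Ex-OR step, as described, is a running XOR along each row of the binary image matrix. A minimal sketch of that reading follows; the function name and the exact output layout are assumptions, since the paper does not fix them here.

```python
def pixel_exor_features(img):
    """Running Ex-OR along each row of a binary image matrix:
    the first two pixels are XORed, the result is XORed with the
    next pixel, and so on; one bit stream per row is returned."""
    features = []
    for row in img:
        acc = row[0]
        stream = []
        for pixel in row[1:]:
            acc ^= pixel
            stream.append(acc)
        features.append(stream)
    return features
```

Each output bit flips exactly at transitions in the accumulated parity, so the stream captures where strokes start and stop along a scan line.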
8

Swiniarski, Roman W., and Andrzej Skowron. "Rough set methods in feature selection and recognition." Pattern Recognition Letters 24, no. 6 (March 2003): 833–49. http://dx.doi.org/10.1016/s0167-8655(02)00196-4.

9

Trier, Øivind Due, Anil K. Jain, and Torfinn Taxt. "Feature extraction methods for character recognition – A survey." Pattern Recognition 29, no. 4 (April 1996): 641–62. http://dx.doi.org/10.1016/0031-3203(95)00118-2.

10

Ear, Mong Heng, Cheng Cheng, Salem Mostafa Hamdy, and Alhazmi Marwah. "Feature Recognition for Virtual Environments." Applied Mechanics and Materials 610 (August 2014): 642–46. http://dx.doi.org/10.4028/www.scientific.net/amm.610.642.

Abstract:
This paper demonstrates methods to recognize 3D design features for virtual environments and applies them to virtual assembly. STEP is a standard for product data exchange used to interface different design systems, but it cannot be used directly as input for virtual environments. To use feature data in virtual assembly environments, the main data from a STEP file must be recognized and the features rebuilt. First, an Attributed Adjacency Graph (AAG) is used to analyze and express the boundary representation; second, a feature tree of a part is constructed; third, using the AAG and feature tree as inputs, features are analyzed and extracted with a feature recognition algorithm; finally, various levels of detail of the object's geometric shape are built and expressed in XML for virtual assembly applications.
11

Zhao, Jing. "Sports Motion Feature Extraction and Recognition Based on a Modified Histogram of Oriented Gradients with Speeded Up Robust Features." 電腦學刊 33, no. 1 (February 2022): 063–70. http://dx.doi.org/10.53106/199115992022023301007.

Abstract:
Traditional motion recognition methods can extract global features but ignore local features, and occluded motions cannot be recognized. Therefore, this paper proposes a modified Histogram of Oriented Gradients (HOG) combined with Speeded Up Robust Features (SURF) for sports motion feature extraction and recognition. The new method can fully extract both the local and global features needed for sports motion recognition. The algorithm first adopts background subtraction to obtain the motion region. A direction-controllable filter can effectively describe motion edge features, so the HOG feature is improved by introducing such a filter to enhance local edge information. At the same time, K-means clustering is performed on the SURF descriptors to obtain a bag-of-words model. Finally, the fused motion features are input to a support vector machine (SVM) to classify and recognize the motion. We compare against state-of-the-art methods on the KTH, UCF Sports and SBU Kinect Interaction data sets. The results show that the recognition accuracy of the proposed algorithm is greatly improved.
12

Yuvaraj, Rajamanickam, Prasanth Thagavel, John Thomas, Jack Fogarty, and Farhan Ali. "Comprehensive Analysis of Feature Extraction Methods for Emotion Recognition from Multichannel EEG Recordings." Sensors 23, no. 2 (January 12, 2023): 915. http://dx.doi.org/10.3390/s23020915.

Abstract:
Advances in signal processing and machine learning have expedited electroencephalogram (EEG)-based emotion recognition research, and numerous EEG signal features have been investigated to detect or characterize human emotions. However, most studies in this area have used relatively small monocentric data and focused on a limited range of EEG features, making it difficult to compare the utility of different sets of EEG features for emotion recognition. This study addressed that by comparing the classification accuracy (performance) of a comprehensive range of EEG feature sets for identifying emotional states, in terms of valence and arousal. The classification accuracy of five EEG feature sets were investigated, including statistical features, fractal dimension (FD), Hjorth parameters, higher order spectra (HOS), and those derived using wavelet analysis. Performance was evaluated using two classifier methods, support vector machine (SVM) and classification and regression tree (CART), across five independent and publicly available datasets linking EEG to emotional states: MAHNOB-HCI, DEAP, SEED, AMIGOS, and DREAMER. The FD-CART feature-classification method attained the best mean classification accuracy for valence (85.06%) and arousal (84.55%) across the five datasets. The stability of these findings across the five different datasets also indicate that FD features derived from EEG data are reliable for emotion recognition. The results may lead to the possible development of an online feature extraction framework, thereby enabling the development of an EEG-based emotion recognition system in real time.
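Of the five EEG feature sets compared above, the Hjorth parameters have a particularly compact definition and can be sketched directly. This is a generic illustration of the standard activity/mobility/complexity formulas using discrete differences, not the authors' exact extraction code.

```python
import numpy as np

def hjorth(x):
    """Hjorth parameters of a 1-D signal:
    activity   = variance of the signal,
    mobility   = sqrt(var of first derivative / var of signal),
    complexity = mobility of the derivative / mobility of the signal."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity
```

For a pure sinusoid the complexity is close to 1 by construction, which is one way to sanity-check an implementation; broadband EEG segments give values well above 1.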
13

Qadir, Tara Othman, Nik Shahidah Afifi Md Taujuddin, and Sundas Naqeeb Khan. "Feature Extraction Methods for IRIS Recognition System: A Survey." International Journal of Computer Science and Information Technology 14, no. 1 (February 28, 2022): 99–110. http://dx.doi.org/10.5121/ijcsit.2022.14107.

Abstract:
Security has been a major field of study for several years, and the demand for it is growing rapidly with the rise in sensitive data. Requirements differ from workstation to cloud, but protection is critically important everywhere. Over the past two decades, considerable attention has been given to authentication and validation in technology. Identifying a legitimate person has become an increasingly difficult task over time. Several approaches have been introduced in this respect, in particular those utilizing human traits such as fingerprints, facial recognition, palm scanning, retinal identification, DNA checking, breathing, and voice. A number of methods for effective iris detection have been suggested and researched. A general overview of current and state-of-the-art approaches to iris recognition is presented in this paper. In addition, significant advances in techniques, algorithms, trained classifiers, datasets and feature extraction methodologies are also discussed.
14

Alavipour, Fataneh, and Ali Broumandnia. "Farsi Character Recognition Using New Hybrid Feature Extraction Methods." International Journal of Computer Science, Engineering and Information Technology 4, no. 1 (February 28, 2014): 15–25. http://dx.doi.org/10.5121/ijcseit.2014.4102.

15

Zhang, Jian Chao, Jun Wang, and Tao Liu. "Research of Feature Recognition Method Based on STEP-NC." Advanced Materials Research 562-564 (August 2012): 1418–21. http://dx.doi.org/10.4028/www.scientific.net/amr.562-564.1418.

Abstract:
To realize a numerical control (NC) system based on the Standard for the Exchange of Product model data for Numerical Control (STEP-NC), the feature concept and a feature recognition method for STEP-NC are proposed. The meaning of a feature in STEP-NC is explained, and three classification methods for STEP-NC features are presented. The expression methods of STEP-NC features are analyzed in detail, and feature recognition methods based on this analysis are then described. Finally, we simulate several machining features on a STEP-NC milling simulation system. The simulation results demonstrate that the feature recognition method is correct and feasible.
16

Zhu, Di, Kamarul Arifin Ahmad, and Aolin Chen. "Research on flame recognition technology based on local complex features." Journal of Physics: Conference Series 2246, no. 1 (April 1, 2022): 012074. http://dx.doi.org/10.1088/1742-6596/2246/1/012074.

Abstract:
Traditional flame recognition methods based on image features have difficulty extracting flame image features effectively, resulting in low flame recognition accuracy; moreover, such methods mostly perform specific flame recognition for specific scenes, and when the scene, flame color, or other characteristics change, it is difficult to recognize flames effectively. To address this problem, this paper proposes a flame recognition scheme based on local complex features. Its main approach is to fuse multi-scene flame data; introduce the characteristics of flame in color space into the process of extracting SIFT feature descriptors, so as to filter out extracted descriptors affected by noise interference; transform the extracted feature descriptors into feature vectors using a key-point bag-of-words method; and finally build a general, fast flame recognition model based on an extreme learning machine. In this paper, we explore the upper limit of the capability of traditional image feature representations to pave the way for applying deep learning to the flame recognition problem.
17

Song, Chao, Xinyu Gao, Yongqian Wang, Jinhai Li, Lifeng Fan, Xiaohuang Qin, Qiao Zhou, Zhongyi Wang, and Lan Huang. "Feature Selection and Recognition Methods for Discovering Physiological and Bioinformatics RESTful Services." Information 9, no. 9 (September 6, 2018): 227. http://dx.doi.org/10.3390/info9090227.

Abstract:
Many physiology and bioinformatics research institutions and websites have opened their own data analysis services and other related Web services. It is therefore very important to be able to quickly and effectively select and extract features from Web service pages in order to learn about and use these services. This facilitates the automatic discovery and recognition of Representational State Transfer (RESTful) services. However, this task is still challenging. Based on the description feature patterns of RESTful services, the authors propose a Feature Pattern Search and Replace (FPSR) method. First, they apply a regular expression to perform a matching lookup. Then, a custom string is substituted for the relevant feature pattern, so that the pattern is not split and its feature information is not lost during word segmentation. Experimental results showed in the visualization that FPSR obtained a clearer and more obvious boundary with fewer overlaps than the test without FPSR, thereby enabling a higher accuracy rate. FPSR thus allowed the authors to extract RESTful service page feature information and achieve better classification results.
18

Li, Qiaoqin, Yongguo Liu, Jiajing Zhu, Zhi Chen, Lang Liu, Shangming Yang, Guanyi Zhu, et al. "Upper-Limb Motion Recognition Based on Hybrid Feature Selection: Algorithm Development and Validation." JMIR mHealth and uHealth 9, no. 9 (September 2, 2021): e24402. http://dx.doi.org/10.2196/24402.

Abstract:
Background: For rehabilitation training systems, it is essential to automatically record and recognize exercises, especially when more than one type of exercise is performed without a predefined sequence. Most motion recognition methods are based on feature engineering and machine learning algorithms. Time-domain and frequency-domain features are extracted from original time series data collected by sensor nodes. For high-dimensional data, feature selection plays an important role in improving the performance of motion recognition. Existing feature selection methods can be categorized into filter and wrapper methods. Wrapper methods usually achieve better performance than filter methods; however, in most cases, they are computationally intensive, and the feature subset obtained is usually optimized only for the specific learning algorithm.
Objective: This study aimed to provide a feature selection method for motion recognition of upper-limb exercises and improve the recognition performance.
Methods: Motion data from 5 types of upper-limb exercises performed by 21 participants were collected by a customized inertial measurement unit (IMU) node. A total of 60 time-domain and frequency-domain features were extracted from the original sensor data. A hybrid feature selection method combining filter and wrapper methods (FESCOM) was proposed to eliminate irrelevant features for motion recognition of upper-limb exercises. In the filter stage, candidate features were first selected from the original feature set according to their significance for motion recognition. In the wrapper stage, k-nearest neighbors (kNN), Naïve Bayes (NB), and random forest (RF) were evaluated as the wrapping components to further refine the features from the candidate feature set. The performance of the proposed FESCOM method was verified using experiments on motion recognition of upper-limb exercises and compared with the traditional wrapper method.
Results: Using kNN, NB, and RF as the wrapping components, the classification error rates of the proposed FESCOM method were 1.7%, 8.9%, and 7.4%, respectively, and the feature selection time in each iteration was 13 seconds, 71 seconds, and 541 seconds, respectively.
Conclusions: The experimental results demonstrated that, in the case of 5 motion types performed by 21 healthy participants, the proposed FESCOM method using kNN and NB as the wrapping components achieved better recognition performance than the traditional wrapper method. The FESCOM method dramatically reduces the search time in the feature selection process. The results also demonstrated that the optimal number of features depends on the classifier. This approach serves to improve feature selection and classification algorithm selection for upper-limb motion recognition based on wearable sensor data, and can be extended to motion recognition of more motion types and participants.
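A filter-then-wrapper pipeline of the kind FESCOM describes can be sketched generically: a cheap per-feature score shortlists candidates, then a greedy wrapper refines them against a classifier. Everything here is an assumption for illustration (the Fisher-style filter score, the 1-nearest-neighbour wrapper, and all function names), not the authors' implementation.

```python
import numpy as np

def fisher_score(X, y):
    """Filter stage: between-class over within-class variance per feature."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = sum((X[y == c].mean(axis=0) - overall) ** 2 for c in classes)
    within = sum(X[y == c].var(axis=0) for c in classes) + 1e-12
    return between / within

def loo_1nn_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a sample cannot be its own neighbour
    return np.mean(y[d.argmin(axis=1)] == y)

def hybrid_select(X, y, n_candidates=10, n_final=3):
    # Filter stage: shortlist the highest-scoring features.
    candidates = list(np.argsort(fisher_score(X, y))[::-1][:n_candidates])
    # Wrapper stage: greedy forward selection on the shortlist.
    chosen, best = [], -1.0
    improved = True
    while improved and len(chosen) < n_final:
        improved = False
        for f in candidates:
            if f in chosen:
                continue
            acc = loo_1nn_accuracy(X[:, chosen + [f]], y)
            if acc > best:
                best, pick, improved = acc, f, True
        if improved:
            chosen.append(pick)
    return chosen, best
```

The point of the filter stage is exactly the search-time reduction the conclusions report: the wrapper only ever evaluates subsets of the shortlist, not of all 60 features.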
19

Wu, Lian, Yong Xu, Zhongwei Cui, Yu Zuo, Shuping Zhao, and Lunke Fei. "Triple-Type Feature Extraction for Palmprint Recognition." Sensors 21, no. 14 (July 19, 2021): 4896. http://dx.doi.org/10.3390/s21144896.

Abstract:
Palmprint recognition has received tremendous research interest due to its outstanding user-friendliness, such as its non-invasive and hygienic properties. Most recent palmprint recognition studies, such as deep-learning methods, learn discriminative features from palmprint images, which usually requires a large number of labeled samples to achieve reasonably good recognition performance. However, palmprint images are usually limited because it is relatively difficult to collect enough palmprint samples, making most existing deep-learning-based methods ineffective. In this paper, we propose a heuristic palmprint recognition method that extracts triple types of palmprint features without requiring any training samples. We first extract the most important inherent features of a palmprint, including texture, gradient and direction features, and encode them into triple-type feature codes. Then, we use block-wise histograms of the triple-type feature codes to form triple feature descriptors for palmprint representation. Finally, we employ weighted matching-score-level fusion to calculate the similarity between two compared palmprint images based on the triple-type feature descriptors. Extensive experimental results on three widely used palmprint databases clearly show the promising effectiveness of the proposed method.
20

Singh, Gunjan, Sandeep Kumar, and Manu Pratap Singh. "Performance Evaluation of Feed-Forward Neural Network Models for Handwritten Hindi Characters with Different Feature Extraction Methods." International Journal of Artificial Life Research 7, no. 2 (July 2017): 38–57. http://dx.doi.org/10.4018/ijalr.2017070103.

Abstract:
Automatic handwritten character recognition is one of the most critical and interesting research areas in the domain of pattern recognition. The problem becomes more challenging when the domain is handwritten Hindi characters, as Hindi characters are cursive in nature and share many similar features. A number of feature extraction, classification and recognition techniques have been devised and are being used in this area, yet better efficiency and accuracy are still awaited. In this article, the performance of various feed-forward neural networks is evaluated for the generalized classification of handwritten Hindi characters using various feature extraction methods. To study and analyze the performance of the selected neural networks, training and test character patterns are presented to each model and their recognition accuracy is measured. The analysis shows that the radial basis function network and the exact radial basis network give the highest recognition accuracy, while the Elman backpropagation neural network gives the lowest recognition rate for most of the selected feature extraction methods.
21

Zhong, Ju, Hua Wen Liu, and Chun Li Lin. "Human Action Recognition Based on Hybrid Features." Applied Mechanics and Materials 373-375 (August 2013): 1188–91. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.1188.

Abstract:
Extraction methods for both a shape feature based on Fourier descriptors and a motion feature in the time domain are introduced. These features are fused to obtain a hybrid feature with higher discriminative ability, and this combined representation is used for human action recognition. The experimental results show that the proposed hybrid feature achieves efficient recognition performance on the Weizmann action database.
22

Wang, Qi, and Xiyou Su. "Research on Named Entity Recognition Methods in Chinese Forest Disease Texts." Applied Sciences 12, no. 8 (April 12, 2022): 3885. http://dx.doi.org/10.3390/app12083885.

Abstract:
Named entity recognition of forest diseases plays a key role in knowledge extraction in the field of forestry. The aim of this paper is to propose a named entity recognition method based on multi-feature embedding, a transformer encoder, a bi-gated recurrent unit (BiGRU), and conditional random fields (CRF). According to the characteristics of the forest disease corpus, several features are introduced here to improve the method’s accuracy. In this paper, we analyze the characteristics of forest disease texts; carry out pre-processing, labeling, and extraction of multiple features; and construct forest disease texts. In the input representation layer, the method integrates multi-features, such as characters, radicals, word boundaries, and parts of speech. Then, implicit features (e.g., sentence context features) are captured through the transformer’s encoding layer. The obtained features are transmitted to the BiGRU layer for further deep feature extraction. Finally, the CRF model is used to learn constraints and output the optimal annotation of disease names, damage sites, and drug entities in the forest disease texts. The experimental results on the self-built data set of forest disease texts show that the precision of the proposed method for entity recognition reached more than 93%, indicating that it can effectively solve the task of named entity recognition in forest disease texts.
23

Al-Kaltakchi, Musab T. S., Haithem Abd Al-Raheem Taha, Mohanad Abd Shehab, and Mohamed A. M. Abdullah. "Comparison of feature extraction and normalization methods for speaker recognition using grid-audiovisual database." Indonesian Journal of Electrical Engineering and Computer Science 18, no. 2 (May 1, 2020): 782. http://dx.doi.org/10.11591/ijeecs.v18.i2.pp782-789.

Abstract:
In this paper, different feature extraction and feature normalization methods are investigated for speaker recognition. To give a good representation of acoustic speech signals, Power Normalized Cepstral Coefficients (PNCCs) and Mel Frequency Cepstral Coefficients (MFCCs) are employed for feature extraction. Then, to mitigate linear channel effects, Cepstral Mean-Variance Normalization (CMVN) and feature warping are utilized. The paper investigates a text-independent speaker identification system using 16 coefficients from both the MFCC and PNCC features. Eight speakers, two female and six male, are selected from the GRID audiovisual database. The speakers are modeled by coupling the Universal Background Model with Gaussian Mixture Models (GMM-UBM) to obtain a fast scoring technique and better performance. The system achieves 100% speaker identification accuracy. The results illustrate that PNCC features perform better than MFCC features at identifying female speakers, and that feature warping performs better than the CMVN method.
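The CMVN step compared in this abstract has a one-line definition: standardize each cepstral coefficient over the frames of an utterance. A minimal sketch follows; the function name and the per-utterance (rather than sliding-window) scope are assumptions.

```python
import numpy as np

def cmvn(features, eps=1e-10):
    """Cepstral mean-variance normalisation over one utterance.
    `features` is an (n_frames, n_coeffs) matrix, e.g. 16 MFCC or
    PNCC coefficients per frame; each coefficient is standardised
    to zero mean and unit variance across frames."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    return (features - mu) / (sigma + eps)
```

Because an additive channel distortion shifts every frame's cepstrum by the same constant, subtracting the per-utterance mean removes it, which is the linear-channel mitigation the abstract refers to.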
24

Westphal, Günter, and Rolf P. Würtz. "Combining Feature- and Correspondence-Based Methods for Visual Object Recognition." Neural Computation 21, no. 7 (July 2009): 1952–89. http://dx.doi.org/10.1162/neco.2009.12-07-675.

Abstract:
We present an object recognition system built on a combination of feature- and correspondence-based pattern recognizers. The feature-based part, called preselection network, is a single-layer feedforward network weighted with the amount of information contributed by each feature to the decision at hand. For processing arbitrary objects, we employ small, regular graphs whose nodes are attributed with Gabor amplitudes, termed parquet graphs. The preselection network can quickly rule out most irrelevant matches and leaves only the ambiguous cases, so-called model candidates, to be verified by a rudimentary version of elastic graph matching, a standard correspondence-based technique for face and object recognition. According to the model, graphs are constructed that describe the object in the input image well. We report the results of experiments on standard databases for object recognition. The method achieved high recognition rates on identity and pose. Unlike many other models, it can also cope with varying background, multiple objects, and partial occlusion.
APA, Harvard, Vancouver, ISO, and other styles
25

Moe Htay, Moe. "Feature extraction and classification methods of facial expression: a survey." Computer Science and Information Technologies 2, no. 1 (March 1, 2021): 26–32. http://dx.doi.org/10.11591/csit.v2i1.p26-32.

Full text
Abstract:
Facial expression plays a significant role in affective computing and is one of the non-verbal communication channels for human-computer interaction. Automatic recognition of human affect has become a more challenging and interesting problem in recent years. Facial expressions are significant features for recognizing human emotion in daily life. A Facial Expression Recognition System (FERS) can be developed for applications such as human affect analysis, health care assessment, distance learning, driver fatigue detection, and human-computer interaction. Basically, there are three main components in recognizing human facial expression: detection of the face or facial components, feature extraction from the face image, and classification of the expression. The study surveys feature extraction and classification methods for FER.
APA, Harvard, Vancouver, ISO, and other styles
26

Suto, Jozsef, Stefan Oniga, and Petrica Pop Sitar. "Feature Analysis to Human Activity Recognition." International Journal of Computers Communications & Control 12, no. 1 (December 2, 2016): 116. http://dx.doi.org/10.15837/ijccc.2017.1.2787.

Full text
Abstract:
Human activity recognition (HAR) is one of those research areas whose importance and popularity have notably increased in recent years. HAR can be seen as a general machine learning problem which requires feature extraction and feature selection. In previous articles, different features were extracted from the time, frequency, and wavelet domains for HAR, but it is not clear how to determine the best feature combination that maximizes the performance of a machine learning algorithm. The aim of this paper is to present the most relevant feature extraction methods in HAR and to compare them with widely used filter and wrapper feature selection algorithms. This work is an extended version of [1], where we tested the efficiency of filter and wrapper feature selection algorithms in combination with artificial neural networks. In this paper, the efficiency of the selected features has been investigated on more machine learning algorithms (feed-forward artificial neural network, k-nearest neighbor, and decision tree), with an independent database as the data source. The results demonstrate that machine learning in combination with feature selection can outperform other classification approaches.
APA, Harvard, Vancouver, ISO, and other styles
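As a hedged illustration of the kind of time- and frequency-domain features such surveys compare (this is not the paper's exact feature set), one window of a single accelerometer axis might be summarized as:

```python
import numpy as np

def window_features(window):
    """A few common HAR features for one sensor-axis window:
    time-domain statistics plus one frequency-domain quantity."""
    w = np.asarray(window, dtype=float)
    return {
        "mean": w.mean(),
        "std": w.std(),
        "rms": np.sqrt(np.mean(w ** 2)),
        "zero_crossings": int(np.sum(w[:-1] * w[1:] < 0)),
        # AC spectral energy: FFT of the mean-removed window
        "spectral_energy": float(np.sum(np.abs(np.fft.rfft(w - w.mean())) ** 2) / len(w)),
    }

f = window_features(np.full(100, 2.0))  # constant window: rms equals mean, no crossings
```

A filter or wrapper selection step then decides which of these columns actually help the downstream classifier.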
27

Liu, Shengyu, Buzhou Tang, Qingcai Chen, Xiaolong Wang, and Xiaoming Fan. "Feature Engineering for Drug Name Recognition in Biomedical Texts: Feature Conjunction and Feature Selection." Computational and Mathematical Methods in Medicine 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/913489.

Full text
Abstract:
Drug name recognition (DNR) is a critical step for drug information extraction. Machine learning-based methods have been widely used for DNR with various types of features such as part-of-speech, word shape, and dictionary features. Features used in current machine learning-based methods are usually singleton features, which may be due to the explosion of features and the large number of noisy features that arise when singleton features are combined into conjunction features. However, singleton features, which can only capture one linguistic characteristic of a word, are not sufficient to describe the information for DNR when multiple characteristics should be considered. In this study, we explore feature conjunction and feature selection for DNR, which have never been reported. We intuitively select 8 types of singleton features and combine them into conjunction features in two ways. Then, chi-square, mutual information, and information gain are used to mine effective features. Experimental results show that feature conjunction and feature selection can improve the performance of the DNR system with a moderate number of features, and our DNR system significantly outperforms the best system in the DDIExtraction 2013 challenge.
APA, Harvard, Vancouver, ISO, and other styles
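The two ingredients of the abstract, conjoining singleton features and scoring features with chi-square, can be sketched as follows. This is a generic illustration, not the authors' code, and the feature strings are hypothetical.

```python
import numpy as np

def conjoin(feat_a, feat_b):
    """Combine two singleton features (e.g. word shape and POS) into one
    conjunction feature that captures both characteristics at once."""
    return f"{feat_a}&{feat_b}"

def chi_square(feature_present, labels):
    """Chi-square score of a binary feature against binary labels,
    computed from the 2x2 observed/expected contingency table."""
    f = np.asarray(feature_present, dtype=bool)
    y = np.asarray(labels, dtype=bool)
    n = len(y)
    obs = np.array([[np.sum(f & y), np.sum(f & ~y)],
                    [np.sum(~f & y), np.sum(~f & ~y)]], dtype=float)
    expected = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / n
    return float(np.sum((obs - expected) ** 2 / np.maximum(expected, 1e-12)))

token_feature = conjoin("shape=Xxxx", "pos=NN")  # hypothetical singleton features
```

Ranking conjunction features by such a score lets the system keep only the discriminative combinations instead of the full, noisy cross-product.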
28

Sidiropoulos, George K., Polixeni Kiratsa, Petros Chatzipetrou, and George A. Papakostas. "Feature Extraction for Finger-Vein-Based Identity Recognition." Journal of Imaging 7, no. 5 (May 15, 2021): 89. http://dx.doi.org/10.3390/jimaging7050089.

Full text
Abstract:
This paper aims to provide a brief review of the feature extraction methods applied for finger vein recognition. The presented study is designed in a systematic way in order to bring light to the scientific interest for biometric systems based on finger vein biometric features. The analysis spans over a period of 13 years (from 2008 to 2020). The examined feature extraction algorithms are clustered into five categories and are presented in a qualitative manner by focusing mainly on the techniques applied to represent the features of the finger veins that uniquely prove a human’s identity. In addition, the case of non-handcrafted features learned in a deep learning framework is also examined. The conducted literature analysis revealed the increased interest in finger vein biometric systems as well as the high diversity of different feature extraction methods proposed over the past several years. However, last year this interest shifted to the application of Convolutional Neural Networks following the general trend of applying deep learning models in a range of disciplines. Finally, yet importantly, this work highlights the limitations of the existing feature extraction methods and describes the research actions needed to face the identified challenges.
APA, Harvard, Vancouver, ISO, and other styles
29

Peng, Zhiqiang, and Yue Zhang. "Dilemma and Solution of Traditional Feature Extraction Methods Based on Inertial Sensors." Mobile Information Systems 2018 (November 22, 2018): 1–6. http://dx.doi.org/10.1155/2018/2659142.

Full text
Abstract:
Correctly identifying human activities is very significant in modern life. Almost all feature extraction methods are based directly on acceleration and angular velocity. However, we found that some activities show no difference in acceleration and angular velocity. Therefore, we believe that for these activities, any feature extraction method based on acceleration and angular velocity will struggle to achieve good results. After analyzing the differences among these indistinguishable movements, we propose several new features to improve recognition accuracy. We compare the traditional features with our custom features. In addition, we examine whether the time-domain and frequency-domain features based on acceleration and angular velocity differ. The results show that (1) our custom features significantly improve the precision for the activities that show no difference in acceleration and angular velocity; and (2) the combination of time-domain and frequency-domain features does not significantly improve the recognition of different activities.
APA, Harvard, Vancouver, ISO, and other styles
30

Qiao, Long, and Asad Esmaeily. "An Overview of Signal-Based Damage Detection Methods." Applied Mechanics and Materials 94-96 (September 2011): 834–51. http://dx.doi.org/10.4028/www.scientific.net/amm.94-96.834.

Full text
Abstract:
Deterioration of structures due to aging, cumulative crack growth, or excessive response significantly affects the performance and safety of structures during their service life. Recently, signal-based methods have received much attention for structural health monitoring and damage detection. These methods examine changes in features derived directly from the measured time histories, or their corresponding spectra, through proper signal processing methods and algorithms to detect damage. Based on the signal processing technique used for feature extraction, these methods are classified into time-domain, frequency-domain, and time-frequency (or time-scale)-domain methods. As an enhancement for feature extraction, selection, and classification, pattern recognition techniques are deeply integrated into signal-based damage detection. This paper provides an overview of these methods from two aspects: (1) feature extraction and selection, and (2) pattern recognition. Signal-based methods are particularly effective for structures with complicated nonlinear behavior and for incomplete, incoherent, and noise-contaminated measurements of structural response.
APA, Harvard, Vancouver, ISO, and other styles
31

Wu, Yutong, Xinhui Hu, Ziwei Wang, Jian Wen, Jiangming Kan, and Wenbin Li. "Exploration of Feature Extraction Methods and Dimension for sEMG Signal Classification." Applied Sciences 9, no. 24 (December 6, 2019): 5343. http://dx.doi.org/10.3390/app9245343.

Full text
Abstract:
To realize gesture control of an automatic pruning machine, both gesture recognition and wireless remote control must be completed. To realize gesture recognition, this paper investigates gesture recognition technology based on the surface electromyography (sEMG) signal and discusses the influence of different numbers and combinations of gestures on the optimal feature dimension. We calculated a 630-dimensional eigenvector from a benchmark scientific database of sEMG signals and extracted features using principal component analysis (PCA). Discriminant analysis (DA) was used to compare the processing effect of each feature extraction method. The experimental results show that the recognition rate for four gestures can reach 100.0%, the recognition rate for six gestures can reach 98.29%, and the optimal size is 516~523 dimensions. This study lays a foundation for follow-up work on pruning machine gesture control and provides a compelling new way to advance human-computer interaction in forestry machinery.
APA, Harvard, Vancouver, ISO, and other styles
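The PCA reduction step described above can be sketched via the SVD of the centered feature matrix. This is a generic illustration, not the authors' pipeline; the 630-dimensional vectors match the abstract, but the sample count and random data are assumptions.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the feature matrix X (samples x dims) onto its top-k
    principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, S  # reduced features, singular values

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 630))   # stand-in for 120 sEMG windows x 630-dim feature vectors
Z, S = pca_reduce(X, 20)          # keep 20 components for the classifier
```

Sweeping `k` and scoring each reduced set with a classifier is how a study like this one locates an optimal dimension range.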
32

Zyout, Ala’a, Hiam Alquran, Wan Azani Mustafa, and Ali Mohammad Alqudah. "Advanced Time-Frequency Methods for ECG Waves Recognition." Diagnostics 13, no. 2 (January 13, 2023): 308. http://dx.doi.org/10.3390/diagnostics13020308.

Full text
Abstract:
ECG wave recognition is a new topic in which only one of the ECG beat waves (P-QRS-T) is used to detect heart disease. Normal, tachycardia, and bradycardia heart rhythms are hard to detect using either time-domain or frequency-domain features alone, and a time-frequency analysis is required to extract representative features. This paper studies the performance of two different spectrum representations, the iris-spectrogram and the scalogram, for different ECG beat waves in terms of recognizing the normal, tachycardia, and bradycardia classes. These two spectra are fed to two deep convolutional neural networks (CNNs), ResNet101 and ShuffleNet, for deep feature extraction and classification. The results show that the best accuracy for detecting beat rhythm was achieved using ResNet101 with the scalogram of the T-wave (98.3%), while the accuracy using the iris-spectrogram, also with ResNet101, was 94.4% with the QRS-wave. Finally, based on these results, we note that using deep features from a time-frequency representation of a single ECG beat wave, basic rhythms such as normal, tachycardia, and bradycardia can be accurately detected.
APA, Harvard, Vancouver, ISO, and other styles
33

Gore, Dayanand Bharat. "Comparative Study on Feature Extractions for Ear Recognition." International Journal of Applied Evolutionary Computation 10, no. 2 (April 2019): 8–18. http://dx.doi.org/10.4018/ijaec.2019040102.

Full text
Abstract:
Biometrics includes the study of automatic methods for distinguishing human beings based on physical or behavioural traits. The problem of finding good biometric features and recognition methods has been researched extensively in recent years. This research considers the use of ears as a biometric for human recognition. In this article, basic feature extraction techniques are implemented: Harris, FAST, and SURF feature extraction. All images are taken from a standard database, and each image is captured at a different angle, since cameras in criminal investigations, at accident scenes, or in ATM machines produce images of different types. The research applies feature extraction to these varied images and compares the techniques to determine which gives the best result to the user.
APA, Harvard, Vancouver, ISO, and other styles
34

Wang, Haiwang, Bin Wang, Lulu Wu, and Qiang Tang. "Multihydrophone Fusion Network for Modulation Recognition." Sensors 22, no. 9 (April 22, 2022): 3214. http://dx.doi.org/10.3390/s22093214.

Full text
Abstract:
Deep learning (DL)-based modulation recognition methods of underwater acoustic communication signals are mostly applied to a single hydrophone reception scenario. In this paper, we propose a novel end-to-end multihydrophone fusion network (MHFNet) for multisensory reception scenarios. MHFNet consists of a feature extraction module and a fusion module. The feature extraction module extracts the features of the signals received by the multiple hydrophones. Then, through the neural network, the fusion module fuses and classifies the features of the multiple signals. MHFNet takes full advantage of neural networks and multihydrophone reception to effectively fuse signal features for realizing improved modulation recognition performance. Experimental results on simulation and practical data show that MHFNet is superior to other fusion methods. The classification accuracy is improved by about 16%.
APA, Harvard, Vancouver, ISO, and other styles
35

Uzun, Mehmet Zahit, Yuksel Celik, and Erdal Basaran. "Micro-Expression Recognition by Using CNN Features with PSO Algorithm and SVM Methods." Traitement du Signal 39, no. 5 (November 30, 2022): 1685–93. http://dx.doi.org/10.18280/ts.390526.

Full text
Abstract:
This study proposes a framework for recognizing micro-expressions (MEs) that combines preprocessing, feature extraction with deep learning, feature selection with an optimization algorithm, and classification. CASME-II, SMIC-HS, and SAMM, which are among the most widely used ME datasets in the literature, were combined to overcome the under-sampling problem of the individual datasets. In the preprocessing stage, the onset and apex frames in each video clip were detected, and optical flow images were obtained from these frames using the Farneback method. Features of the resulting images were extracted with the CNN models AlexNet, VGG16, MobileNetV2, EfficientNet, and SqueezeNet. The image features from all CNN models were then combined, and the most distinctive features were selected with the Particle Swarm Optimization (PSO) algorithm. The resulting feature set was classified into positive, negative, and surprise classes using an SVM. The proposed ME framework achieved an accuracy of 0.8784.
APA, Harvard, Vancouver, ISO, and other styles
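The PSO feature selection step can be sketched as a binary PSO over feature masks. This is a generic illustration, not the authors' implementation: the toy fitness below rewards agreement with a hypothetical known-good mask, whereas the paper would score each mask by SVM accuracy on the selected CNN features.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(mask, target):
    """Toy stand-in for the real objective (classifier accuracy on the
    selected features): fraction of positions agreeing with a good mask."""
    return float(np.mean(mask == target))

def binary_pso(n_features, target, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    X = rng.integers(0, 2, (n_particles, n_features))   # each particle is a binary feature mask
    V = rng.normal(0.0, 1.0, (n_particles, n_features))
    pbest, pfit = X.copy(), np.array([fitness(x, target) for x in X])
    g, gfit = pbest[pfit.argmax()].copy(), pfit.max()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        # binary PSO: sample each bit with sigmoid(velocity) as the "on" probability
        X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        fit = np.array([fitness(x, target) for x in X])
        better = fit > pfit
        pbest[better], pfit[better] = X[better], fit[better]
        if fit.max() > gfit:
            g, gfit = X[fit.argmax()].copy(), fit.max()
    return g, gfit

target = np.tile([1, 0], 15)          # hypothetical "distinctive feature" mask, 30 dims
mask, score = binary_pso(30, target)
```

The global best is only ever replaced by a better mask, so the returned score is monotonically non-decreasing over iterations.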
36

Patel, Ishani, Virag Jagtap, and Ompriya Kale. "A Survey on Feature Extraction Methods for Handwritten Digits Recognition." International Journal of Computer Applications 107, no. 12 (December 18, 2014): 11–17. http://dx.doi.org/10.5120/18801-0317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Fei, Lunke, Guangming Lu, Wei Jia, Shaohua Teng, and David Zhang. "Feature Extraction Methods for Palmprint Recognition: A Survey and Evaluation." IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 2 (February 2019): 346–63. http://dx.doi.org/10.1109/tsmc.2018.2795609.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

SWINIARSKI, R., and A. SWINIARSKA. "Comparison of Feature Extraction and Selection Methods in Mammogram Recognition." Annals of the New York Academy of Sciences 980, no. 1 (December 2002): 116–24. http://dx.doi.org/10.1111/j.1749-6632.2002.tb04892.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Soltanpour, Sima, Boubakeur Boufama, and Q. M. Jonathan Wu. "A survey of local feature methods for 3D face recognition." Pattern Recognition 72 (December 2017): 391–406. http://dx.doi.org/10.1016/j.patcog.2017.08.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, Ru. "The Ancient Ceramics Identification Methods Based on Non-Linear Support Vector Machines." Applied Mechanics and Materials 278-280 (January 2013): 1201–4. http://dx.doi.org/10.4028/www.scientific.net/amm.278-280.1201.

Full text
Abstract:
With the development of the ceramics market, applying image processing and intelligent algorithms to the recognition and appreciation of ancient ceramics has become one of the most challenging issues in the field. The article selects Ming and Qing Dynasty blue and white porcelain as research samples and explores how to extract effective image recognition features of ancient ceramics and quantify them for comparison. An evaluation index system for the ancient craft is given, and the category identification and appreciation evaluation models are improved. Various methods are discussed for the key technologies of recognition feature extraction, image preprocessing, handwritten Chinese character segmentation, feature extraction, and classifier design, and a multiple-classifier analysis method based on non-linear support vector machines is adopted, greatly improving the accuracy on the samples.
APA, Harvard, Vancouver, ISO, and other styles
41

Yin, Jun, Weiming Zeng, and Lai Wei. "Optimal feature extraction methods for classification methods and their applications to biometric recognition." Knowledge-Based Systems 99 (May 2016): 112–22. http://dx.doi.org/10.1016/j.knosys.2016.01.043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Rojathai, S., and M. Venkatesulu. "Investigation of ANFIS and FFBNN Recognition Methods Performance in Tamil Speech Word Recognition." International Journal of Software Innovation 2, no. 2 (April 2014): 43–53. http://dx.doi.org/10.4018/ijsi.2014040103.

Full text
Abstract:
In speech word recognition systems, feature extraction and recognition play the most significant roles. Many feature extraction and recognition methods are available in existing speech word recognition systems. The most recent Tamil speech word recognition system achieved high word recognition performance with PAC-ANFIS compared to earlier Tamil systems. Investigating speech word recognition with various recognition methods is therefore needed to establish their performance. This paper presents that investigation with the well-known artificial intelligence methods Feed Forward Back Propagation Neural Network (FFBNN) and Adaptive Neuro Fuzzy Inference System (ANFIS). The performance of the Tamil speech word recognition system with PAC-FFBNN is analyzed in terms of statistical measures and Word Recognition Rate (WRR) and compared with PAC-ANFIS and other existing Tamil speech word recognition systems.
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Jie. "Feature dimensionality reduction for myoelectric pattern recognition: A comparison study of feature selection and feature projection methods." Medical Engineering & Physics 36, no. 12 (December 2014): 1716–20. http://dx.doi.org/10.1016/j.medengphy.2014.09.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Liu, Xiaoyang, Wei Jing, Mingxuan Zhou, and Yuxing Li. "Multi-Scale Feature Fusion for Coal-Rock Recognition Based on Completed Local Binary Pattern and Convolution Neural Network." Entropy 21, no. 6 (June 25, 2019): 622. http://dx.doi.org/10.3390/e21060622.

Full text
Abstract:
Automatic coal-rock recognition is one of the critical technologies for intelligent coal mining and processing. Most existing coal-rock recognition methods have defects such as unsatisfactory performance and low robustness. To solve these problems, and taking the distinctive visual features of coal and rock into consideration, a multi-scale feature fusion coal-rock recognition (MFFCRR) model based on a multi-scale Completed Local Binary Pattern (CLBP) and a Convolutional Neural Network (CNN) is proposed in this paper. First, multi-scale CLBP features, which represent texture information of the coal-rock image, are extracted from coal-rock image samples in the Texture Feature Extraction (TFE) sub-model. Second, high-level deep features, which represent macroscopic information of the coal-rock image, are extracted in the Deep Feature Extraction (DFE) sub-model. The texture and macroscopic information are acquired based on information theory. Third, the multi-scale feature vector is generated by fusing the multi-scale CLBP feature vector and the deep feature vector. Finally, the multi-scale feature vectors are input to a nearest neighbor classifier with the chi-square distance to realize coal-rock recognition. Experimental results show that the coal-rock image recognition accuracy of the proposed MFFCRR model reaches 97.9167%, an improvement of 2%-3% over state-of-the-art coal-rock recognition methods.
APA, Harvard, Vancouver, ISO, and other styles
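The final classification step, nearest neighbor with the chi-square distance, is compact enough to sketch. The two-bin "histograms" and the coal/rock labels below are toy assumptions; real fused CLBP-plus-deep feature vectors have many more dimensions.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histogram-like feature vectors."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def nearest_neighbor(query, train_feats, train_labels):
    """Label the query with the class of its chi-square-nearest training sample."""
    dists = [chi2_distance(query, t) for t in train_feats]
    return train_labels[int(np.argmin(dists))]

train = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy fused feature vectors
labels = ["coal", "rock"]
pred = nearest_neighbor(np.array([0.9, 0.1]), train, labels)
```

The chi-square distance down-weights differences in well-populated bins, which suits histogram features like CLBP better than plain Euclidean distance.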
45

Balasundaram, Sasi Kumar, J. Umadevi, and B. Sankara Gomathi. "AN EFFECTIVE COLOR FACE RECOGNITION BASED ON BEST COLOR FEATURE SELECTION ALGORITHM USING WEIGHTED FEATURES FUSION SYSTEM." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 8, no. 2 (June 20, 2013): 787–95. http://dx.doi.org/10.24297/ijct.v8i2.3386.

Full text
Abstract:
This paper aims to achieve the best color face recognition performance. The newly introduced feature selection method takes advantage of a novel learning framework to find the optimal set of color-component features for achieving the best face recognition result. The proposed color face recognition method consists of two parts: color-component feature selection with boosting, and a color face recognition solution using the selected color-component features. The method outperforms existing color face recognition methods under illumination and pose variation and on low-resolution face images. The system is based on selecting the best color-component features from various color models using the novel boosting learning framework. The selected color-component features are then combined into a single concatenated color feature using weighted feature fusion. The effectiveness of the color face recognition method has been successfully evaluated on public face databases.
APA, Harvard, Vancouver, ISO, and other styles
46

Sridharan, Nandakumar, and Jami J. Shah. "Recognition of Multi Axis Milling Features: Part I-Topological and Geometric Characteristics." Journal of Computing and Information Science in Engineering 4, no. 3 (September 1, 2004): 242–50. http://dx.doi.org/10.1115/1.1778718.

Full text
Abstract:
Most of the work in machining feature recognition has been limited to 2-1/2 and 3 axis milling features. The major impediment to recognition of complex features has been the difficulty in generalizing the characteristics of their shape. This two-part paper describes general purpose methods for recognizing both simple and complex features; the latter may have freeform surfaces and may require 4 or 5 axis machining. Part I of this paper attempts to describe features in terms of geometric and topological characteristics. Part II of the paper uses the characterization and classification developed in Part I for designing feature recognition algorithms. Part I proposes five basic categories and several sub-classifications of features derived both from machining considerations and computational methods for NC toolpath generation. Rather than using topologically rigid features, such as slots and steps, etc., machining features are classified as “Cut-Thru,” “Cut-Around” and “Cut-on” and further classified into sub-categories. Each feature class is described by a list of properties. Apart from the obvious use in feature recognition, this feature classification and characterization may have potential use in developing future data exchange standards for complex features.
APA, Harvard, Vancouver, ISO, and other styles
47

Jing, Xiao Yuan, Kun Li, Song Song Wu, Yong Fang Yao, and Chao Wang. "Kernel Feature Extraction Approach for Color Image Recognition." Advanced Materials Research 760-762 (September 2013): 1621–26. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.1621.

Full text
Abstract:
Color image recognition is one of the most important fields in pattern recognition. Multi-set canonical correlation analysis and kernel methods are both important techniques in color image recognition. In this paper, we combine the two and propose a novel color image recognition approach: color image kernel canonical correlation analysis (CIKCCA). CIKCCA is based on the theory of multi-set canonical correlation analysis and extracts canonical correlation features among the color image components. The features of the color image components are then fused at the feature level and used for classification and recognition. Experimental results on the public FRGC-v2 color image database demonstrate that the proposed approach achieves better recognition performance than other color recognition methods.
APA, Harvard, Vancouver, ISO, and other styles
48

Meng, Zibo, Shizhong Han, Min Chen, and Yan Tong. "Audiovisual Facial Action Unit Recognition using Feature Level Fusion." International Journal of Multimedia Data Engineering and Management 7, no. 1 (January 2016): 60–76. http://dx.doi.org/10.4018/ijmdem.2016010104.

Full text
Abstract:
Recognizing facial actions is challenging, especially when they are accompanied with speech. Instead of employing information solely from the visual channel, this work aims to exploit information from both visual and audio channels in recognizing speech-related facial action units (AUs). In this work, two feature-level fusion methods are proposed. The first method is based on a kind of human-crafted visual feature. The other method utilizes visual features learned by a deep convolutional neural network (CNN). For both methods, features are independently extracted from visual and audio channels and aligned to handle the difference in time scales and the time shift between the two signals. These temporally aligned features are integrated via feature-level fusion for AU recognition. Experimental results on a new audiovisual AU-coded dataset have demonstrated that both fusion methods outperform their visual counterparts in recognizing speech-related AUs. The improvement is more impressive with occlusions on the facial images, which would not affect the audio channel.
APA, Harvard, Vancouver, ISO, and other styles
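The temporal alignment plus feature-level fusion described above can be sketched as resampling the faster stream onto the slower stream's frame times and concatenating. The frame counts and dimensions below are illustrative assumptions, and this ignores the time-shift compensation the paper also performs.

```python
import numpy as np

def align_and_fuse(visual, audio):
    """Resample audio-frame features onto the visual frame times
    (linear interpolation per dimension), then concatenate."""
    t_vis = np.linspace(0.0, 1.0, len(visual))
    t_aud = np.linspace(0.0, 1.0, len(audio))
    audio_on_vis = np.stack(
        [np.interp(t_vis, t_aud, audio[:, d]) for d in range(audio.shape[1])], axis=1)
    return np.concatenate([visual, audio_on_vis], axis=1)

visual = np.zeros((30, 5))    # e.g. 30 video frames x 5 visual features
audio = np.zeros((100, 13))   # e.g. 100 audio frames x 13 acoustic features
fused = align_and_fuse(visual, audio)  # one fused vector per video frame
```

Feature-level fusion like this lets a single classifier see both channels at once, which is why occlusions that degrade the visual features hurt less than in a vision-only system.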
49

Gao, Jiangjin, and Tao Yang. "Research on Real-Time Face Key Point Detection Algorithm Based on Attention Mechanism." Computational Intelligence and Neuroscience 2022 (January 5, 2022): 1–11. http://dx.doi.org/10.1155/2022/6205108.

Full text
Abstract:
Existing face detection methods are affected by the network model structure used. Most face recognition methods have a low recognition rate for face key-point features due to their many parameters and large amount of computation. To improve the recognition accuracy and detection speed of face key points, a real-time face key-point detection algorithm based on an attention mechanism is proposed in this paper. Because face key-point features are multiscale, a deep convolutional network model is adopted: an attention module is added to the VGG network structure, a feature enhancement module and a feature fusion module are combined to improve the shallow feature representation ability of VGG, and a cascade attention mechanism is used to improve the deep feature representation ability. Experiments show that the proposed algorithm not only effectively realizes face key-point recognition but also achieves better recognition accuracy and detection speed than other similar methods. This method can provide a theoretical basis and technical support for face detection in complex environments.
APA, Harvard, Vancouver, ISO, and other styles
50

Shah, Jami J., David Anderson, Yong Se Kim, and Sanjay Joshi. "A Discourse on Geometric Feature Recognition From CAD Models." Journal of Computing and Information Science in Engineering 1, no. 1 (November 1, 2000): 41–51. http://dx.doi.org/10.1115/1.1345522.

Full text
Abstract:
This paper discusses the past 25 years of research in feature recognition. Although a great variety of feature recognition techniques have been developed, the discussion here focuses on the more successful ones. These include graph based and “hint” based methods, convex hull decomposition, and volume decomposition-recomposition techniques. Recent advances in recognizing features with free form features are also presented. In order to benchmark these methods, a frame of reference is created based on topological generality, feature interactions handled, surface geometry supported, pattern matching criteria used, and computational complexity. This framework is used to compare each of the recognition techniques. Problems related to domain dependence and multiple interpretations are also addressed. Finally, some current research challenges are discussed.
APA, Harvard, Vancouver, ISO, and other styles