Journal articles on the topic 'Face Expression Recognition'


Consult the top 50 journal articles for your research on the topic 'Face Expression Recognition.'


1

CHOI, JAE-YOUNG, TAEG-KEUN WHANGBO, YOUNG-GYU YANG, MURLIKRISHNA VISWANATHAN, and NAK-BIN KIM. "POSE-EXPRESSION NORMALIZATION FOR FACE RECOGNITION USING CONNECTED COMPONENTS ANALYSIS." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 06 (September 2006): 869–81. http://dx.doi.org/10.1142/s0218001406005010.

Abstract:
Accurate measurement of pose and expression can increase the efficiency of recognition systems by avoiding the recognition of spurious faces. This paper presents a novel and robust pose- and expression-invariant face recognition method that improves on existing face recognition techniques. First, we apply the TSL color model to detect the facial region and estimate the X-Y-Z pose vector of the face using connected components analysis. Second, the input face is mapped by a deformable 3D facial model. Third, the mapped face is transformed into a frontal face suitable for recognition, using the estimated pose vector and expression action units. Finally, the regions damaged during normalization are reconstructed using PCA. Several empirical tests validate the face detection model and the method for estimating facial pose and expression. The tests also suggest that the recognition rate is greatly boosted by normalizing pose and expression.
2

Ahlawat, Deepti, and Vijay Nehra. "Expression Invariant Face Recognition System." International Journal of Signal Processing, Image Processing and Pattern Recognition 10, no. 6 (June 30, 2017): 13–22. http://dx.doi.org/10.14257/ijsip.2017.10.6.02.

3

Ali, Humayra Binte, David M. W. Powers, Xibin Jia, and Yanhua Zhang. "Extended Non-negative Matrix Factorization for Face and Facial Expression Recognition." International Journal of Machine Learning and Computing 5, no. 2 (April 2015): 142–47. http://dx.doi.org/10.7763/ijmlc.2015.v5.498.

4

ter Haar, Frank B., and Remco C. Veltkamp. "Expression modeling for expression-invariant face recognition." Computers & Graphics 34, no. 3 (June 2010): 231–41. http://dx.doi.org/10.1016/j.cag.2010.03.010.

5

Chaudhari, V. J. "Face Recognition and Emotion Detection." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 4775–77. http://dx.doi.org/10.22214/ijraset.2021.35698.

Abstract:
Face recognition and facial emotion detection belong to a new era of technology, one that indirectly reflects progress in intelligent systems, security, and the imitation of human emotional behaviour. They are mainly used in market research and testing: many companies require an accurate testing method that contributes to their development by providing the necessary insights and supporting sound conclusions. Facial expression recognition can be built in various ways, for example with deep learning using convolutional neural networks, or with ready-made libraries such as deepface. The main objective is to classify each face into one of seven emotion categories: anger, disgust, fear, happiness, sadness, surprise, and neutrality. In this project, the goal is to read people's facial expressions and display a product to them, which helps determine their interest in it. Facial expression recognition can also be used in video game testing: selected users play the game for a specified period while their expressions and behavior are monitored and analyzed. Game developers use facial expression recognition to obtain the required insights, draw conclusions, and provide feedback for the final product. In this project, a deep learning approach with convolutional neural networks (CNNs) is used. Neural networks need to be trained on large amounts of data and require higher computational power [8-11], so training the model takes more time.[1]
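As a rough illustration of the CNN approach this abstract describes (not the paper's exact architecture), here is a minimal sketch of a seven-class expression classifier in PyTorch; the 48x48 grayscale input size and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Minimal 7-class expression classifier; input: (B, 1, 48, 48) grayscale crops."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = EmotionCNN()(torch.randn(4, 1, 48, 48))  # -> (4, 7) class scores
```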
6

Lee, Hyung-Soo, and Daijin Kim. "Expression-invariant face recognition by facial expression transformations." Pattern Recognition Letters 29, no. 13 (October 2008): 1797–805. http://dx.doi.org/10.1016/j.patrec.2008.05.012.

7

Dhekane, Manasi, Ayan Seal, and Pritee Khanna. "Illumination and Expression Invariant Face Recognition." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 12 (September 17, 2017): 1756018. http://dx.doi.org/10.1142/s0218001417560183.

Abstract:
An illumination and expression invariant face recognition method based on uniform local binary patterns (uLBP) and Legendre moments is proposed in this work. The proposed method exploits uLBP texture features and Legendre moments to build a feature representation with enhanced discriminating power. The input images are preprocessed to extract the face region and normalized. From the normalized image, uLBP codes are extracted to obtain a texture image, which overcomes the effect of monotonic temperature changes. Legendre moments are computed from this texture image to get the required feature vector; they conserve the spatial structure information of the texture image. The resultant feature vector is classified using a k-nearest neighbor classifier with the [Formula: see text] norm. To evaluate the proposed method, experiments are performed on the IRIS and NVIE databases. The method is tested on both visible and infrared images under different illumination and expression variations, and performance is compared with recently published methods in terms of recognition rate, recall, feature vector length, and computational time. The proposed method gives better recognition rates and outperforms other recent face recognition methods.
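A minimal sketch of the feature pipeline outlined above: uLBP texture image, Legendre moments, then nearest-neighbour matching. The moment order, neighbourhood parameters, and the L1 (manhattan) metric are assumptions (the paper's exact norm appears only as a formula placeholder here).

```python
import numpy as np
from numpy.polynomial import legendre as leg
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def ulbp_image(gray):
    # uniform LBP codes (8 neighbours, radius 1) form the texture image
    return local_binary_pattern(gray, P=8, R=1, method="uniform")

def legendre_moments(img, order=10):
    # discrete approximation of Legendre moments over [-1, 1] x [-1, 1]
    h, w = img.shape
    eye = np.eye(order + 1)
    Px = np.stack([leg.legval(np.linspace(-1, 1, w), eye[m]) for m in range(order + 1)])
    Py = np.stack([leg.legval(np.linspace(-1, 1, h), eye[m]) for m in range(order + 1)])
    norm = (2 * np.arange(order + 1) + 1) / 2.0
    return (norm[:, None] * norm[None, :] * (Py @ img @ Px.T)).ravel()

# 1-NN with the L1 distance as an assumed stand-in for the paper's norm
knn = KNeighborsClassifier(n_neighbors=1, metric="manhattan")
# knn.fit([legendre_moments(ulbp_image(f)) for f in train_faces], train_ids)
```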
8

Rhodes, Gillian. "Adaptive Coding and Face Recognition." Current Directions in Psychological Science 26, no. 3 (June 2017): 218–24. http://dx.doi.org/10.1177/0963721417692786.

Abstract:
Face adaptation generates striking face aftereffects, but is this adaptation useful? The answer appears to be yes, with several lines of evidence suggesting that it contributes to our face-recognition ability. Adaptation to face identity is reduced in a variety of clinical populations with impaired face recognition. In addition, individual differences in face adaptation are linked to face-recognition ability in typical adults. People who adapt more readily to new faces are better at recognizing faces. This link between adaptation and recognition holds for both identity and expression recognition. Adaptation updates face norms, which represent the typical or average properties of the faces we experience. By using these norms to code how faces differ from average, the visual system can make explicit the distinctive information that we need to recognize faces. Thus, adaptive norm-based coding may help us to discriminate and recognize faces despite their similarity as visual patterns.
9

Minemoto, Kazusa, Yoshiyuki Ueda, and Sakiko Yoshikawa. "The aftereffect of the ensemble average of facial expressions on subsequent facial expression recognition." Attention, Perception, & Psychophysics 84, no. 3 (February 15, 2022): 815–28. http://dx.doi.org/10.3758/s13414-021-02407-w.

Abstract:
An ensemble or statistical summary can be extracted from facial expressions presented in different spatial locations simultaneously. However, how such complicated objects are represented in the mind is not clear. It is known that the aftereffect of facial expressions, in which prolonged viewing of facial expressions biases the perception of subsequent facial expressions of the same category, occurs only when a visual representation is formed. Using this methodology, we examined whether an ensemble can be represented with visualized information. Experiment 1 revealed that the presentation of multiple facial expressions biased the perception of subsequent facial expressions toward less happy as much as the presentation of a single face did. Experiment 2 compared the presentation of faces comprising strong and weak intensities of emotional expressions with an individual face as the adaptation stimulus. The results indicated that perceptual biases were found after the presentation of four faces and a strong single face, but not after the weak single face presentation. Experiment 3 employed angry expressions, a category distinct from the test expression, as the adaptation stimulus; no aftereffect was observed. Finally, Experiment 4 clearly demonstrated the perceptual bias with a higher number of faces. Altogether, these results indicate that an ensemble average extracted from multiple faces leads to the perceptual bias, and this effect is similar in its properties to that of a single face. This supports the idea that an ensemble of faces is represented with visualized information, as a single face is.
10

Rasyid, Muhammad Furqan. "Comparison Of LBPH, Fisherface, and PCA For Facial Expression Recognition of Kindergarten Student." International Journal Education and Computer Studies (IJECS) 2, no. 1 (May 15, 2022): 19–26. http://dx.doi.org/10.35870/ijecs.v2i1.625.

Abstract:
Face recognition is a form of biometric personal identification that has been gaining a lot of attention recently, and there is an increasing need for fast and accurate facial expression recognition systems. Facial expression recognition is a system used to identify which expression a person is displaying. In general, research on facial expression recognition focuses only on adult facial expressions. Recognizing human facial expressions is a very important field of research because it blends feelings with computer applications such as human-computer interaction, data compression, face animation, and face image retrieval from video. This research recognizes facial expressions of toddlers, specifically kindergarten students. Before building the system, three methods, PCA, Fisherface, and LBPH, are compared on our new database, which contains faces of individuals with a variety of poses and expressions and is used for facial expression recognition. Fisherface accuracy was 94%, LBPH 100%, and PCA 48.75%.
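The three compared methods all have off-the-shelf implementations in OpenCV's contrib face module, so a comparison harness along these lines is plausible (a sketch under the assumption of same-size grayscale crops; the paper's exact protocol is not given in the abstract):

```python
import cv2
import numpy as np

# Requires the opencv-contrib-python package for the cv2.face module.
def compare_recognizers(train_imgs, train_labels, test_imgs, test_labels):
    # images: same-size grayscale uint8 arrays; labels: integer class ids
    recognizers = {
        "LBPH": cv2.face.LBPHFaceRecognizer_create(),
        "Fisherface": cv2.face.FisherFaceRecognizer_create(),
        "Eigenface (PCA)": cv2.face.EigenFaceRecognizer_create(),
    }
    labels = np.asarray(train_labels)
    for name, rec in recognizers.items():
        rec.train(train_imgs, labels)
        preds = np.array([rec.predict(img)[0] for img in test_imgs])
        print(f"{name}: {np.mean(preds == np.asarray(test_labels)):.2%}")
```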
11

Zhu, Xiaoliang, Shihao Ye, Liang Zhao, and Zhicheng Dai. "Hybrid Attention Cascade Network for Facial Expression Recognition." Sensors 21, no. 6 (March 12, 2021): 2003. http://dx.doi.org/10.3390/s21062003.

Abstract:
How to improve performance on the AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark problem for emotion recognition under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces in each frame of a video sequence are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned based on the positions of the facial feature points. Second, the aligned face images are input to a residual neural network to extract the spatial features of the corresponding facial expressions, and the spatial features are input to the hybrid attention module to obtain fused expression features. Finally, the fused features are input to a gated recurrent unit to extract the temporal features of facial expressions, and the temporal features are input to a fully connected layer to classify and recognize the expressions. Experiments using the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets obtained recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance comparable to state-of-the-art methods but also improves performance on the AFEW dataset by more than 2%, a significant gain for facial expression recognition in natural environments.
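A skeletal PyTorch sketch of the cascade's spatial-then-temporal structure (per-frame ResNet features followed by a GRU over the sequence); the hybrid attention module is omitted here, and the backbone choice and hidden size are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class SpatioTemporalFER(nn.Module):
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()              # 512-d spatial feature per frame
        self.backbone = backbone
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                    # clips: (B, T, 3, H, W) aligned faces
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        _, h = self.gru(feats)                   # last hidden state summarizes the clip
        return self.head(h[-1])

scores = SpatioTemporalFER()(torch.randn(2, 8, 3, 224, 224))  # -> (2, 7)
```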
12

Li, Zhenye, Hongyan Zou, Xinyan Sun, Tingting Zhu, and Chao Ni. "3D Expression-Invariant Face Verification Based on Transfer Learning and Siamese Network for Small Sample Size." Electronics 10, no. 17 (September 1, 2021): 2128. http://dx.doi.org/10.3390/electronics10172128.

Abstract:
Three-dimensional (3D) face recognition has become a trending research direction in both industry and academia. However, traditional facial recognition methods carry high computational costs and face data storage costs. Deep learning has led to a significant improvement in the recognition rate, but small sample sizes present a new problem. In this paper, we present an expression-invariant 3D face recognition method based on transfer learning and Siamese networks that can resolve the small sample size issue. First, a landmark detection method utilizing the shape index was employed for facial alignment. Then, a convolutional neural network (CNN) was constructed with transfer learning and trained on the aligned 3D facial data, enabling the CNN to recognize faces regardless of facial expression. The trained CNN weights were then shared by a Siamese network to build a 3D facial recognition model that can identify faces even with a small sample size. Our experimental results showed that the proposed method reached a recognition rate of 0.977 on the FRGC database, and the network can be used for facial recognition with a single sample.
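A hedged sketch of the Siamese arrangement described above: two inputs pass through one shared embedding network (which could be the transfer-learned CNN), and a contrastive loss pulls matching pairs together. The margin value and loss form are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseVerifier(nn.Module):
    """Two inputs share one embedding network, e.g. a CNN pretrained via transfer learning."""
    def __init__(self, embed_net):
        super().__init__()
        self.embed = embed_net

    def forward(self, a, b):
        # small distance -> likely the same person
        return F.pairwise_distance(self.embed(a), self.embed(b))

def contrastive_loss(dist, same, margin=1.0):
    # same = 1 for matching pairs, 0 for non-matching pairs
    return (same * dist.pow(2) + (1 - same) * F.relu(margin - dist).pow(2)).mean()
```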
13

Schwartz, Emily, Kathryn O’Nell, Rebecca Saxe, and Stefano Anzellotti. "Challenging the Classical View: Recognition of Identity and Expression as Integrated Processes." Brain Sciences 13, no. 2 (February 10, 2023): 296. http://dx.doi.org/10.3390/brainsci13020296.

Abstract:
Recent neuroimaging evidence challenges the classical view that face identity and facial expression are processed by segregated neural pathways, showing that information about identity and expression are encoded within common brain regions. This article tests the hypothesis that integrated representations of identity and expression arise spontaneously within deep neural networks. A subset of the CelebA dataset is used to train a deep convolutional neural network (DCNN) to label face identity (chance = 0.06%, accuracy = 26.5%), and the FER2013 dataset is used to train a DCNN to label facial expression (chance = 14.2%, accuracy = 63.5%). The identity-trained and expression-trained networks each successfully transfer to labeling both face identity and facial expression on the Karolinska Directed Emotional Faces dataset. This study demonstrates that DCNNs trained to recognize face identity and DCNNs trained to recognize facial expression spontaneously develop representations of facial expression and face identity, respectively. Furthermore, a congruence coefficient analysis reveals that features distinguishing between identities and features distinguishing between expressions become increasingly orthogonal from layer to layer, suggesting that deep neural networks disentangle representational subspaces corresponding to different sources.
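For reference, the congruence coefficient used in this kind of analysis is equivalent to the cosine between two feature-weight vectors; a one-function sketch (an illustration, not the authors' code):

```python
import numpy as np

def congruence_coefficient(a, b):
    # Tucker's congruence coefficient; values near 0 mean near-orthogonal features
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```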
14

Ariff, F. N. M., H. Jaafar, S. N. H. Jusoh, and N. A. F. Haris. "Single and Multiface Detection and Recognition System." Journal of Physics: Conference Series 2312, no. 1 (August 1, 2022): 012036. http://dx.doi.org/10.1088/1742-6596/2312/1/012036.

Abstract:
Face detection has drawn the interest of numerous research groups due to its vast application in domains such as surveillance and security systems, human-computer interaction, and many more. Face identification is an important phase that involves several factors such as lighting, facial expression, and ageing effects. It is made tougher by the fact that detecting and distinguishing a single face at a time takes a lot of time, and most existing technology cannot accurately detect many faces simultaneously. This study therefore presents a system that can recognize and identify multiple face images simultaneously, with various expressions. The face-recognition procedure consists of data gathering, face detection, feature extraction, and classification. The face dataset is obtained from 10 participants with varied backgrounds and expressions. The Viola-Jones technique, together with a thresholding technique, is used in face detection to detect the faces present while removing the unnecessary background, which further reduces face recognition processing time. Principal Component Analysis (PCA) is then employed to extract features while maintaining as much information as possible from the large image dataset. After formulating each face's representation, a classification process recognizes the identities of the users' faces; here, a non-parametric classifier, the Support Vector Machine (SVM), is applied. Conclusively, the system is able to detect around 90 percent of multiple-face users in different conditions.
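The detect-then-classify pipeline described above maps directly onto standard library calls; a minimal sketch, assuming 64x64 grayscale crops, 50 principal components, and a linear kernel (all parameters are assumptions):

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr, size=(64, 64)):
    # Viola-Jones detection of every face in the frame, cropped and resized
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], size) for (x, y, w, h) in boxes]

# PCA compresses the flattened crops; a linear SVM classifies identities.
pca = PCA(n_components=50)
svm = SVC(kernel="linear")
# svm.fit(pca.fit_transform(X_train), y_train)   # X_train: (n, 64*64) flattened crops
# ids = svm.predict(pca.transform(X_test))
```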
15

Reney, Dolly. "Review on Human face and Expression Recognition." CSVTU International Journal of Biotechnology Bioinformatics and Biomedical 3, no. 3 (February 18, 2019): 31–40. http://dx.doi.org/10.30732/ijbbb.20180303001.

Abstract:
There are different popular algorithms and techniques available for implementing face and expression recognition, each with its own advantages and disadvantages. Some of the algorithms improve the efficiency of face and expression recognition under varying illumination and expression conditions of the input source. The main steps of face recognition are feature representation and classification. Different authors have described different novel approaches to face and emotion recognition. The present review paper describes the different methods and techniques used to identify a person from facial expressions, and to recognize a person's emotion from their voice.
16

Huh, Kyung Moo. "A Face Expression Recognition Method using Histograms." Journal of Institute of Control, Robotics and Systems 20, no. 5 (May 1, 2014): 520–25. http://dx.doi.org/10.5302/j.icros.2014.14.9030.

17

Wang, Han, Tien C. Bau, and Glenn Healey. "Expression-invariant face recognition in hyperspectral images." Optical Engineering 53, no. 10 (October 3, 2014): 103102. http://dx.doi.org/10.1117/1.oe.53.10.103102.

18

Zhang, Tingxuan. "Face Expression Recognition Based on Deep Learning." Journal of Physics: Conference Series 1486 (April 2020): 042048. http://dx.doi.org/10.1088/1742-6596/1486/4/042048.

19

ROSLAN, NURUL ATIFAH, HAMIMAH UJIR, and IRWANDI HIPNI MOHAMAD HIPINY. "3D Face Recognition Analysis Using Random Forest." Trends in Undergraduate Research 2, no. 2 (December 31, 2019): c1–7. http://dx.doi.org/10.33736/tur.1981.2019.

Abstract:
Face recognition is an emerging field due to technological advances in camera hardware and its application in various areas such as the commercial and security sectors. Although existing works in 3D face recognition perform well, a similar experimental setting across classifiers is hard to find, particularly one that includes the Random Forest classifier, whose outcome is the aggregation of the classifications from individual decision trees. This paper presents 3D facial recognition using the Random Forest method on the BU-3DFE database, which consists of basic facial expressions. Work using other classifiers, such as a Neural Network (NN) and a Support Vector Machine (SVM), under a similar experimental setting is also presented. The Random Forest approach yielded a recognition rate of 94.71%, an encouraging result compared to NN and SVM. In addition, the experiment shows that the fear expression is distinctive for each person, given the high confidence rate (82%) for subjects with a fear expression; there is therefore a lower chance of mistakenly recognizing someone showing fear.
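A minimal harness for the kind of like-for-like comparison the paper argues for, using scikit-learn; the hyperparameters and 5-fold protocol are assumptions, and the 3D feature extraction step is left abstract.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y):
    # X: per-face 3D feature vectors (e.g., derived from BU-3DFE scans); y: subject labels
    models = {
        "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "SVM": SVC(kernel="rbf"),
        "NN": MLPClassifier(hidden_layer_sizes=(128,), max_iter=1000),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)   # same split for every classifier
        print(f"{name}: mean accuracy {scores.mean():.2%}")
```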
20

Wu, Zhaoqi, Reziwanguli Xiamixiding, Atul Sajjanhar, Juan Chen, and Quan Wen. "Image Appearance-Based Facial Expression Recognition." International Journal of Image and Graphics 18, no. 02 (April 2018): 1850012. http://dx.doi.org/10.1142/s0219467818500122.

Abstract:
We investigate facial expression recognition (FER) based on image appearance, using state-of-the-art classification approaches, and compare different ways to preprocess face images. First, region-of-interest (ROI) images are obtained by extracting the facial ROI from raw images. FER on ROI images is used as the benchmark and compared with FER on difference images, which are obtained by computing the difference between the ROI images of neutral and peak facial expressions. FER is also evaluated on images obtained by applying the local binary pattern (LBP) operator to ROI images. Further, we investigate different contrast enhancement operators for preprocessing, namely the histogram equalization (HE) approach and a brightness-preserving approach to histogram equalization. The classification experiments are performed with a convolutional neural network (CNN) and a pre-trained deep learning model. All experiments are performed on three public face databases: Cohn–Kanade (CK+), JAFFE, and FACES.
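A hedged sketch of the image representations compared above (the brightness-preserving equalization variant is omitted, and the LBP parameters are assumptions):

```python
import cv2
from skimage.feature import local_binary_pattern

def preprocessing_variants(roi_neutral, roi_peak):
    # roi_*: grayscale uint8 facial ROI crops of the same size
    return {
        "roi": roi_peak,                                           # benchmark input
        "difference": cv2.absdiff(roi_peak, roi_neutral),          # peak minus neutral
        "lbp": local_binary_pattern(roi_peak, P=8, R=1, method="uniform"),
        "hist_eq": cv2.equalizeHist(roi_peak),                     # contrast enhancement
    }
```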
21

Barabanschikov, V. A., O. A. Korolkova, and E. A. Lobodinskaya. "Recognition of facial expressions during step-function stroboscopic presentation." Experimental Psychology (Russia) 11, no. 4 (2018): 50–69. http://dx.doi.org/10.17759/exppsy.2018110405.

Abstract:
We studied the perception of human facial emotional expressions during step-function stroboscopic presentation of changing mimics. Consecutive stages of each of the six basic facial expressions were presented to the participants: neutral face (300 ms), expression of medium intensity (10-40 ms), intense expression (30-120 ms), expression of medium intensity (10-40 ms), neutral face (100 ms). An alternative forced choice task was used to categorize the facial expressions. The results were compared to previous studies (Barabanschikov, Korolkova, Lobodinskaya, 2015; 2016) conducted in the same paradigm but with boxcar-function change of the expression: neutral face, intense expression, neutral face. We found that the dynamics of facial expression recognition, as well as errors and recognition time, are almost identical under boxcar- and step-function presentation. One factor influencing the recognition rate is the proportion of presentation time of the static (neutral) and changing (facial expression) aspects of the stimulus. Under suboptimal conditions of facial expression perception (minimal presentation time of 10+30+10 ms and reduced intensity of expressions), we observed stroboscopic sensibilization, a previously described phenomenon of enhanced recognition of low-attractive expressions (disgust, sadness, fear and anger) found earlier under boxcar-function presentation. We confirmed the similarity of the influence of real and apparent motion on the recognition of basic facial emotional expressions.
22

CHEN, SHAOKANG, BRIAN C. LOVELL, and TING SHAN. "ROBUST ADAPTED PRINCIPAL COMPONENT ANALYSIS FOR FACE RECOGNITION." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 03 (May 2009): 491–520. http://dx.doi.org/10.1142/s0218001409007284.

Abstract:
Recognizing faces with uncontrolled pose, illumination, and expression is a challenging task due to the fact that features insensitive to one variation may be highly sensitive to the other variations. Existing techniques dealing with just one of these variations are very often unable to cope with the other variations. The problem is even more difficult in applications where only one gallery image per person is available. In this paper, we describe a recognition method, Adapted Principal Component Analysis (APCA), that can simultaneously deal with large variations in both illumination and facial expression using only a single gallery image per person. We have now extended this method to handle head pose variations in two steps. The first step is to apply an Active Appearance Model (AAM) to the non-frontal face image to construct a synthesized frontal face image. The second is to use APCA for classification robust to lighting and pose. The proposed technique is evaluated on three public face databases — Asian Face, Yale Face, and FERET Database — with images under different lighting conditions, facial expressions, and head poses. Experimental results show that our method performs much better than other recognition methods including PCA, FLD, PRM and LTP. More specifically, we show that by using AAM for frontal face synthesis from high pose angle faces, the recognition rate of our APCA method increases by up to a factor of 4.
23

Chen, Xiang Zhang, Zhi Hao Yin, Ze Su Cai, and Ding Ding Zhu. "Facial Expression Recognition of Home Service Robots." Applied Mechanics and Materials 411-414 (September 2013): 1795–800. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.1795.

Abstract:
It is of great significance for a home service robot to recognize the facial expressions of a human being. This paper proposes extracting facial expression features with PCA and recognizing facial expressions with distance-based hashing K-nearest neighbor classification. First, Haar-like features and the AdaBoost algorithm are adopted to detect a face and preprocess the face image; then PCA is applied to extract features of the facial expression, and those features are inserted into a hash table; finally, the facial expression is recognized with the K-nearest neighbor classification algorithm. As concluded, recognition efficiency can be greatly improved by reorganizing the feature database into hash tables.
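The abstract does not spell out the hash function, so the sketch below uses random-projection (LSH-style) buckets to narrow the search before an exact nearest-neighbour match over PCA features; it is an illustration of the general idea, not the paper's scheme.

```python
import numpy as np

class HashedNN:
    def __init__(self, dim, n_bits=8, seed=0):
        self.planes = np.random.default_rng(seed).standard_normal((n_bits, dim))
        self.buckets = {}

    def _key(self, v):
        return tuple((self.planes @ v > 0).astype(int))  # sign pattern = bucket key

    def fit(self, X, y):
        for v, label in zip(X, y):
            self.buckets.setdefault(self._key(v), []).append((v, label))

    def predict(self, v):
        cand = self.buckets.get(self._key(v), [])
        if not cand:  # empty bucket: fall back to scanning everything
            cand = [p for bucket in self.buckets.values() for p in bucket]
        dists = [np.linalg.norm(v - u) for u, _ in cand]
        return cand[int(np.argmin(dists))][1]  # 1-NN among bucket members
```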
24

Collin, Charles A., Justin Chamberland, Megan LeBlanc, Anna Ranger, and Isabelle Boutet. "Effects of Emotional Expression on Face Recognition May Be Accounted for by Image Similarity." Social Cognition 40, no. 3 (June 2022): 282–301. http://dx.doi.org/10.1521/soco.2022.40.3.282.

Abstract:
We examined the degree to which differences in face recognition rates across emotional expression conditions varied concomitantly with differences in mean objective image similarity. Effects of emotional expression on face recognition performance were measured via an old/new recognition paradigm in which stimuli at both learning and testing had happy, neutral, and angry expressions. Results showed an advantage for faces learned with neutral expressions, as well as for angry faces at testing. Performance data was compared to three quantitative image-similarity indices. Findings showed that mean human performance was strongly correlated with mean image similarity, suggesting that the former may be at least partly explained by the latter. Our findings sound a cautionary note regarding the necessity of considering low-level stimulus properties as explanations for findings that otherwise may be prematurely attributed to higher order phenomena such as attention or emotional arousal.
25

Ayeche, Farid, and Adel Alti. "Novel Descriptors for Effective Recognition of Face and Facial Expressions." Revue d'Intelligence Artificielle 34, no. 5 (November 20, 2020): 521–30. http://dx.doi.org/10.18280/ria.340501.

Abstract:
In this paper, we present a face recognition approach based on extended Histogram of Oriented Gradients (HOG) descriptors that extracts facial expression features, allowing classification of faces and facial expressions. The approach determines directional codes on the face image from edge response values to define the feature vector, whose size is reduced to improve the performance of the SVM (Support Vector Machine) classifier. Experiments are conducted on two public datasets: JAFFE for facial expression recognition and YALE for face recognition. Experimental results show that the proposed descriptor achieves a recognition rate of 92.12%, with execution times ranging from 0.4 s to 0.7 s across all evaluated databases, compared with existing works. The experiments demonstrate both the effectiveness and the efficiency of the proposed descriptor.
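For orientation, a plain HOG-plus-SVM baseline (not the paper's extended directional-code descriptor) can be sketched with scikit-image and scikit-learn; cell, block, and bin counts are assumptions:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_descriptor(gray):
    # histograms of edge orientations over local cells, concatenated per block
    return hog(gray, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

svm = SVC(kernel="linear")
# svm.fit(np.stack([hog_descriptor(f) for f in train_faces]), train_labels)
# pred = svm.predict([hog_descriptor(test_face)])
```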
26

Lin, Qing, Ruili He, and Peihe Jiang. "Feature Guided CNN for Baby’s Facial Expression Recognition." Complexity 2020 (November 22, 2020): 1–10. http://dx.doi.org/10.1155/2020/8855885.

Abstract:
State-of-the-art facial expression methods outperform human beings, thanks especially to the success of convolutional neural networks (CNNs). However, most existing works focus mainly on analyzing adult faces and ignore two important questions: how can we recognize facial expressions from a baby's face image, and how difficult is it? In this paper, we first introduce a new face image database, named BabyExp, which contains 12,000 images of babies younger than two years old, each labeled with one of three facial expressions (happy, sad, or normal). To the best of our knowledge, the proposed dataset is the first baby face dataset for analyzing baby face images; it is complementary to existing adult face datasets and can shed some light on baby face analysis. We also propose a feature-guided CNN method with a new loss function, called distance loss, to optimize interclass distance. To facilitate further research, we provide an expression recognition benchmark on the BabyExp dataset. Experimental results show that the proposed network achieves a recognition accuracy of 87.90% on BabyExp.
27

Owusu, Ebenezer, Ebenezer Komla Gavua, and Zhan Yong-Zhao. "Facial Expression Recognition – A Comprehensive Review." International Journal of Technology and Management Research 1, no. 4 (March 12, 2020): 29–46. http://dx.doi.org/10.47127/ijtmr.v1i4.36.

Abstract:
In this paper, we provide a comprehensive review of modern facial expression recognition systems, covering the history of the technology as well as its current status in terms of accomplishments and challenges. First, we highlight some modern applications of the technology. The best methods of face detection, an essential component of any automatic facial expression system, are also discussed, as are Facial Action Coding Systems, the cumulative database of research and development on micro-expressions within the behavioral sciences. Various facial expression databases and the types of recognition are then explained in detail. Finally, we describe the procedures of facial expression recognition from feature extraction to classification, emphasizing modern and best-performing approaches. The challenges encountered when comparing results with other works are highlighted, and suggestions to alleviate these problems are provided. Keywords: FACS; expression recognition; spatial; spatio-temporal; expression classification
28

Sable, Archana H., Sanjay N. Talbar, and Haricharan Amarsing Dhirbasi. "EV-SIFT - An Extended Scale Invariant Face Recognition for Plastic Surgery Face Recognition." International Journal of Electrical and Computer Engineering (IJECE) 7, no. 4 (August 1, 2017): 1923. http://dx.doi.org/10.11591/ijece.v7i4.pp1923-1933.

Abstract:
Automatic recognition of people faces many challenging problems and has received much attention in recent years due to its many applications in different fields. Face recognition is one such challenging problem, and no single technique handles all variations in pose, expression, illumination, and ageing. Facial changes due to plastic surgery are an additional challenge that has arisen recently. This paper presents a new technique for accurate face recognition after plastic surgery. The technique uses Entropy-based SIFT (EV-SIFT) features for recognition: it extracts the key points and the volume of the scale-space structure for which the information rate is determined. Because entropy is a higher-order statistical feature, this minimizes the effect of uncertain variations in the face. The EV-SIFT features are fed to a support vector machine for classification. The normal SIFT feature extracts key points based on image contrast, and the V-SIFT feature extracts key points based on the volume of the structure, whereas EV-SIFT provides both contrast and volume information. This technique performs better than PCA-based, normal SIFT-based, and V-SIFT-based feature extraction.
29

Mallikarjuna, Basetty, M. Sethu Ram, and Supriya Addanke. "An Improved Face-Emotion Recognition to Automatically Generate Human Expression With Emoticons." International Journal of Reliable and Quality E-Healthcare 11, no. 1 (January 1, 2022): 1–18. http://dx.doi.org/10.4018/ijrqeh.314945.

Abstract:
A human face image naturally conveys expressions such as happiness or sadness; sometimes recognizing the expression in a facial image is complex because it combines two emotions. The existing literature covers face emotion classification and image recognition, and work on deep learning using convolutional neural networks (CNNs) makes face emotion recognition most useful for healthcare, though it involves some of the most complex of the existing algorithms. This paper improves human face emotion recognition and lets users generate emoticons of interest on their smartphones. Face emotion recognition using convolutional neural networks plays a major role in deep learning and artificial intelligence for healthcare services. Automatic facial emotion recognition consists of two stages: face detection, using an AdaBoost classifier algorithm, and emotion classification, which uses deep learning feature extraction (CNNs) to identify the seven emotions and generate emoticons.
30

Ramirez Rivera, Adin, Jorge Rojas Castillo, and Oksam Chae. "Local Directional Number Pattern for Face Analysis: Face and Expression Recognition." IEEE Transactions on Image Processing 22, no. 5 (May 2013): 1740–52. http://dx.doi.org/10.1109/tip.2012.2235848.

31

Wang, Yingying, Yibin Li, Yong Song, and Xuewen Rong. "Facial Expression Recognition Based on Auxiliary Models." Algorithms 12, no. 11 (October 31, 2019): 227. http://dx.doi.org/10.3390/a12110227.

Abstract:
In recent years, with the development of artificial intelligence and human-computer interaction, more attention has been paid to the recognition and analysis of facial expressions. Despite great success, many problems remain unsolved because facial expressions are subtle and complex, so facial expression recognition is still challenging. In most papers, the entire face image is chosen as the input. In daily life, however, people can perceive others' current emotions from only a few facial components (such as the eyes, mouth and nose), while other areas of the face (such as hair, skin tone, ears, etc.) play a smaller role in determining emotion. If the entire face image is the only input, the system will produce some unnecessary information and miss some important information during feature extraction. To solve this problem, this paper proposes a method that combines multiple sub-regions with the entire face image by weighting, which captures more of the important feature information and thereby improves recognition accuracy. Our proposed method was evaluated on four well-known publicly available facial expression databases: JAFFE, CK+, FER2013 and SFEW. The new method showed better performance than most state-of-the-art methods.
32

Kao, Chang Yi, and Chin Shyurng Fahn. "A Design of Face Detection and Facial Expression Recognition Techniques Based on Boosting Schema." Applied Mechanics and Materials 121-126 (October 2011): 617–21. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.617.

Abstract:
During the development of the facial expression classification procedure, we evaluate three machine learning methods. We combine AdaBoost algorithms (ABAs) with CARTs, which selects weak classifiers and integrates them into a strong classifier automatically. We present a highly automatic facial expression recognition system in which a face detection procedure first detects and locates human faces in image sequences acquired in real environments, with no need to label or choose characteristic blocks in advance. In the face detection procedure, geometric properties are applied to eliminate skin-color regions that do not belong to human faces. In the facial feature extraction procedure, binarization and edge detection are performed only on the relevant ranges of the eyes, mouth, and eyebrows to obtain 16 landmarks of a human face, which in turn produce 16 characteristic distances that represent an expression. The facial expression classification procedure employs an ABA to recognize six kinds of expressions. The performance of the system is very satisfactory, with a recognition rate of more than 90%.
33

SHERMINA, J., and V. VASUDEVAN. "RECOGNITION OF THE FACE IMAGES WITH OCCLUSION AND EXPRESSION." International Journal of Pattern Recognition and Artificial Intelligence 26, no. 03 (May 2012): 1256006. http://dx.doi.org/10.1142/s021800141256006x.

Abstract:
Face recognition, a kind of biometric identification studied in several fields such as computer vision, image processing, and pattern recognition, is a natural and direct biometric method. Face recognition technology has diverse potential applications in information security, law enforcement and surveillance, smart cards, access control, and more. Generally, image variations caused by a change of face identity are smaller than the variations among images of the same face under different illumination and viewing angles. Illumination and pose are the two major challenges among the several factors that influence face recognition; after them, the main factors affecting performance are occlusion and expression. To overcome these issues, we propose an efficient face recognition system that handles partial occlusion and expression. Similar blocks in the face image are identified, and occlusion is recovered using a block matching technique. This is combined with expression normalization based on an Empirical Mode Decomposition feature. Finally, the face is recognized using PCA. The implementation results show that our proposed PCA-based method recognizes face images effectively.
34

Peter, Marcella, Jacey-Lynn Minoi, and Suriani Ab Rahman. "Neutral expression synthesis using kernel active shape model." Indonesian Journal of Electrical Engineering and Computer Science 20, no. 1 (October 1, 2020): 150. http://dx.doi.org/10.11591/ijeecs.v20.i1.pp150-157.

Abstract:
This paper presents a modified kernel-based Active Shape Model for neutralizing and synthesizing facial expressions. In recent decades, facial identity and emotion studies have gained interest from researchers, especially work integrating human emotions and machine learning to improve the current lifestyle. Facial expressions are known to be associated with poor recognition rates in face recognition systems. In this research, a modified kernel-based Active Shape Model following a statistical approach is introduced to synthesize neutral (neutralized) expressions from expressive faces, with the aim of improving the face recognition rate. An experimental study conducted on 3D geometric facial datasets to evaluate the proposed method showed a significant improvement in recognition rates.
35

Barabanschikov, V. A., O. A. Korolkova, and E. A. Lobodinskaya. "Recognition of blurred images of facial emotional expression in apparent movement." Experimental Psychology (Russia) 8, no. 4 (2015): 5–29. http://dx.doi.org/10.17759/exppsy.2015080402.

Abstract:
We studied the influence of apparent (stroboscopic) movement on the perception of basic facial emotional expressions in defocused images. The varied factors were modality of expression, context, exposure time, and the degree of blurring of the face. We found that under stroboscopic exposure, high-attractive facial expressions (happiness, surprise) and a neutral face are perceived most adequately by observers, and the relative accuracy of their recognition does not change across stimulus situations. Adequate recognition of low-attractive expressions (disgust, sadness, fear and anger) depends on the exposure duration of the face and the extent of its blurring. At low (20 pixels) and intermediate (40 pixels) levels of blur and reduced exposure times (down to 100 or 50 ms), the relative accuracy of recognition falls (the effect of stroboscopic masking), but strong blurring (60 pixels) combined with the minimum exposure time (50 ms) increases the relative accuracy (the stroboscopic sensitization effect). The stroboscopic sensitization effect indicates a partial similarity between the influence of real and apparent changes in facial expressions on the recognition of emotional expression.
36

Grundmann, Felix, Kai Epstude, and Susanne Scheibe. "Face masks reduce emotion-recognition accuracy and perceived closeness." PLOS ONE 16, no. 4 (April 23, 2021): e0249792. http://dx.doi.org/10.1371/journal.pone.0249792.

Abstract:
Face masks became the symbol of the global fight against the coronavirus. While face masks’ medical benefits are clear, little is known about their psychological consequences. Drawing on theories of the social functions of emotions and rapid trait impressions, we tested hypotheses on face masks’ effects on emotion-recognition accuracy and social judgments (perceived trustworthiness, likability, and closeness). Our preregistered study with 191 German adults revealed that face masks diminish people’s ability to accurately categorize an emotion expression and make target persons appear less close. Exploratory analyses further revealed that face masks buffered the negative effect of negative (vs. non-negative) emotion expressions on perceptions of trustworthiness, likability, and closeness. Associating face masks with the coronavirus’ dangers predicted higher perceptions of closeness for masked but not for unmasked faces. By highlighting face masks’ effects on social functioning, our findings inform policymaking and point at contexts where alternatives to face masks are needed.
37

Oruganti, Rakesh, and Namratha P. "Cascading Deep Learning Approach for Identifying Facial Expression YOLO Method." ECS Transactions 107, no. 1 (April 24, 2022): 16649–58. http://dx.doi.org/10.1149/10701.16649ecst.

Abstract:
Face detection is one of the fundamental object detection tasks and is usually the first stage of facial recognition and identity verification. In recent years, deep learning algorithms have changed object detection dramatically. These algorithms can usually be divided into two groups: two-stage detectors such as Faster R-CNN, and single-stage detectors such as YOLO. While YOLO and its variants are less accurate than two-stage detection systems, they are considerably faster. YOLO works well on standard-sized objects but struggles to detect smaller ones. A face recognition system uses AI (Artificial Intelligence) to separate or verify a person's identity by analyzing their face. In this project, a single neural network predicts bounding boxes and class probabilities directly from full images in a single evaluation.
38

Babu Rajendra Prasad, S., and B. Sai Chandana. "Human Face Emotions Recognition from Thermal Images Using DenseNet." International journal of electrical and computer engineering systems 14, no. 2 (February 27, 2023): 155–67. http://dx.doi.org/10.32985/ijeces.14.2.5.

Abstract:
In the current scenario, face identification and recognition are important techniques in surveillance. The face is an essential biometric in humans, so face detection plays a major role in computer vision applications. Several face recognition and emotion classification approaches have been presented over the last few decades of research to improve face recognition rates for thermal pictures. However, in real time, lighting conditions can change due to several factors, such as time of capture, weather, etc., and because of variations in lighting intensity, facial expression recognition systems perform poorly. This paper proposes a model for human thermal face detection and expression classification involving four main steps. Initially, a Difference of Gaussians (DoG) filter is used to crop the input thermal images, which are then normalized with a median filter in the preprocessing step. EfficientNet is then used to extract features such as shape, location, and occurrences from the thermal face images. After that, human faces are detected with the YOLOv4 technique to improve emotion classification. Finally, the DenseNet technique classifies the faces into seven emotions: happy, sad, disgust, surprise, anger, fear, and neutral. According to experiments on the RGB-D-T database, the proposed method outperforms state-of-the-art techniques for face recognition on thermal pictures and classifies the expressions well. Accuracy, precision, recall, and f1-score metrics are used to assess the efficacy of the proposed methodology. The proposed models achieve a high classification accuracy of 95.97% on the RGB-D-T database, and the outcomes show good precision across face recognition tasks.
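The preprocessing step is the most directly reproducible part of the pipeline; a minimal sketch of a DoG band-pass followed by median filtering, where the Gaussian sigmas and kernel size are assumptions rather than the paper's values:

```python
import cv2

def preprocess_thermal(gray):
    # gray: uint8 thermal image. Band-pass with a Difference of Gaussians,
    # then suppress residual noise with a median filter.
    fine = cv2.GaussianBlur(gray, (0, 0), sigmaX=1.0)
    coarse = cv2.GaussianBlur(gray, (0, 0), sigmaX=2.0)
    return cv2.medianBlur(cv2.subtract(fine, coarse), 3)
```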
39

Kulkarni, Narayan, and Ashok V. Sutagundar. "Detection of Human Facial Parts Using Viola-Jones Algorithm in Group of Faces." International Journal of Applied Evolutionary Computation 10, no. 1 (January 2019): 39–48. http://dx.doi.org/10.4018/ijaec.2019010103.

Abstract:
Face detection is an image processing technique used in computer systems to detect faces in digital images. This article proposes an approach to detect faces and facial parts in an image of a group of people using the Viola-Jones algorithm. Face detection is used in face recognition and identification systems, and automatic face detection and recognition is a challenging, fast-growing research area with real-time applications such as CCTV surveillance, video tracking, facial expression recognition, gesture recognition, human-computer interaction, computer vision, and gender recognition. Various techniques and methods have been applied to face detection in computer systems. In the proposed system, the Viola-Jones algorithm is implemented to detect multiple faces and facial parts with a high rate of accuracy.
40

Gong, Ting, and Yu Biao Liu. "Human Face Expression Recognition Based on Feature Fusion." Applied Mechanics and Materials 536-537 (April 2014): 115–20. http://dx.doi.org/10.4028/www.scientific.net/amm.536-537.115.

Abstract:
The Gabor wavelet is an important technique widely used in image recognition areas such as human facial expression analysis. It effectively extracts the key texture features of a facial expression, but it does not take into account relative changes in the important characteristics at each feature point location. To recognize facial expression information, we therefore fuse in geometric features based on angle changes at key parts of the expressive face, and a radial basis function (RBF) neural network is designed as the classifier. Experimental results on a facial expression database indicate that the recognition rate with feature fusion is clearly superior to that of the traditional method.
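A hedged sketch of a Gabor texture-feature extractor of the kind described above, using OpenCV's Gabor kernels; the filter-bank parameters are assumptions, and the geometric (angle-based) features would be concatenated to this vector separately:

```python
import cv2
import numpy as np

def gabor_features(gray, ksize=31, wavelengths=(4, 8, 16), n_orient=8):
    # mean/std of each Gabor filter response, taken as texture features
    feats = []
    for lam in wavelengths:
        for k in range(n_orient):
            kern = cv2.getGaborKernel((ksize, ksize), lam / 2.0,
                                      k * np.pi / n_orient, lam, 0.5, 0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats += [resp.mean(), resp.std()]
    return np.array(feats)
```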
41

Aswini Priyanka, R., C. Ashwitha, R. Arun Chakravarthi, and R. Prakash. "Face Recognition Model Using Back Propagation." International Journal of Engineering & Technology 7, no. 3.34 (September 1, 2018): 237. http://dx.doi.org/10.14419/ijet.v7i3.34.18973.

Abstract:
In the scientific world, face recognition has become an important research topic. A face identification system is an application capable of verifying a human face from live video or digital images; one of the best methods is to compare a person's particular facial attributes with the images in a database. It is widely used in biometrics and security systems. Face identification used to be challenging because of variations in viewpoint and facial expression, but with deep learning neural networks in the technology stack it has become much easier to detect and recognize faces, and efficiency has increased dramatically. In this paper, the ORL database, which contains ten images each of forty people, is used to evaluate our methodology. We use a Back Propagation Neural Network (BPNN) as the deep learning model to recognize faces and increase the efficiency of the model compared to previously existing face recognition models.
42

Jeong, Kyungjoong, Jaesik Choi, and Gil-Jin Jang. "Facial Expression Recognition using Face Alignment and AdaBoost." Journal of the Institute of Electronics and Information Engineers 51, no. 11 (November 25, 2014): 193–201. http://dx.doi.org/10.5573/ieie.2014.51.11.193.

43

Correya, Minu, and Thippeswamy G. "Expression Invariant Face Recognition Using Convolutional Neural Networks." IJARCCE 8, no. 5 (May 30, 2019): 199–201. http://dx.doi.org/10.17148/ijarcce.2019.8538.

44

Biswas, Amrita, and M. K. Ghose. "Expression Invariant Face Recognition using DWT SIFT Features." International Journal of Computer Applications 92, no. 2 (April 18, 2014): 30–32. http://dx.doi.org/10.5120/15983-4901.

45

Erdogmus, Nesli, and Jean-Luc Dugelay. "3D Assisted Face Recognition: Dealing With Expression Variations." IEEE Transactions on Information Forensics and Security 9, no. 5 (May 2014): 826–38. http://dx.doi.org/10.1109/tifs.2014.2309851.

46

De Marsico, Maria, Michele Nappi, and Daniel Riccio. "FARO: FAce Recognition Against Occlusions and Expression Variations." IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 40, no. 1 (January 2010): 121–32. http://dx.doi.org/10.1109/tsmca.2009.2033031.

47

Liu Fu (刘芾), Li Maojun (李茂军), Hu Jianwen (胡建文), Xiao Yuhe (肖雨荷), and Qi Zhan (齐战). "Expression Recognition Based on Low Pixel Face Images." Laser & Optoelectronics Progress 57, no. 10 (2020): 101008. http://dx.doi.org/10.3788/lop57.101008.

48

Judith Leo, M., and S. Suchitra. "SVM Based Expression-Invariant 3D Face Recognition System." Procedia Computer Science 143 (2018): 619–25. http://dx.doi.org/10.1016/j.procs.2018.10.441.

49

Tawari, Ashish, and Mohan Manubhai Trivedi. "Face Expression Recognition by Cross Modal Data Association." IEEE Transactions on Multimedia 15, no. 7 (November 2013): 1543–52. http://dx.doi.org/10.1109/tmm.2013.2266635.

50

Burange, Mayur S. "Neuro Fuzzy Model for Human Face Expression Recognition." IOSR Journal of Computer Engineering 1, no. 2 (2012): 01–06. http://dx.doi.org/10.9790/0661-0120106.
