Journal articles on the topic "Facial expression"

Follow this link to see other types of publications on this topic: Facial expression.

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Consult the top 50 journal articles on the topic "Facial expression".

Next to every source in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever such details are available in the work's metadata.

Browse journal articles from a wide variety of disciplines and compile accurate bibliographies.

1

Chapre Lopita Choudhury, Harshada. "Emotion / Facial Expression Detection". International Journal of Science and Research (IJSR) 12, no. 5 (May 5, 2023): 1395–98. http://dx.doi.org/10.21275/sr23516180518.

2

Mehra, Shivam, Prabhat Parashar, Akshay Aggarwal and Deepika Rawat. "FACIAL EXPRESSION RECOGNITION". International Journal of Advanced Research 12, no. 01 (January 31, 2024): 1109–13. http://dx.doi.org/10.21474/ijar01/18230.

Abstract:
Facial Expression Recognition is a system which provides an interface for human-computer interaction. With the advancement of technology and the needs of the hour, such systems have earned the interest of researchers in psychology, medicine, computer science and similar fields, where their applications have been identified. A facial expression recognizer is an application which uses live data from a camera, or existing videos, to capture the expressions of the person in the video and represent them on the screen in the form of attractive emojis. Expressions form the basis of human communication and interaction, and they are used as a crucial tool to study behaviour in medicine and psychology to understand people's states of mind. The main objective in developing such a system was to classify facial expressions using a CNN algorithm, which is responsible for expression detection, and then to provide a corresponding emoticon relevant to the detected facial expression as the output.
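For readers who want to prototype the kind of pipeline this abstract describes (a CNN classifier whose predicted expression is mapped to an emoji), the sketch below shows the general shape in Python/Keras. It is not the authors' implementation: the architecture, the 48x48 grayscale input (typical of public FER datasets) and the label-to-emoji mapping are all illustrative assumptions.

```python
# Minimal sketch of a CNN expression classifier with an emoji mapping.
# Not the paper's implementation; architecture and label set are assumed.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 7  # assumed label set
EMOJI = {0: "angry", 1: "disgust", 2: "fear", 3: "happy",
         4: "sad", 5: "surprise", 6: "neutral"}  # stand-in for emoji glyphs

model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),               # 48x48 grayscale face crops
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=10)   # hypothetical training data
face = np.random.rand(1, 48, 48, 1).astype("float32")  # stand-in camera frame
probs = model.predict(face, verbose=0)
print(EMOJI[int(np.argmax(probs))])                 # expression mapped to an emoji
```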
3

XIONG, LEI, NANNING ZHENG, SHAOYI DU and JIANYI LIU. "FACIAL EXPRESSION SYNTHESIS BASED ON FACIAL COMPONENT MODEL". International Journal of Pattern Recognition and Artificial Intelligence 23, no. 03 (May 2009): 637–57. http://dx.doi.org/10.1142/s0218001409007235.

Abstract:
Statistical model based facial expression synthesis methods are robust and can easily be used in real environments. But human facial expressions are varied, and how to represent and synthesize expressions that are not included in the training set is an unresolved problem in statistical model based research. In this paper, we propose a two-step method. First, we propose a statistical appearance model, the facial component model (FCM), to represent faces. The model divides the face into seven components, and constructs one global shape model and seven local texture models separately. The motivation for the global shape + local texture strategy is that combining different components can generate more types of expression than the training set contains, while the global shape guarantees a "legal" result. Then a neighbor reconstruction framework is proposed to synthesize expressions. The framework estimates the target expression vector by a linear combination of neighbor subjects' expression vectors. This paper primarily contributes three things: first, the proposed method can synthesize a wider range of expressions than the training set contains. Second, experiments demonstrate that FCM is better than the standard AAM in face representation. Third, the neighbor reconstruction framework is very flexible: it can be used in multi-sample, multi-target and single-sample, single-target applications.
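The neighbor reconstruction step (estimating a target expression vector as a linear combination of neighbor subjects' expression vectors) reduces to a small least-squares problem. The sketch below only illustrates that idea; fitting the weights on neutral-face vectors and the variable names are assumptions, not the paper's exact formulation.

```python
# Neighbor reconstruction sketch: fit combination weights on known vectors,
# then reuse them to combine the neighbors' expression vectors.
import numpy as np

rng = np.random.default_rng(0)
D, K = 60, 5                                  # feature dimension, neighbors
neighbors_neutral = rng.normal(size=(K, D))   # neighbors' neutral-face vectors
neighbors_smile = rng.normal(size=(K, D))     # same neighbors, smiling
target_neutral = neighbors_neutral.mean(axis=0) + 0.01 * rng.normal(size=D)

# Solve target_neutral ~= neighbors_neutral.T @ w in the least-squares sense.
w, *_ = np.linalg.lstsq(neighbors_neutral.T, target_neutral, rcond=None)

# Transfer the weights to synthesize the target's unseen expression.
target_smile_estimate = w @ neighbors_smile
print(w.round(3), target_smile_estimate.shape)
```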
4

Prkachin, Kenneth M. "Assessing Pain by Facial Expression: Facial Expression as Nexus". Pain Research and Management 14, no. 1 (2009): 53–58. http://dx.doi.org/10.1155/2009/542964.

Abstract:
The experience of pain is often represented by changes in facial expression. Evidence of pain that is available from facial expression has been the subject of considerable scientific investigation. The present paper reviews the history of pain assessment via facial expression in the context of a model of pain expression as a nexus connecting internal experience with social influence. Evidence about the structure of facial expressions of pain across the lifespan is reviewed. Applications of facial assessment in the study of adult and pediatric pain are also reviewed, focusing on how such techniques facilitate the discovery and articulation of novel phenomena. Emerging applications of facial assessment in clinical settings are also described. Alternative techniques that have the potential to overcome barriers to the application of facial assessment arising out of its resource-intensiveness are described and evaluated, including recent work on computer-based automatic assessment.
5

Ekman, Paul. "Facial Appearance and Facial Expression". Facial Plastic Surgery Clinics of North America 2, no. 3 (August 1994): 235–39. http://dx.doi.org/10.1016/s1064-7406(23)00426-1.

6

Dewangan Asha Ambhaikar, Leelkanth. "Real Time Facial Expression Analysis Using PCA". International Journal of Science and Research (IJSR) 1, no. 2 (February 5, 2012): 27–30. http://dx.doi.org/10.21275/ijsr11120203.

7

Yagi, Satoshi, Yoshihiro Nakata, Yutaka Nakamura and Hiroshi Ishiguro. "Can an android’s posture and movement discriminate against the ambiguous emotion perceived from its facial expressions?" PLOS ONE 16, no. 8 (August 10, 2021): e0254905. http://dx.doi.org/10.1371/journal.pone.0254905.

Abstract:
Expressing emotions through various modalities is a crucial function not only for humans but also for robots. The mapping method from facial expressions to the basic emotions is widely used in research on robot emotional expressions. This method claims that there are specific facial muscle activation patterns for each emotional expression and that people perceive these emotions by reading these patterns. However, recent research on human behavior reveals that some emotional expressions, such as the emotion "intense", are difficult to judge as positive or negative from the facial expression alone. Nevertheless, it has not been investigated whether robots can also express ambiguous facial expressions with no clear valence, and whether the addition of body expressions can make the facial valence clearer to humans. This paper shows that an ambiguous facial expression of an android can be perceived more clearly by viewers when body postures and movements are added. We conducted three experiments as online surveys among North American residents with 94, 114 and 114 participants, respectively. In Experiment 1, by calculating the entropy of responses, we found that the facial expression "intense" was difficult to judge as positive or negative when participants were shown the facial expression alone. In Experiments 2 and 3, using ANOVA, we confirmed that participants were better at judging the facial valence when shown the whole body of the android, even though the facial expression was the same as in Experiment 1. These results suggest that facial and body expressions of robots should be designed jointly to achieve better communication with humans. To achieve smoother cooperative human-robot interaction, such as education by robots, emotion expression conveyed through a combination of the robot's face and body is necessary to convey the robot's intentions or desires to humans.
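The entropy measure mentioned for Experiment 1 is ordinary Shannon entropy over the distribution of positive/negative judgments; a minimal sketch, with invented response counts:

```python
# Shannon entropy of categorical judgments: higher entropy = more ambiguous.
# The response counts below are invented for illustration.
import numpy as np

def entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()                 # normalize, drop empty categories
    return float(-(p * np.log2(p)).sum())  # entropy in bits

print(entropy([50, 50]))  # maximally ambiguous valence: 1.0 bit
print(entropy([95, 5]))   # clear valence: ~0.29 bit
```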
8

Sadat, Mohammed Nashat. "Facial Emotion Recognition using Convolutional Neural Network". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 9, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33503.

Abstract:
Emotion is what humans express when conveying their feelings. It can be expressed through facial expression, body language and voice tone. The facial expression is a major channel for conveying emotion, since it is the most powerful, natural and universal signal of a human's emotional condition. However, different facial expressions can have similar patterns, and recognizing an expression with the naked eye can be confusing: for instance, afraid and surprised are very similar to one another, which can lead to confusion in determining the facial expression. Hence, this study aims to develop an application for emotion recognition that can recognize emotion based on facial expression in real time. A deep learning technique, the Convolutional Neural Network (CNN), is implemented in this study, and the MobileNet algorithm is deployed to train the recognition model. Four types of facial expressions are recognized: happy, sad, surprise, and disgust. Keywords: Facial Emotion Recognition, Deep Learning, CNN, Image Processing.
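The MobileNet-based training the abstract mentions is commonly realized as transfer learning from ImageNet weights; the sketch below shows that general pattern, not the study's actual setup. The input size, frozen backbone and four-class head (matching the listed expressions) are assumptions.

```python
# Transfer-learning sketch: MobileNetV2 backbone with a small 4-class head.
# An illustration of the usual pattern, not the paper's exact configuration.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False                       # freeze the pretrained features

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(4, activation="softmax"),   # happy, sad, surprise, disgust
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # hypothetical datasets
```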
9

Saha, Priya, Debotosh Bhattacharjee, Barin Kumar De and Mita Nasipuri. "Mathematical Representations of Blended Facial Expressions towards Facial Expression Modeling". Procedia Computer Science 84 (2016): 94–98. http://dx.doi.org/10.1016/j.procs.2016.04.071.

10

Ren, Zhuoyue. "Facial expression classification". Highlights in Science, Engineering and Technology 41 (March 30, 2023): 43–52. http://dx.doi.org/10.54097/hset.v41i.6741.

Abstract:
At present, emotion classification has become a hot topic in artificial intelligence pattern recognition. Facial expression recognition (FER) is indispensable for computers to understand the emotional information conveyed by expressions. In the past, extracting and classifying facial expressions with traditional features has not achieved satisfactory accuracy, so the classification of facial emotions is still a challenge. The model used in this paper is an existing one, MINI_XCEPTION, a dominant CNN framework that extracts features from images to identify and classify seven facial emotions. The model was trained on a dataset of people's facial expressions (from Kaggle) and shows a significant improvement over the previous model.
11

Taylor, Alisdair J. G., and Louise Bryant. "The Effect of Facial Attractiveness on Facial Expression Identification". Swiss Journal of Psychology 75, no. 4 (October 2016): 175–81. http://dx.doi.org/10.1024/1421-0185/a000183.

Abstract:
Abstract. Emotion perception studies typically explore how judgments of facial expressions are influenced by invariant characteristics such as sex or by variant characteristics such as gaze. However, few studies have considered the importance of factors that are not easily categorized as invariant or variant. We investigated one such factor, attractiveness, and the role it plays in judgments of emotional expression. We asked 26 participants to categorize different facial expressions (happy, neutral, and angry) that varied with respect to facial attractiveness (attractive, unattractive). Participants were significantly faster when judging expressions on attractive as compared to unattractive faces, but there was no interaction between facial attractiveness and facial expression, suggesting that the attractiveness of a face does not play an important role in the judgment of happy or angry facial expressions.
12

Lisetti, Christine L., and Diane J. Schiano. "Automatic facial expression interpretation". Facial Information Processing 8, no. 1 (May 17, 2000): 185–235. http://dx.doi.org/10.1075/pc.8.1.09lis.

Abstract:
We discuss here one of our projects, aimed at developing an automatic facial expression interpreter, mainly in terms of signaled emotions. We present some of the relevant findings on facial expressions from cognitive science and psychology that can be understood by and be useful to researchers in Human-Computer Interaction and Artificial Intelligence. We then give an overview of HCI applications involving automated facial expression recognition, survey some of the latest progress in this area achieved by various approaches in computer vision, and describe the design of our facial expression recognizer. We also give some background on our motivation for understanding facial expressions, and we propose an architecture for a multimodal intelligent interface capable of recognizing and adapting to computer users' affective states. Finally, we discuss current interdisciplinary issues and research questions which will need to be addressed for further progress to be made in the promising area of computational facial expression recognition.
13

de la Rosa, Stephan, Laura Fademrecht, Heinrich H. Bülthoff, Martin A. Giese and Cristóbal Curio. "Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition". Psychological Science 29, no. 8 (June 6, 2018): 1257–69. http://dx.doi.org/10.1177/0956797618765477.

Abstract:
Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression from the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results can be well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also rely on visual information alone.
14

Yang, Ruilin, and Limin Yan. "P‐5.3: Three‐dimensional Continuous Expression Synthesis Method Based on Facial Expression Feature Map". SID Symposium Digest of Technical Papers 55, S1 (April 2024): 898–901. http://dx.doi.org/10.1002/sdtp.17231.

Abstract:
With the development of virtual reality technology, three-dimensional (3D) images are widely used in various fields. However, the facial features extracted by most facial expression generation methods are not deeply explored. We propose a method for synthesizing 3D continuous facial expressions based on human expression feature maps, which includes an identity converter based on expression feature maps, a facial image generator, and linear interpolation of continuous expressions. Comparative experiments show that this method achieves good stability and fast speed in synthesizing continuous expressions.
15

Nomiya, Hiroki, Atsushi Morikuni and Teruhisa Hochin. "Unsupervised Emotional Scene Detection from Lifelog Videos Using Cluster Ensembles". International Journal of Software Innovation 1, no. 4 (October 2013): 1–15. http://dx.doi.org/10.4018/ijsi.2013100101.

Abstract:
An emotional scene detection method is proposed in order to retrieve impressive scenes from lifelog videos. The proposed method is based on facial expression recognition, considering that a wide variety of facial expressions can be observed in impressive scenes. Conventional facial expression recognition techniques, which focus on discriminating typical facial expressions, are inadequate for lifelog video retrieval because of the diversity of facial expressions. The authors thus propose a more flexible and efficient emotional scene detection method using unsupervised facial expression recognition based on cluster ensembles. The approach does not need predefined facial expressions and is able to detect emotional scenes containing a wide variety of facial expressions. The detection performance of the proposed method is evaluated through several emotional scene detection experiments.
16

Sohail, Muhammad, Ghulam Ali, Javed Rashid, Israr Ahmad, Sultan H. Almotiri, Mohammed A. AlGhamdi, Arfan A. Nagra and Khalid Masood. "Racial Identity-Aware Facial Expression Recognition Using Deep Convolutional Neural Networks". Applied Sciences 12, no. 1 (December 22, 2021): 88. http://dx.doi.org/10.3390/app12010088.

Abstract:
Multi-culture facial expression recognition remains challenging due to cross-cultural variations in the representation of facial expressions, caused by facial structure variations and culture-specific facial characteristics. In this research, a joint deep learning approach called the racial identity-aware deep convolutional neural network is developed to recognize multicultural facial expressions. In the proposed model, a pre-trained racial identity network learns the racial features. Then, the racial identity-aware network and the racial identity network jointly learn racial identity-aware facial expressions. By enforcing the marginal independence of facial expression and racial identity, the proposed joint learning approach is expected to be purer for the expression and robust to variations in facial structure and culture-specific facial characteristics. To assess the reliability of the proposed joint learning technique, extensive experiments were performed with and without racial identity features. Moreover, culture-wise facial expression recognition was performed to analyze the effect of inter-cultural variations in facial expression representation. A large-scale multi-culture dataset was developed by combining four facial expression datasets: JAFFE, TFEID, CK+ and RaFD. It contains facial expression images of Japanese, Taiwanese, American, Caucasian and Moroccan cultures. We achieved 96% accuracy with racial identity features and 93% accuracy without them.
17

Zulhijah Awang Jesemi, Dayang Nur, Hamimah Ujir, Irwandi Hipiny and Sarah Flora Samson Juan. "The analysis of facial feature deformation using optical flow algorithm". Indonesian Journal of Electrical Engineering and Computer Science 15, no. 2 (August 1, 2019): 769. http://dx.doi.org/10.11591/ijeecs.v15.i2.pp769-777.

Abstract:
Facial features deform according to the intended facial expression, and specific facial features are associated with specific facial expressions, e.g. happiness involves deformation of the mouth. This paper presents a study of facial feature deformation for each facial expression, using an optical flow algorithm with the face segmented into three different regions of interest. The deformation of facial features shows the relation between facial features and facial expressions. Based on the experiments, the deformations of the eyes and mouth are significant in all expressions except happy; for the happy expression, the cheeks and mouth are the significant regions. This work also suggests that the intensities of different facial features vary in how they contribute to the recognition of different facial expression intensities. The maximum magnitude across all expressions is shown by the mouth for the surprise expression, at 9×10^-4, while the minimum magnitude is shown by the mouth for the angry expression, at 0.4×10^-4.
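The per-region flow magnitudes reported above can be reproduced in spirit with dense optical flow. The sketch below uses OpenCV's Farnebäck algorithm on synthetic frames; the rectangular regions of interest are invented (a real system would place them from detected landmarks).

```python
# Sketch: dense optical flow between two face frames, then the mean flow
# magnitude inside hand-picked regions of interest. All data are synthetic.
import cv2
import numpy as np

prev = (np.random.rand(220, 180) * 255).astype(np.uint8)  # stand-in frame
curr = np.roll(prev, 2, axis=0)                           # simulated motion

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
magnitude = np.linalg.norm(flow, axis=2)      # per-pixel motion magnitude

rois = {"eyes":   (60, 100, 40, 140),         # (y0, y1, x0, x1), illustrative
        "cheeks": (110, 150, 30, 150),
        "mouth":  (150, 200, 60, 120)}
for name, (y0, y1, x0, x1) in rois.items():
    print(name, magnitude[y0:y1, x0:x1].mean())
```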
18

Hong, Yu-Jin, Sung Eun Choi, Gi Pyo Nam, Heeseung Choi, Junghyun Cho and Ig-Jae Kim. "Adaptive 3D Model-Based Facial Expression Synthesis and Pose Frontalization". Sensors 20, no. 9 (May 1, 2020): 2578. http://dx.doi.org/10.3390/s20092578.

Abstract:
Facial expressions are one of the important non-verbal ways used to understand human emotions during communication. Thus, acquiring and reproducing facial expressions is helpful in analyzing human emotional states. However, owing to complex and subtle facial muscle movements, facial expression modeling from images with face poses is difficult to achieve. To handle this issue, we present a method for acquiring facial expressions from a non-frontal single photograph using a 3D-aided approach. In addition, we propose a contour-fitting method that improves the modeling accuracy by automatically rearranging 3D contour landmarks corresponding to fixed 2D image landmarks. The acquired facial expression input can be parametrically manipulated to create various facial expressions through a blendshape or expression transfer based on the FACS (Facial Action Coding System). To achieve a realistic facial expression synthesis, we propose an exemplar-texture wrinkle synthesis method that extracts and synthesizes appropriate expression wrinkles according to the target expression. To do so, we constructed a wrinkle table of various facial expressions from 400 people. As one of the applications, we proved that the expression-pose synthesis method is suitable for expression-invariant face recognition through a quantitative evaluation, and showed the effectiveness based on a qualitative evaluation. We expect our system to be a benefit to various fields such as face recognition, HCI, and data augmentation for deep learning.
19

Ju, Wang, Ding Rui and Chun Yan Nie. "Research on the Facial Expression Feature Extraction of Facial Expression Recognition Based on MATLAB". Advanced Materials Research 1049-1050 (October 2014): 1522–25. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1522.

Abstract:
In today's highly developed age of information, communication is an essential part of interpersonal interaction, and as a carrier of information, expression is rich in human behavioral content. Facial expression recognition draws on many fields and is a new topic in pattern recognition. This paper mainly studies facial expression feature extraction based on MATLAB: using MATLAB software, expression features are extracted from a large number of facial expressions, so that different facial expressions can be classified more accurately.
20

Zhang, Yu, Kuo Yang, Xue Ying Deng and Ying Shi. "Research and Realization of Facial Expression Robot". Advanced Materials Research 433-440 (January 2012): 7413–19. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.7413.

Abstract:
By analysing the formation mechanism of human facial expressions and summarizing existing research on facial expression robots, this paper identifies three technical difficulties in developing such robots and proposes improvements to address them. On this basis, the paper presents a facial robot with eight facial expressions of basic emotions. In the mechanical part, a structure with 20 degrees of freedom is designed; in the control part, an SSC32 V2 board is used to coordinate the movements of 20 servos; in the simulation modelling part, a special silicone rubber material is developed, the soft part of which is used as the skin of the facial expression robot. The facial expressions substantially increase the realism of the robot simulation.
21

Yaermaimaiti, Yilihamu, Tusongjiang Kari and Guohang Zhuang. "Research on facial expression recognition based on an improved fusion algorithm". Nonlinear Engineering 11, no. 1 (January 1, 2022): 112–22. http://dx.doi.org/10.1515/nleng-2022-0015.

Abstract:
This article puts forward a facial expression recognition (FER) algorithm based on multi-feature fusion and a convolutional neural network (CNN) to address the fact that FER is susceptible to interference factors such as non-uniform illumination, which reduce the recognition rate of facial expressions. It starts by extracting the multi-layer representation information (asymmetric region local binary pattern [AR-LBP]) of facial expression images and cascading it to minimize the loss of facial expression texture information. In addition, an improved algorithm called divided local directional pattern (DLDP) is used to extract the original facial expression image features, which not only retains the original texture information but also reduces time consumption. With a weighted fusion of the features extracted by these two methods, new AR-LBP-DLDP local facial features are obtained. Later, a CNN is used to extract global features of facial expressions, and the local AR-LBP-DLDP features obtained by weighted fusion are cascaded and fused with the global features extracted by the CNN, producing the final facial expression features. Ultimately, the final facial expression features are fed into Softmax for training and classification. The results show that the proposed algorithm, with good robustness and real-time performance, effectively improves the recognition rate of facial expressions.
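The base ingredient of the AR-LBP features described above is the ordinary local binary pattern histogram. The sketch below extracts plain uniform-LBP features with scikit-image; the asymmetric-region and DLDP variants from the paper are not implemented here.

```python
# Sketch: uniform LBP histogram features for a grayscale face image.
# Plain LBP only; the paper's AR-LBP and DLDP variants are not shown.
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1                                   # 8 neighbors on a radius-1 circle
face = (np.random.rand(96, 96) * 255).astype(np.uint8)  # stand-in face crop

lbp = local_binary_pattern(face, P, R, method="uniform")
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
print(hist.round(3))                          # 10-bin texture feature vector
```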
22

Johnston, D. J., D. T. Millett, A. F. Ayoub and M. Bock. "Are Facial Expressions Reproducible?" Cleft Palate-Craniofacial Journal 40, no. 3 (May 2003): 291–96. http://dx.doi.org/10.1597/1545-1569_2003_040_0291_afer_2.0.co_2.

Abstract:
Objectives To determine the extent of reproducibility of five facial expressions. Design Thirty healthy Caucasian volunteers (15 males, 15 females) aged 21 to 30 years had 20 landmarks highlighted on the face with a fine eyeliner pencil. Subjects were asked to perform a sequence of five facial expressions that were captured by a three-dimensional camera system. Each expression was repeated after 15 minutes to investigate intrasession expression reproducibility. To investigate intersession expression reproducibility, each subject returned 2 weeks after the first session. A single operator identified 3-dimensional coordinate values of each landmark. A partial ordinary Procrustes analysis was used to adjust for differences in head posture between similar expressions. Statistical analysis was undertaken using analysis of variance (linear mixed effects model). Results Intrasession expression reproducibility was least between cheek puffs (1.12 mm) and greatest between rest positions (0.74 mm). The reproducibility of individual landmarks was expression specific. Except for the lip purse, the reproducibility of facial expressions was not statistically different within each of the two sessions. Rest position was most reproducible, followed by lip purse, maximal smile, natural smile, and cheek puff. Subjects did not perform expressions with the same degree of symmetry on each occasion. Female subjects demonstrated significantly better reproducibility with regard to the maximal smile than males (p = .036). Conclusions Under standardized conditions, intrasession expression reproducibility was high. Variation in expression reproducibility between sessions was minimal. The extent of reproducibility is expression specific. Differences in expression reproducibility exist between males and females.
23

Jiang, H., K. Huang, T. Mu, R. Zhang, T. O. Ting and C. Wang. "Robust One-Shot Facial Expression Recognition with Sunglasses". International Journal of Machine Learning and Computing 6, no. 2 (April 2016): 80–86. http://dx.doi.org/10.18178/ijmlc.2016.6.2.577.

24

Guojiang, Wang, and Yang Guoliang. "Facial Expression Recognition Using PCA and AdaBoost Algorithm". International Journal of Signal Processing Systems 7, no. 2 (March 2019): 73–77. http://dx.doi.org/10.18178/ijsps.7.2.73-77.

25

Kumagai, Kazumi, Kotaro Hayashi and Ikuo Mizuuchi. "Elicitation of Specific Facial Expression by Robot's Action". Abstracts of the international conference on advanced mechatronics: toward evolutionary fusion of IT and mechatronics: ICAM 2015.6 (2015): 53–54. http://dx.doi.org/10.1299/jsmeicam.2015.6.53.

26

Padmapriya K.C., Leelavathy V. and Angelin Gladston. "Automatic Multiface Expression Recognition Using Convolutional Neural Network". International Journal of Artificial Intelligence and Machine Learning 11, no. 2 (July 2021): 1–13. http://dx.doi.org/10.4018/ijaiml.20210701.oa8.

Abstract:
Human facial expressions convey a lot of information visually, and facial expression recognition plays a crucial role in human-machine interaction. Automatic facial expression recognition systems have many applications in understanding human behavior, detecting mental disorders, and generating synthetic human expressions. Recognition of facial expressions by computer with a high recognition rate is still a challenging task. Most of the methods utilized in the literature for automatic facial expression recognition are based on geometry and appearance. Facial expression recognition is usually performed in four stages: pre-processing, face detection, feature extraction, and expression classification. In this paper we applied various deep learning methods to classify the seven key human emotions: anger, disgust, fear, happiness, sadness, surprise and neutrality. The facial expression recognition system developed was experimentally evaluated on the FER dataset and achieved good accuracy.
27

Liang, Chengxu, and Jianshe Dong. "A Survey of Deep Learning-based Facial Expression Recognition Research". Frontiers in Computing and Intelligent Systems 5, no. 2 (September 1, 2023): 56–60. http://dx.doi.org/10.54097/fcis.v5i2.12445.

Abstract:
Facial expression is one of the ways of conveying emotion. Deep learning is used to analyze facial expressions in order to understand people's true feelings, and human-computer interaction is integrated into the process. However, in natural, real-world environments and under various kinds of interference (such as lighting, age and ethnicity), facial expression recognition faces many challenges. In recent years, with the development of artificial intelligence, scholars have increasingly studied facial expression recognition under interference, which has both advanced the theory and popularized its applications. Facial expression recognition identifies facial expressions in order to carry out emotion analysis, and emotion analysis can draw on facial expressions, speech, text, video and other signals; facial expression recognition can therefore be regarded as one research direction within emotion analysis, and this paper summarizes the field from that perspective. In the process of facial expression recognition, researchers usually try to combine multiple modalities of information such as voice, text, pictures and video. Given the differences between single-modal and multi-modal datasets, this paper analyzes static facial expression recognition, dynamic facial expression recognition and multi-modal fusion. This research has a wide range of applications, such as smart elderly care, medical research and the detection of fatigued driving.
28

BUCIU, IOAN, and IOAN NAFORNITA. "FEATURE EXTRACTION THROUGH CROSS-PHASE CONGRUENCY FOR FACIAL EXPRESSION ANALYSIS". International Journal of Pattern Recognition and Artificial Intelligence 23, no. 03 (May 2009): 617–35. http://dx.doi.org/10.1142/s021800140900717x.

Abstract:
Human face analysis has attracted a large number of researchers from various fields, such as computer vision, image processing, neurophysiology and psychology. One particular aspect of human face analysis is the facial expression recognition task. A novel method based on phase congruency is developed for extracting the facial features used in the facial expression classification procedure. Considering a set of image samples of humans expressing various expressions, this new approach computes the phase congruency map between the samples. The analysis is performed in the frequency space, where the similarity (or dissimilarity) between sample phases is measured to form discriminant features. The experiments were run using samples from two facial expression databases. To assess the method's performance, the technique is compared to state-of-the-art techniques utilized for classifying facial expressions, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), and Gabor jets. The features extracted by the aforementioned techniques are further classified using two classifiers: a distance-based classifier and a Support Vector Machine-based classifier. Experiments reveal superior facial expression recognition performance for the proposed approach with respect to the other techniques.
29

LEE, CHAN-SU, and DIMITRIS SAMARAS. "ANALYSIS AND CONTROL OF FACIAL EXPRESSIONS USING DECOMPOSABLE NONLINEAR GENERATIVE MODELS". International Journal of Pattern Recognition and Artificial Intelligence 28, no. 05 (July 31, 2014): 1456009. http://dx.doi.org/10.1142/s0218001414560096.

Abstract:
Facial expressions convey personal characteristics and subtle emotional states. This paper presents a new framework for modeling subtle facial motions of different people with different types of expressions from high-resolution facial expression tracking data to synthesize new stylized subtle facial expressions. A conceptual facial motion manifold is used for a unified representation of facial motion dynamics from three-dimensional (3D) high-resolution facial motions as well as from two-dimensional (2D) low-resolution facial motions. Variant subtle facial motions in different people with different expressions are modeled by nonlinear mappings from the embedded conceptual manifold to input facial motions using empirical kernel maps. We represent facial expressions by a factorized nonlinear generative model, which decomposes expression style factors and expression type factors from different people with multiple expressions. We also provide a mechanism to control the high-resolution facial motion model from low-resolution facial video sequence tracking and analysis. Using the decomposable generative model with a common motion manifold embedding, we can estimate parameters to control 3D high resolution facial expressions from 2D tracking results, which allows performance-driven control of high-resolution facial expressions.
30

Liang, Yanqiu. "Intelligent Emotion Evaluation Method of Classroom Teaching Based on Expression Recognition". International Journal of Emerging Technologies in Learning (iJET) 14, no. 04 (February 27, 2019): 127. http://dx.doi.org/10.3991/ijet.v14i04.10130.

Abstract:
To solve the problem of emotional loss in teaching and improve the teaching effect, an intelligent teaching method based on facial expression recognition was studied. The traditional active shape model (ASM) was improved to extract facial feature points. Facial expressions were identified using the geometric features of the face and a support vector machine (SVM). In the expression recognition process, facial geometry and SVM methods were used to generate expression classifiers. Results showed that the SVM method based on the geometric characteristics of facial feature points effectively realized the automatic recognition of facial expressions. Therefore, automatic classification of facial expressions is achieved, and the problem of emotional deficiency in intelligent teaching is effectively solved.
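The geometry-plus-SVM stage described above can be sketched with pairwise landmark distances as features. Everything below (landmark count, three emotion classes, random data) is an illustrative stand-in, not the paper's configuration.

```python
# Sketch: pairwise distances between facial landmarks as geometric features,
# classified with an SVM. Landmarks and labels are random stand-ins.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_samples, n_landmarks = 200, 20
landmarks = rng.normal(size=(n_samples, n_landmarks, 2))  # (x, y) points
labels = rng.integers(0, 3, size=n_samples)               # 3 assumed emotions

features = np.stack([pdist(lm) for lm in landmarks])      # all pairwise distances
clf = SVC(kernel="rbf").fit(features[:150], labels[:150])
print("held-out accuracy:", clf.score(features[150:], labels[150:]))
```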
31

Schmidt, Karen L., and Jeffrey F. Cohn. "Human facial expressions as adaptations: Evolutionary questions in facial expression research". American Journal of Physical Anthropology 116, S33 (2001): 3–24. http://dx.doi.org/10.1002/ajpa.20001.

32

Prkachin, Kenneth M. "Facial pain expression". Pain Management 1, no. 4 (July 2011): 367–76. http://dx.doi.org/10.2217/pmt.11.22.

33

Matsumoto, David, and Paul Ekman. "Facial expression analysis". Scholarpedia 3, no. 5 (2008): 4237. http://dx.doi.org/10.4249/scholarpedia.4237.

34

Zhao, Yue, and Jiancheng Xu. "Necessary Morphological Patches Extraction for Automatic Micro-Expression Recognition". Applied Sciences 8, no. 10 (October 3, 2018): 1811. http://dx.doi.org/10.3390/app8101811.

Abstract:
Micro-expressions are usually subtle and brief facial expressions that humans use to hide their true emotional states. In recent years, micro-expression recognition has attracted wide attention in the fields of psychology, mass media, and computer vision. The shortest micro-expression lasts only 1/25 s. Furthermore, unlike macro-expressions, micro-expressions have considerably low intensity and inadequate contraction of the facial muscles. Based on these characteristics, automatic micro-expression detection and recognition are great challenges in the field of computer vision. In this paper, we propose a novel automatic facial expression recognition framework based on necessary morphological patches (NMPs) to better detect and identify micro-expressions. A micro-expression is a subconscious facial muscle response: it is not controlled by the rational thought of the brain, and therefore it calls on a few facial muscles and has local properties. NMPs are the facial regions that must be involved when a micro-expression occurs. NMPs were screened by weighting the facial active patches instead of using the entire facial area holistically. Firstly, we manually define the active facial patches according to the facial landmark coordinates and the Facial Action Coding System (FACS). Secondly, we use an LBP-TOP descriptor to extract features in these patches and the entropy-weight method to select NMPs. Finally, we obtain the weighted LBP-TOP features of these NMPs. We test on two recent publicly available datasets: the CASME II and SMIC databases, which provide sufficient samples. Compared with many recent state-of-the-art approaches, our method achieves more promising recognition results.
35

Benton, Christopher P. "Effect of Photographic Negation on Face Expression Aftereffects". Perception 38, no. 9 (January 1, 2009): 1267–74. http://dx.doi.org/10.1068/p6468.

Abstract:
Our visual representation of facial expression is examined in this study: is this representation built from edge information, or does it incorporate surface-based information? To answer this question, photographic negation of grey-scale images is used. Negation preserves edge information whilst disrupting the surface-based information. In two experiments visual aftereffects produced by prolonged viewing of images of facial expressions were measured. This adaptation-based technique allows a behavioural assessment of the characteristics encoded by the neural systems underlying our representation of facial expression. The experiments show that photographic negation of the adapting images results in a profound decrease of expression aftereffect. Our visual representation of facial expression therefore appears to not just be built from edge information, but to also incorporate surface information. The latter allows an appreciation of the 3-D structure of the expressing face that, it is argued, may underpin the subtlety and range of our non-verbal facial communication.
36

Li, Dejian, Wenqian Qi and Shouqian Sun. "Facial Landmarks and Expression Label Guided Photorealistic Facial Expression Synthesis". IEEE Access 9 (2021): 56292–300. http://dx.doi.org/10.1109/access.2021.3072057.

37

MK, Ashraful. "Expression of the Emotions in Pigeons". Journal of Ethology & Animal Science 2, no. 1 (January 9, 2019): 1–6. http://dx.doi.org/10.23880/jeasc-16000104.

Abstract:
Out of 16 observed behaviours, unexpressed emotions in the dove were shown through body expression in 13 cases (81.25%), namely incubating, tender, aggressive, feeding, regurgitation, flying, courtship, nesting, mating, post-mating, frightened, resting, and helpless, whereas facial expressions accounted for only 3 (18.75%). Age-related characteristics were incubating, aggressive, regurgitation, courtship, nesting, mating, post-mating, and shame (50%). Except for incubating, feeding, regurgitation, courtship, nesting, mating, and post-mating, the other behaviours depended on environmental factors (56.25%), while genetic characteristics accounted for 43.75%. Facial expressions were not seen owing to the pigeons' lack of facial muscles; only aggressive and mating behaviour were made prominent by the puffing of the feathers.
38

Rodríguez, Marcelo, and Sonia E. Rodríguez. "Expresiones faciales y contexto. Reglas sociales que condicionan la espontaneidad de la expresión facial de las emociones". Revista Mexicana de Investigación en Psicología 9, no. 1 (June 1, 2017): 55–72. http://dx.doi.org/10.32870/rmip.v9i1.584.

Abstract:
The social and communication sciences have advanced the study of human behavior by understanding the individual as part of systems. To this end, they observe behavior from a postmodern epistemological standpoint that involves systemic thinking, outlining a way of constructing reality in which every behavior conditions, and is conditioned by, the context in which it acquires meaning. The aim of this paper is to take a brief tour through conceptualizations and important studies that make it possible to understand the "spontaneous" phenomenon of emotional expression through gesture, and to observe how, at a meta-level, context regulates, limits and expands the gesture according to the rigidity or flexibility of its rules. The spontaneity of a gesture is thus regulated by a display rule, and we therefore find ourselves caught in a paradox.
39

Barabanschikov, V. A., O. A. Korolkova and E. A. Lobodinskaya. "Recognition of facial expressions during step-function stroboscopic presentation". Experimental Psychology (Russia) 11, no. 4 (2018): 50–69. http://dx.doi.org/10.17759/exppsy.2018110405.

Abstract:
We studied the perception of human facial emotional expressions during step-function stroboscopic presentation of changing mimics. Consecutive stages of each of the six basic facial expressions were presented to the participants: neutral face (300 ms), expression of medium intensity (10-40 ms), intense expression (30-120 ms), expression of medium intensity (10-40 ms), neutral face (100 ms). An alternative forced-choice task was used to categorize the facial expressions. The results were compared to previous studies (Barabanschikov, Korolkova, Lobodinskaya, 2015; 2016), conducted using the same paradigm but with a boxcar-function change of the expression: neutral face, intense expression, neutral face. We found that the dynamics of facial expression recognition, as well as the errors and recognition time, are almost identical under boxcar- and step-function presentation. One factor influencing the recognition rate is the proportion of presentation time of the static (neutral) and changing (facial expression) aspects of the stimulus. In suboptimal conditions of facial expression perception (minimal presentation time of 10+30+10 ms and reduced intensity of expressions) we observed stroboscopic sensibilization: a previously described phenomenon of enhanced recognition of low-attractiveness expressions (disgust, sadness, fear and anger), which had previously been found under boxcar-function presentation of expressions. We confirmed the similarity of the influence of real and apparent motion on the recognition of basic facial emotional expressions.
40

Santra, Arpita, Vivek Rai, Debasree Das and Sunistha Kundu. "Facial Expression Recognition Using Convolutional Neural Network". International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 1081–92. http://dx.doi.org/10.22214/ijraset.2022.42439.

Abstract:
Human-computer interaction has been an important field of study for ages. Humans share a universal and fundamental set of emotions which are exhibited through consistent facial expressions. If a computer could understand the feelings of humans, it could provide proper services based on the feedback received. An algorithm that performs detection, extraction, and evaluation of these facial expressions allows for automatic recognition of human emotion in images and videos. Automatic recognition of facial expressions can be an important component of natural human-machine interfaces; it may also be used in behavioural science and in clinical practice. In this model we give an overview of past work on emotion recognition using facial expressions, along with our approach to solving the problem. The approaches used for facial expression recognition include classifiers like the Support Vector Machine (SVM) and Convolutional Neural Network (CNN), which classify emotions based on certain regions of interest on the face, such as the lips, lower jaw, eyebrows and cheeks. The Kaggle facial expression dataset with seven facial expression labels (happy, sad, surprise, fear, anger, disgust, and neutral) is used in this project. The system achieved 56.77% accuracy and 0.57 precision on the testing dataset. Keywords: Facial Expression Recognition, Convolutional Neural Network, Deep Learning.
41

Jyothsna, K. Amrutha. "Currency Classification Using Deep Learning". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 11, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33812.

Abstract:
Among the uses of machine learning is the recognition of facial expressions. Based on the features derived from an image, it assigns the facial expression to one of several classes of facial expressions. The Convolutional Neural Network (CNN) is a classification technique that may also be used to identify patterns in an image. We used the CNN approach to identify facial expressions in our proposed study. To increase the precision of facial emotion recognition, the wavelet transform is applied after CNN processing. The facial expression image dataset obtained from Kaggle includes seven distinct facial expressions. The findings of the facial expression recognition experiment utilizing CNN and the wavelet transform show that the accuracy is improved and the output is audible. Index Terms: Facial and Convolutional Neural Nets
42

Hyung, Hyun-Jun, Han Ul Yoon, Dongwoon Choi, Duk-Yeon Lee and Dong-Wook Lee. "Optimizing Android Facial Expressions Using Genetic Algorithms". Applied Sciences 9, no. 16 (August 16, 2019): 3379. http://dx.doi.org/10.3390/app9163379.

Abstract:
Because the internal structure, degrees of freedom, skin control positions and ranges of android faces differ, it is very difficult to generate facial expressions by applying existing facial expression generation methods. In addition, facial expressions differ among robots because they are designed subjectively. To address these problems, we developed a system that can automatically generate robot facial expressions by combining an android, a recognizer capable of classifying facial expressions, and a genetic algorithm. We developed two types of android face robot (an older man and a young woman) that can simulate human skin movements, and selected 16 control positions to generate their facial expressions. The expressions were generated by combining the displacements of 16 motors. A chromosome comprising 16 genes (motor displacements) was generated by applying a real-coded genetic algorithm and subsequently used to generate robot facial expressions. To determine the fitness of the generated facial expressions, expression intensity was evaluated through a facial expression recognizer. The proposed system was used to generate six facial expressions (angry, disgust, fear, happy, sad, surprised); the results confirmed that they were more appropriate than manually generated facial expressions.
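The optimization loop described here (16 motor displacements scored by an expression recognizer) has the shape of a textbook real-coded GA. In the sketch below, a toy distance-to-target function stands in for the recognizer's intensity score; population size, crossover and mutation settings are assumptions.

```python
# Real-coded GA sketch: evolve 16 motor displacements toward a target pose.
# The fitness function is a toy stand-in for the expression recognizer.
import numpy as np

rng = np.random.default_rng(2)
GENES, POP, GENERATIONS = 16, 30, 100
target = rng.uniform(0, 1, GENES)             # toy "ideal" displacements

def fitness(pop):
    return -np.abs(pop - target).sum(axis=1)  # recognizer-score stand-in

pop = rng.uniform(0, 1, (POP, GENES))
for _ in range(GENERATIONS):
    fit = fitness(pop)
    parents = pop[np.argsort(fit)[-POP // 2:]]            # truncation selection
    mates = parents[rng.integers(0, len(parents), len(parents))]
    alpha = rng.uniform(size=parents.shape)
    children = alpha * parents + (1 - alpha) * mates      # blend crossover
    children += rng.normal(0, 0.02, children.shape)       # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]), 0, 1)   # keep in motor range

best = pop[np.argmax(fitness(pop))]
print("best fitness:", float(fitness(best[None])[0]))
```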
43

Kim, Jin-Chul, Min-Hyun Kim, Han-Enul Suh, Muhammad Tahir Naseem and Chan-Su Lee. "Hybrid Approach for Facial Expression Recognition Using Convolutional Neural Networks and SVM". Applied Sciences 12, no. 11 (May 28, 2022): 5493. http://dx.doi.org/10.3390/app12115493.

Abstract:
Facial expression recognition is very useful for effective human–computer interaction, robot interfaces, and emotion-aware smart agent systems. This paper presents a new framework for facial expression recognition by using a hybrid model: a combination of convolutional neural networks (CNNs) and a support vector machine (SVM) classifier using dynamic facial expression data. In order to extract facial motion characteristics, dense facial motion flows and geometry landmark flows of facial expression sequences were used as inputs to the CNN and SVM classifier, respectively. CNN architectures for facial expression recognition from dense facial motion flows were proposed. The optimal weighting combination of the hybrid classifiers provides better facial expression recognition results than individual classifiers. The system has successfully classified seven facial expressions signalling anger, contempt, disgust, fear, happiness, sadness and surprise classes for the CK+ database, and facial expressions of anger, disgust, fear, happiness, sadness and surprise for the BU4D database. The recognition performance of the proposed system is 99.69% for the CK+ database and 94.69% for the BU4D database. The proposed method shows state-of-the-art results for the CK+ database and is proven to be effective for the BU4D database when compared with the previous schemes.
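The "optimal weighting combination" of the two classifiers can be illustrated as a convex combination of their class-probability outputs, with the mixing weight chosen on validation data. All numbers below are invented stand-ins for the CNN softmax and SVM probability estimates.

```python
# Sketch: weighted fusion of two classifiers' class probabilities, choosing
# the mixing weight by grid search on validation data. Data are invented.
import numpy as np

rng = np.random.default_rng(3)
n_val, n_classes = 100, 7
y_val = rng.integers(0, n_classes, n_val)
p_cnn = rng.dirichlet(np.ones(n_classes), n_val)  # stand-in CNN softmax output
p_svm = rng.dirichlet(np.ones(n_classes), n_val)  # stand-in SVM probabilities

best_w, best_acc = 0.0, -1.0
for w in np.linspace(0, 1, 21):                   # grid over the fusion weight
    fused = w * p_cnn + (1 - w) * p_svm
    acc = (fused.argmax(axis=1) == y_val).mean()
    if acc > best_acc:
        best_w, best_acc = w, acc
print(f"best CNN weight: {best_w:.2f}, validation accuracy: {best_acc:.2f}")
```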
44

Onyema, Edeh Michael, Piyush Kumar Shukla, Surjeet Dalal, Mayuri Neeraj Mathur, Mohammed Zakariah and Basant Tiwari. "Enhancement of Patient Facial Recognition through Deep Learning Algorithm: ConvNet". Journal of Healthcare Engineering 2021 (December 6, 2021): 1–8. http://dx.doi.org/10.1155/2021/5196000.

Abstract:
The use of machine learning algorithms for facial expression recognition and patient monitoring is a growing area of research interest. In this study, we present a technique for facial expression recognition based on a deep learning algorithm: the convolutional neural network (ConvNet). Data for training were collected from the FER2013 dataset, which contains samples of the seven universal facial expressions. The results show that the presented technique improves facial expression recognition accuracy without encoding several layers of CNN that would lead to a computationally costly model. This study offers a solution to the issue of the high computational cost of facial expression recognition by providing a model close to the accuracy of the state-of-the-art model. The study concludes that deep learning-enabled facial expression recognition techniques enhance accuracy, improve facial recognition, and aid the interpretation of facial expressions and features, promoting efficiency and prediction in the health sector.
45

Chen, Xiang Zhang, Zhi Hao Yin, Ze Su Cai and Ding Ding Zhu. "Facial Expression Recognition of Home Service Robots". Applied Mechanics and Materials 411-414 (September 2013): 1795–800. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.1795.

Abstract:
It is of great significance for a home service robot to recognize the facial expressions of a human being. This paper suggests extracting features of facial expressions with PCA and recognizing facial expressions by distance-based hashing K-nearest neighbor classification. First, Haar-like features and the AdaBoost algorithm are adopted to detect a face and preprocess the face image; then PCA is applied to extract features of the facial expression, and those features are inserted into a hash table; finally, the facial expression is recognized by the K-nearest neighbor classification algorithm. In conclusion, recognition efficiency can be greatly improved by reconstructing the feature database into hash tables.
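The PCA-plus-nearest-neighbor stage maps directly onto a standard pipeline, sketched below; the hash-table lookup the paper adds for speed is omitted, and the data are random stand-ins.

```python
# Sketch: PCA features + K-nearest-neighbor expression classification.
# The hashing speed-up described in the paper is omitted; data are random.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 48 * 48))           # flattened face crops (stand-in)
y = rng.integers(0, 6, size=300)              # 6 assumed expression classes

model = make_pipeline(PCA(n_components=40),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X[:250], y[:250])
print("held-out accuracy:", model.score(X[250:], y[250:]))
```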
46

Qayyum, Huma, Muhammad Majid, Syed Muhammad Anwar and Bilal Khan. "Facial Expression Recognition Using Stationary Wavelet Transform Features". Mathematical Problems in Engineering 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/9854050.

Abstract:
Humans use facial expressions to convey personal feelings, and facial expressions need to be automatically recognized to design control and interactive applications. Accurate feature extraction is one of the key steps in an automatic facial expression recognition system, and current frequency-domain systems have not fully utilized facial elements and muscle movements for recognition. In this paper, the stationary wavelet transform is used to extract features for facial expression recognition, owing to its good localization characteristics in both spectral and spatial domains. More specifically, a combination of the horizontal and vertical subbands of the stationary wavelet transform is used, as these subbands contain muscle movement information for the majority of facial expressions. Feature dimensionality is further reduced by applying the discrete cosine transform to these subbands. The selected features are then passed into a feed-forward neural network trained through the back-propagation algorithm. Average recognition rates of 98.83% and 96.61% are achieved for the JAFFE and CK+ datasets, respectively, and an accuracy of 94.28% is achieved for a locally recorded MS-Kinect dataset. The proposed technique proves very promising for facial expression recognition when compared to other state-of-the-art techniques.
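The feature pipeline above (stationary wavelet subbands, then DCT to shrink dimensionality) can be sketched with PyWavelets and SciPy. The wavelet, crop size and number of retained coefficients are assumptions for illustration.

```python
# Sketch: stationary wavelet transform subbands + DCT dimensionality reduction.
# Wavelet choice, crop size and coefficient count are illustrative assumptions.
import numpy as np
import pywt
from scipy.fft import dctn

face = np.random.rand(64, 64)                 # stand-in face crop (64 = 2**6)
(cA, (cH, cV, cD)), = pywt.swt2(face, "haar", level=1)

features = []
for band in (cH, cV):                         # horizontal + vertical subbands
    coeffs = dctn(band, norm="ortho")
    features.append(coeffs[:8, :8].ravel())   # keep low-frequency coefficients
feature_vector = np.concatenate(features)
print(feature_vector.shape)                   # (128,) -> feed-forward net input
```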
47

Alrubaish, Hind A., and Rachid Zagrouba. "The Effects of Facial Expressions on Face Biometric System’s Reliability". Information 11, no. 10 (October 17, 2020): 485. http://dx.doi.org/10.3390/info11100485.

Abstract:
The human mood has a temporary effect on the face shape due to the movement of its muscles. Happiness, sadness, fear, anger, and other emotional conditions may affect the face biometric system's reliability. Most current studies on facial expressions are concerned with the accuracy of classifying subjects based on their expressions. This study investigated the effect of facial expressions on the reliability of a face biometric system to find out which facial expression puts the biometric system at greater risk. Moreover, it identified a set of facial features with the lowest facial deformation caused by facial expressions, to be generalized during the recognition process regardless of which facial expression is presented. To achieve the goal of this study, an analysis of 22 facial features was obtained between the neutral face and the six universal facial expressions. The results show that face biometric systems are affected by facial expressions: the disgust expression achieved the most dissimilar score, while the sad expression achieved the least dissimilar score. Additionally, the study identified the top five and top ten facial features with the lowest facial deformation across all facial expressions. Besides that, the relativity score showed less variance between samples when using the top facial features. The results of this study minimize the false rejection rate in face biometric systems and consequently make it possible to raise the system's acceptance threshold to maximize the intrusion detection rate without affecting user convenience.
48

Rasyid, Muhammad Furqan. "Comparison Of LBPH, Fisherface, and PCA For Facial Expression Recognition of Kindergarten Student". International Journal Education and Computer Studies (IJECS) 2, no. 1 (May 15, 2022): 19–26. http://dx.doi.org/10.35870/ijecs.v2i1.625.

Abstract:
Face recognition is a biometric personal identification method that has been gaining a lot of attention recently, with an increasing need for fast and accurate facial expression recognition systems. Facial expression recognition is a system used to identify which expression a person is displaying. In general, research on facial expression recognition has focused only on adult facial expressions. The recognition of human facial expressions is a very important field of research because it combines the study of feelings with computer applications such as human-computer interaction, data compression, face animation and face image retrieval from video. This research recognizes the facial expressions of young children, specifically kindergarten students. Before building the system, three methods, PCA, Fisherface and LBPH, were compared using our new database, which contains faces of individuals in a variety of poses and expressions and which will be used for facial expression recognition. An accuracy of 94% was obtained for Fisherface, 100% for LBPH, and 48.75% for PCA.
49

Som, P. M., P. J. Taub and B. N. Delman. "Revisiting the Embryology of the Facial Muscles, the Superficial Musculoaponeurotic System, and the Facial Nerve". Neurographics 11, no. 3 (May 1, 2021): 200–228. http://dx.doi.org/10.3174/ng.1900035.

Abstract:
The facial muscles are responsible for nonverbal expression, and the manner in which these muscles function to express various emotions is reviewed. How one recognizes these various facial expressions and how individuals can alter their facial expression are discussed. The methodology for cataloging facial expressions is also presented. The embryology of the facial muscles, the facial ligaments, and the supporting superficial musculoaponeurotic system, which magnifies the muscle movements, is reviewed, as is the embryology of the facial nerve, which innervates these muscles. A detailed MR imaging atlas of the facial muscles is also presented. Learning Objective: The reader will learn how the facial muscles develop and how they serve as the means of human nonverbal emotional expression. The anatomy of the facial ligaments and the superficial musculoaponeurotic system is also discussed.
50

Park, Sung, Seong Won Lee and Mincheol Whang. "The Analysis of Emotion Authenticity Based on Facial Micromovements". Sensors 21, no. 13 (July 5, 2021): 4616. http://dx.doi.org/10.3390/s21134616.

Abstract:
People tend to display fake expressions to conceal their true feelings. False expressions are observable by facial micromovements that occur for less than a second. Systems designed to recognize facial expressions (e.g., social robots, recognition systems for the blind, monitoring systems for drivers) may better understand the user’s intent by identifying the authenticity of the expression. The present study investigated the characteristics of real and fake facial expressions of representative emotions (happiness, contentment, anger, and sadness) in a two-dimensional emotion model. Participants viewed a series of visual stimuli designed to induce real or fake emotions and were signaled to produce a facial expression at a set time. From the participant’s expression data, feature variables (i.e., the degree and variance of movement, and vibration level) involving the facial micromovements at the onset of the expression were analyzed. The results indicated significant differences in the feature variables between the real and fake expression conditions. The differences varied according to facial regions as a function of emotions. This study provides appraisal criteria for identifying the authenticity of facial expressions that are applicable to future research and the design of emotion recognition systems.