Journal articles on the topic 'Facial expression – Evaluation'

To see the other types of publications on this topic, follow the link: Facial expression – Evaluation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Facial expression – Evaluation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Liang, Yanqiu. "Intelligent Emotion Evaluation Method of Classroom Teaching Based on Expression Recognition." International Journal of Emerging Technologies in Learning (iJET) 14, no. 04 (February 27, 2019): 127. http://dx.doi.org/10.3991/ijet.v14i04.10130.

Full text
Abstract:
To solve the problem of emotional loss in teaching and improve the teaching effect, an intelligent teaching method based on facial expression recognition was studied. The traditional active shape model (ASM) was improved to extract facial feature points. Facial expression was identified by using the geometric features of facial features and support vector machine (SVM). In the expression recognition process, facial geometry and SVM methods were used to generate expression classifiers. Results showed that the SVM method based on the geometric characteristics of facial feature points effectively realized the automatic recognition of facial expressions. Therefore, the automatic classification of facial expressions is realized, and the problem of emotional deficiency in intelligent teaching is effectively solved.
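For readers who want a concrete picture of the kind of pipeline this abstract describes, the sketch below classifies expressions from geometric landmark features with an SVM. It is illustrative only, not the authors' code: the landmark data, feature construction, and labels are hypothetical placeholders.

```python
# Illustrative sketch only (not the authors' code): classify facial
# expressions from geometric landmark features with an SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def geometric_features(landmarks):
    """Turn an (N, 2) array of facial landmark coordinates into a simple
    geometric feature vector: all pairwise inter-point distances."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    upper = np.triu_indices(len(landmarks), k=1)
    return dists[upper]

# Placeholder data: in a real pipeline the landmarks would come from an
# ASM-style detector and the labels from an annotated expression dataset.
landmark_sets = [np.random.rand(20, 2) for _ in range(100)]
labels = np.random.choice(["happy", "neutral", "sad"], size=100)

features = np.stack([geometric_features(lm) for lm in landmark_sets])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(features, labels)
```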
APA, Harvard, Vancouver, ISO, and other styles
2

Silvey, Brian A. "The Role of Conductor Facial Expression in Students’ Evaluation of Ensemble Expressivity." Journal of Research in Music Education 60, no. 4 (October 19, 2012): 419–29. http://dx.doi.org/10.1177/0022429412462580.

Full text
Abstract:
The purpose of this study was to explore whether conductor facial expression affected the expressivity ratings assigned to music excerpts by high school band students. Three actors were videotaped while portraying approving, neutral, and disapproving facial expressions. Each video was duplicated twice and then synchronized with one of three professional wind ensemble recordings. Participants ( N = 133) viewed nine 1-min videos of varying facial expressions, actors, and excerpts and rated each ensemble’s expressivity on a 10-point rating scale. Results of a one-way repeated measures ANOVA indicated that conductor facial expression significantly affected ratings of ensemble expressivity ( p < .001, partial η2 = .15). Post hoc comparisons revealed that participants’ ensemble expressivity ratings were significantly higher for excerpts featuring approving facial expressions than for either neutral or disapproving expressions. Participants’ mean ratings were lowest for neutral facial expression excerpts, indicating that an absence of facial affect influenced evaluations of ensemble expressivity most negatively.
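The analysis reported here is a one-way repeated-measures ANOVA; the sketch below shows how such a test can be run in Python with statsmodels on a hypothetical long-format table (participant, expression condition, expressivity rating). The data and column names are invented for illustration and are not the study's data.

```python
# Hypothetical illustration of a one-way repeated-measures ANOVA in Python;
# the data frame below is invented and is not the study's data.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

ratings = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "expression": ["approving", "neutral", "disapproving"] * 3,
    "rating": [8, 4, 5, 7, 3, 4, 9, 5, 6],
})
# One within-subject factor (conductor facial expression), one dependent
# variable (ensemble expressivity rating).
result = AnovaRM(ratings, depvar="rating", subject="participant",
                 within=["expression"]).fit()
print(result)
```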
APA, Harvard, Vancouver, ISO, and other styles
3

Hong, Yu-Jin, Sung Eun Choi, Gi Pyo Nam, Heeseung Choi, Junghyun Cho, and Ig-Jae Kim. "Adaptive 3D Model-Based Facial Expression Synthesis and Pose Frontalization." Sensors 20, no. 9 (May 1, 2020): 2578. http://dx.doi.org/10.3390/s20092578.

Full text
Abstract:
Facial expressions are one of the important non-verbal ways used to understand human emotions during communication. Thus, acquiring and reproducing facial expressions is helpful in analyzing human emotional states. However, owing to complex and subtle facial muscle movements, facial expression modeling from images with face poses is difficult to achieve. To handle this issue, we present a method for acquiring facial expressions from a non-frontal single photograph using a 3D-aided approach. In addition, we propose a contour-fitting method that improves the modeling accuracy by automatically rearranging 3D contour landmarks corresponding to fixed 2D image landmarks. The acquired facial expression input can be parametrically manipulated to create various facial expressions through a blendshape or expression transfer based on the FACS (Facial Action Coding System). To achieve a realistic facial expression synthesis, we propose an exemplar-texture wrinkle synthesis method that extracts and synthesizes appropriate expression wrinkles according to the target expression. To do so, we constructed a wrinkle table of various facial expressions from 400 people. As one of the applications, we proved that the expression-pose synthesis method is suitable for expression-invariant face recognition through a quantitative evaluation, and showed the effectiveness based on a qualitative evaluation. We expect our system to be a benefit to various fields such as face recognition, HCI, and data augmentation for deep learning.
APA, Harvard, Vancouver, ISO, and other styles
4

Mao, Jun. "Evaluation of Classroom Teaching Effect Based on Facial Expression Recognition." Journal of Contemporary Educational Research 5, no. 12 (December 23, 2021): 63–68. http://dx.doi.org/10.26689/jcer.v5i12.2855.

Full text
Abstract:
Classroom is an important environment for communication in teaching events. Therefore, both school and society should pay more attention to it. However, in the traditional teaching classroom, there is actually a relatively lack of communication and exchanges. Facial expression recognition is a branch of facial recognition technology with high precision. Even in large teaching scenes, it can capture the changes of students’ facial expressions and analyze their concentration accurately. This paper expounds the concept of this technology, and studies the evaluation of classroom teaching effects based on facial expression recognition.
APA, Harvard, Vancouver, ISO, and other styles
5

Zecca, M., T. Chaminade, M. A. Umilta, K. Itoh, M. Saito, N. Endo, Y. Mizoguchi, et al. "2A1-O10 Emotional Expression Humanoid Robot WE-4RII : Evaluation of the perception of facial emotional expressions by using fMRI." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2007 (2007): _2A1—O10_1—_2A1—O10_4. http://dx.doi.org/10.1299/jsmermd.2007._2a1-o10_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ramis, Silvia, Jose Maria Buades, and Francisco J. Perales. "Using a Social Robot to Evaluate Facial Expressions in the Wild." Sensors 20, no. 23 (November 24, 2020): 6716. http://dx.doi.org/10.3390/s20236716.

Full text
Abstract:
In this work, an affective computing approach is used to study human-robot interaction, with a social robot validating facial expressions in the wild. Our overall goal is to show that a social robot can interact convincingly with human users and recognize their emotions through facial expressions, contextual cues, and bio-signals. In particular, this work focuses on analyzing facial expressions. A social robot is used to validate a pre-trained convolutional neural network (CNN) that recognizes facial expressions. Facial expression recognition plays an important role in enabling robots to recognize and understand human emotion. Robots equipped with expression recognition capabilities can also be a useful tool for getting feedback from users. The designed experiment evaluates a trained neural network on facial expressions using a social robot in a real environment. In this paper, the CNN's accuracy is compared with that of human experts, and the interaction, attention, and difficulty of performing particular expressions are analyzed for 29 non-expert users. In the experiment, the robot leads the users to perform different facial expressions in a motivating and entertaining way. At the end of the experiment, the users are quizzed about their experience with the robot. Finally, a set of experts and the CNN classify the expressions. The results show that a social robot is an adequate interaction paradigm for the evaluation of facial expression.
APA, Harvard, Vancouver, ISO, and other styles
7

Asano, Hirotoshi, and Hideto Ide. "Facial-Expression-Based Arousal Evaluation by NST." Journal of Robotics and Mechatronics 22, no. 1 (February 20, 2010): 76–81. http://dx.doi.org/10.20965/jrm.2010.p0076.

Full text
Abstract:
Fatigue accumulation and poor attention can cause accidents in situations such as flight control and automobile operation. This has contributed to international interest in intelligent transport system (ITS) research and development. We evaluated human sleepiness and arousal based on facial thermal image analysis, examining nasal skin temperature at different levels of sleepiness during vehicle driving, and found that nasal skin temperature can replace facial expression in evaluating the transition to sleep.
APA, Harvard, Vancouver, ISO, and other styles
8

Mayer, C., M. Eggers, and B. Radig. "Cross-database evaluation for facial expression recognition." Pattern Recognition and Image Analysis 24, no. 1 (March 2014): 124–32. http://dx.doi.org/10.1134/s1054661814010106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Santra, Arpita, Vivek Rai, Debasree Das, and Sunistha Kundu. "Facial Expression Recognition Using Convolutional Neural Network." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 1081–92. http://dx.doi.org/10.22214/ijraset.2022.42439.

Full text
Abstract:
Human-computer interaction has been an important field of study for ages. Humans share a universal and fundamental set of emotions which are exhibited through consistent facial expressions. If computers could understand the feelings of humans, they could provide appropriate services based on the feedback received. An algorithm that performs detection, extraction, and evaluation of these facial expressions will allow for automatic recognition of human emotion in images and videos. Automatic recognition of facial expressions can be an important component of natural human-machine interfaces; it may also be used in behavioural science and in clinical practice. In this work we give an overview of past work on emotion recognition using facial expressions along with our approach towards solving the problem. Approaches to facial expression recognition use classifiers such as the Support Vector Machine (SVM) and the Convolutional Neural Network (CNN) to classify emotions based on certain regions of interest on the face, such as the lips, lower jaw, eyebrows, and cheeks. The Kaggle facial expression dataset with seven facial expression labels (happy, sad, surprise, fear, anger, disgust, and neutral) is used in this project. The system achieved 56.77% accuracy and 0.57 precision on the testing dataset. Keywords: Facial Expression Recognition, Convolutional Neural Network, Deep Learning.
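As a rough illustration of the CNN approach this abstract describes, the following sketch defines a small seven-class classifier for 48x48 grayscale images such as those in the Kaggle facial expression dataset. The architecture and hyperparameters are assumptions for illustration, not the authors' network.

```python
# Assumed, illustrative architecture for seven-class facial expression
# recognition on 48x48 grayscale images; not the system described above.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),  # happy, sad, surprise, fear, anger, disgust, neutral
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.1, epochs=30)
```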
APA, Harvard, Vancouver, ISO, and other styles
10

Mahmood, Mayyadah R., Maiwan B. Abdulrazaq, Subhi R. M. Zeebaree, Abbas Kh Ibrahim, Rizgar Ramadhan Zebari, and Hivi Ismat Dino. "Classification techniques’ performance evaluation for facial expression recognition." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 2 (February 1, 2020): 1176. http://dx.doi.org/10.11591/ijeecs.v21.i2.pp1176-1184.

Full text
Abstract:
Facial expression recognition, a recently developed method in computer vision, is founded upon the idea of analyzing the facial changes that are witnessed due to emotional impacts on an individual. This paper provides a performance evaluation of a set of supervised classifiers used for facial expression recognition based on a minimum set of features selected by chi-square. These features are the most iconic and influential ones that have tangible value for determining the result. The six highest-ranked features are applied to six classifiers, including multi-layer perceptron, support vector machine, decision tree, random forest, radial basis function, and k-nearest neighbour, to determine the most accurate one when the minimum number of features is utilized. This is done by analyzing and appraising the classifiers' performance. CK+ is used as the research dataset. Random forest, with a total accuracy of 94.23%, is shown to be the most accurate classifier.
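The evaluation setup described above (chi-square feature ranking feeding a classifier) can be sketched as below with scikit-learn; the feature matrix, labels, and fold count are placeholders rather than the paper's actual CK+ data.

```python
# Sketch of chi-square feature selection feeding a random forest,
# with placeholder data (not the CK+ features used in the paper).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X = np.random.rand(300, 50)            # chi2 requires non-negative features
y = np.random.randint(0, 7, size=300)  # seven expression labels

pipeline = make_pipeline(
    SelectKBest(score_func=chi2, k=6),  # keep the six top-ranked features
    RandomForestClassifier(n_estimators=100, random_state=0),
)
scores = cross_val_score(pipeline, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```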
APA, Harvard, Vancouver, ISO, and other styles
11

Koji, Tatsumi, Yoshinobu Izumi, Sung-jin Yu, Takaya Terada, Yoko Akiyama, Shin-ichi Takeda, Kimiko Ema, and Shigehiro Nishijima. "Evaluation of pain by analysis of facial expression." Journal of Life Support Engineering 16, Supplement (2004): 311–12. http://dx.doi.org/10.5136/lifesupport.16.supplement_311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Saeed, Anwar, Ayoub Al-Hamadi, Robert Niese, and Moftah Elzobi. "Frame-Based Facial Expression Recognition Using Geometrical Features." Advances in Human-Computer Interaction 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/408953.

Full text
Abstract:
To improve the human-computer interaction (HCI) to be as good as human-human interaction, building an efficient approach for human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness), with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; this knowledge is gained through human intervention and not available in real scenarios. Additionally, we provide a method to investigate the performance of the geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we have found that using eight facial points, we can achieve the state-of-the-art recognition rate. However, this state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by the errors in the facial point localization, especially for the expressions with subtle facial deformations.
APA, Harvard, Vancouver, ISO, and other styles
13

Bera, Chirag, Prathmesh Adhav, Shridhar Amati, and Navin Singhaniya. "Product Review Based on Facial Expression Detection." ITM Web of Conferences 44 (2022): 03061. http://dx.doi.org/10.1051/itmconf/20224403061.

Full text
Abstract:
The research presents a method for assessing the public acceptability of items based on their brand by analysing the facial expression of a consumer who intends to purchase the product at a supermarket. In such circumstances, facial expression detection is crucial in product evaluation. Emotions are conveyed through facial expressions. Sentiment analysis is a type of natural language processing that may be used for a variety of purposes, and several techniques for classifying human emotional states have been proposed. The extraction of feature points via a cascade classifier is used to identify facial expressions, which minimizes the time complexity. The owner can view the feedback on the reviewed product. This product ranking will assist the business owner in increasing product sales while also ensuring that the top products are available for customers.
APA, Harvard, Vancouver, ISO, and other styles
14

Syrjänen, Elmeri, Håkan Fischer, Marco Tullio Liuzza, Torun Lindholm, and Jonas K. Olofsson. "A Review of the Effects of Valenced Odors on Face Perception and Evaluation." i-Perception 12, no. 2 (March 2021): 204166952110095. http://dx.doi.org/10.1177/20416695211009552.

Full text
Abstract:
How do valenced odors affect the perception and evaluation of facial expressions? We reviewed 25 studies published from 1989 to 2020 on cross-modal behavioral effects of odors on the perception of faces. The results indicate that odors may influence facial evaluations and classifications in several ways. Faces are rated as more arousing during simultaneous odor exposure, and the rated valence of faces is affected in the direction of the odor valence. For facial classification tasks, in general, valenced odors, whether pleasant or unpleasant, decrease facial emotion classification speed. The evidence for valence congruency effects was inconsistent. Some studies found that exposure to a valenced odor facilitates the processing of a similarly valenced facial expression. The results for facial evaluation were mirrored in classical conditioning studies, as faces conditioned with valenced odors were rated in the direction of the odor valence. However, the evidence of odor effects was inconsistent when the task was to classify faces. Furthermore, using a z-curve analysis, we found clear evidence for publication bias. Our recommendations for future research include greater consideration of individual differences in sensation and cognition (e.g., differences in odor sensitivity related to age, gender, or culture), establishing standardized experimental assessments and stimuli, larger study samples, and embracing open research practices.
APA, Harvard, Vancouver, ISO, and other styles
15

Baranowski, Andreas M., and H. Hecht. "The Auditory Kuleshov Effect: Multisensory Integration in Movie Editing." Perception 46, no. 5 (December 5, 2016): 624–31. http://dx.doi.org/10.1177/0301006616682754.

Full text
Abstract:
Almost a hundred years ago, the Russian filmmaker Lev Kuleshov conducted his now famous editing experiment in which different objects were added to a given film scene featuring a neutral face. It is said that the audience interpreted the unchanged facial expression as a function of the added object (e.g., an added soup made the face express hunger). This interaction effect has been dubbed “Kuleshov effect.” In the current study, we explored the role of sound in the evaluation of facial expressions in films. Thirty participants watched different clips of faces that were intercut with neutral scenes, featuring either happy music, sad music, or no music at all. This was crossed with the facial expressions of happy, sad, or neutral. We found that the music significantly influenced participants’ emotional judgments of facial expression. Thus, the intersensory effects of music are more specific than previously thought. They alter the evaluation of film scenes and can give meaning to ambiguous situations.
APA, Harvard, Vancouver, ISO, and other styles
16

Rodriguez Medina, David Alberto, Benjamín Domínguez Trejo, Irving Armando Cruz Albarrán, Luis Morales Hernández, Gerardo Leija Alva, and Patricia Zamudio Silva. "Nasal thermal activity during voluntary facial expression in a patient with chronic pain and alexithymia." Pan American Journal of Medical Thermology 4 (June 21, 2018): 25. http://dx.doi.org/10.18073/pajmt.2017.4.25-31.

Full text
Abstract:
The presence of alexithymia (difficulty in recognizing and expressing emotions and feelings) is one of the psychological factors that has been studied in patients with chronic pain. Different psychological strategies have been used for its management; however, none of them regulates autonomic activity. We present the case of a 74-year-old female patient diagnosed with rheumatoid arthritis and alexithymia. For twelve years she has been taking pregabalin for pain. The main objective of this case study was to perform a biopsychosocial evaluation of pain (interleukin-6 concentration to evaluate inflammation, psychophysiological nasal thermal evaluation, and psychosocial measures associated with pain). She was presented with videos of affective scenes covering various emotions (joy, sadness, fear, pain, anger). The results show that, when the patient observes the videos, there is little nasal thermal variability. However, when the facial movements of an expression are induced for 10 seconds, a thermal variation of around 1 °C is reached. The induced facial expressions that decrease the temperature are those of anger and pain, which coincide with the priority needs of the patient according to the biopsychosocial profile. The results are discussed in the clinical context of using facial expressions to promote autonomic regulation in this population.
APA, Harvard, Vancouver, ISO, and other styles
17

Huang, Yunxin, Fei Chen, Shaohe Lv, and Xiaodong Wang. "Facial Expression Recognition: A Survey." Symmetry 11, no. 10 (September 20, 2019): 1189. http://dx.doi.org/10.3390/sym11101189.

Full text
Abstract:
Facial Expression Recognition (FER), as the primary processing method for non-verbal intentions, is an important and promising field of computer vision and artificial intelligence, and one of the subject areas of symmetry. This survey is a comprehensive and structured overview of recent advances in FER. We first categorise the existing FER methods into two main groups, i.e., conventional approaches and deep learning-based approaches. Methodologically, to highlight the differences and similarities, we propose a general framework of a conventional FER approach and review the possible technologies that can be employed in each component. As for deep learning-based methods, four kinds of neural network-based state-of-the-art FER approaches are presented and analysed. Besides, we introduce seventeen commonly used FER datasets and summarise four FER-related elements of datasets that may influence the choosing and processing of FER approaches. Evaluation methods and metrics are given in the later part to show how to assess FER algorithms, along with subsequent performance comparisons of different FER approaches on the benchmark datasets. At the end of the survey, we present some challenges and opportunities that need to be addressed in future.
APA, Harvard, Vancouver, ISO, and other styles
18

Leo, Marco, Pierluigi Carcagnì, Cosimo Distante, Pier Luigi Mazzeo, Paolo Spagnolo, Annalisa Levante, Serena Petrocchi, and Flavia Lecciso. "Computational Analysis of Deep Visual Data for Quantifying Facial Expression Production." Applied Sciences 9, no. 21 (October 25, 2019): 4542. http://dx.doi.org/10.3390/app9214542.

Full text
Abstract:
The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and get quick and objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production and most of the scientific literature aims at the easier task of recognizing if either a facial expression is present or not. Some attempts to face this challenging task exist but they do not provide a comprehensive study based on the comparison between human and automatic outcomes in quantifying children’s ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning. Finally, they generally focus only on a homogeneous (in terms of cognitive capabilities) group of individuals. To fill this gap, in this paper some advanced computer vision and machine learning strategies are integrated into a framework aimed to computationally analyze how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) with the aim of monitoring facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model the individual ability to produce facial expressions. Gathered computational outcomes have been correlated with the evaluation provided by psychologists and evidence has been given that shows how the proposed framework could be effectively exploited to deeply analyze the emotional competence of ASD children to produce facial expressions.
APA, Harvard, Vancouver, ISO, and other styles
19

Arima, Masakazu, and Kazuto Ikeda. "Evaluation of Ride Comfort using Facial-Expression Analysis Models." Journal of the Japan Society of Naval Architects and Ocean Engineers 2 (2005): 205–9. http://dx.doi.org/10.2534/jjasnaoe.2.205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Holden, E., G. Calvo, M. Collins, A. Bell, J. Reid, E. M. Scott, and A. M. Nolan. "Evaluation of facial expression in acute pain in cats." Journal of Small Animal Practice 55, no. 12 (October 30, 2014): 615–21. http://dx.doi.org/10.1111/jsap.12283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Fotios, S., H. Castleton, C. Cheal, and B. Yang. "Investigating the chromatic contribution to recognition of facial expression." Lighting Research & Technology 49, no. 2 (August 3, 2016): 243–58. http://dx.doi.org/10.1177/1477153515616166.

Full text
Abstract:
A pedestrian may judge the intentions of another person by their facial expression amongst other cues and aiding such evaluation after dark is one aim of road lighting. Previous studies give mixed conclusions as to whether lamp spectrum affects the ability to make such judgements. An experiment was carried out using conditions better resembling those of pedestrian behaviour, using as targets photographs of actors portraying facial expressions corresponding to the six universally recognised emotions. Responses were sought using a forced-choice procedure, under two types of lamp and with colour and grey scale photographs. Neither lamp type nor image colour was suggested to have a significant effect on the frequency with which the emotion conveyed by facial expression was correctly identified.
APA, Harvard, Vancouver, ISO, and other styles
22

Mohamad Nezami, Omid, Mark Dras, Stephen Wan, and Cecile Paris. "Image Captioning using Facial Expression and Attention." Journal of Artificial Intelligence Research 68 (August 6, 2020): 661–89. http://dx.doi.org/10.1613/jair.1.12025.

Full text
Abstract:
Benefiting from advances in machine vision and natural language processing techniques, current image captioning systems are able to generate detailed visual descriptions. For the most part, these descriptions represent an objective characterisation of the image, although some models do incorporate subjective aspects related to the observer’s view of the image, such as sentiment; current models, however, usually do not consider the emotional content of images during the caption generation process. This paper addresses this issue by proposing novel image captioning models which use facial expression features to generate image captions. The models generate image captions using long short-term memory networks applying facial features in addition to other visual features at different time steps. We compare a comprehensive collection of image captioning models with and without facial features using all standard evaluation metrics. The evaluation metrics indicate that applying facial features with an attention mechanism achieves the best performance, showing more expressive and more correlated image captions, on an image caption dataset extracted from the standard Flickr 30K dataset, consisting of around 11K images containing faces. An analysis of the generated captions finds that, perhaps unexpectedly, the improvement in caption quality appears to come not from the addition of adjectives linked to emotional aspects of the images, but from more variety in the actions described in the captions.
APA, Harvard, Vancouver, ISO, and other styles
23

Talluri, Kranthi Kumar, Marc-André Fiedler, and Ayoub Al-Hamadi. "Deep 3D Convolutional Neural Network for Facial Micro-Expression Analysis from Video Images." Applied Sciences 12, no. 21 (November 1, 2022): 11078. http://dx.doi.org/10.3390/app122111078.

Full text
Abstract:
Micro-expressions are involuntary expressions of human emotion that reflect genuine feelings which cannot be hidden. They are exhibited as facial expressions that last for a short duration and have very low intensity, which makes micro-expression recognition a challenging task. Recent research applying 3D convolutional neural networks (CNNs) to video-based micro-expression analysis has gained much popularity. For this purpose, both spatial and temporal features are of great importance for achieving high accuracy. A person's real, possibly suppressed emotions are valuable information for a variety of applications, such as security, psychology, neuroscience, medicine and many other disciplines. This paper proposes a 3D CNN model architecture which is able to extract spatial and temporal features simultaneously. The selection of the frame sequence plays a crucial role here, since the emotions are only distinctive in a subset of the frames. Thus, we employ a novel pre-processing technique to select the apex frame sequence from the entire video, where the timestamp of the most pronounced emotion is centered within this sequence. After an extensive evaluation including many experiments, the results show that train-test split evaluation is biased toward a particular split and cannot be recommended for small and imbalanced datasets. Instead, a stratified K-fold evaluation technique is used to evaluate the model, which proves much more appropriate for the three benchmark datasets CASME II, SMIC, and SAMM. Moreover, intra-dataset as well as cross-dataset evaluations were conducted in a total of eight different scenarios. For comparison purposes, two networks from the state of the art were reimplemented and compared with the presented architecture. In stratified K-fold evaluation, our proposed model outperforms both reimplemented state-of-the-art methods in seven out of eight evaluation scenarios.
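The stratified K-fold protocol the abstract recommends can be illustrated as follows; the features, labels, and stand-in classifier are hypothetical, and the authors' 3D CNN is not reproduced here.

```python
# Hedged sketch of stratified K-fold evaluation on a small, imbalanced
# dataset; the classifier is a trivial stand-in, not the proposed 3D CNN.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

X = np.random.rand(120, 64)             # e.g., per-clip feature vectors
y = np.random.randint(0, 3, size=120)   # emotion labels (imbalanced in practice)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_accuracies = []
for train_idx, test_idx in skf.split(X, y):
    clf = DummyClassifier(strategy="most_frequent")  # stand-in model
    clf.fit(X[train_idx], y[train_idx])
    fold_accuracies.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print("per-fold accuracy:", fold_accuracies)
```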
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Fowei, Bo Shen, Shaoyuan Sun, and Zidong Wang. "Improved GA and Pareto optimization-based facial expression recognition." Assembly Automation 36, no. 2 (April 4, 2016): 192–99. http://dx.doi.org/10.1108/aa-11-2015-110.

Full text
Abstract:
Purpose: The purpose of this paper is to improve the accuracy of facial expression recognition by using a genetic algorithm (GA) with an appropriate fitness evaluation function and a Pareto optimization model with two new objective functions. Design/methodology/approach: To achieve facial expression recognition with high accuracy, the Haar-like feature representation approach and the bilateral filter are first used to preprocess the facial image. Second, uniform local Gabor binary patterns are used to extract the facial features so as to reduce the feature dimension. Third, an improved GA and Pareto optimization approach are used to select the optimal significant features. Fourth, the random forest classifier is chosen to perform the feature classification. Subsequently, some comparative experiments are implemented. Finally, the conclusion is drawn and some future research topics are pointed out. Findings: The experimental results show that the proposed facial expression recognition algorithm outperforms those in the existing literature in terms of both accuracy and computational time. Originality/value: The GA and the Pareto optimization algorithm are combined to select the optimal significant features. To improve the accuracy of facial expression recognition, the GA is improved by adjusting an appropriate fitness evaluation function, and a new Pareto optimization model is proposed that contains two objective functions indicating the achievements in minimizing within-class variations and in maximizing between-class variations.
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Yong Sheng. "A Novel Real-Time Facial Expression Capturing Method." Applied Mechanics and Materials 273 (January 2013): 826–30. http://dx.doi.org/10.4028/www.scientific.net/amm.273.826.

Full text
Abstract:
In this paper, we propose a new method for capturing facial expressions in real time. The proposed method mainly includes two processes: 1) an online process and 2) an offline process. The offline process uses a human face database to build a 2D face model and a 3D shape model, and then trains an expression feature model. Afterwards, the online process extracts feature points from face images and then obtains the facial expression with an SVM classifier trained in the offline process. The main novelty of our method is that we propose an effective face detection approach and an optimal evaluation method for facial expression recognition. Experimental results show that our approach can capture facial expressions precisely in real time.
APA, Harvard, Vancouver, ISO, and other styles
26

Huang, Wei. "Elderly Depression Recognition Based on Facial Micro-Expression Extraction." Traitement du Signal 38, no. 4 (August 31, 2021): 1123–30. http://dx.doi.org/10.18280/ts.380423.

Full text
Abstract:
Depression leads to a high suicide rate and a high death rate. But the disease can be cured if recognized in time. At present, there are only a few low-precision methods for recognizing mental health or mental disorder. Therefore, this paper attempts to recognize elderly depression by extracting facial micro-expressions. Firstly, a micro-expression recognition model was constructed for elderly depression recognition. Then, a jump connection structure and a feature fusion module were introduced to VGG-16 model, realizing the extraction and classification of micro-expression features. After that, a quantitative evaluation approach was proposed for micro-expressions based on the features of action units, which improves the recognition accuracy of elderly depression expressions. Finally, the advanced features related to the dynamic change rate of depression micro-expressions were constructed, and subjected to empirical modal decomposition (EMD) and Hilbert analysis. The effectiveness of our algorithm was proved through experiments.
APA, Harvard, Vancouver, ISO, and other styles
27

Zhuang, Wei, Fanan Xing, Jili Fan, Chunming Gao, and Yunhong Zhang. "An Integrated Model for On-Site Teaching Quality Evaluation Based on Deep Learning." Wireless Communications and Mobile Computing 2022 (June 24, 2022): 1–13. http://dx.doi.org/10.1155/2022/9027907.

Full text
Abstract:
During on-site teaching for university students, the level of concentration of every student is an important indicator for the evaluation of teaching quality. Traditionally, teachers rely on subjective methods to observe students' learning status. Due to the volume of on-site crowds, teachers are unable to stay on top of the learning status of each student, and because the evaluation is subjective, the results are not precise. With the fast development of artificial intelligence and machine learning, it is possible to adopt deep learning technology to achieve scientific evaluation of classroom teaching quality. This paper proposes an integrated evaluation model based on deep learning technology, incorporating the YOLOX, Retinaface, and SCN models: the YOLOX model is used to detect the area of the students' upper body, the Retinaface model is adopted to assess the head-up rate, and the SCN model is used to recognize facial expressions. The experimental results show that our model can achieve 93.1% object detection accuracy, more than 85% face recognition accuracy, and 87.39% expression recognition accuracy. We further develop a model that combines the head-up rate and facial expression scores to jointly evaluate classroom teaching quality. Five teaching professors' evaluations of our classroom video images confirmed that the proposed model is effective in objectively evaluating on-site teaching quality.
APA, Harvard, Vancouver, ISO, and other styles
28

Isabella, Giuliana, and Valter Afonso Vieira. "The effect of facial expression on emotional contagion and product evaluation in print advertising." RAUSP Management Journal 55, no. 3 (January 2, 2020): 375–91. http://dx.doi.org/10.1108/rausp-03-2019-0038.

Full text
Abstract:
Purpose: The purpose of this paper is to investigate emotional contagion theory in print ads and to expand the literature on smiling to different types of smiles and gender congruency. Emotional contagion happens when an emotion is transferred from a sender to a receiver through the synchronization of emotions with the emitter. Drawing on emotional contagion theory, the authors expand this concept and propose that smiles in static facial expressions influence product evaluation. They suggest that false smiles do not have the same impact as genuine smiles on product evaluation, and that the congruence between the model's gender in a static ad and the gender of the viewer moderates the effects. Design/methodology/approach: In Experiment 1, subjects were randomly assigned to view one of two ad treatments to guard against systematic error (e.g., bias). In Experiment 2, it was investigated whether viewing a static ad featuring a model with a false smile can result in a positive product evaluation, as was the case with genuine smiles (H3). In Experiment 3, it was assumed that when consumers evaluate an ad featuring a smiling face, the facial expression influences product evaluation, and this influence is moderated by the congruence between the gender of the ad viewer and the gender of the model in the ad. Findings: Across three experiments, the authors found that the model's facial expression influenced the product evaluation. Second, they supported the association between a model's facial expression and mimicry synchronization. Third, they showed that genuine smiles have a higher impact on product evaluation than false smiles; this novel result extends research on genuine smiles to include false smiles. Fourth, the authors supported the gender-product congruence effect, in that the gender of the ad's reader and of the model has a moderating effect on the relationship between the model's facial expression and the reader's product evaluation. Originality/value: Marketing managers would benefit from understanding that genuine smiles can encourage positive emotions on the part of consumers via emotional contagion, which would be very useful for creating a positive effect on products. The authors improve upon previous psychological theory (Gunnery et al., 2013; Hennig-Thurau et al., 2006) by showing that a genuine smile results in higher evaluation scores for products presented in static ads. The theoretical explanation for this effect is the genuine smile, which involves contraction of both the zygomatic major and the orbicularis oculi muscles. These facial muscles can be better perceived and transmit positive emotions (Hennig-Thurau et al., 2006).
APA, Harvard, Vancouver, ISO, and other styles
29

Sou, Kanyou, Hiroya Shiokawa, Kento Yoh, and Kenji Doi. "Street Design for Hedonistic Sustainability through AI and Human Co-Operative Evaluation." Sustainability 13, no. 16 (August 13, 2021): 9066. http://dx.doi.org/10.3390/su13169066.

Full text
Abstract:
Recently, there has been an increasing emphasis on community development centered on the well-being and quality of life of citizens, while pursuing sustainability. This study proposes an AI and human co-operative evaluation (AIHCE) framework that facilitates communication design between designers and stakeholders based on human emotions and values and is an evaluation method for street space. AIHCE is an evaluation method based on image recognition technology that performs deep learning of the facial expressions of both people and the city; namely, it consists of a facial expression recognition model (FERM) and a street image evaluation model (SIEM). The former evaluates the street space based on the emotions and values of the pedestrian’s facial expression, and the latter evaluates the target street space from the prepared street space image. AIHCE is an integrated framework for these two models, enabling continuous and objective evaluation of space with simultaneous subjective emotional evaluation, showing the possibility of reflecting it in the design. It is expected to contribute to fostering people’s awareness that streets are public goods reflecting the basic functions of public spaces and the values and regional characteristics of residents, contributing to the improvement of the sustainability of the entire city.
APA, Harvard, Vancouver, ISO, and other styles
30

Ueda, Yoshiyuki. "Understanding Mood of the Crowd with Facial Expressions: Majority Judgment for Evaluation of Statistical Summary Perception." Attention, Perception, & Psychophysics 84, no. 3 (March 15, 2022): 843–60. http://dx.doi.org/10.3758/s13414-022-02449-8.

Full text
Abstract:
We intuitively perceive the mood or collective information of facial expressions without much effort. Although it is known that statistical summarization occurs even for faces instantaneously, it might be hard to perceive precise summary statistics of facial expressions (i.e., using all of them equally) since their recognition requires the binding of multiple features of a face. This study assessed which information is extracted from the crowd to understand mood. In a series of experiments, twelve individual faces with happy and neutral expressions (or angry and neutral expressions) were presented simultaneously, and participants reported which expression appeared more frequently. To perform this task correctly, participants must perceive the precise distribution of facial expressions in the crowd. If participants could perceive ensembles based on every face instantaneously, expressions presented on more than half of the faces (in a single ensemble/trial) would have been identified as more frequently presented and the just noticeable difference would be small. The results showed that participants did not always report seeing emotional faces more frequently until many more emotional than neutral faces appeared, suggesting that facial expression ensembles were not perceived from all faces. Manipulating the presentation layout revealed that participants' judgments heavily weight only a part of the faces in the center of the crowd regardless of their visual size. Moreover, individual differences in the precision of summary statistical perception were related to visual working memory. Based on these results, this study provides a speculative explanation of summary perception of real distinctive faces.
APA, Harvard, Vancouver, ISO, and other styles
31

Le, Trang Thanh Quynh, Thuong-Khanh Tran, and Manjeet Rege. "Rank-Pooling-Based Features on Localized Regions for Automatic Micro-Expression Recognition." International Journal of Multimedia Data Engineering and Management 11, no. 4 (October 2020): 25–37. http://dx.doi.org/10.4018/ijmdem.2020100102.

Full text
Abstract:
Facial micro-expression is a subtle and involuntary facial expression that exhibits short duration and low intensity where hidden feelings can be disclosed. The field of micro-expression analysis has been receiving substantial awareness due to its potential values in a wide variety of practical applications. A number of studies have proposed sophisticated hand-crafted feature representations in order to leverage the task of automatic micro-expression recognition. This paper employs a dynamic image computation method for feature extraction so that features can be learned on certain localized facial regions along with deep convolutional networks to identify micro-expressions presented in the extracted dynamic images. The proposed framework is simple as opposed to other existing frameworks which used complex hand-crafted feature descriptors. For performance evaluation, the framework is tested on three publicly available databases, as well as on the integrated database in which individual databases are merged into a data pool. Impressive results from the series of experimental work show that the technique is promising in recognizing micro-expressions.
APA, Harvard, Vancouver, ISO, and other styles
32

Komatsu, Sahoko, and Yuji Hakoda. "Construction and evaluation of a facial expression database of children." Japanese journal of psychology 83, no. 3 (2012): 217–24. http://dx.doi.org/10.4992/jjpsy.83.217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Hou, Changbo, Jiajun Ai, Yun Lin, Chenyang Guan, Jiawen Li, and Wenyu Zhu. "Evaluation of Online Teaching Quality Based on Facial Expression Recognition." Future Internet 14, no. 6 (June 8, 2022): 177. http://dx.doi.org/10.3390/fi14060177.

Full text
Abstract:
In 21st-century society, with the rapid development of information technology, the scientific and technological strength of all walks of life is increasing, and the field of education has gradually begun to introduce new technologies. Affected by the epidemic, online teaching has been implemented all over the country, forming a "dual integration" model of online and offline teaching. However, the disadvantages of online teaching are also very obvious: teachers cannot observe the students' listening status in real time. Therefore, our study adopts automatic face detection and expression recognition based on a deep learning framework and other related technologies to solve this problem, and designs a system for analyzing students' concentration in class based on expression recognition. This system can help teachers detect students' concentration and improve the efficiency of class evaluation. In this system, OpenCV is used to access the camera and capture the students' listening status in real time, and the MTCNN algorithm is used to detect faces in the video and frame the location of each student's face. Finally, the obtained face image is used for real-time expression recognition with a VGG16 network augmented with ECANet, yielding the students' in-class emotions. The experimental results show that our method can more accurately identify students' emotions in class and support teaching effect evaluation, which has application value in intelligent education fields such as the smart classroom and distance learning. For example, a teaching evaluation module can be added to teaching software so that teachers can see the listening emotions of each student while lecturing.
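The capture-and-detect step described in this abstract (OpenCV camera input plus MTCNN face detection) might look roughly like the sketch below; it uses the open-source mtcnn package as a stand-in detector and omits the VGG16/ECANet expression classifier, so it should be read as an assumption-laden illustration rather than the authors' implementation.

```python
# Assumption-laden sketch: grab a webcam frame with OpenCV and locate faces
# with the open-source `mtcnn` package (a stand-in detector); the
# VGG16/ECANet expression classifier from the paper is omitted.
import cv2
from mtcnn import MTCNN

detector = MTCNN()
cap = cv2.VideoCapture(0)  # classroom camera

ret, frame = cap.read()
if ret:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for face in detector.detect_faces(rgb):
        x, y, w, h = face["box"]
        face_crop = rgb[y:y + h, x:x + w]
        # face_crop would be resized and passed to the expression classifier
cap.release()
```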
APA, Harvard, Vancouver, ISO, and other styles
34

Porcu, Simone, Alessandro Floris, and Luigi Atzori. "Evaluation of Data Augmentation Techniques for Facial Expression Recognition Systems." Electronics 9, no. 11 (November 11, 2020): 1892. http://dx.doi.org/10.3390/electronics9111892.

Full text
Abstract:
Most Facial Expression Recognition (FER) systems rely on machine learning approaches that require large databases for an effective training. As these are not easily available, a good solution is to augment the databases with appropriate data augmentation (DA) techniques, which are typically based on either geometric transformation or oversampling augmentations (e.g., generative adversarial networks (GANs)). However, it is not always easy to understand which DA technique may be more convenient for FER systems because most state-of-the-art experiments use different settings which makes the impact of DA techniques not comparable. To advance in this respect, in this paper, we evaluate and compare the impact of using well-established DA techniques on the emotion recognition accuracy of a FER system based on the well-known VGG16 convolutional neural network (CNN). In particular, we consider both geometric transformations and GAN to increase the amount of training images. We performed cross-database evaluations: training with the "augmented" KDEF database and testing with two different databases (CK+ and ExpW). The best results were obtained combining horizontal reflection, translation and GAN, bringing an accuracy increase of approximately 30%. This outperforms alternative approaches, except for the one technique that could however rely on a quite bigger database.
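As an illustration of the geometric augmentations compared in the paper (horizontal reflection and translation), the sketch below applies random flips and shifts to a batch of face images with Keras preprocessing layers; the GAN-based augmentation the paper also evaluates is not shown, and the image shapes are placeholders.

```python
# Illustrative geometric augmentation (horizontal reflection + translation)
# with Keras preprocessing layers; shapes are placeholders and the GAN-based
# augmentation evaluated in the paper is not shown.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),                                 # horizontal reflection
    layers.RandomTranslation(height_factor=0.1, width_factor=0.1),  # small translations
])

images = tf.random.uniform((8, 48, 48, 1))   # a batch of face images
augmented = augment(images, training=True)   # training=True enables the random ops
```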
APA, Harvard, Vancouver, ISO, and other styles
35

IZUMI, Yoshinobu, Takaya TERADA, Sung-Jin YU, Yoko AKIYAMA, Koji TATSUMI, Shin-ichi TAKEDA, Kimiko EMA, and Shigehiro NISHIJIMA. "Evaluation of emotion of fear and anxiousness from facial expression." Journal of Life Support Engineering 16, Supplement (2004): 309–10. http://dx.doi.org/10.5136/lifesupport.16.supplement_309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Amico, F., G. Healy, M. Arvaneh, D. Kearney, E. Mohedano, D. Roddy, J. Yek, A. Smeaton, and J. Brophy. "Multimodal validation of facial expression detection software for real-time monitoring of affect in patients with suicidal intent." European Psychiatry 33, S1 (March 2016): S596. http://dx.doi.org/10.1016/j.eurpsy.2016.01.2225.

Full text
Abstract:
Facial expression is an independent and objective marker of affect. Basic emotions (fear, sadness, joy, anger, disgust and surprise) have been shown to be universal across human cultures. Techniques such as the Facial Action Coding System can capture emotion with good reliability. Such techniques visually process the changes in different assemblies of facial muscles that produce the facial expression of affect. Recent groundbreaking advances in computing and facial expression analysis software now allow real-time and objective measurement of emotional states. In particular, a recently developed software package and equipment, the Imotion Attention Tool™, allows capturing information on discrete emotional states based on facial expressions while a subject is participating in a behavioural task. Extending preliminary work by further experimentation and analysis, the present findings suggest a link between facial affect data and already established peripheral arousal measures such as event-related potentials (ERP), heart rate variability (HRV) and galvanic skin response (GSR), using disruptively innovative, noninvasive and clinically applicable technology, in patients reporting suicidal ideation and intent compared to controls. Our results hold promise for the establishment of a computerized diagnostic battery that can be utilized by clinicians to improve the evaluation of suicide risk. Disclosure of interest: The authors have not supplied their declaration of competing interest.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhou, Ming, Shu He, and Yong Jun Cheng. "Research on Facial Feature Detection Based on Correlation Analysis." Advanced Materials Research 1044-1045 (October 2014): 1489–93. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1489.

Full text
Abstract:
To enhance the efficiency of facial feature extraction, this paper explores a novel evaluation method that uses combined features and modified classifiers to characterize and classify facial expressions. It provides a good approach for implementing facial expression recognition in both 2D and 3D images. Innovative methods aimed at reducing the computational complexity and improving the accuracy of expression recognition are proposed. The experimental results showed that the proposed method achieved a lower error rate than other methods.
APA, Harvard, Vancouver, ISO, and other styles
38

Cui, Yanqing, Guangjie Han, and Hongbo Zhu. "A Novel Online Teaching Effect Evaluation Model Based on Visual Question Answering." 網際網路技術學刊 23, no. 1 (January 2022): 093–100. http://dx.doi.org/10.53106/160792642022012301009.

Full text
Abstract:
The paper proposes a novel visual question answering (VQA)-based online teaching effect evaluation model. Based on the text interaction between teacher and students, we present a guide-attention (GA) model to discover directive clues. Combining the self-attention (SA) models, we reweight the vital features to locate the critical information on the whiteboard and students' faces, and further recognize their content and facial expressions. Three branches of information are encoded into feature vectors to be fed into a bidirectional GRU network. With the real labels of the students' answers annotated by two teachers and the predicted labels from the text and facial expression feedback, we train the chained network. Experiments report competitive performance in the 2-class and 5-class tasks on the self-collected dataset.
APA, Harvard, Vancouver, ISO, and other styles
39

Duong, Khanh Ngoc Van, and An Bao Nguyen. "AN EVALUATION ON PERFORMANCE OF PCA IN FACE RECOGNITION WITH EXPRESSION VARIATIONS." Scientific Journal of Tra Vinh University 1, no. 30 (June 1, 2018): 61–66. http://dx.doi.org/10.35382/18594816.1.30.2018.19.

Full text
Abstract:
Appearance-based recognition methods often encounter difficulties when the input images contain facial expression variations such as laughing, crying or wide mouth opening. In these cases, holistic methods give better performance than appearance-based methods. This paper presents an evaluation of face recognition under variation of facial expression using the combination of PCA and classification algorithms. The experimental results show that the best accuracy can be obtained with very few eigenvectors and that the KNN algorithm (with k=1) performs better than SVM in most test cases.
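A minimal sketch of the PCA-plus-1-NN pipeline evaluated in this abstract is given below; the image data, number of components, and identity labels are placeholders, not the paper's experimental setup.

```python
# Minimal sketch of an eigenface-style PCA + 1-NN pipeline; data, label
# counts, and the number of components are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X = np.random.rand(200, 64 * 64)         # flattened face images
y = np.random.randint(0, 20, size=200)   # subject identities

model = make_pipeline(
    PCA(n_components=20),                 # "very few eigenvectors"
    KNeighborsClassifier(n_neighbors=1),  # 1-NN, as in the evaluation
)
model.fit(X, y)
predicted_identity = model.predict(X[:1])
```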
APA, Harvard, Vancouver, ISO, and other styles
40

Sarafoleanu, Dorin, and Andreea Bejenariu. "Facial nerve paralysis." Romanian Journal of Rhinology 10, no. 39 (September 1, 2020): 68–77. http://dx.doi.org/10.2478/rjr-2020-0016.

Full text
Abstract:
The facial nerve, the seventh pair of cranial nerves, has an essential role in non-verbal communication through facial expression. Besides innervating the muscles involved in facial expression, the complex structure of the facial nerve contains sensory fibres involved in the perception of taste and parasympathetic fibres involved in salivation and tearing. Damage to the facial nerve, manifested as facial paralysis, translates into a decrease or disappearance of the mobility of normal facial expression. Facial nerve palsy is one of the common reasons for presenting to the Emergency Room. Most cases of facial paralysis are idiopathic, followed by traumatic, infectious and tumour causes. A special place is occupied by facial paralysis in children. Due to the multitude of factors that can determine or favour its appearance, it requires a multidisciplinary evaluation involving an otorhinolaryngologist, neurologist, ophthalmologist and internist. Early presentation to the doctor, accurate determination of the cause and a correctly performed topographic diagnosis are the keys to proper treatment and complete functional recovery.
APA, Harvard, Vancouver, ISO, and other styles
41

Sakaue, Shota, Hiroki Nomiya, and Teruhisa Hochin. "Unsupervised Estimation of Facial Expression Intensity for Emotional Scene Retrieval in Lifelog Videos." International Journal of Software Innovation 6, no. 4 (October 2018): 30–45. http://dx.doi.org/10.4018/ijsi.2018100103.

Full text
Abstract:
To facilitate the retrieval of impressive scenes from lifelog videos, this article proposes a method for estimating the intensity of the facial expression of a person in a lifelog video. Previous work made it possible to estimate facial expression intensity, but it requires training samples that must be manually and carefully selected, which makes it quite inconvenient. This article attempts to solve this problem by introducing an unsupervised learning method. The proposed method estimates the facial expression intensity via clustering on the basis of several facial features computed from the positional relationships of a number of facial feature points. To evaluate the proposed method, an experiment estimating facial expression intensity is performed using a lifelog video dataset, and the estimation performance of the proposed method is compared with that of the previous method.
APA, Harvard, Vancouver, ISO, and other styles
42

Namba, Shushi, Wataru Sato, Masaki Osumi, and Koh Shimokawa. "Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases." Sensors 21, no. 12 (June 20, 2021): 4222. http://dx.doi.org/10.3390/s21124222.

Full text
Abstract:
In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of such systems on dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All systems could detect the presence of AUs in the dynamic facial database at a level above chance. Moreover, OpenFace and AFAR provided higher area under the receiver operating characteristic curve values than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
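A minimal sketch of the per-AU ROC evaluation described above, assuming frame-level FACS occurrence labels and continuous detector outputs; the AU list and data are illustrative only.

```python
# Sketch of a per-AU ROC evaluation: given frame-level FACS ground truth and
# a system's continuous AU outputs, compute AUC per action unit.
# Data layout and AU names are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

def per_au_auc(y_true, y_score, au_names):
    """y_true: (n_frames, n_aus) binary occurrence labels.
       y_score: (n_frames, n_aus) detector confidences/intensities."""
    return {au: roc_auc_score(y_true[:, i], y_score[:, i])
            for i, au in enumerate(au_names)}

au_names = ["AU01", "AU04", "AU06", "AU12", "AU14"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(500, len(au_names)))
y_score = y_true * 0.6 + rng.random((500, len(au_names))) * 0.4  # toy scores
print(per_au_auc(y_true, y_score, au_names))
```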
APA, Harvard, Vancouver, ISO, and other styles
43

Okuda, Itsuko, Yumika Yamakawa, Nobu Mitani, Naoko Ota, Marie Kawabata, and Naoki Yoshioka. "Objective evaluation of the relationship between facial expression analysis by the facial action coding system (FACS) and CT/MRI analyses of the facial expression muscles." Skin Research and Technology 26, no. 5 (April 6, 2020): 727–33. http://dx.doi.org/10.1111/srt.12864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Syrjänen, Elmeri, Marco Tullio Liuzza, Håkan Fischer, and Jonas K. Olofsson. "Do Valenced Odors and Trait Body Odor Disgust Affect Evaluation of Emotion in Dynamic Faces?" Perception 46, no. 12 (July 14, 2017): 1412–26. http://dx.doi.org/10.1177/0301006617720831.

Full text
Abstract:
Disgust is a core emotion evolved to detect and avoid the ingestion of poisonous food as well as contact with pathogens and other harmful agents. Previous research has shown that multisensory presentation of olfactory and visual information may strengthen the processing of disgust-relevant information. However, it is not known whether these findings extend to dynamic facial stimuli that change from neutral to emotionally expressive, or whether individual differences in trait body odor disgust may influence the processing of disgust-related information. In this preregistered study, we tested whether the classification of dynamic facial expressions as happy or disgusted, and the emotional evaluation of these facial expressions, would be affected by individual differences in body odor disgust sensitivity and by exposure to a sweat-like, negatively valenced odor (valeric acid), as compared with a soap-like, positively valenced odor (lilac essence) or a no-odor control. Using Bayesian hypothesis testing, we found evidence that odors do not affect recognition of emotion in dynamic faces, even when body odor disgust sensitivity was used as a moderator. However, an exploratory analysis suggested that an unpleasant odor context may cause faster RTs for faces, independent of their emotional expression. Our results further our understanding of the scope and limits of odor effects on the perception of facial affect and suggest that further studies should focus on reproducibility, specifying the experimental circumstances under which odor effects on facial expressions may be present versus absent.
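For illustration, a Bayes-factor comparison of emotion-evaluation scores between an odor condition and a no-odor control might be computed as below with the pingouin package; the data and variable names are hypothetical, and this is not the study's preregistered analysis pipeline.

```python
# Minimal sketch of a Bayes-factor comparison between two odor conditions on
# emotion evaluation scores, using pingouin's t-test (which reports BF10).
# Data are synthetic stand-ins; the study's actual analysis is more elaborate.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(1)
ratings_valeric = rng.normal(loc=5.0, scale=1.2, size=40)  # unpleasant odor
ratings_control = rng.normal(loc=5.1, scale=1.2, size=40)  # no-odor control

res = pg.ttest(ratings_valeric, ratings_control, paired=False)
print(res[["T", "p-val", "BF10"]])  # BF10 < 1 favors the null (no odor effect)
```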
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, Yun. "Improved Convolutional Neural Networks for Course Teaching Quality Assessment." Advances in Multimedia 2022 (July 20, 2022): 1–10. http://dx.doi.org/10.1155/2022/4395307.

Full text
Abstract:
The cultivation of innovative talents is closely related to the quality of course teaching, and facial expressions are correlated with the effectiveness of classroom teaching. In this paper, a separate long-term recursive convolutional network (SLRCN) microexpression recognition algorithm based on deep learning is proposed for building a course teaching effectiveness evaluation model. First, facial image sequences are extracted from microexpression data sets, and transfer learning is used to extract spatial features of facial expression frames with a pretrained convolutional neural network, reducing the risk of overfitting during network training. The extracted video-sequence features are then input into a long short-term memory (LSTM) network to model temporal features. Experimental results show that the SLRCN algorithm achieves the best performance on both the training and test sets as well as the best ROC curve, and effectively distinguishes the seven expressions in the database. The proposed model can capture changes in students' facial expressions in class and evaluate their learning state, thus promoting improvements in teaching quality and providing a new method for course teaching quality evaluation.
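The general "pretrained CNN per frame, LSTM over time" recipe the abstract outlines can be sketched as follows in PyTorch; the backbone, layer sizes and seven-class head are assumptions, not the authors' SLRCN architecture.

```python
# Sketch of a frame-wise pretrained CNN feeding an LSTM for expression
# classification over video clips. Backbone and sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class CnnLstmExpressionNet(nn.Module):
    def __init__(self, n_classes=7, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        for p in self.cnn.parameters():          # transfer learning: freeze
            p.requires_grad = False
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):
        # clips: (batch, time, 3, 224, 224)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1)   # (b*t, 512)
        feats = feats.view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                          # (b, n_classes)

model = CnnLstmExpressionNet()
dummy = torch.randn(2, 8, 3, 224, 224)  # 2 clips of 8 frames
print(model(dummy).shape)               # torch.Size([2, 7])
```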
APA, Harvard, Vancouver, ISO, and other styles
46

Sasaki, Yutaka, Naoki Arakawa, Yuki Sato, Hiromitsu Yajima, and Kazushi Ito. "Development and Evaluation of 3-D Facial Expression-Kansei Estimation System." Agricultural Information Research 18, no. 1 (2009): 17–23. http://dx.doi.org/10.3173/air.18.17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Chaves, Francisco Edvan, Thelmo Pontes de Araujo, and José Everardo Bessa Maia. "Facial Expression Recognition: A Cross-Database Evaluation of Features and Classifiers." Journal of Intelligent Computing 10, no. 1 (March 1, 2019): 34. http://dx.doi.org/10.6025/jic/2019/10/1/34-45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Yu, Sung-Jin, Yoshinobu Izumi, Shin-ichi Takeda, and Shigehiro Nishijima. "Evaluation of Feeling of Fatigue Using by Analysis of Facial Expression." Journal of Life Support Engineering 16, Supplement (2004): 313–14. http://dx.doi.org/10.5136/lifesupport.16.supplement_313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Dalton, Jo Ann, Linda Brown, John Carlson, Robert McNutt, and Susan M. Greer. "An evaluation of facial expression displayed by patients with chest pain." Heart & Lung 28, no. 3 (May 1999): 168–74. http://dx.doi.org/10.1016/s0147-9563(99)70056-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Liu, Shiguang, Huixin Wang, and Min Pei. "Facial-expression-aware Emotional Color Transfer Based on Convolutional Neural Network." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1 (January 31, 2022): 1–19. http://dx.doi.org/10.1145/3464382.

Full text
Abstract:
Emotional color transfer aims to change the evoked emotion of a source image to that of a target image by adjusting its color distribution. Most existing emotional color transfer methods consider only the low-level visual features of an image and ignore facial expression features when the image contains a human face, which can lead to incorrect emotion evaluation for the given image. In addition, previous emotional color transfer methods may easily produce ambiguity between the emotion of the resulting image and that of the target image. For example, if the background of the target image is dark while the facial expression is happiness, previous methods would directly transfer the dark color to the source image, neglecting the facial emotion in the image. To solve this problem, we propose a new facial-expression-aware emotional color transfer framework. Given a target image with facial expression features, we first predict the facial emotion label of the image with an emotion classification network. Facial emotion labels are then matched with pre-trained emotional color transfer models, and the matched model is used to transfer the color of the target image to the source image. Because none of the existing emotion image databases focus on images that contain both a face and a background, we built an emotion database for our new emotional color transfer framework, called the "Face-Emotion database." Experiments demonstrate that our method can successfully capture and transfer facial emotions, outperforming state-of-the-art methods.
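The overall pipeline, predicting a facial emotion label for the target image and then driving a color transfer, can be sketched as below; the emotion classifier is a stub, and a simple Reinhard-style statistic transfer stands in for the paper's pre-trained per-emotion transfer models.

```python
# Sketch of an expression-aware color transfer pipeline: predict a facial
# emotion label for the target image, then apply a Reinhard-style Lab-space
# statistic transfer gated by that label. The classifier is a placeholder and
# the gating rule is a simplification of the paper's per-emotion models.
import numpy as np
from skimage import color

def reinhard_transfer(source_rgb, target_rgb):
    """Match per-channel mean/std of the source to the target in Lab space."""
    src, tgt = color.rgb2lab(source_rgb), color.rgb2lab(target_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sigma = src[..., c].mean(), src[..., c].std() + 1e-6
        t_mu, t_sigma = tgt[..., c].mean(), tgt[..., c].std() + 1e-6
        out[..., c] = (src[..., c] - s_mu) / s_sigma * t_sigma + t_mu
    return np.clip(color.lab2rgb(out), 0, 1)

def predict_face_emotion(image_rgb):
    """Placeholder for a trained facial emotion classification network."""
    return "happiness"

def expression_aware_transfer(source_rgb, target_rgb):
    label = predict_face_emotion(target_rgb)
    # Gate the transfer on the predicted facial emotion rather than relying
    # only on the target's global color statistics.
    if label in {"happiness", "surprise"}:
        return reinhard_transfer(source_rgb, target_rgb)
    return source_rgb

src = np.random.rand(64, 64, 3)
tgt = np.random.rand(64, 64, 3)
result = expression_aware_transfer(src, tgt)
print(result.shape, result.dtype)
```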
APA, Harvard, Vancouver, ISO, and other styles