Journal articles on the topic 'Face emotion recognition'

To see the other types of publications on this topic, follow the link: Face emotion recognition.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Face emotion recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Mallikarjuna, Basetty, M. Sethu Ram, and Supriya Addanke. "An Improved Face-Emotion Recognition to Automatically Generate Human Expression With Emoticons." International Journal of Reliable and Quality E-Healthcare 11, no. 1 (January 1, 2022): 1–18. http://dx.doi.org/10.4018/ijrqeh.314945.

Full text
Abstract:
Human facial expressions naturally convey emotions such as happiness and sadness; at times, recognition is complex because an expression combines two emotions. The existing literature covers face emotion classification and image recognition, and work on deep learning using convolutional neural networks (CNNs) has made face emotion recognition especially useful for healthcare, albeit with some of the most complex of the existing algorithms. This paper improves human face emotion recognition and generates matching emoticons on the user's smartphone. Face emotion recognition with convolutional neural networks plays a major role in deep learning and artificial intelligence for healthcare services. The automatic facial emotion recognition pipeline consists of two stages: face detection with the AdaBoost classifier algorithm, and emotion classification, in which features are extracted using deep learning methods such as CNNs to identify the seven emotions used to generate emoticons.
APA, Harvard, Vancouver, ISO, and other styles
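For readers who want to prototype the final step this abstract describes, the sketch below maps a CNN's seven-class emotion prediction to an emoticon. It is a minimal illustration in Python/Keras; the model file, input size, and label order are assumptions, not the authors' artifacts.

```python
# Hypothetical sketch: map a 7-class CNN emotion prediction to an emoticon,
# in the spirit of the pipeline the abstract describes.
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
EMOTICONS = {"angry": "😠", "disgust": "🤢", "fear": "😨", "happy": "😊",
             "sad": "😢", "surprise": "😲", "neutral": "😐"}

model = load_model("emotion_cnn.h5")  # assumed pre-trained 7-class CNN

def face_to_emoticon(face_gray48):
    """face_gray48: 48x48 grayscale face crop with values in [0, 255]."""
    x = face_gray48.astype("float32")[None, :, :, None] / 255.0
    probs = model.predict(x, verbose=0)[0]
    return EMOTICONS[EMOTIONS[int(np.argmax(probs))]]
```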
2

Iqbal, Muhammad, Bhakti Yudho Suprapto, Hera Hikmarika, Hermawati Hermawati, and Suci Dwijayanti. "Design of Real-Time Face Recognition and Emotion Recognition on Humanoid Robot Using Deep Learning." Jurnal Ecotipe (Electronic, Control, Telecommunication, Information, and Power Engineering) 9, no. 2 (October 6, 2022): 149–58. http://dx.doi.org/10.33019/jurnalecotipe.v9i2.3044.

Full text
Abstract:
A robot is capable of mimicking human beings, including recognizing their faces and emotions. However, current studies of humanoid robots have not been implemented in real-time systems. In addition, face recognition and emotion recognition have been treated as separate problems. Thus, for real-time application on a humanoid robot, this study proposed a combination of face recognition and emotion recognition. Face and emotion recognition systems were developed concurrently in this study using convolutional neural network architectures. The proposed architecture was compared to the well-known architecture AlexNet to determine which would be better suited for implementation on a humanoid robot. Primary data from 30 respondents were used for face recognition. Meanwhile, emotional data were collected from the same respondents and combined with secondary data from a 2500-person dataset. The emotions considered were surprise, anger, neutral, smile, and sadness. The experiment was carried out in real time on a humanoid robot using the two architectures. Using the AlexNet model, the accuracy of face and emotion recognition was 87% and 70%, respectively. Meanwhile, the proposed architecture achieved accuracy rates of 95% for face recognition and 75% for emotion recognition. Thus, the proposed method performs better in terms of recognizing faces and emotions, and it can be implemented on a humanoid robot.
APA, Harvard, Vancouver, ISO, and other styles
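The abstract above compares a compact custom CNN against AlexNet. As a rough point of reference, a lightweight classifier of that general shape can be written in a few lines of Keras; the layer sizes below are illustrative assumptions, not the authors' architecture.

```python
# Illustrative compact CNN of the kind compared against AlexNet in the study.
from tensorflow.keras import layers, models

def build_compact_cnn(input_shape=(64, 64, 1), n_classes=5):
    # 5 classes matching the abstract: surprise, anger, neutral, smile, sadness
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_compact_cnn()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```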
3

Sondawale, Shweta. "Face and Speech Emotion Recognition System." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (April 30, 2024): 5621–28. http://dx.doi.org/10.22214/ijraset.2024.61278.

Full text
Abstract:
Emotions serve as the cornerstone of human communication, facilitating the expression of one's inner thoughts and feelings to others. Speech Emotion Recognition (SER) represents a pivotal endeavour aimed at deciphering the emotional nuances embedded within a speaker's voice signal. Universal emotions such as neutrality, anger, happiness, and sadness form the basis of this recognition process, allowing for the identification of fundamental emotional states. To achieve this, spectral and prosodic features are leveraged, each offering unique insights into the emotional content of speech. Spectral features, exemplified by the Mel Frequency Cepstral Coefficient (MFCC), provide a detailed analysis of the frequency distribution within speech signals, while prosodic features encompass elements like fundamental frequency, volume, pitch, speech intensity, and glottal parameters, capturing the rhythmic and tonal variations indicative of different emotional states. Through the integration of these features, SER systems can effectively simulate and classify a diverse range of emotional expressions, paving the way for enhanced human-computer interaction and communication technologies.
APA, Harvard, Vancouver, ISO, and other styles
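The feature families named in this abstract are straightforward to extract with standard tooling. The sketch below uses librosa to compute MFCCs plus simple prosodic proxies (fundamental frequency and RMS energy) and pools them into a fixed-length vector; the file name and parameter choices are placeholders.

```python
# Sketch of SER feature extraction: MFCCs as spectral features plus simple
# prosodic features (pitch, energy); a generic illustration, not the paper's code.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral features
f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)        # fundamental frequency
rms = librosa.feature.rms(y=y)[0]                    # intensity/volume proxy

# Pool frame-level features into a fixed-length vector for a classifier.
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [np.nanmean(f0), np.nanstd(f0), rms.mean(), rms.std()]])
```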
4

Mareeswari, V. "Face Emotion Recognition based Recommendation System." ACS Journal for Science and Engineering 2, no. 1 (March 1, 2022): 73–80. http://dx.doi.org/10.34293/acsjse.v2i1.29.

Full text
Abstract:
Face recognition technology has attracted considerable attention because of its wide range of applications and market potential. It is used in a variety of fields, including surveillance systems, digital video editing, and other technical advancements. In the fields of tourism, music, video, and film, these systems have overcome the burden of irrelevant information by taking into account user desires and emotional states. Recommendation systems, emotion recognition, and machine learning are the thematic categories proposed in the analysis. Our vision is to develop a method for recommending new content based on the emotional reactions of the viewers. Music is a form of art thought to have a strong connection to a person's emotions; it has the unique potential to boost one's mood. Video streaming services are becoming more prevalent in people's lives, necessitating the development of better video recommendation systems that respond to their users in a customised manner. Furthermore, many users believe that travel is a way to help them cope with their ongoing emotions. Our project aims to create a smart travel recommendation system based on the user's emotional state. This project focuses on developing an efficient music, video, movie, and tourism recommendation system that uses facial recognition techniques to assess the emotion of users. The system's overall concept is to identify facial expressions and provide music, video, and movie recommendations based on the user's mood.
APA, Harvard, Vancouver, ISO, and other styles
5

Levitan, Carmel A., Isabelle Rusk, Danielle Jonas-Delson, Hanyun Lou, Lennon Kuzniar, Gray Davidson, and Aleksandra Sherman. "Mask wearing affects emotion perception." i-Perception 13, no. 3 (May 2022): 204166952211073. http://dx.doi.org/10.1177/20416695221107391.

Full text
Abstract:
To reduce the spread of COVID-19, mask wearing has become ubiquitous in much of the world. We studied the extent to which masks impair emotion recognition and dampen the perceived intensity of facial expressions by naturalistically inducing positive, neutral, and negative emotions in individuals while they were masked and unmasked. Two groups of online participants rated the emotional intensity of each presented image. One group rated full faces (N=104); the other (N=102) rated cropped images where only the upper face was visible. We found that masks impaired both the recognition and the rated intensity of positive emotions. This happened even when the faces were cropped and the lower part of the face was not visible. Masks may thus reduce positive emotion and/or the expressivity of positive emotion. However, perception of negativity was unaffected by masking, perhaps because, unlike positive emotions like happiness, which are signaled more in the mouth, negative emotions like anger rely more on the upper face.
APA, Harvard, Vancouver, ISO, and other styles
6

Liao, Songyang, Katsuaki Sakata, and Galina V. Paramei. "Color Affects Recognition of Emoticon Expressions." i-Perception 13, no. 1 (January 2022): 204166952210807. http://dx.doi.org/10.1177/20416695221080778.

Full text
Abstract:
In computer-mediated communication, emoticons are conventionally rendered in yellow. Previous studies demonstrated that colors evoke certain affective meanings, and face color modulates perceived emotion. We investigated whether color variation affects the recognition of emoticon expressions. Japanese participants were presented with emoticons depicting four basic emotions (Happy, Sad, Angry, Surprised) and a Neutral expression, each rendered in eight colors. Four conditions (E1–E4) were employed in the lab-based experiment; E5, with an additional participant sample, was an online replication of the critical E4. In E1, colored emoticons were categorized in a 5AFC task. In E2–E5, stimulus affective meaning was assessed using visual scales with anchors corresponding to each emotion. The conditions varied in stimulus arrays: E2: light gray emoticons; E3: colored circles; E4 and E5: colored emoticons. The affective meaning of Angry and Sad emoticons was found to be stronger when conferred in warm and cool colors, respectively, the pattern highly consistent between E4 and E5. The affective meaning of colored emoticons is regressed to that of achromatic expression counterparts and decontextualized color. The findings provide evidence that affective congruency of the emoticon expression and the color it is rendered in facilitates recognition of the depicted emotion, augmenting the conveyed emotional message.
APA, Harvard, Vancouver, ISO, and other styles
7

Wyman, Austin, and Zhiyong Zhang. "API Face Value." Journal of Behavioral Data Science 3, no. 1 (July 13, 2023): 1–11. http://dx.doi.org/10.35566/jbds/v3n1/wyman.

Full text
Abstract:
Emotion recognition application programming interface (API) is a recent advancement in computing technology that synthesizes computer vision, machine-learning algorithms, deep-learning neural networks, and other information to detect and label human emotions. The strongest iterations of this technology are produced by technology giants with large cloud infrastructure (i.e., Google and Microsoft), yielding high true-positive rates. We review the current status of applications of emotion recognition API in psychological research and find that, despite evidence of spatial, age, and race bias effects, API is improving the accessibility of clinical and educational research. Specifically, emotion detection software can assist individuals with emotion-related deficits (e.g., Autism Spectrum Disorder, Attention Deficit-Hyperactivity Disorder, Alexithymia). API has been incorporated in various computer-assisted interventions for autism, where it has been used to diagnose, train, and monitor emotional responses to one's environment. We identify API's potential to enhance interventions in other emotional dysfunction populations and to address various professional needs. Future work should aim to address the bias limitations of API software and expand its utility in subfields of clinical, educational, neurocognitive, and industrial-organizational psychology.
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Zhiqin. "Deep Face Emotion Recognition." Journal of Physics: Conference Series 1087 (September 2018): 062036. http://dx.doi.org/10.1088/1742-6596/1087/6/062036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lawrence, Louise, and Deborah Abdel Nabi. "The Compilation and Validation of a Collection of Emotional Expression Images Communicated by Synthetic and Human Faces." International Journal of Synthetic Emotions 4, no. 2 (July 2013): 34–62. http://dx.doi.org/10.4018/ijse.2013070104.

Full text
Abstract:
The BARTA (Bolton Affect Recognition Tri-Stimulus Approach) is a unique database comprising over 400 colour images of the universally recognised basic emotional expressions and is the first compilation to include three different classes of validated face stimuli; emoticon, computer-generated cartoon and photographs of human faces. The validated tri-stimulus collection (all images received ≥70% inter-rater (child and adult) consensus) has been developed to promote pioneering research into the differential effects of synthetic emotion representation on atypical emotion perception, processing and recognition in autism spectrum disorders (ASD) and, given the recent evidence for an ASD synthetic-face processing advantage (Rosset et al., 2008), provides a means of investigating the benefits associated with the recruitment of synthetic face images in ASD emotion recognition training contexts.
APA, Harvard, Vancouver, ISO, and other styles
10

Homorogan, C., R. Adam, R. Barboianu, Z. Popovici, C. Bredicean, and M. Ienciu. "Emotional Face Recognition in Bipolar Disorder." European Psychiatry 41, S1 (April 2017): S117. http://dx.doi.org/10.1016/j.eurpsy.2017.01.1904.

Full text
Abstract:
Introduction: Emotional face recognition is significant for social communication. This is impaired in mood disorders, such as bipolar disorder. Individuals with bipolar disorder lack the ability to perceive facial expressions. Objectives: To analyse the capacity for emotional face recognition in subjects diagnosed with bipolar disorder. Aims: To establish a correlation between emotion recognition ability and the evolution of bipolar disease. Methods: A sample of 24 subjects diagnosed with bipolar disorder (according to ICD-10 criteria) was analysed in this trial; they were hospitalised in the Psychiatry Clinic of Timisoara and monitored in the outpatient clinic. Subjects were included in the trial based on inclusion/exclusion criteria. The analysed parameters were: socio-demographic factors (age, gender, education level), the number of relapses, the predominance of manic or depressive episodes, and the ability to identify emotions (Reading the Mind in the Eyes Test). Results: Most of the subjects (79.16%) had a low ability to identify emotions, 20.83% had a normal capacity to recognise emotions, and none of them had a high emotion recognition capacity. The positive emotions (love, joy, surprise) were recognised more easily, by 75% of the subjects, than the negative ones (anger, sadness, fear). There was no evident difference in emotional face recognition between individuals with a predominance of manic episodes and those with mostly depressive episodes, or across numbers of relapses. Conclusions: Individuals with bipolar disorder have difficulties in identifying facial emotions, but with no obvious correlation between the analysed parameters. Disclosure of interest: The authors have not supplied their declaration of competing interest.
APA, Harvard, Vancouver, ISO, and other styles
11

Lozier, Leah M., John W. Vanmeter, and Abigail A. Marsh. "Impairments in facial affect recognition associated with autism spectrum disorders: A meta-analysis." Development and Psychopathology 26, no. 4pt1 (June 10, 2014): 933–45. http://dx.doi.org/10.1017/s0954579414000479.

Full text
Abstract:
Autism spectrum disorders (ASDs) are characterized by social impairments, including inappropriate responses to affective stimuli and nonverbal cues, which may extend to poor face-emotion recognition. However, the results of empirical studies of face-emotion recognition in individuals with ASD have yielded inconsistent findings that occlude understanding the role of face-emotion recognition deficits in the development of ASD. The goal of this meta-analysis was to address three as-yet unanswered questions. Are ASDs associated with consistent face-emotion recognition deficits? Do deficits generalize across multiple emotional expressions or are they limited to specific emotions? Do age or cognitive intelligence affect the magnitude of identified deficits? The results indicate that ASDs are associated with face-emotion recognition deficits across multiple expressions and that the magnitude of these deficits increases with age and cannot be accounted for by intelligence. These findings suggest that, whereas neurodevelopmental processes and social experience produce improvements in general face-emotion recognition abilities over time during typical development, children with ASD may experience disruptions in these processes, suggesting distributed functional impairment in the neural architecture that subserves face-emotion processing, an effect with downstream developmental consequences.
APA, Harvard, Vancouver, ISO, and other styles
12

Anderson, Ian M., Clare Shippen, Gabriella Juhasz, Diana Chase, Emma Thomas, Darragh Downey, Zoltan G. Toth, Kathryn Lloyd-Williams, Rebecca Elliott, and J. F. William Deakin. "State-dependent alteration in face emotion recognition in depression." British Journal of Psychiatry 198, no. 4 (April 2011): 302–8. http://dx.doi.org/10.1192/bjp.bp.110.078139.

Full text
Abstract:
Background: Negative biases in emotional processing are well recognised in people who are currently depressed but are less well described in those with a history of depression, where such biases may contribute to vulnerability to relapse. Aims: To compare accuracy, discrimination and bias in face emotion recognition in those with current and remitted depression. Method: The sample comprised a control group (n = 101), a currently depressed group (n = 30) and a remitted depression group (n = 99). Participants provided valid data after receiving a computerised face emotion recognition task following standardised assessment of diagnosis and mood symptoms. Results: In the control group women were more accurate in recognising emotions than men owing to greater discrimination. Among participants with depression, those in remission correctly identified more emotions than controls owing to increased response bias, whereas those currently depressed recognised fewer emotions owing to decreased discrimination. These effects were most marked for anger, fear and sadness but there was no significant emotion × group interaction, and a similar pattern tended to be seen for happiness although not for surprise or disgust. These differences were confined to participants who were antidepressant-free, with those taking antidepressants having similar results to the control group. Conclusions: Abnormalities in face emotion recognition differ between people with current depression and those in remission. Reduced discrimination in depressed participants may reflect withdrawal from the emotions of others, whereas the increased bias in those with a history of depression could contribute to vulnerability to relapse. The normal face emotion recognition seen in those taking medication may relate to the known effects of antidepressants on emotional processing and could contribute to their ability to protect against depressive relapse.
APA, Harvard, Vancouver, ISO, and other styles
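The distinction this abstract draws between discrimination and response bias is the standard signal-detection one. The generic sketch below (not the authors' exact analysis) shows how the two are commonly separated as d′ and criterion from hit and false-alarm counts.

```python
# Generic signal-detection sketch separating discrimination (d') from response
# bias (criterion); illustrative only, not the paper's exact analysis.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps rates away from exactly 0 and 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # discrimination
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # response bias
    return d_prime, criterion

print(sdt_measures(40, 10, 15, 35))  # example counts
```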
13

Grahlow, Melina, Claudia Ines Rupp, and Birgit Derntl. "The impact of face masks on emotion recognition performance and perception of threat." PLOS ONE 17, no. 2 (February 11, 2022): e0262840. http://dx.doi.org/10.1371/journal.pone.0262840.

Full text
Abstract:
Facial emotion recognition is crucial for social interaction. However, in times of a global pandemic, where wearing a face mask covering mouth and nose is widely encouraged to prevent the spread of disease, successful emotion recognition may be challenging. In the current study, we investigated whether emotion recognition, assessed by a validated emotion recognition task, is impaired for faces wearing a mask compared to uncovered faces, in a sample of 790 participants between 18 and 89 years (condition mask vs. original). In two more samples of 395 and 388 participants between 18 and 70 years, we assessed emotion recognition performance for faces that are occluded by something other than a mask, i.e., a bubble as well as only showing the upper part of the faces (condition half vs. bubble). Additionally, perception of threat for faces with and without occlusion was assessed. We found impaired emotion recognition for faces wearing a mask compared to faces without mask, for all emotions tested (anger, fear, happiness, sadness, disgust, neutral). Further, we observed that perception of threat was altered for faces wearing a mask. Upon comparison of the different types of occlusion, we found that, for most emotions and especially for disgust, there seems to be an effect that can be ascribed to the face mask specifically, both for emotion recognition performance and perception of threat. Methodological constraints as well as the importance of wearing a mask despite temporarily compromised social interaction are discussed.
APA, Harvard, Vancouver, ISO, and other styles
14

Chaudhari, V. J. "Face Recognition and Emotion Detection." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 4775–77. http://dx.doi.org/10.22214/ijraset.2021.35698.

Full text
Abstract:
Face recognition and facial emotion detection mark a new era of technology; they also indirectly reflect the level of progress in intelligence, security, and the replication of human emotional behaviour. They are mainly used in market research and testing: many companies require a good and accurate testing method that contributes to their development by providing the necessary insights and supporting accurate conclusions. Facial expression recognition technology can be developed through various methods, such as deep learning with a convolutional neural network or with built-in libraries like deepface. The main objective here is to classify each face, based on the emotion shown, into seven categories: anger, disgust, fear, happiness, sadness, surprise, and neutrality. The main objective of this project is to read people's facial expressions and display the product to them, which helps in determining their interest in it. Facial expression recognition technology can also be used in video game testing: certain users are asked to play the game for a specified period, and their expressions and behavior are monitored and analyzed. Game developers use facial expression recognition to get the required insights, draw conclusions, and provide feedback for the final product. In this project, a deep learning approach with convolutional neural networks (CNNs) is used. Neural networks need to be trained with large amounts of data and require higher computational power [8-11], so it takes more time to train the model.[1]
APA, Harvard, Vancouver, ISO, and other styles
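The abstract mentions the deepface library as one route to facial emotion detection. Standard usage of its public API looks like the following; the image path is a placeholder.

```python
# Standard deepface usage for emotion analysis; the image path is a placeholder.
from deepface import DeepFace

# Recent versions return a list of result dicts, one per detected face.
result = DeepFace.analyze(img_path="face.jpg", actions=["emotion"])
print(result[0]["dominant_emotion"])  # e.g., "happy"
```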
15

Farkhod, Akhmedov, Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov, and Young-Im Cho. "Development of Real-Time Landmark-Based Emotion Recognition CNN for Masked Faces." Sensors 22, no. 22 (November 11, 2022): 8704. http://dx.doi.org/10.3390/s22228704.

Full text
Abstract:
Owing to the wide range of emotion recognition applications in our lives, such as mental status assessment, the demand for high-performance emotion recognition approaches remains high. Moreover, the wearing of facial masks was indispensable during the COVID-19 pandemic. In this study, we propose a graph-based emotion recognition method that adopts landmarks on the upper part of the face. Based on the proposed approach, several pre-processing steps were applied; after pre-processing, facial expression features are extracted from facial key points. The main steps of emotion recognition on masked faces include face detection using Haar cascades, landmark implementation through the MediaPipe face mesh model, and model training on seven emotional classes. The FER-2013 dataset was used for model training. An emotion detection model was first developed for non-masked faces; thereafter, landmarks were applied to the upper part of the face. After faces were detected and landmark locations extracted, we captured the coordinates of the emotional-class landmarks and exported them to a comma-separated values (CSV) file, after which the model weights were transferred to the emotional classes. Finally, the landmark-based emotion recognition model for the upper facial parts was tested both on images and in real time using a web camera application. The results showed that the proposed model achieved an overall accuracy of 91.2% for seven emotional classes in the image application. The image-based emotion detection accuracy of the proposed model was relatively higher than that of real-time emotion detection.
APA, Harvard, Vancouver, ISO, and other styles
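The landmark step described above can be reproduced with the MediaPipe face mesh. The sketch below detects a mesh on a still image and writes upper-face landmark coordinates to a CSV file; the landmark index subset is an illustrative assumption, not the paper's exact selection.

```python
# Detect a MediaPipe face mesh and export upper-face landmarks to CSV.
import csv
import cv2
import mediapipe as mp

UPPER_FACE_IDS = list(range(0, 200))  # placeholder subset of the 468 mesh points

image = cv2.imread("masked_face.jpg")
with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
    results = mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    with open("landmarks.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "x", "y"])
        for i in UPPER_FACE_IDS:
            # Coordinates are normalized to the image width/height.
            writer.writerow([i, landmarks[i].x, landmarks[i].y])
```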
16

Hubble, Kelly, Katie Daughters, Antony S. R. Manstead, Aled Rees, Anita Thapar, and Stephanie H. M. van Goozen. "Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected." Journal of the International Neuropsychological Society 23, no. 1 (November 21, 2016): 23–33. http://dx.doi.org/10.1017/s1355617716000886.

Full text
Abstract:
Objectives: Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium intensity facial expressions. Results: Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant (p<.06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces and this was not related to recognition accuracy or face processing time. Conclusions: These findings suggest that OXT-induced enhanced facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23–33)
APA, Harvard, Vancouver, ISO, and other styles
17

Lahera, G., V. de los Ángeles, C. Fernández, M. Bardón, S. Herrera, and A. Fernández-Liria. "Sense of familiarity and face emotion recognition in schizophrenia." European Psychiatry 26, S2 (March 2011): 1427. http://dx.doi.org/10.1016/s0924-9338(11)73132-7.

Full text
Abstract:
Introduction: Patients with schizophrenia show a deficit in emotion recognition through facial expression. Familiarity refers to the implicit memory of past affective experiences; it involves fast cognitive processes and is triggered by certain signals. Objectives: To assess emotion recognition in familiar and unfamiliar faces in a sample of schizophrenic patients and healthy controls. Methods: 18 outpatients diagnosed with schizophrenia (DSM-IV-TR) and 18 healthy volunteers were assessed with the Ekman test of emotion recognition in unfamiliar faces. In addition, each subject was accompanied by 4 familiar people (parents, siblings or friends), who were photographed expressing the 6 basic Ekman emotions. Results: Schizophrenic patients recognized emotions in their relatives' faces worse than in unfamiliar faces, to a greater extent than controls (Mann-Whitney U = 81, p = .01). The patient group showed a lower mean score on the Ekman test (unfamiliar faces) than the control group (16 (SD 2.38) versus 17.82 (SD 2.13); p = 0.03). Regarding familiar faces, the patient group performed worse than the control group (13.22 (3.8) versus 17.18 (2.82); p = 0.00). In both tests, the highest number of errors occurred with the emotions of anger and fear. The patient group showed a lower level of familiarity and emotional valence towards their families (U = 33, p < 0.01). Conclusions: The sense of familiarity may be a factor involved in face emotion recognition, and it may be disturbed in schizophrenia.
APA, Harvard, Vancouver, ISO, and other styles
18

Pandey, Amit, Aman Gupta, and Radhey Shyam. "FACIAL EMOTION DETECTION AND RECOGNITION." International Journal of Engineering Applied Sciences and Technology 7, no. 1 (May 1, 2022): 176–79. http://dx.doi.org/10.33564/ijeast.2022.v07i01.027.

Full text
Abstract:
Facial emotional expression is a part of face recognition. It has always been an easy task for humans, but achieving the same with a computer algorithm is challenging. With the recent and continuous advancements in computer vision and machine learning, it is possible to detect emotions in images, videos, etc. A face expression recognition method based on deep neural networks, especially the convolutional neural network (CNN), combined with image edge detection is proposed. After the facial expression image is normalized, the edge of each layer of the image is retrieved in the convolution process. To preserve the edge structure information of the texture image, the retrieved edge information is superimposed on each feature image. In this research, several datasets are investigated and explored for training expression recognition models. The purpose of this paper is to study face emotion detection and recognition via machine learning algorithms and deep learning, presenting deeper insights into face emotion detection and recognition and highlighting the variables that affect its efficacy.
APA, Harvard, Vancouver, ISO, and other styles
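As a rough illustration of the normalize-then-extract-edges idea summarized above, the sketch below builds an edge map from a normalized face crop and stacks it with the grayscale image as an extra CNN input channel. It is a stand-in, not the paper's implementation; the Canny thresholds are assumptions.

```python
# Normalize a face image, extract edges, and stack them as extra CNN channels.
import cv2
import numpy as np

face = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
face = cv2.resize(face, (48, 48))
face = cv2.equalizeHist(face)               # simple intensity normalization

edges = cv2.Canny(face, 50, 150)            # edge map of the expression image
stacked = np.dstack([face, edges]) / 255.0  # 48x48x2 input preserving edge structure
```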
19

Evers, Kris, Inneke Kerkhof, Jean Steyaert, Ilse Noens, and Johan Wagemans. "No Differences in Emotion Recognition Strategies in Children with Autism Spectrum Disorder: Evidence from Hybrid Faces." Autism Research and Treatment 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/345878.

Full text
Abstract:
Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eyes region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six-to-eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely, anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.
APA, Harvard, Vancouver, ISO, and other styles
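Hybrid stimuli of the kind used in this study can be assembled with basic image manipulation. The sketch below joins the upper half of an emotional face to the lower half of a neutral face from the same model; file names are placeholders and the images are assumed pre-aligned to the same size.

```python
# Build a hybrid face: emotional upper half over a neutral lower half.
import cv2
import numpy as np

emotional = cv2.imread("happy.png")
neutral = cv2.imread("neutral.png")
assert emotional.shape == neutral.shape  # images assumed pre-aligned

mid = emotional.shape[0] // 2
hybrid = np.vstack([emotional[:mid], neutral[mid:]])  # top-emotion hybrid
cv2.imwrite("hybrid.png", hybrid)
```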
20

Tian, Wenqiang. "Personalized Emotion Recognition and Emotion Prediction System Based on Cloud Computing." Mathematical Problems in Engineering 2021 (May 26, 2021): 1–10. http://dx.doi.org/10.1155/2021/9948733.

Full text
Abstract:
Promoting economic development and improving people's quality of life have a lot to do with the continuous improvement of cloud computing technology and the rapid expansion of its applications. Emotions play an important role in all aspects of human life, and it is difficult to remove the influence of inner emotions from people's behavior and reasoning. This article mainly studies a personalized emotion recognition and emotion prediction system based on cloud computing, proposing a method of intelligently identifying users' emotional states through the use of cloud computing. First, an emotion induction experiment is designed to induce the testers' positive, neutral, and negative basic emotional states and to collect cloud data and EEG under the different emotional states. Then, the cloud data is processed and analyzed to extract emotional features. After that, this paper constructs a facial emotion prediction system based on a cloud computing data model, which consists of face detection and facial emotion recognition. The system uses the SVM algorithm for face detection, uses a temporal feature algorithm for facial emotion analysis, and finally uses machine learning classification to classify emotions, so as to identify the user's emotional state through cloud computing technology. Experimental data show that the EEG-based emotion recognition method using time-domain features performs best, has better generalization ability, and improves on traditional methods by 6.3%. The experimental results show that the personalized emotion recognition method based on cloud computing is more effective than traditional methods.
APA, Harvard, Vancouver, ISO, and other styles
21

Yu, Guiping. "Emotion Monitoring for Preschool Children Based on Face Recognition and Emotion Recognition Algorithms." Complexity 2021 (March 2, 2021): 1–12. http://dx.doi.org/10.1155/2021/6654455.

Full text
Abstract:
In this paper, we study face recognition and emotion recognition algorithms to monitor the emotions of preschool children. Whereas previous emotion recognition focused on faces alone, we propose to obtain more comprehensive information from faces, gestures, and contexts. Using the deep learning approach, we design a more lightweight network structure to reduce the number of parameters and save computational resources. There are innovations not only in applications but also in algorithms. Face annotation is performed on the dataset, and a hierarchical sampling method is designed to alleviate the data imbalance present in the dataset. A new feature descriptor, called "oriented gradient histogram from three orthogonal planes," is proposed to characterize facial appearance variations. A new efficient geometric feature is also proposed to capture facial contour variations, and the role of audio methods in emotion recognition is explored. Multifeature fusion can be used to optimally combine the different features. The experimental results show that the method is very effective compared to other recent methods in dealing with facial expression recognition problems in videos, in both laboratory-controlled and outdoor environments. The method was evaluated on expression detection in a facial expression database; the results are compared with data from previous studies and demonstrate the effectiveness of the proposed new method.
APA, Harvard, Vancouver, ISO, and other styles
22

Surcinelli, Paola, Bruno Baldaro, Antonio Balsamo, Roberto Bolzani, Monia Gennari, and Nicolino C. F. Rossi. "Emotion Recognition and Expression in Young Obese Participants: Preliminary Study." Perceptual and Motor Skills 105, no. 2 (October 2007): 477–82. http://dx.doi.org/10.2466/pms.105.2.477-482.

Full text
Abstract:
This study of the presence of alexithymic characteristics in obese adolescents and preadolescents tested the hypothesis of whether they showed impaired recognition and expression of emotion. The sample included 30 obese young participants and a control group of 30 participants of normal weight for their ages. Stimuli, 42 faces representing seven emotional expressions, were shown to participants who identified the emotion expressed in the face. The Level of Emotional Awareness Scale was adapted for children to evaluate their ability to describe their emotions. Young obese participants had significantly lower scores than control participants, but no differences were found in recognition of emotion. The lack of words to describe emotions might suggest a greater prevalence of alexithymic characteristics in the obese participants, but the hypothesis of a general deficit in the processing of emotional experiences was not supported.
APA, Harvard, Vancouver, ISO, and other styles
23

Fuchs, Marla, Anette Kersting, Thomas Suslow, and Charlott Maria Bodenschatz. "Recognizing and Looking at Masked Emotional Faces in Alexithymia." Behavioral Sciences 14, no. 4 (April 18, 2024): 343. http://dx.doi.org/10.3390/bs14040343.

Full text
Abstract:
Alexithymia is a clinically relevant personality construct characterized by difficulties identifying and communicating one’s emotions and externally oriented thinking. Alexithymia has been found to be related to poor emotion decoding and diminished attention to the eyes. The present eye tracking study investigated whether high levels of alexithymia are related to impairments in recognizing emotions in masked faces and reduced attentional preference for the eyes. An emotion recognition task with happy, fearful, disgusted, and neutral faces with face masks was administered to high-alexithymic and non-alexithymic individuals. Hit rates, latencies of correct responses, and fixation duration on eyes and face mask were analyzed as a function of group and sex. Alexithymia had no effects on accuracy and speed of emotion recognition. However, alexithymic men showed less attentional preference for the eyes relative to the mask than non-alexithymic men, which was due to their increased attention to face masks. No fixation duration differences were observed between alexithymic and non-alexithymic women. Our data indicate that high levels of alexithymia might not have adverse effects on the efficiency of emotion recognition from faces wearing masks. Future research on gaze behavior during facial emotion recognition in high alexithymia should consider sex as a moderating variable.
APA, Harvard, Vancouver, ISO, and other styles
24

Bonfiglio, Natale Salvatore, Roberta Renati, and Gabriella Bottini. "Decoding Emotion in Drug Abusers: Evidence for Face and Body Emotion Recognition and for Disgust Emotion." European Journal of Investigation in Health, Psychology and Education 12, no. 9 (September 17, 2022): 1427–40. http://dx.doi.org/10.3390/ejihpe12090099.

Full text
Abstract:
Background: Different drugs damage the frontal cortices, particularly the prefrontal areas involved in both emotional and cognitive functions, with a consequence of emotion decoding deficits for people with substance abuse. The present study aimed to explore the cognitive impairments in drug abusers through facial, body and disgust emotion recognition, expanding the investigation of emotion processing by measuring accuracy and response velocity. Methods: We enrolled 13 patients addicted to cocaine and 12 addicted to alcohol attending treatment services in Italy, comparing them with 33 matched controls. Facial emotion and body posture recognition tasks, a disgust rating task and the Barratt Impulsiveness Scale were included in the experimental assessment. Results: We found that emotional processes are differently influenced by cocaine and alcohol, suggesting that these substances impact diverse cerebral systems. Conclusions: Drug abusers seem to be less accurate in the elaboration of facial, body and disgust emotions. Considering that the participants were not cognitively impaired, our data support the hypothesis that emotional impairments emerge independently from damage to cognitive functions.
APA, Harvard, Vancouver, ISO, and other styles
25

Dores, Artemisa R., Fernando Barbosa, Cristina Queirós, Irene P. Carvalho, and Mark D. Griffiths. "Recognizing Emotions through Facial Expressions: A Largescale Experimental Study." International Journal of Environmental Research and Public Health 17, no. 20 (October 12, 2020): 7420. http://dx.doi.org/10.3390/ijerph17207420.

Full text
Abstract:
Experimental research examining emotional processes is typically based on the observation of images with affective content, including facial expressions. Future studies will benefit from databases with emotion-inducing stimuli in which characteristics of the stimuli potentially influencing results can be controlled. This study presents Portuguese normative data for the identification of seven facial expressions of emotions (plus a neutral face), on the Radboud Faces Database (RaFD). The effect of participants’ gender and models’ sex on emotion recognition was also examined. Participants (N = 1249) were exposed to 312 pictures of white adults displaying emotional and neutral faces with a frontal gaze. Recognition agreement between the displayed and participants’ chosen expressions ranged from 69% (for anger) to 97% (for happiness). Recognition levels were significantly higher among women than among men only for anger and contempt. The emotion recognition was higher either in female models or in male models depending on the emotion. Overall, the results show high recognition levels of the facial expressions presented, indicating that the RaFD provides adequate stimuli for studies examining the recognition of facial expressions of emotion among college students. Participants’ gender had a limited influence on emotion recognition, but the sex of the model requires additional consideration.
APA, Harvard, Vancouver, ISO, and other styles
26

Grundmann, Felix, Kai Epstude, and Susanne Scheibe. "Face masks reduce emotion-recognition accuracy and perceived closeness." PLOS ONE 16, no. 4 (April 23, 2021): e0249792. http://dx.doi.org/10.1371/journal.pone.0249792.

Full text
Abstract:
Face masks became the symbol of the global fight against the coronavirus. While face masks’ medical benefits are clear, little is known about their psychological consequences. Drawing on theories of the social functions of emotions and rapid trait impressions, we tested hypotheses on face masks’ effects on emotion-recognition accuracy and social judgments (perceived trustworthiness, likability, and closeness). Our preregistered study with 191 German adults revealed that face masks diminish people’s ability to accurately categorize an emotion expression and make target persons appear less close. Exploratory analyses further revealed that face masks buffered the negative effect of negative (vs. non-negative) emotion expressions on perceptions of trustworthiness, likability, and closeness. Associating face masks with the coronavirus’ dangers predicted higher perceptions of closeness for masked but not for unmasked faces. By highlighting face masks’ effects on social functioning, our findings inform policymaking and point at contexts where alternatives to face masks are needed.
APA, Harvard, Vancouver, ISO, and other styles
27

Singh, Ankit. "Real-Time Emotion Recognition System Using Facial Expressions." International Journal of Scientific Research in Engineering and Management 08, no. 04 (April 19, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem31021.

Full text
Abstract:
This paper describes an emotion detection system based on real-time detection using image processing, with human-friendly machine interaction. Facial detection has been around for decades. Taking a step ahead, human expressions displayed by the face and felt by the brain, captured via video, electric signal, or image form, can be approximated. Recognizing emotions from images or videos is a difficult task for the human eye and challenging for machines; thus, detection of emotion by a machine requires many image processing techniques for feature extraction. This paper proposes a system with two main processes: face detection and facial expression recognition (FER). This research focuses on an experimental study of identifying facial emotions. The flow of an emotion detection system includes image acquisition, pre-processing of the image, face detection, feature extraction, and classification. To identify emotions, the emotion detection system uses a KNN classifier for image classification and the Haar cascade algorithm, an object detection algorithm, to identify faces in an image or a real-time video. The system works by taking live images from the webcam. The objective of this research is to produce an automatic facial emotion detection system; in these experiments, the system could identify several people who are sad, surprised, happy, fearful, angry, disgusted, etc.
APA, Harvard, Vancouver, ISO, and other styles
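The two-stage pipeline named in this abstract, Haar-cascade face detection followed by KNN emotion classification, can be prototyped as follows. The training data here are random stand-ins so the sketch runs; a real system would fit the classifier on labeled face crops.

```python
# Haar-cascade face detection followed by KNN emotion classification on webcam frames.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Stand-in training data so the sketch runs; replace with labeled 48x48 face crops.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(np.random.rand(20, 48 * 48), np.random.randint(0, 7, 20))

cap = cv2.VideoCapture(0)  # live webcam feed, as in the paper
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).reshape(1, -1) / 255.0
        print(knn.predict(crop))  # predicted emotion class index
cap.release()
```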
28

Franzoni, Valentina, Giulio Biondi, Damiano Perri, and Osvaldo Gervasi. "Enhancing Mouth-Based Emotion Recognition Using Transfer Learning." Sensors 20, no. 18 (September 13, 2020): 5222. http://dx.doi.org/10.3390/s20185222.

Full text
Abstract:
This work concludes the first study on mouth-based emotion recognition while adopting a transfer learning approach. Transfer learning results are paramount for mouth-based emotion recognition, because few datasets are available, and most of them include emotional expressions simulated by actors, instead of adopting real-world categorisation. Using transfer learning, we can use less training data than training a whole network from scratch, and thus more efficiently fine-tune the network with emotional data and improve the convolutional neural network's accuracy in the desired domain. The proposed approach aims at improving emotion recognition dynamically, taking into account not only new scenarios but also situations modified relative to the initial training phase, because the image of the mouth can be available even when the whole face is visible only from an unfavourable perspective. Typical applications include automated supervision of bedridden critical patients in a healthcare management environment, and portable applications supporting disabled users having difficulties in seeing or recognising facial emotions. This achievement builds on previous preliminary works on mouth-based emotion recognition using deep learning, and has the further benefit of having been tested and compared to a set of other networks using an extensive dataset for face-based emotion recognition, well known in the literature. The accuracy of mouth-based emotion recognition was also compared to the corresponding full-face emotion recognition; we found that the loss in accuracy is mostly compensated by consistent performance in the visual emotion recognition domain. We can, therefore, state that our method proves the importance of mouth detection in the complex process of emotion recognition.
APA, Harvard, Vancouver, ISO, and other styles
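A minimal transfer-learning sketch in the spirit of this abstract: reuse a network pre-trained on ImageNet and fine-tune only a small head on mouth-region crops. VGG16 stands in for whichever backbone the authors actually used, and the input size and class count are assumptions.

```python
# Freeze an ImageNet-pretrained backbone and train a small emotion head on top.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(96, 96, 3))
base.trainable = False  # keep pre-trained features fixed during fine-tuning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),  # 7 emotion classes assumed
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```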
29

Smitha, E. S., S. Sendhilkumar, and G. S. Mahalakshmi. "Ensemble Convolution Neural Network for Robust Video Emotion Recognition Using Deep Semantics." Scientific Programming 2023 (May 17, 2023): 1–21. http://dx.doi.org/10.1155/2023/6859284.

Full text
Abstract:
Human emotion recognition from videos involves accurately interpreting facial features while coping with face alignment, occlusion, and shape illumination problems; dynamic emotion recognition matters even more, and the situation becomes more challenging with multiple persons and fast-moving faces. In this work, an ensemble max-rule method is proposed. To obtain the ensemble results, three primary models, CNNHOG-KLT, CNNHaar-SVM, and CNNPATCH, are developed in parallel to detect human emotions from vital frames extracted from videos. The first method uses the HoG and KLT algorithms for face detection and tracking. The second method uses a Haar cascade and SVM to detect the face. Template matching is used for face detection in the third method. A convolutional neural network (CNN) is used for emotion classification in CNNHOG-KLT and CNNHaar-SVM. To handle occluded images, a patch-based CNN is introduced for emotion recognition in CNNPATCH. Finally, all three methods are ensembled using the max rule. The resulting CNNENSEMBLE achieves 92.07% emotion classification accuracy, considering both occluded and non-occluded facial videos.
APA, Harvard, Vancouver, ISO, and other styles
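The max-rule fusion step is simple to state in code: take the element-wise maximum of the three models' softmax outputs and pick the winning class. The probability vectors below are placeholders standing in for the three models' outputs.

```python
# Max-rule fusion of three classifiers' softmax outputs (placeholder values).
import numpy as np

p_hog_klt  = np.array([0.10, 0.70, 0.05, 0.05, 0.05, 0.03, 0.02])
p_haar_svm = np.array([0.20, 0.50, 0.10, 0.05, 0.05, 0.05, 0.05])
p_patch    = np.array([0.05, 0.60, 0.15, 0.05, 0.05, 0.05, 0.05])

fused = np.maximum.reduce([p_hog_klt, p_haar_svm, p_patch])  # max rule
predicted_class = int(np.argmax(fused))  # winning emotion class index
```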
30

Hysenaj, Arben, Mariel Leclère, Bernard Tahirbegolli, Dorentina Kuqi, Albane Isufi, Lulejete Prekazi, Nevzat Shemsedini, Driton Maljichi, and Rina Meha. "Accuracy and speed of emotion recognition with face masks." Europe’s Journal of Psychology 20, no. 1 (February 29, 2024): 16–24. http://dx.doi.org/10.5964/ejop.11789.

Full text
Abstract:
Wearing face masks is one of the important actions taken to prevent the spread of COVID-19 among people around the world. Nevertheless, masks limit social interaction, and this impacts the accuracy and speed of emotional perception. In the present study, we assess the impact of mask-wearing on the accuracy and speed of emotion recognition. Fifty people (female n = 39, male n = 11) aged 19–28 participated in the study (M = 21.1 years). We used frontal photos, with a grey background, of a Kosovan woman who belonged to the same age group as the participants. Twelve different pictures were used, showing the emotional states of fear, joy, sadness, anger, neutrality, and disgust, in masked and unmasked conditions. The experiment was conducted in a controlled laboratory setting. Participants were faster at identifying emotions like joy (1.507 ms) and neutral (1.971 ms). Participants were more accurate in identifying emotions in unmasked faces (M = 85.7%) than in masked faces (M = 73.8%), F(1,98) = 20.73, MSE = 1027.66, p ≤ .001, partial η² = 0.17. Masks create confusion and reduce the accuracy and speed of emotion detection, which may have a notable impact on social interactions among people.
APA, Harvard, Vancouver, ISO, and other styles
31

Léveillé, Edith, Samuel Guay, Caroline Blais, Peter Scherzer, and Louis De Beaumont. "Sex-Related Differences in Emotion Recognition in Multi-concussed Athletes." Journal of the International Neuropsychological Society 23, no. 1 (December 15, 2016): 65–77. http://dx.doi.org/10.1017/s1355617716001004.

Full text
Abstract:
Objectives: Concussion is defined as a complex pathophysiological process affecting the brain. Although the cumulative and long-term effects of multiple concussions are now well documented for cognitive and motor function, little is known about their effects on emotion recognition. Recent studies have suggested that concussion can result in emotional sequelae, particularly in females and multi-concussed athletes. The objective of this study was to investigate sex-related differences in emotion recognition in asymptomatic male and female multi-concussed athletes. Methods: We tested 28 control athletes (15 males) and 22 multi-concussed athletes (10 males) more than a year since the last concussion. Participants completed the Post-Concussion Symptom Scale, the Beck Depression Inventory-II, the Beck Anxiety Inventory, a neuropsychological test battery and a morphed emotion recognition task. Pictures of a male face expressing basic emotions (anger, disgust, fear, happiness, sadness, surprise) morphed with another emotion were randomly presented. After each face presentation, participants were asked to indicate the emotion expressed by the face. Results: Results revealed significant sex by group interactions in accuracy and intensity threshold for negative emotions, together with significant main effects of emotion and group. Conclusions: Male concussed athletes were significantly impaired in recognizing negative emotions and needed more emotional intensity to correctly identify these emotions, compared to same-sex controls. In contrast, female concussed athletes performed similarly to same-sex controls. These findings suggest that sex significantly modulates concussion effects on emotional facial expression recognition. (JINS, 2017, 23, 65–77)
APA, Harvard, Vancouver, ISO, and other styles
32

Bick, Johanna, Rhiannon Luyster, Nathan A. Fox, Charles H. Zeanah, and Charles A. Nelson. "Effects of early institutionalization on emotion processing in 12-year-old youth." Development and Psychopathology 29, no. 5 (November 22, 2017): 1749–61. http://dx.doi.org/10.1017/s0954579417001377.

Full text
Abstract:
We examined facial emotion recognition in 12-year-olds in a longitudinally followed sample of children with and without exposure to early life psychosocial deprivation (institutional care). Half of the institutionally reared children were randomized into foster care homes during the first years of life. Facial emotion recognition was examined in a behavioral task using morphed images. This same task had been administered when children were 8 years old. Neutral facial expressions were morphed with happy, sad, angry, and fearful emotional facial expressions, and children were asked to identify the emotion of each face, which varied in intensity. Consistent with our previous report, we show that some areas of emotion processing, involving the recognition of happy and fearful faces, are affected by early deprivation, whereas other areas, involving the recognition of sad and angry faces, appear to be unaffected. We also show that early intervention can have a lasting positive impact, normalizing developmental trajectories of processing negative emotions (fear) into the late childhood/preadolescent period.
APA, Harvard, Vancouver, ISO, and other styles
33

Künecke, Janina, Oliver Wilhelm, and Werner Sommer. "Emotion Recognition in Nonverbal Face-to-Face Communication." Journal of Nonverbal Behavior 41, no. 3 (April 5, 2017): 221–38. http://dx.doi.org/10.1007/s10919-017-0255-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Athavle, Madhuri. "Music Recommendation Based on Face Emotion Recognition." Journal of Informatics Electrical and Electronics Engineering (JIEEE) 2, no. 2 (June 9, 2021): 1–11. http://dx.doi.org/10.54060/jieee/002.02.018.

Full text
Abstract:
We propose a new approach for playing music automatically using facial emotion. Most of the existing approaches involve playing music manually, using wearable computing devices, or classifying based on audio features; instead, we propose to replace manual sorting and playing. We have used a convolutional neural network for emotion detection; for music recommendation, Pygame and Tkinter are used. Our proposed system reduces the computational time involved in obtaining results and the overall cost of the system, thereby increasing its overall accuracy. Testing of the system is done on the FER2013 dataset. Facial expressions are captured using a built-in camera, and feature extraction is performed on the input face images to detect emotions such as happy, angry, sad, surprise, and neutral. A music playlist is then generated automatically by identifying the current emotion of the user. The system yields better performance in terms of computational time compared to the algorithm in the existing literature.
APA, Harvard, Vancouver, ISO, and other styles
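The recommendation step this abstract describes reduces to a lookup from the predicted emotion to a playlist, followed by playback. The sketch below uses pygame.mixer, which the abstract mentions; the playlists and file paths are placeholders, and the emotion label would come from the CNN.

```python
# Map a predicted emotion to a playlist and play a track with pygame.mixer.
import random
import pygame

PLAYLISTS = {  # placeholder playlists keyed by emotion label
    "happy": ["upbeat1.mp3", "upbeat2.mp3"],
    "sad": ["calm1.mp3"],
    "angry": ["soothing1.mp3"],
    "surprise": ["energetic1.mp3"],
    "neutral": ["ambient1.mp3"],
}

def play_for(emotion):
    pygame.mixer.init()
    pygame.mixer.music.load(random.choice(PLAYLISTS[emotion]))
    pygame.mixer.music.play()

play_for("happy")  # in the full system, the label comes from the emotion CNN
```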
35

Pratomo, Awang Hendrianto, Mangaras Yanu Florestyanto, Y. I. Sania, B. Ihsan, H. H. Triharminto, and Leonel Hernandez. "Image processing for student emotion monitoring based on fisherface method." Science in Information Technology Letters 2, no. 1 (May 31, 2021): 43–53. http://dx.doi.org/10.31763/sitech.v2i1.690.

Full text
Abstract:
Monitoring academic emotion is an activity that continuously provides information about students' academic emotions in class. Some research in the image processing field has been done on face recognition, but there have not been many studies on image processing to detect student emotions. This paper aims to determine the accuracy of facial recognition with the Fisherface method and of academic emotion recognition by monitoring changes in students' facial expressions using facial landmarks at various distances, camera angles, lighting conditions, and attributes worn by the subjects. The proposed method uses facial image extraction based on the Fisherface method for presence detection. Face identification is then made with Euclidean distance by finding the smallest distance between training data and test data. Emotion detection is done with facial landmarks and mathematical calculations to detect drowsiness, focus, and lack of focus on the face. A RESTful web service is used as the communication architecture to integrate data. The application with the Fisherface method achieves 96% face recognition accuracy, while facial landmarks and mathematical calculations detect emotions with 84% accuracy.
APA, Harvard, Vancouver, ISO, and other styles
36

Vagmare, Rishikesh. "Emotion Recognition with CNN." International Journal for Research in Applied Science and Engineering Technology 11, no. 11 (November 30, 2023): 2738–43. http://dx.doi.org/10.22214/ijraset.2023.57189.

Full text
Abstract:
Emotion is a subjective phenomenon; utilizing the knowledge and science behind tagged data and extracting the components that comprise it has been a difficult challenge. With the advancement of deep learning in computer vision, emotion identification has become a popular research topic. This project presents feature extraction of facial expressions using a neural network for the recognition of various facial emotions (sad, happy, neutral, angry, surprised, fear). A convolutional neural network, which has excellent recognition of image features, has been used to achieve an accuracy of 75%. A Haar cascade has been used to find the region that contains the face, so the model only has to work with the face region.
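A minimal sketch of the Haar-cascade localization step described here, assuming the frontal-face cascade bundled with OpenCV and a 48×48 CNN input as is conventional for FER2013; the trained CNN itself is not reproduced.

```python
# Sketch of face localization with a Haar cascade so the CNN only sees
# the face region; the input image path is hypothetical.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5):
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # FER2013 input size
    # `face` would now be normalized and fed to the trained CNN.
```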
APA, Harvard, Vancouver, ISO, and other styles
37

Van Rheenen, Tamsyn E., Nicole Joshua, David J. Castle, and Susan L. Rossell. "Configural and Featural Face Processing Influences on Emotion Recognition in Schizophrenia and Bipolar Disorder." Journal of the International Neuropsychological Society 23, no. 3 (February 3, 2017): 287–91. http://dx.doi.org/10.1017/s1355617716001211.

Full text
Abstract:
Objectives: Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts. Methods: Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies; part-based and whole-face emotion recognition. Results: Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only. Conclusions: Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287–291)
APA, Harvard, Vancouver, ISO, and other styles
38

Anand, Aditi, Rajashvi Srivastava, Archismaan Banerjee, and Arpit Khare. "Emotion Recognition System." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 709–12. http://dx.doi.org/10.22214/ijraset.2022.42303.

Full text
Abstract:
In today's world, the face conveys a great deal of information visually through expressions, so emotion recognition systems are an important focus in the area of human-computer interaction. Our emotions are conveyed through the activation of distinct sets of facial muscles. These are often subtle, yet complex, signals in an expression that carry abundant information about our state of mind. For emotion recognition (classification), we design a supervised deep neural network (DNN) that gives computers the ability to make inferences about emotional states. The main objective of our project is to apply deep learning methods, such as convolutional neural networks, to identify general human emotions.
APA, Harvard, Vancouver, ISO, and other styles
39

Juntao Zhao. "Multichannel Fusion Based on Modified CNN for Image Emotion Recognition." Journal of Computers (電腦學刊) 33, no. 1 (February 2022): 13–19. http://dx.doi.org/10.53106/199115992022023301002.

Full text
Abstract:
Social media networks are an integral part of people's daily lives. Users share images and texts to express their emotions and opinions, and analyzing this content can help understand and predict user behavior for marketing, public-opinion monitoring, and personalized recommendation. Weibo, WeChat, and other social media are important channels of self-expression, and images are more intuitive than text, so more scholars have begun to study image emotion analysis. Current image emotion analysis methods pay little attention to the influence of salient objects and faces on image emotion expression. We therefore propose a multichannel fusion method based on a modified CNN for image emotion recognition. First, the salient-object and face regions are detected in the whole image. A feature pyramid is then used to improve the CNN that recognizes the salient object's emotion, and a weighted-loss CNN for emotion recognition is constructed with a multi-layer supervision module. Finally, the salient-object emotion, the face emotion, and the emotion recognized directly from the whole image are fused to obtain the final emotion classification. Experimental results show that the proposed method improves the accuracy of image emotion recognition.
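The final fusion step lends itself to a short sketch: per-class probabilities from the three channels are combined by weighted averaging. The class scores and channel weights below are illustrative assumptions, not values from the paper.

```python
# Sketch of late fusion: softmax outputs from the saliency-region,
# face-region, and whole-image CNNs combined by weighted averaging.
import numpy as np

saliency_probs = np.array([0.1, 0.7, 0.2])  # hypothetical per-class scores
face_probs     = np.array([0.2, 0.6, 0.2])
global_probs   = np.array([0.3, 0.4, 0.3])

weights = np.array([0.3, 0.4, 0.3])         # assumed channel weights
fused = (weights[0] * saliency_probs
         + weights[1] * face_probs
         + weights[2] * global_probs)
print("predicted emotion class:", int(np.argmax(fused)))
```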
APA, Harvard, Vancouver, ISO, and other styles
40

Verma, Teena, Sahil Niranjan, Abhinav K. Gupt, Vinay Kumar, and Yash Vashist. "Emotional Recognition Using Facial Expressions and Speech Analysis." International Journal of Engineering Applied Sciences and Technology 6, no. 7 (November 1, 2021): 176–80. http://dx.doi.org/10.33564/ijeast.2021.v06i07.028.

Full text
Abstract:
Emotion can be recognized from many sources, including text, speech, hand gestures, body language, and facial expressions, yet most current recognition systems use only one of these sources. People's feelings change from second to second, and a single channel may not reflect emotions accurately. This research therefore argues for understanding and exploring people's feelings through several channels at once, specifically speech and the face. We use audio and video inputs to develop an ensemble model that gathers information from all of these sources and presents it in a clear and interpretable way. In the proposed framework, emotions can be detected from speech, from facial expressions, or from both. By improving emotion-recognition accuracy, the proposed multisensory emotion recognition system can help make human-computer interaction more natural.
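As a hedged sketch of the multimodal idea, the snippet below extracts a compact MFCC descriptor from speech with librosa and averages hypothetical class probabilities from an audio model and a face model; the paper does not specify its ensemble at this level of detail, so the probability vectors are stand-ins.

```python
# Sketch of audio-visual ensembling: an MFCC utterance descriptor feeds a
# speech model whose class probabilities are averaged with a face model's.
import numpy as np
import librosa

y, sr = librosa.load("speech.wav")                 # hypothetical clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
audio_feature = mfcc.mean(axis=1)                  # 13-dim utterance vector

# Hypothetical per-class probabilities from each trained model.
audio_probs = np.array([0.2, 0.5, 0.3])            # from the speech model
video_probs = np.array([0.1, 0.7, 0.2])            # from the face model
ensemble = (audio_probs + video_probs) / 2
print("fused emotion class:", int(np.argmax(ensemble)))
```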
APA, Harvard, Vancouver, ISO, and other styles
41

Benuzzi, Francesca, Daniela Ballotta, Claudia Casadio, Vanessa Zanelli, Carlo Adolfo Porro, Paolo Frigio Nichelli, and Fausta Lui. "“When You’re Smiling”: How Posed Facial Expressions Affect Visual Recognition of Emotions." Brain Sciences 13, no. 4 (April 16, 2023): 668. http://dx.doi.org/10.3390/brainsci13040668.

Full text
Abstract:
Facial imitation occurs automatically during the perception of an emotional facial expression, and preventing it may interfere with the accuracy of emotion recognition. In the present fMRI study, we evaluated the effect of posing a facial expression on the recognition of ambiguous facial expressions. Since facial activity is affected by various factors, such as empathic aptitudes, the Interpersonal Reactivity Index (IRI) questionnaire was administered and scores were correlated with brain activity. Twenty-six healthy female subjects took part in the experiment. The volunteers were asked to pose a facial expression (happy, disgusted, neutral), then to watch an ambiguous emotional face, finally to indicate whether the emotion perceived was happiness or disgust. As stimuli, blends of happy and disgusted faces were used. Behavioral results showed that posing an emotional face increased the percentage of congruence with the perceived emotion. When participants posed a facial expression and perceived a non-congruent emotion, a neural network comprising bilateral anterior insula was activated. Brain activity was also correlated with empathic traits, particularly with empathic concern, fantasy and personal distress. Our findings support the idea that facial mimicry plays a crucial role in identifying emotions, and that empathic emotional abilities can modulate the brain circuits involved in this process.
APA, Harvard, Vancouver, ISO, and other styles
42

Grace, Sally A., Wei Lin Toh, Ben Buchanan, David J. Castle, and Susan L. Rossell. "Impaired Recognition of Negative Facial Emotions in Body Dysmorphic Disorder." Journal of the International Neuropsychological Society 25, no. 08 (May 17, 2019): 884–89. http://dx.doi.org/10.1017/s1355617719000419.

Full text
Abstract:
Objectives: Patients with body dysmorphic disorder (BDD) have difficulty in recognising facial emotions, and there is evidence to suggest that there is a specific deficit in identifying negative facial emotions, such as sadness and anger. Methods: This study investigated facial emotion recognition in 19 individuals with BDD compared with 21 healthy control participants who completed a facial emotion recognition task, in which they were asked to identify emotional expressions portrayed in neutral, happy, sad, fearful, or angry faces. Results: Compared to the healthy control participants, the BDD patients were generally less accurate in identifying all facial emotions but showed specific deficits for negative emotions. The BDD group made significantly more errors when identifying neutral, angry, and sad faces than healthy controls; and were significantly slower at identifying neutral, angry, and happy faces. Conclusions: These findings add to previous face-processing literature in BDD, suggesting deficits in identifying negative facial emotions. There are treatment implications as future interventions would do well to target such deficits.
APA, Harvard, Vancouver, ISO, and other styles
43

Ziccardi, Stefano, Francesco Crescenzo, and Massimiliano Calabrese. "“What Is Hidden behind the Mask?” Facial Emotion Recognition at the Time of COVID-19 Pandemic in Cognitively Normal Multiple Sclerosis Patients." Diagnostics 12, no. 1 (December 27, 2021): 47. http://dx.doi.org/10.3390/diagnostics12010047.

Full text
Abstract:
Social cognition deficits have been described in people with multiple sclerosis (PwMS), even in the absence of global cognitive impairment, predominantly affecting the ability to adequately process emotions from human faces. The COVID-19 pandemic has forced people to wear face masks that might interfere with facial emotion recognition. Therefore, in the present study, we aimed to investigate the ability of PwMS to recognize emotions from faces wearing masks. We enrolled a total of 42 cognitively normal relapsing–remitting PwMS and a matched group of 20 healthy controls (HCs). Participants underwent a facial emotion recognition task in which they had to recognize, from faces either wearing or not wearing surgical masks, which of the six basic emotions (happiness, anger, fear, sadness, surprise, disgust) was presented. Results showed that face masks negatively affected emotion recognition in all participants (p < 0.001); in particular, PwMS showed globally worse accuracy than HCs (p = 0.005), driven mainly by the "no mask" (p = 0.021) rather than the "masked" (p = 0.064) condition. Considering individual emotions, PwMS showed a selective impairment in the recognition of fear, compared with HCs, in both conditions investigated ("masked": p = 0.023; "no mask": p = 0.016). Face masks also negatively affected response times (p < 0.001); in particular, PwMS were globally faster than HCs (p = 0.024), especially in the "masked" condition (p = 0.013). A detailed characterization of the performance of PwMS and HCs in terms of accuracy and response speed is also proposed. These results show the effect of face masks on the ability of PwMS to process facial emotions, compared with HCs. Healthcare professionals working with PwMS during the COVID-19 outbreak should take this effect into consideration in their clinical practice. Implications for the everyday life of PwMS are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
44

Vinayak, A., and Rachana R. Babu. "Facial Emotion Recognition." YMER Digital 21, no. 05 (May 23, 2022): 1010–15. http://dx.doi.org/10.37896/ymer21.05/b5.

Full text
Abstract:
Humans express their mood, and sometimes what they need, through facial expressions. This project traces a person's mood using a real-time recognition system that detects the emotion, whether a smiling face or a face full of anger. Facial emotion recognition is a useful task and can serve as a base for many real-time applications, for example collecting feedback on services and food at restaurants and hotels through customers' moods. It can also be impactful in the military domain, helping to recognize people's behaviour in border areas and identify suspects among them. The project uses various machine-learning and deep-learning algorithms, with libraries such as Keras, OpenCV, and Matplotlib. Image processing is used to classify the universal emotions: neutral, surprise, sad, angry, happy, disgust, and fear. The project consists of two modules: (i) processing and generating the model for the application using different algorithms, and (ii) an application that uses the model, via OpenCV, to recognize emotions. A set of values obtained after processing the extracted feature points is given as input to recognize the emotion. Keywords: facial emotion recognition, deep neural networks, automatic recognition database
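Module (ii) above, a real-time OpenCV application driving a trained model, can be roughly sketched as follows; the weights file name, input size, and label order are assumptions rather than the project's actual artifacts.

```python
# Sketch of a real-time loop: webcam frames -> Haar face crop -> trained
# Keras model. "model.h5" and the label order are assumptions.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]
model = load_model("model.h5")
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)
        cv2.putText(frame, EMOTIONS[int(np.argmax(probs))], (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```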
APA, Harvard, Vancouver, ISO, and other styles
45

Varyani, Harsha, and R. B. Late. "A Survey on Emotion Based Music Player through Face Recognition System." International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (January 31, 2023): 179–82. http://dx.doi.org/10.22214/ijraset.2023.48529.

Full text
Abstract:
This research constructs a face-emotion framework that can examine fundamental human facial expressions. The suggested approach classifies a person's mood and then plays an audio file that matches the detected emotion. First, the system captures the human face and performs face detection. The face is then recognized using attribute-extraction techniques, so the person's emotion can be identified from image features. Signature points are located by extracting features of the eyes, mouth, and eyebrows. If the input face matches a face in the emotion dataset, the individual's feelings are detected and the corresponding emotional audio file is played. Training with a small set of characteristic faces can achieve recognition under varying environmental conditions. A simple, effective, and reliable solution is proposed; in the field of identification and detection, such a system plays a very important part.
APA, Harvard, Vancouver, ISO, and other styles
46

Millichap, J. Gordon. "Febrile Seizures and Face Emotion Recognition." Pediatric Neurology Briefs 27, no. 11 (November 1, 2013): 83. http://dx.doi.org/10.15844/pedneurbriefs-27-11-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Del-Ben, C. M., C. A. Q. Ferreira, W. C. Alves-Neto, and F. G. Graeff. "Serotonergic modulation of face-emotion recognition." Brazilian Journal of Medical and Biological Research 41, no. 4 (April 2008): 263–69. http://dx.doi.org/10.1590/s0100-879x2008000400002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Hasan, Mustafa Asaad. "Facial Human Emotion Recognition by Using YOLO Faces Detection Algorithm." JOINCS (Journal of Informatics, Network, and Computer Science) 6, no. 2 (November 30, 2023): 32–38. http://dx.doi.org/10.21070/joincs.v6i2.1629.

Full text
Abstract:
Emotion recognition has gained importance recently because facial expression is a form of interpersonal nonverbal communication used in a variety of real-world contexts, including human-machine interaction, safety, and health. The best elements of a human face must be extracted in order to predict the correct emotion, which makes this task extremely difficult. In this work, we provide a new structural model to predict human emotion from the face. The human face is found using the YOLO face-detection technique and its attributes are extracted; these features then classify the face image into one of seven emotions: neutral, happy, sad, angry, surprised, fear, or disgust. The experiments demonstrated the robustness and speed of the suggested structure. This paper made use of the FER2013 dataset, and the experimental findings showed that the proposed system's accuracy was 94%.
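A sketch of the detection stage using the ultralytics YOLO API appears below; the face-trained weights file name is an assumption (the paper does not release its model), and the downstream seven-class classifier is a hypothetical stand-in.

```python
# Sketch of YOLO-based face localization; "yolov8n-face.pt" is an assumed
# face-detection weights file, not the paper's model.
from ultralytics import YOLO
import cv2

detector = YOLO("yolov8n-face.pt")
img = cv2.imread("photo.jpg")        # hypothetical input image

for result in detector(img):
    for box in result.boxes.xyxy:    # one (x1, y1, x2, y2) per detected face
        x1, y1, x2, y2 = map(int, box)
        face = img[y1:y2, x1:x2]
        # `face` would be passed to the seven-class emotion classifier.
```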
APA, Harvard, Vancouver, ISO, and other styles
49

Mahalim, Vaishnavi Sanjay, Seema Goroba Admane, Divya Vinod Kundgar, and Ankit Hirday Narayan Singh. "Development of Real-Time Emotion Recognition System Using Facial Expressions." International Journal of Scientific Research in Engineering and Management 07, no. 10 (October 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem26415.

Full text
Abstract:
This research presents a real-time emotion recognition system that combines human-friendly machine interaction with image processing. Facial detection has been available for many years; going further, the emotions that people express on their faces and experience in their brains can be modeled through video, electrical signals, or images. Because detecting emotions from images or videos is hard for computers, and difficult even for the human eye, machine emotion detection requires a variety of image-processing approaches for feature extraction. The approach proposed in this paper consists of two primary processes: face detection and facial expression recognition (FER). The experimental investigation of facial emotion recognition is the main topic of this study. An emotion detection system's workflow consists of image acquisition, face detection, pre-processing, feature extraction, and classification. The system uses the Haar cascade algorithm, an object-detection algorithm, to recognize faces in an image or a real-time video, and a KNN classifier for image classification to identify the emotions, operating on real-time photos captured with a webcam. The goal of this research is to develop an automatic facial expression recognition system that can distinguish among people who are fearful, furious, shocked, sad, or pleased, among other emotions.
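A minimal sketch of the Haar-cascade-plus-KNN pipeline this abstract outlines, using scikit-learn; the training data here are random stand-ins, so the predictions are meaningless until real labeled face crops are substituted.

```python
# Sketch of Haar-cascade face detection feeding a KNN emotion classifier.
# Training data are random placeholders for labeled 48x48 face crops.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.random.rand(100, 48 * 48)       # stand-in flattened face crops
y_train = np.random.randint(0, 5, size=100)  # stand-in emotion labels

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("webcam_frame.jpg")       # stand-in for a live frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).flatten() / 255.0
    print("emotion class:", knn.predict([face])[0])
```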
APA, Harvard, Vancouver, ISO, and other styles
50

Fakhar, Shariqa, Junaid Baber, Sibghat Ullah Bazai, Shah Marjan, Michal Jasinski, Elzbieta Jasinska, Muhammad Umar Chaudhry, Zbigniew Leonowicz, and Shumaila Hussain. "Smart Classroom Monitoring Using Novel Real-Time Facial Expression Recognition System." Applied Sciences 12, no. 23 (November 27, 2022): 12134. http://dx.doi.org/10.3390/app122312134.

Full text
Abstract:
Emotions play a vital role in education, and technological advances in computer vision using deep learning models have improved automatic emotion recognition. In this study, a real-time automatic emotion recognition system incorporating novel salient facial features is developed for classroom assessment using a deep learning model. The proposed facial features for each emotion are initially detected using HOG for face recognition, and automatic emotion recognition is then performed by training a convolutional neural network (CNN) that takes real-time input from a camera deployed in the classroom. The proposed system analyzes the facial expressions of each student during learning. The selected emotional states are happiness, sadness, and fear, along with the cognitive-emotional states of satisfaction, dissatisfaction, and concentration, and they are tested against the variables of gender, department, lecture time, seating position, and subject difficulty. The proposed system contributes to improving classroom learning.
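dlib's frontal face detector is a standard HOG-plus-linear-SVM implementation and can stand in for the HOG face-localization step described above; this sketch assumes that detector and leaves the study's CNN out, with the frame path as a hypothetical placeholder.

```python
# Sketch of HOG-based face localization with dlib's frontal face detector
# (HOG + linear SVM); crops would feed the trained emotion CNN.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
frame = cv2.imread("classroom_frame.jpg")    # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for rect in detector(gray):
    x1, y1 = max(rect.left(), 0), max(rect.top(), 0)
    face = gray[y1:rect.bottom(), x1:rect.right()]
    face = cv2.resize(face, (48, 48))
    # `face` would now be normalized and passed to the emotion CNN.
```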
APA, Harvard, Vancouver, ISO, and other styles