Journal articles on the topic 'Recognition of emotions'

To see the other types of publications on this topic, follow the link: Recognition of emotions.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Recognition of emotions.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Liao, Songyang, Katsuaki Sakata, and Galina V. Paramei. "Color Affects Recognition of Emoticon Expressions." i-Perception 13, no. 1 (January 2022): 204166952210807. http://dx.doi.org/10.1177/20416695221080778.

Full text
Abstract:
In computer-mediated communication, emoticons are conventionally rendered in yellow. Previous studies demonstrated that colors evoke certain affective meanings, and face color modulates perceived emotion. We investigated whether color variation affects the recognition of emoticon expressions. Japanese participants were presented with emoticons depicting four basic emotions (Happy, Sad, Angry, Surprised) and a Neutral expression, each rendered in eight colors. Four conditions (E1–E4) were employed in the lab-based experiment; E5, with an additional participant sample, was an online replication of the critical E4. In E1, colored emoticons were categorized in a 5AFC task. In E2–E5, stimulus affective meaning was assessed using visual scales with anchors corresponding to each emotion. The conditions varied in stimulus arrays: E2: light gray emoticons; E3: colored circles; E4 and E5: colored emoticons. The affective meaning of Angry and Sad emoticons was found to be stronger when conferred in warm and cool colors, respectively, the pattern highly consistent between E4 and E5. The affective meaning of colored emoticons is regressed to that of achromatic expression counterparts and decontextualized color. The findings provide evidence that affective congruency of the emoticon expression and the color it is rendered in facilitates recognition of the depicted emotion, augmenting the conveyed emotional message.
APA, Harvard, Vancouver, ISO, and other styles
2

Mallikarjuna, Basetty, M. Sethu Ram, and Supriya Addanke. "An Improved Face-Emotion Recognition to Automatically Generate Human Expression With Emoticons." International Journal of Reliable and Quality E-Healthcare 11, no. 1 (January 1, 2022): 1–18. http://dx.doi.org/10.4018/ijrqeh.314945.

Full text
Abstract:
Human facial expressions naturally convey emotions such as happiness and sadness; at times, facial emotion recognition is complex because an expression combines two emotions. The existing literature covers face emotion classification and image recognition, and work on deep learning with convolutional neural networks (CNNs) has made face emotion recognition especially useful for healthcare, albeit with some of the most complex existing algorithms. This paper improves human face emotion recognition and uses the recognized feelings to generate emoticons on the user's smartphone. Face emotion recognition with convolutional neural networks plays a major role in deep learning and artificial intelligence for healthcare services. The automatic facial emotion recognition pipeline consists of two stages: face detection with an AdaBoost classifier, and emotion classification, in which features are extracted with deep learning methods such as a CNN to identify the seven emotions and generate the corresponding emoticons.
APA, Harvard, Vancouver, ISO, and other styles
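The pipeline summarized in the abstract above, AdaBoost-based face detection followed by CNN emotion classification, can be sketched roughly as below. This is only an illustrative sketch, not the authors' implementation: it assumes OpenCV's Haar-cascade detector (which is trained with AdaBoost) and a small, untrained Keras CNN over 48x48 grayscale face crops with seven emotion classes.

```python
# Sketch: AdaBoost (Haar cascade) face detection + CNN emotion classification.
# Assumes a Keras model trained elsewhere on 48x48 grayscale faces, 7 classes.
import cv2
import numpy as np
from tensorflow.keras import layers, models

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def build_emotion_cnn(num_classes: int = 7) -> models.Model:
    """Small illustrative CNN for 48x48 grayscale face crops."""
    return models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

def detect_and_classify(frame_bgr: np.ndarray, model: models.Model) -> list:
    """Detect faces with OpenCV's AdaBoost-based Haar cascade, then classify each crop."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    results = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
        probs = model.predict(face[np.newaxis, ..., np.newaxis], verbose=0)[0]
        results.append(((x, y, w, h), EMOTIONS[int(np.argmax(probs))]))
    return results
```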
3

Kamińska, Dorota, Kadir Aktas, Davit Rizhinashvili, Danila Kuklyanov, Abdallah Hussein Sham, Sergio Escalera, Kamal Nasrollahi, Thomas B. Moeslund, and Gholamreza Anbarjafari. "Two-Stage Recognition and beyond for Compound Facial Emotion Recognition." Electronics 10, no. 22 (November 19, 2021): 2847. http://dx.doi.org/10.3390/electronics10222847.

Full text
Abstract:
Facial emotion recognition is an inherently complex problem due to individual diversity in facial features and racial and cultural differences. Moreover, facial expressions typically reflect the mixture of people’s emotional statuses, which can be expressed using compound emotions. Compound facial emotion recognition makes the problem even more difficult because the discrimination between dominant and complementary emotions is usually weak. We have created a database that includes 31,250 facial images with different emotions of 115 subjects whose gender distribution is almost uniform to address compound emotion recognition. In addition, we have organized a competition based on the proposed dataset, held at FG workshop 2020. This paper analyzes the winner’s approach—a two-stage recognition method (1st stage, coarse recognition; 2nd stage, fine recognition), which enhances the classification of symmetrical emotion labels.
APA, Harvard, Vancouver, ISO, and other styles
4

Werner, S., and G. N. Petrenko. "Speech Emotion Recognition: Humans vs Machines." Discourse 5, no. 5 (December 18, 2019): 136–52. http://dx.doi.org/10.32603/2412-8562-2019-5-5-136-152.

Full text
Abstract:
Introduction. The study focuses on emotional speech perception and speech emotion recognition using prosodic clues alone. Theoretical problems of defining prosody, intonation and emotion along with the challenges of emotion classification are discussed. An overview of acoustic and perceptual correlates of emotions found in speech is provided. Technical approaches to speech emotion recognition are also considered in the light of the latest emotional speech automatic classification experiments. Methodology and sources. The typical “big six” classification commonly used in technical applications is chosen and modified to include such emotions as disgust and shame. A database of emotional speech in Russian is created under sound laboratory conditions. A perception experiment is run using Praat software’s experimental environment. Results and discussion. Cross-cultural emotion recognition possibilities are revealed, as the Finnish and international participants recognised about half of the samples correctly. Nonetheless, native speakers of Russian appear to distinguish a larger proportion of emotions correctly. The effects of foreign language knowledge, musical training and gender on performance in the experiment were insufficiently prominent. The most commonly confused pairs of emotions, such as shame and sadness, surprise and fear, anger and disgust, as well as confusions with the neutral emotion, were also given due attention. Conclusion. The work can contribute to psychological studies, clarifying emotion classification and the gender aspect of emotionality; to linguistic research, providing new evidence for prosodic and comparative language studies; and to language technology, deepening the understanding of possible challenges for SER systems.
APA, Harvard, Vancouver, ISO, and other styles
5

Hatem, Ahmed Samit, and Abbas M. Al-Bakry. "The Information Channels of Emotion Recognition: A Review." Webology 19, no. 1 (January 20, 2022): 927–41. http://dx.doi.org/10.14704/web/v19i1/web19064.

Full text
Abstract:
Humans are emotional beings. When we express emotions, we frequently use several modalities, whether overtly (e.g., speech, facial expressions) or implicitly (e.g., body language, text). Emotion recognition has lately piqued the interest of many researchers, and various techniques have been studied. A review of emotion recognition is given in this article. The survey covers single and multiple sources of data, or information channels, that may be utilized to identify emotions, and includes a literature analysis of current studies published for each information channel, as well as the techniques employed and the findings obtained. Finally, some of the present emotion recognition problems and recommendations for future work are mentioned.
APA, Harvard, Vancouver, ISO, and other styles
6

Morgan, Shae D. "Comparing Emotion Recognition and Word Recognition in Background Noise." Journal of Speech, Language, and Hearing Research 64, no. 5 (May 11, 2021): 1758–72. http://dx.doi.org/10.1044/2021_jslhr-20-00153.

Full text
Abstract:
Purpose Word recognition in quiet and in background noise has been thoroughly investigated in previous research to establish segmental speech recognition performance as a function of stimulus characteristics (e.g., audibility). Similar methods to investigate recognition performance for suprasegmental information (e.g., acoustic cues used to make judgments of talker age, sex, or emotional state) have not been performed. In this work, we directly compared emotion and word recognition performance in different levels of background noise to identify psychoacoustic properties of emotion recognition (globally and for specific emotion categories) relative to word recognition. Method Twenty young adult listeners with normal hearing listened to sentences and either reported a target word in each sentence or selected the emotion of the talker from a list of options (angry, calm, happy, and sad) at four signal-to-noise ratios in a background of white noise. Psychometric functions were fit to the recognition data and used to estimate thresholds (midway points on the function) and slopes for word and emotion recognition. Results Thresholds for emotion recognition were approximately 10 dB better than word recognition thresholds, and slopes for emotion recognition were half of those measured for word recognition. Low-arousal emotions had poorer thresholds and shallower slopes than high-arousal emotions, suggesting greater confusion when distinguishing low-arousal emotional speech content. Conclusions Communication of a talker's emotional state continues to be perceptible to listeners in competitive listening environments, even after words are rendered inaudible. The arousal of emotional speech affects listeners' ability to discriminate between emotion categories.
APA, Harvard, Vancouver, ISO, and other styles
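The threshold and slope estimates described in the abstract above come from psychometric functions fit to proportion-correct data across signal-to-noise ratios. A minimal sketch of such a fit is shown here, assuming a plain logistic function and made-up SNR/accuracy values; it ignores the guessing rate that a forced-choice task would normally add.

```python
# Sketch: fit a logistic psychometric function to proportion-correct data
# and report the threshold (midpoint) and slope. Illustrative data only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, threshold, slope):
    """Proportion correct as a function of SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))

# Hypothetical data: SNRs in dB and proportion of correct responses.
snr_db = np.array([-12.0, -8.0, -4.0, 0.0])
p_correct = np.array([0.10, 0.35, 0.80, 0.95])

params, _ = curve_fit(logistic, snr_db, p_correct, p0=[-5.0, 1.0])
threshold, slope = params
print(f"Threshold (50% point): {threshold:.1f} dB SNR, slope: {slope:.2f} per dB")
```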
7

Israelashvili, Jacob, Lisanne S. Pauw, Disa A. Sauter, and Agneta H. Fischer. "Emotion Recognition from Realistic Dynamic Emotional Expressions Cohere with Established Emotion Recognition Tests: A Proof-of-Concept Validation of the Emotional Accuracy Test." Journal of Intelligence 9, no. 2 (May 7, 2021): 25. http://dx.doi.org/10.3390/jintelligence9020025.

Full text
Abstract:
Individual differences in understanding other people’s emotions have typically been studied with recognition tests using prototypical emotional expressions. These tests have been criticized for the use of posed, prototypical displays, raising the question of whether such tests tell us anything about the ability to understand spontaneous, non-prototypical emotional expressions. Here, we employ the Emotional Accuracy Test (EAT), which uses natural emotional expressions and defines the recognition as the match between the emotion ratings of a target and a perceiver. In two preregistered studies (Ntotal = 231), we compared the performance on the EAT with two well-established tests of emotion recognition ability: the Geneva Emotion Recognition Test (GERT) and the Reading the Mind in the Eyes Test (RMET). We found significant overlap (r > 0.20) between individuals’ performance in recognizing spontaneous emotions in naturalistic settings (EAT) and posed (or enacted) non-verbal measures of emotion recognition (GERT, RMET), even when controlling for individual differences in verbal IQ. On average, however, participants reported enjoying the EAT more than the other tasks. Thus, the current research provides a proof-of-concept validation of the EAT as a useful measure for testing the understanding of others’ emotions, a crucial feature of emotional intelligence. Further, our findings indicate that emotion recognition tests using prototypical expressions are valid proxies for measuring the understanding of others’ emotions in more realistic everyday contexts.
APA, Harvard, Vancouver, ISO, and other styles
8

Ekberg, Mattias, Josefine Andin, Stefan Stenfelt, and Örjan Dahlström. "Effects of mild-to-moderate sensorineural hearing loss and signal amplification on vocal emotion recognition in middle-aged–older individuals." PLOS ONE 17, no. 1 (January 7, 2022): e0261354. http://dx.doi.org/10.1371/journal.pone.0261354.

Full text
Abstract:
Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with mild-to-moderate sensorineural hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies which have concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which other emotions, and at what rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together these analyses will provide clues to the effects of amplification on the perception of different emotions. For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
APA, Harvard, Vancouver, ISO, and other styles
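The planned confusion-pattern analysis (which emotions are mistaken for which others, and at what rates) can be sketched with a row-normalized confusion matrix. The emotion set and response data below are toy placeholders, not material from the study.

```python
# Sketch: confusion-pattern analysis for a forced-choice emotion task.
# Labels are illustrative, not data from the study.
import numpy as np
from sklearn.metrics import confusion_matrix

emotions = ["anger", "fear", "happiness", "sadness"]
presented = ["anger", "fear", "happiness", "sadness", "fear", "sadness", "anger", "happiness"]
responded = ["anger", "sadness", "happiness", "sadness", "fear", "fear", "anger", "happiness"]

cm = confusion_matrix(presented, responded, labels=emotions)
# Normalize rows so each cell gives P(response | presented emotion).
rates = cm / cm.sum(axis=1, keepdims=True)
for i, emo in enumerate(emotions):
    confusions = {emotions[j]: round(float(rates[i, j]), 2)
                  for j in range(len(emotions)) if j != i}
    print(f"{emo}: accuracy={rates[i, i]:.2f}, confusions={confusions}")
```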
9

Lim, Myung-Jin, Moung-Ho Yi, and Ju-Hyun Shin. "Intrinsic Emotion Recognition Considering the Emotional Association in Dialogues." Electronics 12, no. 2 (January 8, 2023): 326. http://dx.doi.org/10.3390/electronics12020326.

Full text
Abstract:
Computer communication via text messaging or Social Networking Services (SNS) has become increasingly popular, and many studies are being conducted to analyze user information or opinions and to recognize emotions by using large amounts of data. Current methods for emotion recognition in dialogues require an analysis of emotion keywords or vocabulary, and dialogue data are mostly classified under a single emotion. Recently, datasets classified with multiple emotions have emerged, but most of them are composed of English data. For accurate emotion recognition, a method for recognizing various emotions in one sentence is required, and multi-emotion recognition research on Korean dialogue datasets is also needed. Since dialogues are exchanges between speakers, one's feelings may be changed by the words of others, and feelings, once generated, may last for a long period of time. Emotions are expressed not only through vocabulary, but also indirectly through dialogue. In order to improve the performance of emotion recognition, it is necessary to analyze Emotional Association in Dialogues (EAD) to effectively reflect the various factors that induce emotions. Therefore, in this paper, we propose a more accurate emotion recognition method to overcome the limitations of single emotion recognition. We implement Intrinsic Emotion Recognition (IER) to understand the meaning of dialogue and recognize complex emotions. In addition, conversations are classified according to their characteristics, and the correlations between IER results are analyzed to derive Emotional Association in Dialogues (EAD) and apply it. To verify the usefulness of the proposed technique, IER applied with EAD is tested and evaluated. This evaluation determined that the Micro-F1 of the proposed method exhibited the best performance, at 74.8%. Using IER to assess the EAD proposed in this paper can improve the accuracy and performance of emotion recognition in dialogues.
APA, Harvard, Vancouver, ISO, and other styles
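The Micro-F1 metric reported above is computed by pooling true and false positives over all emotion labels, which suits utterances that carry several emotions at once. A small sketch with scikit-learn follows; the label matrix is a toy example, not the Korean dialogue data used in the paper.

```python
# Sketch: Micro-F1 for multi-label emotion recognition (toy example).
import numpy as np
from sklearn.metrics import f1_score

# Columns: [joy, sadness, anger, fear]; one row per utterance.
y_true = np.array([[1, 0, 0, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 1]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 1, 1]])

# Micro-F1 pools true/false positives and negatives over all labels,
# so a sentence with several emotions contributes to each of them.
print("Micro-F1:", f1_score(y_true, y_pred, average="micro"))
```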
10

Jaratrotkamjorn, Apichart. "Bimodal Emotion Recognition Using Deep Belief Network." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 15, no. 1 (January 14, 2021): 73–81. http://dx.doi.org/10.37936/ecti-cit.2021151.226446.

Full text
Abstract:
Emotions are very important in human daily life. Enabling a machine to recognize the human emotional state, and to respond to it intelligently, is therefore very important in human-computer interaction. The majority of existing work concentrates on the classification of six basic emotions only. This research work proposes an emotion recognition system based on a multimodal approach, which integrates information from both facial and speech expressions. The database covers eight basic emotions (neutral, calm, happy, sad, angry, fearful, disgust, and surprised). Emotions are classified using the deep belief network method. The experimental results show that the bimodal emotion recognition system yields a clear improvement in performance, with an overall accuracy rate of 97.92%.
APA, Harvard, Vancouver, ISO, and other styles
11

Hirt, Franziska, Egon Werlen, Ivan Moser, and Per Bergamin. "Measuring emotions during learning: lack of coherence between automated facial emotion recognition and emotional experience." Open Computer Science 9, no. 1 (December 13, 2019): 308–17. http://dx.doi.org/10.1515/comp-2019-0020.

Full text
Abstract:
Measuring emotions non-intrusively via affective computing provides a promising source of information for adaptive learning and intelligent tutoring systems. Using non-intrusive, simultaneous measures of emotions, such systems could steadily adapt to students' emotional states. One drawback, however, is the lack of evidence on how such modern measures of emotions relate to traditional self-reports. The aim of this study was to compare a prominent area of affective computing, facial emotion recognition, to students' self-reports of interest, boredom, and valence. We analyzed different types of aggregation of the simultaneous facial emotion recognition estimates and compared them to self-reports after reading a text. Analyses of 103 students revealed no relationship between the aggregated facial emotion recognition estimates of the software FaceReader and self-reports. Irrespective of the type of aggregation of the facial emotion recognition estimates, neither the epistemic emotions (i.e., boredom and interest), nor the estimates of valence predicted the respective self-report measure. We conclude that assumptions about the subjective experience of emotions cannot necessarily be transferred to other emotional components, such as those estimated by affective computing. We advise waiting for more comprehensive evidence on the predictive validity of facial emotion recognition for learning before relying on it in educational practice.
APA, Harvard, Vancouver, ISO, and other styles
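One plausible reading of the aggregation step described above is to collapse each student's frame-level estimates into summary statistics (e.g., mean and peak) and correlate those with the self-report. The sketch below illustrates that idea with made-up data; the column names and aggregation choices are assumptions, not the FaceReader output format or the authors' exact procedure.

```python
# Sketch: aggregate frame-level emotion estimates per student and
# correlate the aggregates with self-reported ratings (toy data).
import pandas as pd
from scipy.stats import pearsonr

frames = pd.DataFrame({
    "student": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "boredom_estimate": [0.2, 0.4, 0.3, 0.7, 0.6, 0.8, 0.1, 0.2, 0.15],
})
self_report = pd.Series({1: 2.0, 2: 4.5, 3: 1.5}, name="boredom_self_report")

agg = frames.groupby("student")["boredom_estimate"].agg(["mean", "max"])
for column in ["mean", "max"]:
    r, p = pearsonr(agg[column], self_report.loc[agg.index])
    print(f"{column} aggregation: r={r:.2f}, p={p:.3f}")
```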
12

Pólya, Tibor, and István Csertő. "Emotion Recognition Based on the Structure of Narratives." Electronics 12, no. 4 (February 11, 2023): 919. http://dx.doi.org/10.3390/electronics12040919.

Full text
Abstract:
One important application of natural language processing (NLP) is the recognition of emotions in text. Most current emotion analyzers use a set of linguistic features such as emotion lexicons, n-grams, word embeddings, and emoticons. This study proposes a new strategy to perform emotion recognition, which is based on the homologous structure of emotions and narratives. It is argued that emotions and narratives share both a goal-based structure and an evaluation structure. The new strategy was tested in an empirical study with 117 participants who recounted two narratives about their past emotional experiences, including one positive and one negative episode. Immediately after narrating each episode, the participants reported their current affective state using the Affect Grid. The goal-based structure and evaluation structure of the narratives were analyzed with a hybrid method. First, a linguistic analysis of the texts was carried out, including tokenization, lemmatization, part-of-speech tagging, and morphological analysis. Second, an extensive set of rule-based algorithms was used to analyze the goal-based structure of, and evaluations in, the narratives. Third, the output was fed into machine learning classifiers of narrative structural features that previously proved to be effective predictors of the narrator’s current affective state. This hybrid procedure yielded a high average F1 score (0.72). The results are discussed in terms of the benefits of employing narrative structure analysis in NLP-based emotion recognition.
APA, Harvard, Vancouver, ISO, and other styles
13

Asghar, Awais, Sarmad Sohaib, Saman Iftikhar, Muhammad Shafi, and Kiran Fatima. "An Urdu speech corpus for emotion recognition." PeerJ Computer Science 8 (May 9, 2022): e954. http://dx.doi.org/10.7717/peerj-cs.954.

Full text
Abstract:
Emotion recognition from acoustic signals plays a vital role in the field of audio and speech processing. Speech interfaces offer humans an informal and comfortable means to communicate with machines. Emotion recognition from speech signals has a variety of applications in the areas of human-computer interaction (HCI) and human behavior analysis. In this work, we develop the first emotional speech database of the Urdu language. We also develop a system to classify five different emotions: sadness, happiness, neutral, disgust, and anger, using different machine learning algorithms. Mel-Frequency Cepstral Coefficients (MFCC), Linear Prediction Coefficients (LPC), energy, spectral flux, spectral centroid, spectral roll-off, and zero-crossing rate were used as speech descriptors. The classification tests were performed on the emotional speech corpus collected from 20 different subjects. To evaluate the quality of the speech emotions, subjective listening tests were conducted. The rate of correctly classified emotions in the complete Urdu emotional speech corpus was 66.5% with K-nearest neighbors. It was found that the disgust emotion has a lower recognition rate compared to the other emotions. Removing the disgust emotion significantly improves the performance of the classifier to 76.5%.
APA, Harvard, Vancouver, ISO, and other styles
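A minimal sketch of the feature-plus-classifier recipe named above (MFCC and spectral descriptors fed to K-nearest neighbors) is given here, assuming librosa and scikit-learn; the file list, labels, and exact feature set are hypothetical placeholders.

```python
# Sketch: spectral feature extraction + K-nearest neighbors for speech emotion
# recognition. File paths and labels are hypothetical.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def extract_features(path: str) -> np.ndarray:
    """Mean MFCCs plus a few spectral descriptors for one utterance."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13), axis=1)
    centroid = np.mean(librosa.feature.spectral_centroid(y=y, sr=sr))
    rolloff = np.mean(librosa.feature.spectral_rolloff(y=y, sr=sr))
    zcr = np.mean(librosa.feature.zero_crossing_rate(y))
    return np.hstack([mfcc, centroid, rolloff, zcr])

# Hypothetical corpus listing: (wav path, emotion label) pairs.
corpus = [("clip_001.wav", "anger"), ("clip_002.wav", "happiness")]  # ...more files

X = np.array([extract_features(path) for path, _ in corpus])
y = np.array([label for _, label in corpus])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("Accuracy:", knn.score(X_test, y_test))
```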
14

Fujisawa, Akira, Kazuyuki Matsumoto, Minoru Yoshida, and Kenji Kita. "Emotion Estimation Method Based on Emoticon Image Features and Distributed Representations of Sentences." Applied Sciences 12, no. 3 (January 25, 2022): 1256. http://dx.doi.org/10.3390/app12031256.

Full text
Abstract:
This paper proposes an emotion recognition method for tweets containing emoticons using their emoticon image and language features. Some of the existing methods register emoticons and their facial expression categories in a dictionary and use them, while other methods recognize emoticon facial expressions based on the various elements of the emoticons. However, highly accurate emotion recognition cannot be performed unless the recognition is based on a combination of the features of sentences and emoticons. Therefore, we propose a model that recognizes emotions by extracting the shape features of emoticons from their image data and applying the feature vector input that combines the image features with features extracted from the text of the tweets. Based on evaluation experiments, the proposed method is confirmed to achieve high accuracy and shown to be more effective than methods that use text features only.
APA, Harvard, Vancouver, ISO, and other styles
15

S*, Manisha, Nafisa H. Saida, Nandita Gopal, and Roshni P. Anand. "Bimodal Emotion Recognition using Machine Learning." International Journal of Engineering and Advanced Technology 10, no. 4 (April 30, 2021): 189–94. http://dx.doi.org/10.35940/ijeat.d2451.0410421.

Full text
Abstract:
Emotions embedded in our communications are a predominant channel for conveying relevant, high-impact information. Researchers have tried to exploit these emotions in recent years for human-robot interaction (HRI) and human-computer interaction (HCI). Emotion recognition through speech alone or through facial expression alone is termed single-mode emotion recognition. The accuracy of such single-mode emotion recognition is improved by the proposed bimodal method, which combines the speech and face modalities and recognizes emotions using a Convolutional Neural Network (CNN) model. In this paper, the proposed bimodal emotion recognition system contains three major parts: processing of audio, processing of video, and fusion of data for detecting the emotion of a person. The fusion of visual information and audio data obtained from two different channels enhances the emotion recognition rate by providing complementary data. The proposed method aims to classify 7 basic emotions (anger, disgust, fear, happy, neutral, sad, surprise) from an input video. We take the audio and image frames from the video input to predict the final emotion of a person. The dataset used is an audio-visual dataset uniquely suited for the study of multi-modal emotion expression and perception: the RAVDESS dataset, which contains an audio-visual subset, a visual subset, and an audio subset. For bimodal emotion detection the audio-visual subset is used.
APA, Harvard, Vancouver, ISO, and other styles
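The bimodal fusion described above can be sketched as a two-branch Keras network whose audio and face feature vectors are concatenated before classification. The input shapes (an MFCC matrix for audio, a 48x48 grayscale frame for video) and layer sizes below are illustrative assumptions, not the authors' architecture.

```python
# Sketch: two-branch (audio + face) network with late feature fusion.
# Input shapes and layer sizes are illustrative assumptions.
from tensorflow.keras import layers, models

NUM_EMOTIONS = 7  # anger, disgust, fear, happy, neutral, sad, surprise

# Audio branch: MFCC "image" of shape (time steps, coefficients, 1).
audio_in = layers.Input(shape=(100, 40, 1), name="audio_mfcc")
a = layers.Conv2D(32, 3, activation="relu")(audio_in)
a = layers.MaxPooling2D()(a)
a = layers.Flatten()(a)

# Video branch: one 48x48 grayscale face frame.
video_in = layers.Input(shape=(48, 48, 1), name="face_frame")
v = layers.Conv2D(32, 3, activation="relu")(video_in)
v = layers.MaxPooling2D()(v)
v = layers.Flatten()(v)

# Fusion: concatenate the two feature vectors, then classify.
fused = layers.concatenate([a, v])
fused = layers.Dense(128, activation="relu")(fused)
out = layers.Dense(NUM_EMOTIONS, activation="softmax")(fused)

model = models.Model(inputs=[audio_in, video_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```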
16

G, Nikhil, Naganarasimha M, and Yogesh S. "HUMAN FACIAL EMOTION RECOGNITION USING CNN." International Journal of Engineering Applied Sciences and Technology 7, no. 1 (May 1, 2022): 321–23. http://dx.doi.org/10.33564/ijeast.2022.v07i01.049.

Full text
Abstract:
Human beings express emotions in everyday interactions. Understanding these emotions and knowing how to react to these expressions greatly enhances the interaction. An automatic Facial Expression Recognition system must solve the following problems: detection and location of faces in a cluttered scene, facial feature extraction, and facial expression classification. Knowing the user's emotion, the system can adapt to the user. Facial expressions play an important role in the recognition of emotions and are used in the process of non-verbal communication. They are very important in daily emotional communication, second only to the tone of voice. They are also an indicator of feelings, allowing a person to express an emotional state. The main motivation behind this project is to detect the mental health of an individual.
APA, Harvard, Vancouver, ISO, and other styles
17

Meftah, Imen Tayari, Nhan Le Thanh, and Chokri Ben Amar. "Multimodal Approach for Emotion Recognition Using a Formal Computational Model." International Journal of Applied Evolutionary Computation 4, no. 3 (July 2013): 11–25. http://dx.doi.org/10.4018/jaec.2013070102.

Full text
Abstract:
Emotions play a crucial role in human-computer interaction. They are generally expressed and perceived through multiple modalities such as speech, facial expressions, and physiological signals. Indeed, the complexity of emotions makes acquisition very difficult and makes unimodal systems (i.e., the observation of only one source of emotion) unreliable and often unfeasible in applications of high complexity. Moreover, the lack of a standard for modeling human emotions hinders the sharing of affective information between applications. In this paper, the authors present a multimodal approach to emotion recognition from many sources of information. The paper aims to provide a multimodal system for emotion recognition and exchange that will facilitate inter-system exchanges and improve the credibility of emotional interaction between users and computers. The authors elaborate a multimodal emotion recognition method from physiological data based on signal processing algorithms. The method permits the recognition of emotions composed of several aspects, such as simulated and masked emotions, and uses a new multidimensional model to represent emotional states based on an algebraic representation. The experimental results show that the proposed multimodal emotion recognition method improves recognition rates in comparison to the unimodal approach. Compared to state-of-the-art multimodal techniques, the proposed method gives good results, with 72% correct recognition.
APA, Harvard, Vancouver, ISO, and other styles
18

Vicsi, Klára, and Dávid Sztahó. "Recognition of Emotions on the Basis of Different Levels of Speech Segments." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 2 (March 20, 2012): 335–40. http://dx.doi.org/10.20965/jaciii.2012.p0335.

Full text
Abstract:
Emotions play a very important role in human-human and human-machine communication. They can be expressed by voice, bodily gestures, and facial movements. People’s acceptance of any kind of intelligent device depends, to a large extent, on how the device reflects emotions. This is the reason why automatic emotion recognition is a current research topic. In this paper we deal with automatic emotion recognition from the human voice. Numerous papers in this field deal with database creation and with the examination of acoustic features appropriate for such recognition, but only a few attempts have been made to compare the different emotional segmentation units that are needed to recognize emotions in spontaneous speech properly. In the Laboratory of Speech Acoustics, experiments were run to examine the effect of diverse speech segment lengths on recognition performance. An emotional database was prepared on the basis of three different segmentation levels: word, intonational phrase and sentence. Automatic recognition tests were conducted using support vector machines with four basic emotions: neutral, anger, sadness, and joy. The analysis of the results clearly shows that intonational phrase-sized speech units give the best performance for emotion recognition in continuous speech.
APA, Harvard, Vancouver, ISO, and other styles
19

Cui, Yuxin, Sheng Wang, and Ran Zhao. "Machine Learning-Based Student Emotion Recognition for Business English Class." International Journal of Emerging Technologies in Learning (iJET) 16, no. 12 (June 18, 2021): 94. http://dx.doi.org/10.3991/ijet.v16i12.23313.

Full text
Abstract:
The traditional English teaching model neglects student emotions, making many students tired of learning. Machine learning supports end-to-end recognition of learning emotions, such that the recognition system can adaptively adjust the learning difficulty in the English classroom. With the help of machine learning, this paper presents a method to extract the facial expression features of students in business English class and establishes a student emotion recognition model, which consists of such modules as emotion mechanism, signal acquisition, analysis and recognition, emotion understanding, emotion expression, and wearable equipment. The results show that the proposed emotion recognition model monitors the real-time emotional states of each student during English learning; upon detecting frustration or boredom, machine learning promptly switches to content that interests the student or is easier to learn, keeping the student active in learning. The research provides an end-to-end student emotion recognition system to assist with classroom teaching and enhance the positive emotions of students in English learning.
APA, Harvard, Vancouver, ISO, and other styles
20

Toraman, Suat, and Ömer Osman Dursun. "GameEmo-CapsNet: Emotion Recognition from Single-Channel EEG Signals Using the 1D Capsule Networks." Traitement du Signal 38, no. 6 (December 31, 2021): 1689–98. http://dx.doi.org/10.18280/ts.380612.

Full text
Abstract:
Human emotion recognition with machine learning methods through electroencephalographic (EEG) signals has become a highly interesting subject for researchers. Although it is simple to define emotions that can be expressed physically such as speech, facial expressions, and gestures, it is more difficult to define psychological emotions that are expressed internally. The most important stimuli in revealing inner emotions are aural and visual stimuli. In this study, EEG signals using both aural and visual stimuli were examined and emotions were evaluated in both binary and multi-class emotion recognitions models. A general emotion recognition model was proposed for non-subject-based classification. Unlike in previous studies, a subject-based testing was performed for the first time in the literature. Capsule Networks, a new neural network model, has been developed for binary and multi-class emotion recognition. In the proposed method, a novel fusion strategy was introduced for binary-class emotion recognition and the model was tested using the GAMEEMO dataset. Binary-class emotion recognition achieved a classification accuracy which was 10% better than the classification performance achieved in other studies in the literature. Based on these findings, we suggest that the proposed method will bring a different perspective to emotion recognition.
APA, Harvard, Vancouver, ISO, and other styles
21

Trinh Van, Loan, Thuy Dao Thi Le, Thanh Le Xuan, and Eric Castelli. "Emotional Speech Recognition Using Deep Neural Networks." Sensors 22, no. 4 (February 12, 2022): 1414. http://dx.doi.org/10.3390/s22041414.

Full text
Abstract:
The expression of emotions in human communication plays a very important role in the information that needs to be conveyed to the partner. The forms of expression of human emotions are very rich. It could be body language, facial expressions, eye contact, laughter, and tone of voice. The languages of the world’s peoples are different, but even without understanding a language in communication, people can almost understand part of the message that the other partner wants to convey with emotional expressions as mentioned. Among the forms of human emotional expression, the expression of emotions through voice is perhaps the most studied. This article presents our research on speech emotion recognition using deep neural networks such as CNN, CRNN, and GRU. We used the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus for the study with four emotions: anger, happiness, sadness, and neutrality. The feature parameters used for recognition include the Mel spectral coefficients and other parameters related to the spectrum and the intensity of the speech signal. The data augmentation was used by changing the voice and adding white noise. The results show that the GRU model gave the highest average recognition accuracy of 97.47%. This result is superior to existing studies on speech emotion recognition with the IEMOCAP corpus.
APA, Harvard, Vancouver, ISO, and other styles
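Two ingredients named in the abstract above, white-noise data augmentation and a GRU classifier over Mel spectral frames, are sketched below. The shapes, hyperparameters, and random toy batch are assumptions for illustration, not the paper's IEMOCAP setup.

```python
# Sketch: white-noise augmentation + a GRU classifier over Mel feature frames.
# Shapes and hyperparameters are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

def add_white_noise(signal: np.ndarray, noise_factor: float = 0.005) -> np.ndarray:
    """Simple augmentation: add Gaussian white noise scaled by noise_factor."""
    return signal + noise_factor * np.random.randn(*signal.shape)

def build_gru_model(time_steps: int = 300, n_mels: int = 40, num_classes: int = 4):
    """GRU over a sequence of Mel-spectrogram frames, softmax over 4 emotions."""
    return models.Sequential([
        layers.Input(shape=(time_steps, n_mels)),
        layers.GRU(128, return_sequences=True),
        layers.GRU(64),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_gru_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Toy batch: 8 utterances of Mel features, labels in {0..3}
# standing for anger, happiness, sadness, neutrality.
X = np.random.rand(8, 300, 40).astype("float32")
X_aug = add_white_noise(X)
y = np.random.randint(0, 4, size=(8,))
model.fit(np.concatenate([X, X_aug]), np.concatenate([y, y]), epochs=1, verbose=0)
```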
22

Homorogan, C., R. Adam, R. Barboianu, Z. Popovici, C. Bredicean, and M. Ienciu. "Emotional Face Recognition in Bipolar Disorder." European Psychiatry 41, S1 (April 2017): S117. http://dx.doi.org/10.1016/j.eurpsy.2017.01.1904.

Full text
Abstract:
Introduction. Emotional face recognition is significant for social communication. This is impaired in mood disorders, such as bipolar disorder. Individuals with bipolar disorder lack the ability to perceive facial expressions. Objectives. To analyse the capacity of emotional face recognition in subjects diagnosed with bipolar disorder. Aims. To establish a correlation between emotion recognition ability and the evolution of bipolar disease. Methods. A sample of 24 subjects was analysed in this trial, diagnosed with bipolar disorder (according to ICD-10 criteria), who were hospitalised in the Psychiatry Clinic of Timisoara and monitored in the outpatient clinic. Subjects were included in the trial based on inclusion/exclusion criteria. The analysed parameters were: socio-demographic (age, gender, education level), the number of relapses, the predominance of manic or depressive episodes, and the ability to identify emotions (Reading the Mind in the Eyes Test). Results. Most of the subjects (79.16%) had a low ability to identify emotions, 20.83% had a normal capacity to recognise emotions, and none of them had a high emotion recognition capacity. The positive emotions (love, joy, surprise) were more easily recognised, by 75% of the subjects, than the negative ones (anger, sadness, fear). There was no evident difference in emotional face recognition between the individuals with a predominance of manic episodes and those who had mostly depressive episodes, nor by the number of relapses. Conclusions. Individuals with bipolar disorder have difficulties in identifying facial emotions, but with no obvious correlation between the analysed parameters. Disclosure of interest. The authors have not supplied their declaration of competing interest.
APA, Harvard, Vancouver, ISO, and other styles
23

Chang, Chun, Kaihua Chen, Jianjun Cao, Qian Wu, and Hemu Chen. "Analyzing the Effect of Badminton on Physical Health and Emotion Recognition on the account of Smart Sensors." Applied Bionics and Biomechanics 2022 (April 4, 2022): 1–13. http://dx.doi.org/10.1155/2022/8349448.

Full text
Abstract:
Emotional ability is an important symbol of human intelligence. Humans' understanding of emotions, from subjective consciousness to continuous or discrete emotional dimensions, and then to physiological separability, has shown a trend of gradually diverging from psychological research into the field of intelligent human-computer interaction. This article is aimed at studying the effects of smart sensor-based emotion recognition technology and badminton on physical health. It proposes a method of using smart sensor technology to recognize badminton movements and emotions during play, and examines the impact of smart sensor-based emotion recognition and badminton on physical health. Experimental results show that the emotion recognition technology based on smart sensors can recognize changes in people's emotions during badminton well, with an emotion recognition accuracy higher than 70%. At the same time, the experiments show that badminton can greatly improve people's physical fitness and strengthen their physique.
APA, Harvard, Vancouver, ISO, and other styles
24

Anu Kiruthika M. and Angelin Gladston. "Implementation of Recurrent Network for Emotion Recognition of Twitter Data." International Journal of Social Media and Online Communities 12, no. 1 (January 2020): 1–13. http://dx.doi.org/10.4018/ijsmoc.2020010101.

Full text
Abstract:
A new generation of emoticons, called emojis, is being largely used for both mobile and social media communications. Emojis are considered a graphic expression of emotions, and users have widely adopted them to express their emotions in social media. Emojis are graphic Unicode symbols used to express perceptions, views, and ideas as a shorthand. Unlike the small number of well-known emoticons carrying clear emotional content, hundreds of emojis are being used in different social networks. The task of emoji emotion recognition is to predict the original emoji in a tweet. A recurrent neural network is used for building the emoji emotion recognition system. GloVe is a word-embedding method used for obtaining vector representations of words, which are used for training the recurrent neural network. This is achieved by mapping words into a meaningful space where the distance between words is related to semantic similarity. Based on the word embeddings in the Twitter dataset, the recurrent neural network builds the model and finally predicts the emoji associated with the tweets with an accuracy of 83%.
APA, Harvard, Vancouver, ISO, and other styles
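A common way to wire GloVe vectors into a recurrent emoji classifier, as described above, is an Embedding layer loaded with the pre-trained vectors followed by an LSTM. The sketch below assumes a hypothetical vocabulary, the glove.twitter.27B.100d.txt vector file, and 20 emoji classes; none of these details come from the paper.

```python
# Sketch: Embedding layer loaded with GloVe vectors + LSTM emoji classifier.
# The GloVe path, vocabulary, and class count are hypothetical.
import numpy as np
from tensorflow.keras import layers, models

EMBED_DIM, MAX_LEN, NUM_EMOJIS = 100, 30, 20
word_index = {"happy": 1, "sad": 2}  # ...built from the tweet corpus

def load_glove_matrix(path: str, vocab: dict, dim: int) -> np.ndarray:
    """Build an embedding matrix from a GloVe text file (word followed by floats)."""
    matrix = np.zeros((len(vocab) + 1, dim), dtype="float32")
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in vocab:
                matrix[vocab[parts[0]]] = np.asarray(parts[1:], dtype="float32")
    return matrix

embedding_matrix = load_glove_matrix("glove.twitter.27B.100d.txt", word_index, EMBED_DIM)

embedding_layer = layers.Embedding(input_dim=embedding_matrix.shape[0],
                                   output_dim=EMBED_DIM, trainable=False)
model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),   # padded sequences of word indices
    embedding_layer,
    layers.LSTM(64),
    layers.Dense(NUM_EMOJIS, activation="softmax"),
])
embedding_layer.set_weights([embedding_matrix])  # load the pre-trained GloVe vectors
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```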
25

Zhou, Tie Hua, Wenlong Liang, Hangyu Liu, Ling Wang, Keun Ho Ryu, and Kwang Woo Nam. "EEG Emotion Recognition Applied to the Effect Analysis of Music on Emotion Changes in Psychological Healthcare." International Journal of Environmental Research and Public Health 20, no. 1 (December 26, 2022): 378. http://dx.doi.org/10.3390/ijerph20010378.

Full text
Abstract:
Music therapy is increasingly being used to promote physical health. Emotion semantic recognition is more objective and provides direct awareness of the real emotional state based on electroencephalogram (EEG) signals. Therefore, we proposed a music therapy method to carry out emotion semantic matching between the EEG signal and music audio signal, which can improve the reliability of emotional judgments, and, furthermore, deeply mine the potential influence correlations between music and emotions. Our proposed EER model (EEG-based Emotion Recognition Model) could identify 20 types of emotions based on 32 EEG channels, and the average recognition accuracy was above 90% and 80%, respectively. Our proposed music-based emotion classification model (MEC model) could classify eight typical emotion types of music based on nine music feature combinations, and the average classification accuracy was above 90%. In addition, the semantic mapping was analyzed according to the influence of different music types on emotional changes from different perspectives based on the two models, and the results showed that the joy type of music video could improve fear, disgust, mania, and trust emotions into surprise or intimacy emotions, while the sad type of music video could reduce intimacy to the fear emotion.
APA, Harvard, Vancouver, ISO, and other styles
26

A, Prof Swethashree. "Speech Emotion Recognition." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2637–40. http://dx.doi.org/10.22214/ijraset.2021.37375.

Full text
Abstract:
Abstract: Speech Emotion Recognition, abbreviated as SER, is the act of trying to identify a person's feelings and affective state from speech. This is possible because the voice often reflects underlying emotion through tone and pitch. Emotion recognition has been a fast-growing field of research in recent years. Unlike humans, machines do not have the innate power to comprehend and express emotions, but human communication with the computer can be improved by using automatic emotion recognition, accordingly reducing the need for human intervention. In this project, basic emotions such as calmness, happiness, fear, and disgust are analyzed from signs of emotional expression. We use machine learning techniques such as the Multilayer Perceptron Classifier (MLP Classifier), which is used to categorize the provided information into groups. Mel-frequency cepstral coefficients (MFCC), chroma, and mel features are extracted from speech signals and used to train the MLP classifier. To accomplish this, we use Python libraries such as Librosa, sklearn, pyaudio, numpy, and audio file handling to analyze speech patterns and detect the feeling. Keywords: speech emotion recognition, mel cepstral coefficients, artificial neural network, multilayer perceptron, MLP classifier, Python.
APA, Harvard, Vancouver, ISO, and other styles
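The abstract above names its toolchain directly: Librosa for MFCC, chroma, and mel features, and scikit-learn's MLPClassifier for classification. A minimal sketch of that recipe follows; the file paths, labels, and hyperparameters are placeholders rather than the project's exact configuration.

```python
# Sketch: MFCC + chroma + mel features (librosa) with an MLPClassifier (sklearn).
# File paths, labels, and hyperparameters are hypothetical.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def extract_features(path: str) -> np.ndarray:
    """Average MFCC, chroma, and mel-spectrogram features over time."""
    y, sr = librosa.load(path)
    stft = np.abs(librosa.stft(y))
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40), axis=1)
    chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sr), axis=1)
    mel = np.mean(librosa.feature.melspectrogram(y=y, sr=sr), axis=1)
    return np.hstack([mfcc, chroma, mel])

# Hypothetical labelled corpus of (wav path, emotion) pairs.
corpus = [("calm_01.wav", "calm"), ("happy_01.wav", "happy"), ("fear_01.wav", "fear")]  # ...more files

X = np.array([extract_features(path) for path, _ in corpus])
y = np.array([label for _, label in corpus])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=9)
clf = MLPClassifier(hidden_layer_sizes=(300,), alpha=0.01, batch_size=256,
                    learning_rate="adaptive", max_iter=500)
clf.fit(X_train, y_train)
print("Accuracy:", clf.score(X_test, y_test))
```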
27

Prasad, Dr Kanakam Siva Rama, N. Srinivasa Rao, and B. Sravani. "Advanced Model Implementation to Recognize Emotion Based Speech with Machine Learning." International Journal of Innovative Research in Engineering & Management 9, no. 6 (2022): 47–54. http://dx.doi.org/10.55524/ijirem.2022.9.6.8.

Full text
Abstract:
Emotions are essential in developing interpersonal relationships. Emotions make empathizing with others' problems easy and lead to better communication without misunderstandings. Humans possess the natural ability to understand others' emotions from their speech, hand gestures, facial expressions, etc., and to react accordingly, but it is impossible for machines to extract and understand emotions unless they are trained to do so. Speech Emotion Recognition (SER) is one step towards this: SER uses ML algorithms to predict the emotion behind a speech sample. Features including mel, MFCC, and chroma descriptors of a set of audio clips are extracted using Python libraries and used to build the ML model. An MLP (Multi-Layer Perceptron) is used to map the features of the sound file to a predicted emotion. The paper details the development and deployment of the model. The technique known as "Speech Emotion Recognition" can identify emotional characteristics in speech signals by computer, contrasting and analysing the characteristic parameters and the emotional changes acquired. In the current market, speech emotion recognition is an emerging cross-disciplinary field of artificial intelligence.
APA, Harvard, Vancouver, ISO, and other styles
28

Li, Haoqi, Brian Baucom, and Panayiotis Georgiou. "Linking emotions to behaviors through deep transfer learning." PeerJ Computer Science 6 (January 6, 2020): e246. http://dx.doi.org/10.7717/peerj-cs.246.

Full text
Abstract:
Human behavior refers to the way humans act and interact. Understanding human behavior is a cornerstone of observational practice, especially in psychotherapy. An important cue of behavior analysis is the dynamical changes of emotions during the conversation. Domain experts integrate emotional information in a highly nonlinear manner; thus, it is challenging to explicitly quantify the relationship between emotions and behaviors. In this work, we employ deep transfer learning to analyze their inferential capacity and contextual importance. We first train a network to quantify emotions from acoustic signals and then use information from the emotion recognition network as features for behavior recognition. We treat this emotion-related information as behavioral primitives and further train higher level layers towards behavior quantification. Through our analysis, we find that emotion-related information is an important cue for behavior recognition. Further, we investigate the importance of emotional-context in the expression of behavior by constraining (or not) the neural networks’ contextual view of the data. This demonstrates that the sequence of emotions is critical in behavior expression. To achieve these frameworks we employ hybrid architectures of convolutional networks and recurrent networks to extract emotion-related behavior primitives and facilitate automatic behavior recognition from speech.
APA, Harvard, Vancouver, ISO, and other styles
29

Chóliz, Mariano, and Enrique G. Fernández-Abascal. "Recognition of Emotional Facial Expressions: The Role of Facial and Contextual Information in the Accuracy of Recognition." Psychological Reports 110, no. 1 (February 2012): 338–50. http://dx.doi.org/10.2466/07.09.17.pr0.110.1.338-350.

Full text
Abstract:
Recognition of emotional facial expressions is a central area in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for basic emotions including happiness, anger, fear, sadness, surprise, and disgust. 30 pictures (5 for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information congruent and not congruent with a facial expression was displayed before presenting pictures of facial expressions. The results of the second experiment showed that congruent information improved facial expression recognition, whereas incongruent information impaired such recognition.
APA, Harvard, Vancouver, ISO, and other styles
30

Sowmiya, S., and J. C. Miraclin Joyce Pamila. "Survey on Emotion Recognition System." March 2022 4, no. 1 (May 25, 2022): 11–22. http://dx.doi.org/10.36548/jitdw.2022.1.002.

Full text
Abstract:
Humans make millions of facial movements during conversation, and in interpersonal relationships the human emotion-sensing system is vital. Automatic emotion recognition has lately become a popular research problem. Emotions are expressed through personas, hand and body gestures, and facial expressions. Emotion recognition via facial expressions is one of the most important fields in the human-machine interface. The strategy of recognizing emotions from facial expressions is known as facial expression analysis. Emotions are perceived automatically by the human brain, and software that can recognise emotions has recently been developed. This technology is constantly improving, and it will ultimately be able to sense emotions as accurately as human brains. The purpose of this work is to present a survey of emotion detection research using various machine learning techniques. It also summarises the benefits, drawbacks, and limitations of current approaches, as well as the concept's evolution in research.
APA, Harvard, Vancouver, ISO, and other styles
31

Iqbal, Muhammad, Bhakti Yudho Suprapto, Hera Hikmarika, Hermawati Hermawati, and Suci Dwijayanti. "Design of Real-Time Face Recognition and Emotion Recognition on Humanoid Robot Using Deep Learning." Jurnal Ecotipe (Electronic, Control, Telecommunication, Information, and Power Engineering) 9, no. 2 (October 6, 2022): 149–58. http://dx.doi.org/10.33019/jurnalecotipe.v9i2.3044.

Full text
Abstract:
A robot is capable of mimicking human beings, including recognizing their faces and emotions. However, previous studies of humanoid robots have not been implemented in real-time systems. In addition, face recognition and emotion recognition have been treated as separate problems. Thus, for real-time application on a humanoid robot, this study proposed a combination of face recognition and emotion recognition. Face and emotion recognition systems were developed concurrently in this study using convolutional neural network architectures. The proposed architecture was compared to the well-known architecture AlexNet to determine which would be better suited for implementation on a humanoid robot. Primary data from 30 respondents was used for face recognition. Meanwhile, emotional data were collected from the same respondents and combined with secondary data from a 2500-person dataset. The emotions were surprise, anger, neutral, smile, and sadness. The experiment was carried out in real time on a humanoid robot using the two architectures. Using the AlexNet model, the accuracy of face and emotion recognition was 87% and 70%, respectively. Meanwhile, the proposed architecture achieved accuracy rates of 95% for face recognition and 75% for emotion recognition. Thus, the proposed method performs better in terms of recognizing faces and emotions, and it can be implemented on a humanoid robot.
APA, Harvard, Vancouver, ISO, and other styles
32

Léveillé, Edith, Samuel Guay, Caroline Blais, Peter Scherzer, and Louis De Beaumont. "Sex-Related Differences in Emotion Recognition in Multi-concussed Athletes." Journal of the International Neuropsychological Society 23, no. 1 (December 15, 2016): 65–77. http://dx.doi.org/10.1017/s1355617716001004.

Full text
Abstract:
Objectives: Concussion is defined as a complex pathophysiological process affecting the brain. Although the cumulative and long-term effects of multiple concussions are now well documented on cognitive and motor function, little is known about their effects on emotion recognition. Recent studies have suggested that concussion can result in emotional sequelae, particularly in females and multi-concussed athletes. The objective of this study was to investigate sex-related differences in emotion recognition in asymptomatic male and female multi-concussed athletes. Methods: We tested 28 control athletes (15 males) and 22 multi-concussed athletes (10 males) more than a year since the last concussion. Participants completed the Post-Concussion Symptom Scale, the Beck Depression Inventory-II, the Beck Anxiety Inventory, a neuropsychological test battery and a morphed emotion recognition task. Pictures of a male face expressing basic emotions (anger, disgust, fear, happiness, sadness, surprise) morphed with another emotion were randomly presented. After each face presentation, participants were asked to indicate the emotion expressed by the face. Results: Results revealed significant sex by group interactions in accuracy and intensity threshold for negative emotions, together with significant main effects of emotion and group. Conclusions: Male concussed athletes were significantly impaired in recognizing negative emotions and needed more emotional intensity to correctly identify these emotions, compared to same-sex controls. In contrast, female concussed athletes performed similarly to same-sex controls. These findings suggest that sex significantly modulates concussion effects on emotional facial expression recognition. (JINS, 2017, 23, 65–77)
APA, Harvard, Vancouver, ISO, and other styles
33

Kamińska, Dorota, and Adam Pelikant. "Recognition of Human Emotion from a Speech Signal Based on Plutchik's Model." International Journal of Electronics and Telecommunications 58, no. 2 (June 1, 2012): 165–70. http://dx.doi.org/10.2478/v10177-012-0024-4.

Full text
Abstract:
Machine recognition of human emotional states is an essential part of improving man-machine interaction. During expressive speech the voice conveys a semantic message as well as information about the emotional state of the speaker. The pitch contour is one of the most significant properties of speech that is affected by the emotional state; therefore pitch features have been commonly used in systems for automatic emotion detection. In this work different intensities of emotions and their influence on pitch features have been studied; this understanding is important in order to develop such a system. Intensities of emotions are presented on Plutchik's cone-shaped 3D model. The k-Nearest Neighbor algorithm has been used for classification. The classification has been divided into two parts: first, the primary emotion is detected, then its intensity is specified. The results show that the recognition accuracy of the system is over 50% for primary emotions, and over 70% for their intensities.
APA, Harvard, Vancouver, ISO, and other styles
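The two-stage scheme described above (primary emotion first, then its intensity) can be sketched with nested k-Nearest Neighbors classifiers. The feature vectors and labels below are toy placeholders standing in for the pitch-derived statistics used in the paper.

```python
# Sketch: two-stage kNN — primary emotion first, then intensity within that emotion.
# Features and labels are toy placeholders for pitch-derived statistics.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy training data: rows are pitch-feature vectors; each has an emotion and an intensity.
X = np.array([[0.10, 0.20], [0.15, 0.25], [0.80, 0.90],
              [0.85, 0.95], [0.12, 0.22], [0.82, 0.88]])
emotion = np.array(["sadness", "sadness", "anger", "anger", "sadness", "anger"])
intensity = np.array(["low", "high", "low", "high", "high", "low"])

# Stage 1: one kNN model for the primary emotion.
stage1 = KNeighborsClassifier(n_neighbors=3).fit(X, emotion)

# Stage 2: a separate kNN intensity model per primary emotion.
stage2 = {emo: KNeighborsClassifier(n_neighbors=3).fit(X[emotion == emo],
                                                       intensity[emotion == emo])
          for emo in np.unique(emotion)}

sample = np.array([[0.14, 0.21]])
predicted_emotion = stage1.predict(sample)[0]
predicted_intensity = stage2[predicted_emotion].predict(sample)[0]
print(predicted_emotion, predicted_intensity)
```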
34

Verma, Teena, Sahil Niranjan, Abhinav K.Gupt, Vinay KUMAR, and YASH Vashist. "EMOTIONAL RECOGNITION USING FACIAL EXPRESSIONS AND SPEECH ANALYSIS." International Journal of Engineering Applied Sciences and Technology 6, no. 7 (November 1, 2021): 176–80. http://dx.doi.org/10.33564/ijeast.2021.v06i07.028.

Full text
Abstract:
Emotional recognition can be performed from many sources, including text, speech, hand and body language, and facial expressions. Currently, most sensory systems use only one of these sources. People's feelings change every second, and one method used for emotional recognition may not reflect emotions in the right way. This research argues for understanding and exploring people's feelings through several channels at once, such as speech and face. We have chosen to explore audio and video inputs to develop an ensemble model that gathers the information from all these sources and displays it in a clear and interpretable way. By improving emotion recognition accuracy, the proposed multisensory emotion recognition system can help to improve the naturalness of human-computer interaction. Speech, hand and body language, and facial expressions are all examples of sources for emotional recognition, yet most sensory systems currently use only one of them. People's feelings fluctuate by the second, therefore one method for processing emotional identification may not accurately reflect emotions. This study suggests that there is a need to comprehend and explore people's sentiments in the many ways that voice and face allow, and various emotional states were utilised in this case. In the proposed framework, emotions can be detected from speech, facial expressions, or both. We take audio and video inputs and construct an ensemble model that collects data from all of these sources and presents it in a clear and understandable manner. The suggested multisensory emotion recognition system can help to increase the naturalness of human-computer interaction by boosting emotion recognition accuracy.
APA, Harvard, Vancouver, ISO, and other styles
35

Schönenberg, Michael, Alexander Schneidt, Eva Wiedemann, and Aiste Jusyte. "Processing of Dynamic Affective Information in Adults With ADHD." Journal of Attention Disorders 23, no. 1 (March 30, 2015): 32–39. http://dx.doi.org/10.1177/1087054715577992.

Full text
Abstract:
Objective: ADHD has been repeatedly linked to problems in social functioning. Although some theories assume that the emotion recognition deficits are explained by general attentional deficits, mounting evidence suggests that they may actually constitute a distinct impairment. However, it remains unclear whether the deficient processing affects specific emotional categories or may generalize to all basic emotions. The present study aims to investigate these questions by assessing the sensitivity to all six basic emotions in adults with ADHD. Method: The participants judged the emotion onset in animated morph clips displaying facial expressions that slowly changed from neutral to emotional. Results: ADHD participants exhibited an impaired recognition of sad and fearful facial expressions. Conclusion: The present findings indicate that ADHD is possibly associated with a specific deficit in the recognition of facial emotions signaling negative social feedback.
APA, Harvard, Vancouver, ISO, and other styles
36

Rosenberg, Hannah, Skye McDonald, Marie Dethier, Roy P. C. Kessels, and R. Frederick Westbrook. "Facial Emotion Recognition Deficits following Moderate–Severe Traumatic Brain Injury (TBI): Re-examining the Valence Effect and the Role of Emotion Intensity." Journal of the International Neuropsychological Society 20, no. 10 (November 2014): 994–1003. http://dx.doi.org/10.1017/s1355617714000940.

Full text
Abstract:
Many individuals who sustain moderate–severe traumatic brain injuries (TBI) are poor at recognizing emotional expressions, with a greater impairment in recognizing negative (e.g., fear, disgust, sadness, and anger) than positive emotions (e.g., happiness and surprise). It has been questioned whether this “valence effect” might be an artifact of the wide use of static facial emotion stimuli (usually full-blown expressions) which differ in difficulty rather than a real consequence of brain impairment. This study aimed to investigate the valence effect in TBI, while examining emotion recognition across different intensities (low, medium, and high). Method: Twenty-seven individuals with TBI and 28 matched control participants were tested on the Emotion Recognition Task (ERT). The TBI group was more impaired in overall emotion recognition, and less accurate recognizing negative emotions. However, examining the performance across the different intensities indicated that this difference was driven by some emotions (e.g., happiness) being much easier to recognize than others (e.g., fear and surprise). Our findings indicate that individuals with TBI have an overall deficit in facial emotion recognition, and that both people with TBI and control participants found some emotions more difficult than others. These results suggest that conventional measures of facial affect recognition that do not examine variance in the difficulty of emotions may produce erroneous conclusions about differential impairment. They also cast doubt on the notion that dissociable neural pathways underlie the recognition of positive and negative emotions, which are differentially affected by TBI and potentially other neurological or psychiatric disorders. (JINS, 2014, 20, 1–10)
APA, Harvard, Vancouver, ISO, and other styles
37

Poncet, Fanny, Robert Soussignan, Margaux Jaffiol, Baptiste Gaudelus, Arnaud Leleu, Caroline Demily, Nicolas Franck, and Jean-Yves Baudouin. "The spatial distribution of eye movements predicts the (false) recognition of emotional facial expressions." PLOS ONE 16, no. 1 (January 26, 2021): e0245777. http://dx.doi.org/10.1371/journal.pone.0245777.

Full text
Abstract:
Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure except that they were instructed to say whether (i.e., Yes/No response) the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with the additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. In contrast, the recognition of facial emotions relied on the integration of a complex set of facial cues.
APA, Harvard, Vancouver, ISO, and other styles
38

Kurtić, Azra, and Nurka Pranjić. "Facial expression recognition accuracy of valence emotion among high and low indicated PTSD." Primenjena psihologija 4, no. 1 (March 9, 2011): 5–11. http://dx.doi.org/10.19090/pp.2011.1.5-11.

Full text
Abstract:
Introduction: The emotional experience of a stressful event is reflected in an inability to initiate and maintain social contact, difficulties coping with stress, and sometimes distorted cognitive functioning. Aim: To test the hypothesis that facially expressed emotions are a useful monitor in practice, serving as a mediator for understanding the nature of the emotional difficulties that forty-two traumatized individuals face. The primary task was to assess whether psychologically traumatized individuals differ in facial recognition accuracy and, secondarily, in accuracy for positive versus negative emotions between the two studied groups. Subjects and methods: The total sample of participants was divided into two groups based on their scores on the DSM-IV Harvard Trauma Questionnaire (Bosnia and Herzegovina version), which expressed their self-assessed perception of PTSD symptoms (an experimental group with highly indicative PTSD and a control group without moderate PTSD). Accuracy of recognition of seven facially expressed emotions was investigated. The authors report significantly lower (p < .05) recognition accuracy in the experimental group for all studied emotions, with the exception of sadness. Recognition of negative emotions was also more accurate (p < .05). These findings suggest that emotional stress leads to less accurate recognition of facially expressed emotions, especially positive valence emotions.
APA, Harvard, Vancouver, ISO, and other styles
39

Berlibayeva, M. "Basic techniques and methods of developing emotional intelligence in preschool children." Pedagogy and Psychology 46, no. 1 (March 31, 2021): 176–85. http://dx.doi.org/10.51889/2021-1.2077-6861.24.

Full text
Abstract:
This article is devoted to the basic techniques and methods for developing emotional intelligence in preschool children. The work substantiates the need to develop emotional intelligence in preschool children and its importance for the successful socialization of the child's personality. The author notes that the emotional intelligence of preschool children is a type of intelligence responsible for the child's recognition of his own emotions and the emotions of the people around him, as well as for controlling and managing his emotions and for influencing the emotions of other people. According to the author, the number of preschool children with emotional instability (aggressive, angry, conflict-prone) has increased, which is why it is necessary to develop emotional intelligence at this age; unfortunately, many educators and parents do not pay due attention to this issue. Emotional intelligence is not an innate personality trait; it develops in stages. At the first stage, emotion is perceived: the child recognizes his own emotions and the emotions of other people. At the second stage, emotion is understood: the ability to determine the reasons for the appearance of a particular emotion in oneself and in other people, and to establish a connection between emotions and thoughts. At the third stage, emotions are managed: the ability to suppress emotions, and to awaken and direct one's own and others' emotions to achieve goals. At the fourth stage, emotions are used to stimulate thinking: awakening creativity in oneself and activating the brain with the help of one's own emotions. The article discusses various techniques and methods for developing emotional intelligence in preschool children.
APA, Harvard, Vancouver, ISO, and other styles
40

Santos, Isabel M., Pedro Bem-Haja, André Silva, Catarina Rosa, Diâner F. Queiroz, Miguel F. Alves, Talles Barroso, Luíza Cerri, and Carlos F. Silva. "The Interplay between Chronotype and Emotion Regulation in the Recognition of Facial Expressions of Emotion." Behavioral Sciences 13, no. 1 (December 31, 2022): 38. http://dx.doi.org/10.3390/bs13010038.

Full text
Abstract:
Emotion regulation strategies affect the experience and processing of emotions and emotional stimuli. Chronotype has also been shown to influence the processing of emotional stimuli, with late chronotypes showing a bias towards better processing of negative stimuli. Additionally, greater eveningness has been associated with increased difficulties in emotion regulation and preferential use of expressive suppression strategies. Therefore, the present study aimed to understand the interplay between chronotype and emotion regulation on the recognition of dynamic facial expressions of emotion. To that end, 287 participants answered self-report measures and performed an online facial emotion recognition task using short video clips in which a neutral face gradually morphed into a full-emotion expression (one of the six basic emotions). Participants were asked to press the spacebar to stop each video as soon as they could recognize the emotional expression, and then to identify it from six provided labels/emotions. Greater eveningness was associated with shorter response times (RT) in the identification of sadness, disgust and happiness. Higher scores of expressive suppression were associated with longer RT in identifying sadness, disgust, anger and surprise. Expressive suppression significantly moderated the relationship between chronotype and the recognition of sadness and anger, with chronotype being a significant predictor of emotion recognition times only at higher levels of expressive suppression. No significant effects were observed for cognitive reappraisal. These results are consistent with a negative bias in emotion processing in late chronotypes and increased difficulty in anger and sadness recognition for expressive suppressor morning-types.
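As an editorial illustration of the moderation analysis described above (not the study's actual model or data): a minimal regression sketch in which recognition time is predicted by chronotype, expressive suppression, and their interaction. Variable names and the simulated data are assumptions; statsmodels is an assumed dependency.

```python
# Moderation sketch: the chronotype x suppression interaction term captures the
# moderating role of expressive suppression on recognition times.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 287
df = pd.DataFrame({
    "chronotype": rng.normal(size=n),    # higher = greater eveningness (standardized, simulated)
    "suppression": rng.normal(size=n),   # expressive suppression score (standardized, simulated)
})
df["rt_sadness"] = 900 - 40 * df["chronotype"] * df["suppression"] + rng.normal(0, 50, n)

model = smf.ols("rt_sadness ~ chronotype * suppression", data=df).fit()
print(model.summary().tables[1])  # chronotype:suppression row is the moderation effect
```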
APA, Harvard, Vancouver, ISO, and other styles
41

Berkovich, Izhak, and Ori Eyal. "The mediating role of principals’ transformational leadership behaviors in promoting teachers’ emotional wellness at work." Educational Management Administration & Leadership 45, no. 2 (July 9, 2016): 316–35. http://dx.doi.org/10.1177/1741143215617947.

Full text
Abstract:
The present study aims to examine whether principals’ emotional intelligence (specifically, their ability to recognize emotions in others) makes them more effective transformational leaders, measured by the reframing of teachers’ emotions. The study uses multisource data from principals and their teachers in 69 randomly sampled primary schools. Principals undertook a performance task to allow assessment of their emotion recognition ability; half of the teachers sampled (N = 319) reported on principals’ leadership behaviors, and the other half (N = 320) on teachers’ subjective perceptions of principals as promoting teachers’ reframing of negative emotions into more positive ones. Data were analyzed through multilevel structural equation modeling. Findings indicated a cross-level relationship between principals’ transformational leadership behaviors and teachers’ emotional reframing, as well as a relationship between principals’ emotion recognition ability and their transformational behaviors. Furthermore, the study revealed that principals’ emotion recognition ability has an indirect effect on teachers’ emotional reframing through principals’ transformational leadership behaviors. The results provide empirical support for the claim that transformational leadership promotes emotional transformation. The theoretical and practical implications of the study are discussed.
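As an editorial illustration only: the study reports an indirect (mediated) effect estimated with multilevel structural equation modeling; the sketch below shows only the simpler single-level product-of-coefficients logic on simulated data, with invented variable names, and is not a substitute for the multilevel analysis.

```python
# Simplified indirect-effect sketch: recognition -> leadership -> reframing.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 69
df = pd.DataFrame({"recognition": rng.normal(size=n)})
df["leadership"] = 0.5 * df["recognition"] + rng.normal(scale=0.8, size=n)
df["reframing"] = 0.6 * df["leadership"] + rng.normal(scale=0.8, size=n)

a = smf.ols("leadership ~ recognition", df).fit().params["recognition"]             # path a
b = smf.ols("reframing ~ leadership + recognition", df).fit().params["leadership"]  # path b
print("indirect effect (a*b):", a * b)
```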
APA, Harvard, Vancouver, ISO, and other styles
42

Grace, Sally A., Wei Lin Toh, Ben Buchanan, David J. Castle, and Susan L. Rossell. "Impaired Recognition of Negative Facial Emotions in Body Dysmorphic Disorder." Journal of the International Neuropsychological Society 25, no. 08 (May 17, 2019): 884–89. http://dx.doi.org/10.1017/s1355617719000419.

Full text
Abstract:
Abstract Objectives: Patients with body dysmorphic disorder (BDD) have difficulty in recognising facial emotions, and there is evidence to suggest that there is a specific deficit in identifying negative facial emotions, such as sadness and anger. Methods: This study investigated facial emotion recognition in 19 individuals with BDD compared with 21 healthy control participants who completed a facial emotion recognition task, in which they were asked to identify emotional expressions portrayed in neutral, happy, sad, fearful, or angry faces. Results: Compared to the healthy control participants, the BDD patients were generally less accurate in identifying all facial emotions but showed specific deficits for negative emotions. The BDD group made significantly more errors when identifying neutral, angry, and sad faces than healthy controls; and were significantly slower at identifying neutral, angry, and happy faces. Conclusions: These findings add to previous face-processing literature in BDD, suggesting deficits in identifying negative facial emotions. There are treatment implications as future interventions would do well to target such deficits.
APA, Harvard, Vancouver, ISO, and other styles
43

Kim, Daeha, and Byung Cheol Song. "Contrastive Adversarial Learning for Person Independent Facial Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 5948–56. http://dx.doi.org/10.1609/aaai.v35i7.16743.

Full text
Abstract:
Since most facial emotion recognition (FER) methods rely heavily on supervision information, they are limited in analyzing emotions independently of persons. On the other hand, adversarial learning is a well-known approach for generalized representation learning because it never requires supervision information. This paper presents a new adversarial learning method for FER. In detail, the proposed scheme enables the FER network to better understand complex emotional elements inherent in strong emotions by adversarially learning weak emotion samples based on strong emotion samples. As a result, the proposed method can recognize emotions independently of persons because it understands facial expressions more accurately. In addition, we propose a contrastive loss function for efficient adversarial learning. Finally, the proposed adversarial learning scheme was theoretically verified and experimentally shown to achieve state-of-the-art (SOTA) performance.
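As an editorial illustration (not the paper's loss): a generic pairwise contrastive loss in PyTorch of the kind often paired with representation learning for FER. The margin, embedding size, and pairing scheme are assumptions; the paper's specific contrastive-adversarial formulation is not reproduced here.

```python
# Generic pairwise contrastive loss: pull same-emotion pairs together,
# push different-emotion pairs apart up to a margin.
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, same_label: torch.Tensor, margin: float = 1.0):
    """z1, z2: (batch, dim) embeddings; same_label: 1.0 if the pair shares an emotion label, else 0.0."""
    d = F.pairwise_distance(z1, z2)
    pos = same_label * d.pow(2)
    neg = (1 - same_label) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

z1 = torch.randn(8, 128, requires_grad=True)
z2 = torch.randn(8, 128)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(z1, z2, same)
loss.backward()
print(loss.item())
```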
APA, Harvard, Vancouver, ISO, and other styles
44

Zhu-Zhou, Fangfang, Roberto Gil-Pita, Joaquín García-Gómez, and Manuel Rosa-Zurera. "Robust Multi-Scenario Speech-Based Emotion Recognition System." Sensors 22, no. 6 (March 18, 2022): 2343. http://dx.doi.org/10.3390/s22062343.

Full text
Abstract:
Every human being experiences emotions daily, e.g., joy, sadness, fear, anger. These might be revealed through speech—words are often accompanied by our emotional states when we talk. Different acoustic emotional databases are freely available for solving the Emotional Speech Recognition (ESR) task. Unfortunately, many of them were generated under non-real-world conditions, i.e., actors performed the emotions and the recordings were made under fictitious, noise-free circumstances. Another weakness in the design of emotion recognition systems is the scarcity of patterns in the available databases, causing generalization problems and leading to overfitting. This paper examines how different elements of the recording environment impact system performance, using a simple logistic regression algorithm. Specifically, we conducted experiments simulating different scenarios, using different levels of Gaussian white noise, real-world noise, and reverberation. The results from this research show a performance deterioration in all scenarios, increasing the error probability from 25.57% to 79.13% in the worst case. Additionally, a virtual enlargement method and a robust multi-scenario speech-based emotion recognition system are proposed. Our system’s average error probability of 34.57% is comparable to the best-case scenario with 31.55%. The findings support the prediction that simulated emotional speech databases do not offer sufficient closeness to real scenarios.
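As an editorial illustration of the noise-simulation step described above (not the authors' code): a short function that corrupts a clean signal with additive white Gaussian noise at a target SNR before feature extraction. The stand-in signal and SNR value are placeholders.

```python
# Add white Gaussian noise to a signal at a chosen SNR (in dB).
import numpy as np

def add_awgn(signal: np.ndarray, snr_db: float, rng=np.random.default_rng(0)) -> np.ndarray:
    """Return the signal corrupted with white Gaussian noise at the given SNR."""
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

clean = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))  # stand-in for a clean speech segment
noisy = add_awgn(clean, snr_db=10)
# Check the achieved SNR (should be close to 10 dB)
print(10 * np.log10(np.mean(clean**2) / np.mean((noisy - clean)**2)))
```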
APA, Harvard, Vancouver, ISO, and other styles
45

Farmer, Eliot, Crescent Jicol, and Karin Petrini. "Musicianship Enhances Perception But Not Feeling of Emotion From Others’ Social Interaction Through Speech Prosody." Music Perception 37, no. 4 (March 11, 2020): 323–38. http://dx.doi.org/10.1525/mp.2020.37.4.323.

Full text
Abstract:
Music expertise has been shown to enhance emotion recognition from speech prosody. Yet, it is currently unclear whether music training enhances the recognition of emotions through other communicative modalities such as vision and whether it enhances the feeling of such emotions. Musicians and nonmusicians were presented with visual, auditory, and audiovisual clips consisting of the biological motion and speech prosody of two agents interacting. Participants judged as quickly as possible whether the expressed emotion was happiness or anger, and subsequently indicated whether they also felt the emotion they had perceived. Measures of accuracy and reaction time were collected from the emotion recognition judgements, while yes/no responses were collected as indication of felt emotions. Musicians were more accurate than nonmusicians at recognizing emotion in the auditory-only condition, but not in the visual-only or audiovisual conditions. Although music training enhanced recognition of emotion through sound, it did not affect the felt emotion. These findings indicate that emotional processing in music and language may use overlapping but also divergent resources, or that some aspects of emotional processing are less responsive to music training than others. Hence music training may be an effective rehabilitative device for interpreting others’ emotion through speech.
APA, Harvard, Vancouver, ISO, and other styles
46

Tian, Wenqiang. "Personalized Emotion Recognition and Emotion Prediction System Based on Cloud Computing." Mathematical Problems in Engineering 2021 (May 26, 2021): 1–10. http://dx.doi.org/10.1155/2021/9948733.

Full text
Abstract:
Promoting economic development and improving people's quality of life are closely tied to the continuous improvement of cloud computing technology and the rapid expansion of its applications. Emotions play an important role in all aspects of human life, and it is difficult to avoid their influence on people's behavior and reasoning. This article mainly studies a personalized emotion recognition and emotion prediction system based on cloud computing. This paper proposes a method of intelligently identifying users' emotional states through the use of cloud computing. First, an emotion induction experiment is designed to induce three basic emotional states (positive, neutral, and negative) in the testers and to collect cloud data and EEG under the different emotional states. Then, the cloud data are processed and analyzed to extract emotional features. After that, this paper constructs a facial emotion prediction system based on a cloud computing data model, which consists of face detection and facial emotion recognition. The system uses the SVM algorithm for face detection, a temporal feature algorithm for facial emotion analysis, and finally a machine learning classification method to classify emotions, so as to identify the user's emotional state through cloud computing technology. Experimental data show that the EEG signal emotion recognition method based on time-domain features performs best, has better generalization ability, and improves on traditional methods by 6.3%. The experimental results show that the personalized emotion recognition method based on cloud computing is more effective than traditional methods.
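As an editorial illustration of the time-domain-feature idea mentioned above (not the paper's pipeline): compute simple per-channel statistics from each EEG epoch and classify the emotional state with an SVM. The feature set, simulated signals, and labels are assumptions; scikit-learn is an assumed dependency.

```python
# Time-domain EEG features + SVM classification sketch.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

def time_domain_features(epoch: np.ndarray) -> np.ndarray:
    """epoch: (channels, samples) -> per-channel mean, std, and peak-to-peak amplitude."""
    return np.concatenate([epoch.mean(axis=1), epoch.std(axis=1), np.ptp(epoch, axis=1)])

epochs = rng.normal(size=(120, 4, 256))   # 120 simulated epochs, 4 channels, 256 samples
labels = rng.integers(0, 3, size=120)     # positive / neutral / negative (placeholder labels)
X = np.stack([time_domain_features(e) for e in epochs])

print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```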
APA, Harvard, Vancouver, ISO, and other styles
47

Quan, Changqin, and Fuji Ren. "Visualizing Emotions from Chinese Blogs by Textual Emotion Analysis and Recognition Techniques." International Journal of Information Technology & Decision Making 15, no. 01 (January 2016): 215–34. http://dx.doi.org/10.1142/s0219622014500710.

Full text
Abstract:
The research on blog emotion analysis and recognition has become increasingly important in recent years. In this study, based on the Chinese blog emotion corpus (Ren-CECps), we analyze and compare blog emotion visualization from different text levels: word, sentence, and paragraph. Then, a blog emotion visualization system is designed for practical applications. Machine learning methods are applied for the implementation of blog emotion recognition at different textual levels. Based on the emotion recognition engine, the blog emotion visualization interface is designed to provide a more intuitive display of emotions in blogs, which can detect emotion for bloggers, and capture emotional change rapidly. In addition, we evaluated the performance of sentence emotion recognition by comparing five classification algorithms under different schemas, which demonstrates the effectiveness of the Complement Naive Bayes model for sentence emotion recognition. The system can recognize multi-label emotions in blogs, which provides a richer and more detailed emotion expression.
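As an editorial illustration (not the authors' system): a minimal sentence-level emotion classifier using Complement Naive Bayes, the model the study found effective for sentence emotion recognition. The toy sentences and labels below are invented for illustration only.

```python
# TF-IDF + Complement Naive Bayes sketch for sentence emotion classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB
from sklearn.pipeline import make_pipeline

sentences = [
    "I am so happy about the trip",
    "This news makes me really sad",
    "I can't believe how angry this makes me",
    "What a wonderful surprise today",
]
labels = ["joy", "sadness", "anger", "surprise"]

clf = make_pipeline(TfidfVectorizer(), ComplementNB())
clf.fit(sentences, labels)
print(clf.predict(["such a sad story"]))
```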
APA, Harvard, Vancouver, ISO, and other styles
48

Tsang, Vicky. "Eye-tracking study on facial emotion recognition tasks in individuals with high-functioning autism spectrum disorders." Autism 22, no. 2 (November 8, 2016): 161–70. http://dx.doi.org/10.1177/1362361316667830.

Full text
Abstract:
The eye-tracking experiment was carried out to assess the fixation durations and scan paths that individuals with and without high-functioning autism spectrum disorders employed when identifying simple and complex emotions. Participants viewed photos of human facial expressions and decided on the identification of the emotion, its negative–positive orientation, and its degree of intensity. Results showed atypical emotional processing in the high-functioning autism spectrum disorder group when identifying facial emotions, based on the between-group comparison of eye-tracking data. We suggest that the high-functioning autism spectrum disorder group prefers to use a rule-bound categorical approach as well as a featural processing strategy in facial emotion recognition tasks. The high-functioning autism spectrum disorder group therefore more readily distinguishes overt emotions such as happiness and sadness, but performs more inconsistently on covert emotions such as disgust and anger, which demand more cognitive strategy during emotional perception. Their fixation times differed significantly from those of controls when judging complex emotions, showing reduced "in" gazes and increased "out" gazes. These data were consistent with the emotion intensity ratings, which showed that individuals with autism spectrum disorder misjudge the intensity of complex emotions, especially fear.
APA, Harvard, Vancouver, ISO, and other styles
49

Huang, Chengwei, Guoming Chen, Hua Yu, Yongqiang Bao, and Li Zhao. "Speech Emotion Recognition under White Noise." Archives of Acoustics 38, no. 4 (December 1, 2013): 457–63. http://dx.doi.org/10.2478/aoa-2013-0054.

Full text
Abstract:
Speakers' emotional states are recognized from speech signals with additive white Gaussian noise (AWGN). The influence of white noise on a typical emotion recognition system is studied. The emotion classifier is implemented with a Gaussian mixture model (GMM). A Chinese speech emotion database, which includes nine emotion classes (happiness, sadness, anger, surprise, fear, anxiety, hesitation, confidence, and a neutral state), is used for training and testing. Two speech enhancement algorithms are introduced for improved emotion classification. In the experiments, the Gaussian mixture model is trained on clean speech data and tested under AWGN at various signal-to-noise ratios (SNRs). Both the emotion class model and the dimension space model are adopted for the evaluation of the emotion recognition system. Regarding the emotion class model, the nine emotion classes are classified. Considering the dimension space model, the arousal dimension and the valence dimension are classified into positive or negative regions. The experimental results show that the speech enhancement algorithms consistently improve the performance of the emotion recognition system at various SNRs, and that positive emotions are more likely to be misclassified as negative emotions in white noise environments.
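As an editorial illustration of the GMM-based classifier described above (not the authors' system): one Gaussian mixture is fitted per emotion class on clean features, and a test utterance is assigned to the class with the highest log-likelihood. The features, class count, and data are simulated placeholders.

```python
# Per-class GMM emotion classifier sketch.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
n_classes, dim = 3, 13  # e.g., 13 MFCC-like features per frame (assumed)
train = {c: rng.normal(loc=c, size=(300, dim)) for c in range(n_classes)}

models = {c: GaussianMixture(n_components=4, random_state=0).fit(X) for c, X in train.items()}

def classify(frames: np.ndarray) -> int:
    """frames: (n_frames, dim); return the class whose GMM gives the highest mean log-likelihood."""
    scores = {c: m.score(frames) for c, m in models.items()}
    return max(scores, key=scores.get)

test = rng.normal(loc=1, size=(50, dim))
print(classify(test))
```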
APA, Harvard, Vancouver, ISO, and other styles
50

Cecchetto, Cinzia, Marilena Aiello, Delia D’Amico, Daniela Cutuli, Daniela Cargnelutti, Roberto Eleopra, and Raffaella Ida Rumiati. "Facial and Bodily Emotion Recognition in Multiple Sclerosis: The Role of Alexithymia and Other Characteristics of the Disease." Journal of the International Neuropsychological Society 20, no. 10 (November 2014): 1004–14. http://dx.doi.org/10.1017/s1355617714000939.

Full text
Abstract:
Multiple sclerosis (MS) may be associated with impaired perception of facial emotions. However, emotion recognition mediated by bodily postures has never been examined in these patients. Moreover, several studies have suggested a relation between emotion recognition impairments and alexithymia. This is in line with the idea that the ability to recognize emotions requires individuals to be able to understand their own emotions. Although a deficit in emotion recognition has been observed in MS patients, the association between impaired emotion recognition and alexithymia has received little attention. The aim of this study was, first, to investigate MS patients' ability to recognize emotions mediated by both facial and bodily expressions and, second, to examine whether any observed deficits in emotion recognition could be explained by the presence of alexithymia. Thirty patients with MS and 30 healthy matched controls performed experimental tasks assessing emotion discrimination and recognition of facial expressions and bodily postures. Moreover, they completed questionnaires evaluating alexithymia, depression, and fatigue. First, facial emotion recognition and, to a lesser extent, bodily emotion recognition can be impaired in MS patients. In particular, patients with higher disability showed an impairment in emotion recognition compared with patients with lower disability and controls. Second, their deficit in emotion recognition was not predicted by alexithymia. Instead, the disease's characteristics and performance on some cognitive tasks significantly correlated with emotion recognition. Impaired facial emotion recognition is a cognitive signature of MS that is not dependent on alexithymia. (JINS, 2014, 19, 1–11)
APA, Harvard, Vancouver, ISO, and other styles
