Dissertations / Theses on the topic 'Recognition of emotions'


Consult the top 50 dissertations / theses for your research on the topic 'Recognition of emotions.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Stanley, Jennifer Tehan. "Emotion recognition in context." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24617.

Full text
Abstract:
Thesis (Ph.D.)--Psychology, Georgia Institute of Technology, 2008.
Committee Chair: Blanchard-Fields, Fredda; Committee Member: Corballis, Paul; Committee Member: Hertzog, Christopher; Committee Member: Isaacowitz, Derek; Committee Member: Kanfer, Ruth
2

Zhang, Jiaming. "Contextual recognition of robot emotions." Thesis, University of Sheffield, 2013. http://etheses.whiterose.ac.uk/3809/.

Full text
Abstract:
In the field of human-robot interaction, socially interactive robots are often equipped with the ability to detect the affective states of users, the ability to express emotions through the use of synthetic facial expressions, speech and textual content, and the ability to imitate and to learn socially. Past work on creating robots that can make convincing emotional expressions has concentrated on the quality of those expressions, and on assessing people's ability to recognize them. Previous recognition studies presented the facial expressions of the robots in neutral contexts, without any strong emotional valence (e.g., emotionally valenced music or video). It is therefore worth empirically exploring whether observers' judgments of the facial cues of a robot would be affected by a surrounding emotional context. This thesis takes its inspiration from the contextual effects found on the interpretation of the expressions on human faces and computer avatars, and looks at the extent to which they also apply to the interpretation of the facial expressions of a mechanical robot head. The kinds of contexts that affect the recognition of robot emotional expressions, the circumstances under which such contextual effects occur, and the relationship between emotions and the surrounding situation are observed and analyzed in a series of 11 experiments. In these experiments, the Facial Action Coding System (FACS) (Ekman and Friesen, 2002) was applied to set the parameters of the servos that make the robot head produce sequences of facial expressions. Four different kinds of emotional surrounding or preceding contexts were used (i.e., recorded BBC News pieces, selected affective pictures, classical music pieces and film clips). This thesis provides evidence that observers' judgments about the facial expressions of a robot can be affected by a surrounding emotional context. From a psychological perspective, the contextual effects found for the FACS-based robotic facial expressions indirectly support the claims that human emotions are both biologically based and socially constructed. From a robotics perspective, it is argued that the results obtained from the analyses will be useful for guiding researchers in enhancing the expressive skills of emotional robots in a surrounding emotional context. This thesis also analyzes the possible factors contributing to the contextual effects found in the original 11 experiments. Future work, including four new experiments (a preliminary experiment designed to identify appropriate contextual materials and three further experiments in which factors likely to affect a context effect are controlled one by one), is also proposed.
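As an illustration of the expression-generation step described in this abstract, the sketch below maps FACS action-unit (AU) intensities to servo angles on a hypothetical robot head. The AU selection, servo names, and calibration ranges are invented for the example; they are not the thesis's actual parameters.

```python
# Hypothetical mapping from FACS action-unit intensities to servo angles.
# Each AU maps to (servo_id, angle at zero intensity, angle at full intensity);
# all values here are illustrative assumptions.
AU_TO_SERVO = {
    "AU12_lip_corner_puller": ("mouth_left", 90, 130),  # smiling
    "AU4_brow_lowerer":       ("brow_inner", 90, 60),   # frowning
    "AU26_jaw_drop":          ("jaw",        90, 120),  # surprise
}

def au_frame_to_servo_angles(au_intensities):
    """au_intensities: dict of AU name -> intensity in [0, 1]."""
    angles = {}
    for au, level in au_intensities.items():
        servo, lo, hi = AU_TO_SERVO[au]
        level = max(0.0, min(1.0, level))        # clamp to the valid range
        angles[servo] = lo + (hi - lo) * level   # linear interpolation
    return angles

# A facial-expression sequence is then a timed series of AU frames, e.g.
# [{"AU12_lip_corner_puller": 0.3}, {"AU12_lip_corner_puller": 0.8}]
```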
3

Xiao, Zhongzhe. "Recognition of emotions in audio signals." Ecully, Ecole centrale de Lyon, 2008. http://www.theses.fr/2008ECDL0002.

Full text
Abstract:
This PhD thesis is dedicated to automatic emotion/mood recognition in audio signals. Indeed, audio emotion is high-level semantic information, and its automatic analysis may have many applications, such as smart human-computer interactions or multimedia indexing. The purpose of this thesis is thus to investigate machine-based audio emotion analysis solutions for both speech and music signals. Our work makes use of a discrete emotional model combined with the dimensional one, and relies upon existing studies on the acoustic correlates of emotional speech and music mood. The key contributions are the following. First, we have proposed, in complement to popular frequency-based and energy-based features, some new audio features, namely harmonic and Zipf features, to better characterize the timbre and prosodic properties of emotional speech. Second, as there exist very few emotional resources for machine learning, for either speech or music, compared to the number of audio features that one can extract, an evidence theory-based feature selection scheme named Embedded Sequential Forward Selection (ESFS) is proposed to deal with the classic "curse of dimensionality" problem and thus with over-fitting. Third, using a manually built hierarchical classifier based on a dimensional emotion model to deal with the fuzzy borders of emotional states, we demonstrated that a hierarchical classification scheme performs better than the single global classifier mostly used in the literature. Furthermore, as there does not exist any universal agreement on the definition of basic emotions, and as emotional states are typically application dependent, we also proposed an ESFS-based algorithm for automatically building a hierarchical classification scheme (HCS) best adapted to a specific set of application-dependent emotional states. The HCS divides a complex classification problem into simpler and smaller problems by combining several binary sub-classifiers in the structure of a binary tree over several stages, and outputs the type of emotional state of the audio sample. Finally, to deal with the subjective nature of emotions, we also proposed an evidence theory-based ambiguous classifier allowing multiple emotion labels, as humans often assign. The effectiveness of all these recognition techniques was evaluated on the Berlin and DES datasets for emotional speech recognition, and on a music mood dataset that we collected in our laboratory, as no public dataset existed.
Keywords: audio signal, emotion classification, music mood analysis, audio features, feature selection, hierarchical classification, ambiguous classification, evidence theory
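The hierarchical classification scheme described above can be illustrated compactly. Below is a minimal sketch, under assumed interfaces (any fitted binary classifier exposing a scikit-learn-style predict method), of a binary tree whose nodes each split the remaining emotional states into two groups; it shows the cascade idea only, not the ESFS-built structure the thesis learns from data.

```python
# Sketch of a hierarchical classification scheme (HCS): a binary tree of
# sub-classifiers, each splitting the remaining emotional states in two.
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class Node:
    classifier: object                 # fitted binary classifier with .predict
    left: Optional["Node"] = None      # subtree followed when predict -> 0
    right: Optional["Node"] = None     # subtree followed when predict -> 1
    left_label: Optional[str] = None   # leaf label when the 0-side is one state
    right_label: Optional[str] = None  # leaf label when the 1-side is one state

def classify(node: Node, features: Sequence[float]) -> str:
    side = int(node.classifier.predict([features])[0])
    if side == 0:
        return node.left_label if node.left is None else classify(node.left, features)
    return node.right_label if node.right is None else classify(node.right, features)

# e.g. a dimensional-model-inspired tree: the root separates high- from
# low-arousal states, and the high-arousal child then separates anger from joy.
```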
4

Xiao, Zhongzhe, and Liming Chen. "Recognition of emotions in audio signals." Ecully: Ecole Centrale de Lyon, 2008. http://bibli.ec-lyon.fr/exl-doc/zxiao.pdf.

Full text
5

Golan, Ofer. "Systemising emotions : teaching emotion recognition to people with autism using interactive multimedia." Thesis, University of Cambridge, 2007. https://www.repository.cam.ac.uk/handle/1810/252028.

Full text
Abstract:
Recognition of emotions and mental states (ER) in others is a core difficulty for individuals with autism spectrum conditions (ASC). In contrast, they show good skills in 'systemising': understanding non-agentive systems. This thesis evaluated the effectiveness of Mind Reading, a computer program teaching ER from a wide range of systematically presented facial expression videos and recorded speech segments. Three experiments tested the effectiveness of a minimum of 10 hours of software use over a period of 10-15 weeks among individuals with ASC. The experiments included evaluation of independent use of the software by adults and by 8-11 year olds with ASC, and tutor- and group-supported use of the software by adults with ASC. ER skills were assessed on four levels of generalisation before and after the training period, and compared to matched ASC and typically developing control groups. Results showed improved ER from faces and voices for software users, compared to the ASC control groups. Improvement was mostly limited to faces and voices that were included in the software. Generalisation to stimuli not included in the software was found in the children's experiment, in the vocal and visual channels separately. Follow-up assessment after a year showed greater improvement on general socio-emotional functioning measures among child and adult software users, compared to ASC controls. These results suggest that individuals with ASC can improve their ability to recognise emotions using systematic computer-based training, with long-term effects, but may need further tutoring to prevent hyper-systemising and to enhance generalisation to other situations and stimuli. The reasons behind the generalisation difficulties and the study's limitations are discussed, and suggestions for future work are offered.
6

Cheung, Ching-ying Crystal. "Cognition of emotion recognition." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B29740277.

Full text
7

Reichert, Nils. "Correlation between Computer Recognized Facial Emotions and Informed Emotions during a Casino Computer Game." Thesis, Fredericton: University of New Brunswick, 2012. http://hdl.handle.net/1882/44596.

Full text
Abstract:
Emotions play an important role in everyday communication. Different methods allow computers to recognize emotions. Most are trained with acted emotions, and it is unknown whether such a model would work for recognizing naturally occurring emotions. An experiment was set up to estimate the recognition accuracy of the emotion recognition software SHORE, which could detect the emotions angry, happy, sad, and surprised. Subjects played a casino game while being recorded. The software's recognition was correlated with the recognition of ten human observers. The results showed strong recognition for happy, medium recognition for surprised, and weak recognition for sad and angry faces. In addition, questionnaires containing self-reported emotions were compared with the computer recognition, but only weak correlations were found. SHORE was able to recognize emotions almost as well as humans, but when humans had problems recognizing an emotion, the accuracy of the software was much lower.
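The central comparison in this study, software scores versus human judgments, can be sketched as a simple correlation under assumed data shapes; the rater-consensus averaging and the use of a Pearson coefficient are illustrative choices, not necessarily the thesis's exact procedure.

```python
# Sketch: correlate per-frame emotion scores from recognition software with
# the consensus of several human observers rating the same frames.
import numpy as np
from scipy.stats import pearsonr

def software_human_agreement(software_scores, observer_scores):
    """software_scores: (n_frames,) e.g. a per-frame 'happy' score.
    observer_scores: (n_observers, n_frames) human ratings of the same frames."""
    consensus = np.asarray(observer_scores).mean(axis=0)  # average the raters
    r, p = pearsonr(np.asarray(software_scores), consensus)
    return r, p

# A strong r for happy and weak r for sad/angry would reproduce the pattern
# of results the abstract reports.
```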
8

Gohar, Kadar Navit. "Diagnostic colours of emotions." University of Sydney, 2008. http://hdl.handle.net/2123/2298.

Full text
Abstract:
Doctor of Philosophy
This thesis investigates the role of colour in the cognitive processing of emotional information. The research is guided by the effect of colour diagnosticity, which has previously been shown to influence recognition performance for several types of objects as well as natural scenes. Experiment 1 examined whether colour information is considered a diagnostic perceptual feature of seven emotional categories: happiness, sadness, anger, fear, disgust, surprise and neutral. Participants (N = 119), who were naïve to the specific purpose and expectations of the experiment, chose colour more than any other perceptual quality (e.g. shape and tactile information) as a feature that describes the seven emotional categories. The specific colour features given for the six basic emotions were consistently different from those given to the non-emotional neutral category. While emotional categories were often described by chromatic colour features (e.g. red, blue, orange), the neutral category was often ascribed achromatic colour features (e.g. white, grey, transparent) as the most symptomatic perceptual qualities for its description. The emotion 'anger' was unique in being the only emotion for which agreement on one particular colour, red, was higher than 50% of the total given colour features. Confirming that colour is a diagnostic feature of emotions led to the examination of the effect of diagnostic colours of emotion on recognition memory for emotional words and faces: the effect, if any, of appropriate and inappropriate colours (matched with emotion) on the strength of memory for later recognition of faces and words (Experiments 2 & 3). The two experiments used retention intervals of 15 minutes and one week respectively, and the colour-emotion associations were determined for each individual participant. Results showed that regardless of the subject's consistency level in associating colours with emotions, and compared with the individual inappropriate or random colours, individual appropriate colours of emotions significantly enhance recognition memory for six basic emotional faces and words. This difference between the individual inappropriate or random colours and the individual appropriate colours of emotions was not significant for non-emotional neutral stimuli. Post hoc findings from both experiments further show that appropriate colours of emotion are associated more consistently than inappropriate colours of emotions. This suggests that appropriate colour-emotion associations are unique both in their strength of association and in the form of their representation. Experiment 4 therefore investigated whether appropriate colour-emotion associations also trigger an implicit automatic cognitive system that allows faster naming times for appropriate versus inappropriate colours of emotional word carriers. Results from the combined Emotional-Semantic Stroop task confirm this hypothesis and therefore imply that colour plays a substantial role not only in our conceptual representations of objects but also in our conceptual representations of basic emotions. The resemblance of the present findings to those found previously for objects and natural scenes suggests a common cognitive mechanism for the processing of emotional diagnostic colours and the processing of diagnostic colours of objects or natural scenes.
Overall, this thesis provides the foundation for many future directions of research in the area of colour and emotion as well as a few possible immediate practical implications.
9

Lau, Yuet-han Jasmine. "Ageing-related effect on emotion recognition." E-thesis, The University of Hong Kong, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37101730.

Full text
10

Gohar, Kadar Navit. "Diagnostic colours of emotions." Thesis, The University of Sydney, 2007. http://hdl.handle.net/2123/2298.

Full text
Abstract:
This thesis investigates the role of colour in the cognitive processing of emotional information. The research is guided by the effect of colour diagnosticity, which has previously been shown to influence recognition performance for several types of objects as well as natural scenes. Experiment 1 examined whether colour information is considered a diagnostic perceptual feature of seven emotional categories: happiness, sadness, anger, fear, disgust, surprise and neutral. Participants (N = 119), who were naïve to the specific purpose and expectations of the experiment, chose colour more than any other perceptual quality (e.g. shape and tactile information) as a feature that describes the seven emotional categories. The specific colour features given for the six basic emotions were consistently different from those given to the non-emotional neutral category. While emotional categories were often described by chromatic colour features (e.g. red, blue, orange), the neutral category was often ascribed achromatic colour features (e.g. white, grey, transparent) as the most symptomatic perceptual qualities for its description. The emotion 'anger' was unique in being the only emotion for which agreement on one particular colour, red, was higher than 50% of the total given colour features. Confirming that colour is a diagnostic feature of emotions led to the examination of the effect of diagnostic colours of emotion on recognition memory for emotional words and faces: the effect, if any, of appropriate and inappropriate colours (matched with emotion) on the strength of memory for later recognition of faces and words (Experiments 2 & 3). The two experiments used retention intervals of 15 minutes and one week respectively, and the colour-emotion associations were determined for each individual participant. Results showed that regardless of the subject's consistency level in associating colours with emotions, and compared with the individual inappropriate or random colours, individual appropriate colours of emotions significantly enhance recognition memory for six basic emotional faces and words. This difference between the individual inappropriate or random colours and the individual appropriate colours of emotions was not significant for non-emotional neutral stimuli. Post hoc findings from both experiments further show that appropriate colours of emotion are associated more consistently than inappropriate colours of emotions. This suggests that appropriate colour-emotion associations are unique both in their strength of association and in the form of their representation. Experiment 4 therefore investigated whether appropriate colour-emotion associations also trigger an implicit automatic cognitive system that allows faster naming times for appropriate versus inappropriate colours of emotional word carriers. Results from the combined Emotional-Semantic Stroop task confirm this hypothesis and therefore imply that colour plays a substantial role not only in our conceptual representations of objects but also in our conceptual representations of basic emotions. The resemblance of the present findings to those found previously for objects and natural scenes suggests a common cognitive mechanism for the processing of emotional diagnostic colours and the processing of diagnostic colours of objects or natural scenes.
Overall, this thesis provides the foundation for many future directions of research in the area of colour and emotion as well as a few possible immediate practical implications.
11

De, Klerk Hester Magdalena. "Young South African children’s recognition of emotions as depicted by picture communication symbols." Thesis, University of Pretoria, 2011. http://hdl.handle.net/2263/28904.

Full text
Abstract:
Experiencing and expressing emotions is an essential part of psychological well-being. It is for this reason that most graphic symbol sets used in the field of AAC include an array of symbols depicting emotions. However, to date, very limited research has been done on children's ability to recognise and use these symbols to express feelings within different cultural contexts. The purpose of the current study was to describe and compare Afrikaans- and Sepedi-speaking grade R children's choice of graphic symbols when depicting four basic emotions, i.e. happy, sad, afraid, and angry. After ninety participants (44 Afrikaans- and 46 Sepedi-speaking) passed a pre-assessment task, they were exposed to 24 emotion vignettes. Participants had to indicate the intensity of the emotion the protagonist in the story would experience. The next step was for the participants to choose the graphic symbol from a 16-symbol overlay that they thought best represented the emotion and its intensity. The results indicated a significant difference at the 1% level between the two groups' selection of expected symbols to represent emotions. Afrikaans-speaking participants chose expected symbols more often than Sepedi-speaking participants to represent the different basic emotions, and Sepedi-speaking participants made use of a larger variety of symbols to represent the emotions. Participants from both language groups most frequently selected expected symbols to represent happy, followed by those for angry and afraid, with expected symbols for sad selected least frequently. Except for a significant difference at the 1% level for happy, no significant differences were present between the intensities selected by the two language groups for the other three basic emotions. No significant differences were observed between the two gender groups' choices of expected symbols to represent emotions or between the intensities selected by the two gender groups.
Thesis (PhD)--University of Pretoria, 2011.
Centre for Augmentative and Alternative Communication (CAAC)
12

Flor, H. R. "Development of a Multisensorial System for Emotions Recognition." Universidade Federal do Espírito Santo, 2017. http://repositorio.ufes.br/handle/10/9561.

Full text
Abstract:
Automated reading and analysis of human emotion has the potential to be a powerful tool for developing a wide variety of applications, such as human-computer interaction systems, but at the same time it is a very difficult problem because human communication is very complex. Humans employ multiple sensory systems in emotion recognition. In the same way, an emotionally intelligent machine requires multiple sensors to be able to create an affective interaction with users. Thus, this Master's thesis proposes the development of a multisensorial system for automatic emotion recognition. The multisensorial system is composed of three sensors, which allowed different emotional aspects to be explored: eye tracking, using the IR-PCR technique, supported studies of visual social attention; the Kinect, in conjunction with the FACS-AU technique, allowed the development of a tool for facial expression recognition; and the thermal camera, using the FT-RoI technique, was employed to detect facial thermal variation. Performing the multisensorial integration of the system made it possible to obtain a more complete and varied analysis of the emotional aspects, allowing the evaluation of focal attention, valence comprehension, valence expression, facial expression, valence recognition and arousal recognition. Experiments were performed with sixteen healthy adult volunteers and 105 healthy child volunteers, and the result was the developed system, which was able to detect eye gaze, recognize facial expressions and estimate valence and arousal for emotion recognition. The system also has the potential to analyse people's emotions through facial features using contactless sensors in semi-structured environments, such as clinics, laboratories, or classrooms, and to become an embedded tool in robots to endow these machines with an emotional intelligence for a more natural interaction with humans. Keywords: emotion recognition, eye tracking, facial expression, facial thermal variation, multisensorial integration
13

Lau, Yuet-han Jasmine, and 劉月嫻. "Ageing-related effect on emotion recognition." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37101730.

Full text
14

Ng, Hau-hei. "The effect of mood on facial emotion recognition." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hdl.handle.net/10722/210312.

Full text
15

Motan, Irem. "Recognition of Self-Conscious Emotions in Relation to Psychopathology." PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12609222/index.pdf.

Full text
Abstract:
The aim of this study is to discover nonverbal, bodily-gesture and contextual cues indicating self-conscious emotions, and to use these cues to examine personal differences and psychopathological symptoms. Moreover, possible effects of cultural differences on the recognition of self-conscious emotions and their relation to psychopathology are discussed. To achieve the aforementioned goals, the study is partitioned into three separate but interdependent phases. The aim of the first study is scale adaptation, for which the State Shame and Guilt Scale, Test of Self-Conscious Affect-3, Guilt-Shame Scale, State-Trait Anxiety Inventory, and Beck Depression Inventory are applied to a group of 250 university students. The second study's objective is to determine the nonverbal expressions used in the recognition of self-conscious emotions. To meet this goal, 5 TAT cards, whose compatibility with the research questions was verified, are applied to 45 university students in separate sessions using close-ended questions. In the third part of the study, 9 TAT cards, which include clues about the recognition and nonverbal expressions of self-conscious emotions, the adapted scales, and a psychopathological symptom measure (SCL-90), all in self-report format, are applied to a group of 250 university students. Factor and correlation analyses done in the first part reveal that the adapted scales are reliable and valid, while group comparisons and measurements in the second part indicate differences between emotions. Findings reveal that shame can be recognized from nonverbal expressions, whereas for guilt contextual clues are used. In the third part, group comparisons and regression analyses, done in order to reveal the recognition of self-conscious emotions and their significant relationships with psychopathology, show that state self-conscious emotions and shame-proneness play very important roles in psychopathology. All these findings are discussed in the light of cultural effects.
16

Verdecchia, Andrea <1983>. "EEG system design for VR viewers and emotions recognition." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amsdottorato.unibo.it/8879/6/Andrea_Verdecchia_tesi.pdf.

Full text
Abstract:
BACKGROUND: Taking advantage of virtual reality is today within everyone's reach, and this has led large commercial companies and research centres to re-evaluate their methodologies. In this context, interest in proposing brain-computer interfaces (BCIs) as interpreters of the personal experience induced by virtual reality viewers is growing steadily. OBJECTIVE: The present work describes the design of an electroencephalographic (EEG) system that can easily be integrated with the virtual reality viewers currently on the market. Such a system has several possible final applications, but our intention, inspired by neuromarketing, is to analyse the possibility of recognizing the mental states of like and dislike. METHODS: The design process involved two phases: the first concerned the development of the hardware system and led to an analysis of techniques for obtaining the cleanest possible signals; the second concerned the analysis of the acquired signals to determine the possible presence of characteristics that distinguish the two mental states of like and dislike, using basic statistical analysis techniques. RESULTS: Our analysis shows that differences between the like and dislike states of mind can be found by analysing the power in the frequency bands used to classify brain activity (theta, alpha, beta and gamma): in the like case the power is slightly higher than in the dislike case. Moreover, we have found through the use of logistic regression that the EEG channels F7, F8 and Fp1 are the most determinant components in the detection, along with the frequencies in the high-beta band (20-30 Hz).
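The pipeline this abstract describes (band power in the theta, alpha, beta and gamma bands, then logistic regression over channels) can be sketched roughly as follows; the sampling rate, band edges, and array shapes are assumptions for illustration, not the thesis's actual settings.

```python
# Sketch: EEG band-power features and a like/dislike logistic regression.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 256  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(trial):
    """trial: (n_channels, n_samples) EEG segment -> flat feature vector."""
    freqs, psd = welch(trial, fs=FS, nperseg=2 * FS, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.trapz(psd[:, mask], freqs[mask], axis=-1))
    return np.log(np.concatenate(feats))  # log power stabilises the variance

def fit_like_dislike(X_trials, y):
    """X_trials: (n_trials, n_channels, n_samples); y: 1 = like, 0 = dislike."""
    X = np.stack([band_powers(t) for t in X_trials])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf  # clf.coef_ indicates which channel/band features dominate
```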
17

Granato, Marco. "Emotions Recognition in Video Game Players Using Physiological Information." Doctoral thesis, Università degli Studi di Milano, 2019. http://hdl.handle.net/2434/607597.

Full text
Abstract:
Video games are interactive software able to arouse different kinds of emotions in players. Usually, the game designer tries to define a set of game features able to entertain, engage, and/or educate the consumers. Through the gameplay, the narrative, and the game environment, a video game is able to interact with players' intellect and emotions. Thanks to the technological developments of recent years, the gaming industry has grown to become one of the most important entertainment markets. The scientific community and private companies have put a lot of effort into the technical aspects as well as into the interaction between players and the video game. Concerning game design, many theories have been proposed to define guidelines for designing games able to arouse specific emotions in consumers. They mainly use interviews or observations to judge the goodness of their approach through qualitative data. There are some works based on empirical studies aimed at studying emotional states directly in players, using quantitative data; however, these studies usually treat the data analysis as a classification problem involving, mainly, the game events. Our goal is to understand how the feelings experienced by players can be automatically deduced, and how these emotional states can be used to improve the game quality. To pursue this purpose, we have measured mental states using physiological signals in order to obtain a set of quantitative values used to identify the players' emotions. The most common ways to identify emotions are to use a discrete set of labels (e.g., joy, anger), or to assess them inside an n-dimensional vector space. Albeit the most natural way to describe emotions is to represent them through their names, the latter approach provides a quantitative result that can be used to define the new game status. In this thesis, we propose a framework aimed at the automatic assessment, using physiological data, of emotions in a 2-dimensional space structured by valence and arousal vectors. The former may vary between pleasure and displeasure, while the latter defines the level of physiological activation. As a consequence, we have considered the following physiological data the most effective for inferring the players' mental states: electrocardiography (ECG), electromyography on 5 facial muscles (facial EMG), galvanic skin response (GSR), and respiration intensity/rate. During a set of game sessions, we have recorded a video of the player's face and of her gameplay. To acquire the affective information, we have shown the recorded video and audio to the player, and we have asked her/him to self-assess the emotional state over the entire game on the valence and arousal vectors presented above. Starting from this framework, we have conducted two sets of experiments. In the first experiment, our aim was to validate the procedure. We have collected the data of 10 participants while they played 4 platform games. We have also analyzed the data to identify the emotion pattern of the player during the gaming sessions. The analysis has been conducted in two directions: individual analysis (to find the physiological pattern of an individual player), and collective analysis (to find the generic patterns of the sample population). The goal of the second experiment was to create a dataset of physiological information from 33 players, and to extend the data analysis and the results provided by the pilot study. We have asked the participants to play 2 racing games in two different environments: on a standard monitor and using a head-mounted display for virtual reality. After collecting the information needed for the dataset creation, we have analyzed the data focusing on individual analysis. In both analyses, the self-assessment and the physiological data have been used to infer the emotional state of the players in each moment of the game sessions, and to build a prediction model of players' emotions using machine learning techniques. The main contributions of this thesis are: to design a novel framework for studying the emotions of video game players, to develop an open-source architecture and a set of software able to acquire physiological signals and affective states, to create an affective dataset using racing video games as stimuli, to understand which physiological conditions could be the most relevant for determining players' emotions, and to propose a method for the real-time prediction of a player's mental state during a video game session. The results suggest that it is possible to design a model that fits the player's characteristics, predicting her emotions. It could be an effective tool available to game designers, who can introduce innovative features to their games.
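The real-time prediction step described above, from windowed physiological features to self-assessed valence and arousal, might look like the following sketch; the feature layout and the random-forest regressor are illustrative assumptions rather than the thesis's actual model.

```python
# Sketch: predicting valence and arousal from physiological feature windows.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def train_affect_model(X, y):
    """X: (n_windows, n_features) features from ECG, facial EMG, GSR,
    respiration; y: (n_windows, 2) self-assessed valence and arousal."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)  # multi-output regression: valence and arousal
    print("R^2 on held-out windows:", model.score(X_te, y_te))
    return model

# During play, the same features computed on the live signal stream give a
# real-time estimate: valence, arousal = model.predict(window_feats)[0]
```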
18

Sun, Rui. "The evaluation of the stability of acoustic features in affective conveyance across multiple emotional databases." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49041.

Full text
Abstract:
The objective of the research presented in this thesis was to systematically investigate a computational structure for cross-database emotion recognition. The research consisted of evaluating the stability of acoustic features, particularly the glottal and Teager Energy based features, and investigating three normalization methods and two data fusion techniques. One of the challenges of cross-database training and testing is accounting for the potential variation in the types of emotions expressed as well as in the recording conditions. In an attempt to alleviate the impact of these variations, three normalization methods for the acoustic data were studied. The lack of an emotional database large and diverse enough to train the classifier motivated the use of multiple databases for training, which posed another challenge: data fusion. This thesis proposed two data fusion techniques, pre-classification SDS and post-classification ROVER, to study the issue. Using the glottal, TEO and TECC features, whose stability in distinguishing emotions has been highlighted on multiple databases, the systematic computational structure proposed in this thesis could improve the performance of cross-database binary-emotion recognition by up to 23% for neutral vs. emotional and 10% for positive vs. negative.
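Two ingredients named in this abstract, per-database normalization and post-classification voting, can be sketched as below; the z-score normalization and plain majority vote are illustrative stand-ins, not the thesis's exact SDS and ROVER formulations.

```python
# Sketch: per-corpus feature normalisation plus a ROVER-style majority vote.
import numpy as np

def zscore_per_corpus(X, corpus_ids):
    """Standardise each feature within its source corpus to reduce
    recording-condition mismatch across databases."""
    Xn = np.empty_like(X, dtype=float)
    for c in np.unique(corpus_ids):
        rows = corpus_ids == c
        mu = X[rows].mean(axis=0)
        sd = X[rows].std(axis=0) + 1e-8  # avoid division by zero
        Xn[rows] = (X[rows] - mu) / sd
    return Xn

def majority_vote(predictions):
    """predictions: (n_classifiers, n_samples) label matrix -> fused labels."""
    preds = np.asarray(predictions)
    fused = []
    for col in preds.T:
        labels, counts = np.unique(col, return_counts=True)
        fused.append(labels[np.argmax(counts)])
    return np.array(fused)
```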
19

Bellegarde, Lucille Gabrielle Anna. "Perception of emotions in small ruminants." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25915.

Full text
Abstract:
Animals are sentient beings, capable of experiencing emotions. Being able to assess emotional states in farm animals is crucial to improving their welfare. Although the function of emotion is not primarily communication, the outward expression of an emotional state involves changes in posture, vocalisations, odours and facial expressions. These changes can be perceived and used as indicators of emotional state by other animals. Since emotions can be perceived between conspecifics, understanding how emotions are identified and how they can spread within a social group could have a major impact on improving the welfare of farmed species, which are mostly reared in groups. A recently developed method for the evaluation of emotions in animals is based on cognitive biases such as judgment biases, i.e. an individual in a negative emotional state will show pessimistic judgments while an individual in a positive emotional state will show optimistic judgments. The aims of this project were to (A) establish whether sheep and goats can discriminate between images of faces of familiar conspecifics taken in different positive and negative situations, (B) establish whether sheep and goats perceive the valence (positive or negative) of the emotion expressed by the animal on the image, and (C) validate the use of images of faces in cognitive bias studies. The use of images of faces of conspecifics as emotional stimuli was first validated, using a discrimination task in a two-armed maze. A new methodology was then developed across a series of experiments to assess spontaneous reactions of animals exposed to video clips or to images of faces of familiar conspecifics. Detailed observations of ear postures were used as the main behavioural indicator. Individual characteristics (dominance status within the herd, dominance pairwise relationships and human-animal relationship) were also recorded during preliminary tests and included in the analyses. The impact of a low-mood state on the perception of emotions was assessed in sheep after subjecting half of the animals to unpredictable negative housing conditions and keeping the other half in good standard housing conditions. Sheep were then presented with videos of conspecifics filmed in situations of varying valence. Reactions to ambiguous stimuli were evaluated by presenting goats with images of morphed faces. Goats were also presented with images of faces of familiar conspecifics taken in situations of varying emotional intensity. Sheep could discriminate images of faces of conspecifics taken either in a negative or in a neutral situation, and their learning of the discrimination task was affected by the type of emotion displayed. Sheep reacted differently depending on the valence of the video clips (P < 0.05); however, there was no difference between the control and the low-mood groups (P > 0.05). Goats also showed different behavioural reactions to images of faces photographed in different situations (P < 0.05), indicating that they perceived the images as different. Responses to morphed images were neither necessarily intermediate to responses to negative and positive images nor gradual, which poses a major problem for the potential use of facial images in cognitive bias experiments. Overall, animals were more attentive towards images or videos of conspecifics in negative situations, i.e., presumably, in a negative emotional state. This suggests that sheep and goats are able to perceive the valence of the emotional state. The identity of the individual on the photo also affected the animals' spontaneous reaction to the images. Social relationships such as dominance, but also affinity between the tested and the photographed individual, seem to influence emotion perception.
20

Cheung, Ching-ying Crystal. "Facial emotion recognition after subcortical cerebrovascular diseases /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk:8888/cgi-bin/hkuto%5Ftoc%5Fpdf?B23425027.

Full text
21

張晶凝 and Ching-ying Crystal Cheung. "Facial emotion recognition after subcortical cerebrovascular diseases." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31224155.

Full text
22

Kreklewetz, Kimberly. "Facial affect recognition in psychopathic offenders /." Burnaby B.C. : Simon Fraser University, 2005. http://ir.lib.sfu.ca/handle/1892/2166.

Full text
23

Donnan, Gemma Louise Jean. "An investigation of cultural variations in emotion experience, regulation and expression in two Scottish settings." Thesis, University of Aberdeen, 2017. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=234053.

Full text
Abstract:
Individuals from Aberdeen/Aberdeenshire and Glasgow/Greater Glasgow have anecdotally been thought to differ in their expression of emotion, with the former group thought to be less emotionally expressive than the latter. The current thesis carried out three studies to examine this empirically. A systematic review of measures of emotion experience, regulation, expression and alexithymia was carried out to establish their psychometric properties. The results of the review led to recommendations for which scales to use in the subsequent studies of the thesis. The second study used measures of emotion experience (Positive Affect Negative Affect Schedule), emotion regulation (Emotion Regulation Questionnaire) and alexithymia (Toronto Alexithymia Scale-20), identified in the review, in samples of adults from Aberdeen/Aberdeenshire and Glasgow/Greater Glasgow. A multiple indicators multiple causes model was used to examine group differences in responses to these measures; this method allowed examination of differences in factor means and in individual indicator items on the scales. It was found that Aberdeen/Aberdeenshire participants demonstrated a higher factor mean on the Negative Affect (NA) factor of the PANAS; the Aberdeen/Aberdeenshire participants also endorsed an individual item on the ERQ (Item 5) and the TAS-20 (Item 1) more than the Glasgow/Greater Glasgow participants. Finally, a qualitative study was carried out in which participants from each group recalled events related to six emotions. In describing events related to fear, anger and sadness, Aberdeen/Aberdeenshire participants tended to use positive statements that downplayed events related to these emotions, while the Glasgow/Greater Glasgow participants tended to use 'catastrophic' statements when describing events related to the same emotions. This may indicate differing cultural models between these populations.
24

Kuhn, Lisa Katharina. "Emotion recognition in the human face and voice." Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11216.

Full text
Abstract:
At a perceptual level, faces and voices consist of very different sensory inputs, and therefore information processing in one modality can be independent of information processing in another modality (Adolphs & Tranel, 1999). However, there may also be a shared neural emotion network that processes stimuli independently of modality (Peelen, Atkinson, & Vuilleumier, 2010), or emotions may be processed on a more abstract cognitive level, based on meaning rather than on perceptual signals. This thesis therefore aimed to examine emotion recognition across two separate modalities in a within-subject design, including a cognitive Chapter 1 with 45 British adults, a developmental Chapter 2 with 54 British children, and a cross-cultural Chapter 3 with 98 German and British children and 78 German and British adults. Intensity ratings, choice reaction times and correlations from confusion analyses of emotions across modalities were analysed throughout. Further, an ERP chapter investigated the time course of emotion recognition across two modalities. Highly correlated rating profiles of emotions in faces and voices were found, which suggests a similarity in emotion recognition across modalities. Emotion recognition in primary-school children improved with age for both modalities, although young children relied mainly on faces. British as well as German participants showed comparable patterns in rating basic emotions, but subtle differences were also noted, and Germans perceived emotions as less intense than the British did. Overall, the behavioural results reported in the present thesis are consistent with the idea of a general, more abstract level of emotion processing which may act independently of modality. This could be based, for example, on a shared emotion brain network or on more general, higher-level cognitive processes which are activated across a range of modalities. Although emotion recognition abilities are already evident during childhood, this thesis argued for a contribution of 'nurture' to emotion mechanisms, as recognition was influenced by external factors such as development and culture.
25

Linardatos, Eftihia. "Facial Emotion Recognition in Generalized Anxiety Disorder and Depression: Assessing for Unique and Common Responses to Emotions and Neutrality." Kent State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=kent1322539420.

Full text
26

Cornew, Lauren A. "Emotion processing in the auditory modality: the time course and development of emotional prosody recognition." Diss., University of California, San Diego, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3330854.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed December 11, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references.
27

Sundell, Jessica. "Psychopathic Personality Traits, Empathy, and Recognition of Facial Expressions of Emotions." Thesis, Stockholms universitet, Psykologiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-172310.

Full text
Abstract:
Psychopathic personality traits have been found to be associated with a variety of emotional deficits, including poor facial expression recognition and a reduced capacity to experience empathy. However, research has yielded conflicting results. This study investigated the relationship between psychopathic personality traits, facial emotion recognition, and empathy in a community sample (n = 127) identified as having either low or elevated levels of psychopathic traits. Facial expression recognition was measured using the Hexagon task, which contains morphed facial expressions with two levels of expressivity. Psychopathic traits were assessed using the Youth Psychopathic Traits Inventory, and empathy was measured with the Interpersonal Reactivity Index. Individuals with elevated psychopathic traits did not display lower accuracy in facial expression recognition compared to the low psychopathic traits group; rather, the reverse was found. Weak to strong negative correlations were found between psychopathic traits and empathy. Zero to weak correlations were found between psychopathic traits and expression recognition, as well as between empathy and expression recognition. The results are compared with similar studies, and implications for the study of psychopathy and emotion recognition are discussed.
28

Ryan, Melissa-Sue. "Ageing and emotion: categorisation, recognition, and social understanding." University of Otago, Department of Psychology, 2009. http://adt.otago.ac.nz./public/adt-NZDU20090309.150008.

Full text
Abstract:
The present thesis investigated age differences in emotion recognition skills of 146 older adults (age range 60-92 years) and 146 young adults (age range 18-25 years) in four experiments. Experiment 1 assessed participants' ability to categorise facial expressions of sadness, fear, happiness, and surprise. In Experiments 2 and 3, participants were asked to identify six emotions (happiness, sadness, surprise, fear, anger, disgust) from still and dynamic faces, alone and in combination with vocal expressions. Finally, Experiment 4 compared performance on these standard emotion recognition paradigms to that of more ecologically-valid measures: the Faux Pas and Verbosity and Social Cues Tasks. Across the four studies, there was evidence of an age-related decline in emotion recognition skills. Older adults were overall less sensitive to perceptual differences between faces in Experiment 1 and showed a loss of the categorical perception effect for fearful faces. Older adults were less accurate than young adults at recognising expressions of sadness, anger, and fear, across types of expression (voices and faces). There were some differences across modalities, with older adults showing difficulties with fear recognition for faces, but not voices, and difficulty in matching happy voices to happy faces but not for happy voices and faces presented in isolation. Experiment 2 also showed that the majority of older adult participants had some decline in emotion recognition skills. Age differences in performance were also apparent on the more ecologically-valid measures. Older adults were more likely than young adults to rate the protagonist as behaving inappropriately in the Faux Pas Task, even with the control videos, suggesting difficulty in discriminating faux pas. Older adults were also judged to be more verbose and to offer more off-topic information during the Verbosity Task than young adults and were less likely to recognise expressions of boredom in the Social Cues Task. These findings are discussed in terms of three theoretical accounts. A positivity bias (indicating increased recognition and experience of positive emotions and reduction for negative emotions) was not consistent with the older adults' difficulties with matching happy faces to voices and relatively preserved performance with disgusted expressions. Age-related decline in cognitive processes did not account for the specific pattern of age differences observed. The most plausible explanation for the age differences in the present thesis is that age-related neurological changes in the brain areas that process emotions, specifically the temporal and frontal areas, are likely to contribute to the older adults' declines in performance on emotion categorisation, emotion recognition, and social cognition tasks. The implications for everyday social interactions for older adults are also discussed.
29

Durrani, Sophia J. "Studies of emotion recognition from multiple communication channels." Thesis, University of St Andrews, 2005. http://hdl.handle.net/10023/13140.

Full text
Abstract:
Crucial to human interaction and development, emotions have long fascinated psychologists. Current thinking suggests that specific emotions, regardless of the channel in which they are communicated, are processed by separable neural mechanisms. Yet much research has focused only on the interpretation of facial expressions of emotion. The present research addressed this oversight by exploring recognition of emotion from facial, vocal, and gestural tasks. Happiness and disgust were best conveyed by the face, yet other emotions were equally well communicated by voices and gestures. A novel method for exploring emotion perception, by contrasting errors, is proposed. Studies often fail to consider whether the status of the perceiver affects emotion recognition abilities. Experiments presented here revealed an impact of mood, sex, and age of participants. Dysphoric mood was associated with difficulty in interpreting disgust from vocal and gestural channels. To some extent, this supports the concept that neural regions are specialised for the perception of disgust. Older participants showed decreased emotion recognition accuracy but no specific pattern of recognition difficulty. Sex of participant and of actor affected emotion recognition from voices. In order to examine neural mechanisms underlying emotion recognition, an exploration was undertaken using emotion tasks with Parkinson's patients. Patients showed no clear pattern of recognition impairment across channels of communication. In this study, the exclusion of surprise as a stimulus and response option in a facial emotion recognition task yielded results contrary to those achieved without this modification. Implications for this are discussed. Finally, this thesis gives rise to three caveats for neuropsychological research. First, the impact of the observers' status, in terms of mood, age, and sex, should not be neglected. Second, exploring multiple channels of communication is important for understanding emotion perception. Third, task design should be appraised before conclusions regarding impairments in emotion perception are presumed.
30

Atwood, Kristen Diane. "Recognition of Facial Expressions of Six Emotions by Children with Specific Language Impairment." Diss., Brigham Young University, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1501.pdf.

Full text
31

Chan, Pui-shan Vivien. "Facial emotion recognition ability of children in Hong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B2974023X.

Full text
32

Schacht, Annekathrin. "Emotions in visual word processing." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2008. http://dx.doi.org/10.18452/15727.

Full text
Abstract:
In recent cognitive and neuroscientific research the influences of emotion on information processing are of special interest. As has been shown in several studies on affective picture as well as facial emotional expression processing, emotional stimuli tend to involuntarily draw attentional resources and preferential and sustained processing, possibly caused by their high intrinsic relevance. However, evidence for emotion effects in visual word processing is scant and heterogeneous. As yet, little is known about at which stage and under what conditions the specific emotional content of a word is activated. A series of experiments which will be summarized and discussed in the following section aimed to localize the effects of emotion in visual word processing by recording event-related potentials (ERPs). Distinct effects of emotional valence on ERPs were found which were distinguishable with regard to their temporal and spatial distribution and might be therefore related to different stages within the processing stream. As a main result, the present findings indicate that the activation of emotional valence of verbs occurs on a (post-) lexical stage. The underlying neural mechanisms of this early registration appear to be domain-unspecific, and further, largely independent of processing resources and task demands. On later stages, emotional processes are modulated by several different factors. Further, the findings of an acceleration of early but not late emotion effects caused by neutral context information as well as by domain-specifity indicate a flexible dynamic of emotional processes which would be hard to account for by strictly serial processing models.
APA, Harvard, Vancouver, ISO, and other styles
33

Gharsalli, Sonia. "Reconnaissance des émotions par traitement d’images." Thesis, Orléans, 2016. http://www.theses.fr/2016ORLE2075/document.

Full text
Abstract:
Emotion recognition is one of the most complex scientific domains. In recent years, a growing number of applications have attempted to automate it, in areas such as support for autistic children, video games, and human-machine interaction. Emotions are conveyed through several channels; we focus on facial emotional expressions, specifically the six basic emotions: happiness, anger, fear, disgust, sadness, and surprise. A comparative study of two recognition methods, one based on geometric descriptors and the other on appearance descriptors, is carried out on the CK+ database of posed emotions and the FEEDTUM database of spontaneous emotions. Constraints such as changes in image resolution, the limited number of labelled images available for training, and the recognition of new subjects not included in the training set are also taken into account. Various fusion schemes are then evaluated on new subjects excluded from the training set. The results are promising for posed emotions (recognition rates above 86%) but remain insufficient for spontaneous emotions. A study of local facial regions allowed us to develop hybrid, region-based methods, which improve recognition rates for spontaneous emotions. Finally, we developed an appearance-descriptor selection method based on importance scores; compared with two selection methods from the literature, the proposed method improves the recognition rate.
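The abstract does not specify how the importance-based descriptor selection was implemented; the following is a minimal sketch of selection by importance scores, assuming a scikit-learn-style workflow (the data, dimensions, and the choice of random forest as the ranker are illustrative assumptions, not the thesis's method):

```python
# Minimal sketch of feature selection by importance scores, assuming scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))   # appearance descriptors, hypothetical
y = rng.integers(0, 6, size=200)  # six basic emotion labels

# Rank features by importance learned from an auxiliary classifier.
# (For brevity, selection here uses all data; in practice, select on
# the training folds only to avoid leakage.)
ranker = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
order = np.argsort(ranker.feature_importances_)[::-1]

# Keep only the top-k features and compare recognition rates.
k = 50
X_selected = X[:, order[:k]]
full = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
selected = cross_val_score(RandomForestClassifier(random_state=0), X_selected, y, cv=5).mean()
print(f"all features: {full:.3f}  top-{k}: {selected:.3f}")
```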
APA, Harvard, Vancouver, ISO, and other styles
34

Stevens, Christopher. "Child sexual offenders’ recognition of facial affect: are offenders less sensitive to emotions in children?" Thesis, University of Canterbury. Psychology, 2015. http://hdl.handle.net/10092/10569.

Full text
Abstract:
Understanding the risk factors that contribute to sexual offending against children is an important topic for research. The present study set out to examine whether deficits in emotion recognition might contribute to sexual offending, by testing whether child sexual offenders were impaired in their recognition of facial expressions of emotion, particularly in children, relative to non-offender controls. To do this, we tested 49 child sexual offenders and 46 non-offender controls on their ability to recognise facial expressions of emotion using photographs of both adults and children posing emotions from the Radboud Faces Database (Langner et al., 2010). Using morphing software, we created continua along six emotion pairs (e.g., happiness-sadness) in 10% increments from the emotions of sadness, anger, happiness, and fear. Using signal detection analyses, we found that across the emotion pairs, non-offenders were significantly better able to discriminate between emotions than offenders, although there were no significant differences within individual emotion pairs, and the overall difference was not significant with either age or level of education as a covariate. When discriminating between fear and anger, non-offenders showed a significant bias towards labelling an emotion as fear when judging male faces, whereas offenders did not, and this difference remained significant with age, level of education, and socioeconomic status as covariates. Additionally, both groups showed a strong bias towards labelling an emotion as anger when judging female faces. Thus sexual offenders were more likely to identify anger rather than fear in male faces, suggesting that offenders lack an inhibition against recognising anger in males that non-offenders showed. Overall, contrary to our predictions, we found no evidence to indicate that child sexual offenders show a specific deficit in their recognition of emotions in children. However, future research should continue to examine this area and its potential link to recidivism.
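For readers unfamiliar with the signal detection quantities referred to above, the sketch below computes the standard sensitivity (d′) and response-bias (criterion c) measures from raw response counts; the counts and the log-linear correction are illustrative assumptions, not the thesis's exact analysis:

```python
# Minimal sketch of signal-detection measures: sensitivity (d') and
# response bias (criterion c) from hit / false-alarm counts.
from statistics import NormalDist

def d_prime_and_bias(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    # Log-linear correction guards against rates of exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# E.g., judging "fear" vs "anger" along a morph continuum (counts hypothetical):
print(d_prime_and_bias(hits=38, misses=12, false_alarms=9, correct_rejections=41))
```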
APA, Harvard, Vancouver, ISO, and other styles
35

Yip, Tin-hang James. "Emotion recognition in patients with Parkinson's disease: contribution of the substantia nigra." Hong Kong: University of Hong Kong, 2002. http://sunzi.lib.hku.hk/hkuto/record.jsp?B24873007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Sanchez, Cortes Diana. "The influence of alexithymia and sex in the recognition of emotions from visual, auditory, and bimodal cues." Thesis, Stockholms universitet, Psykologiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-98519.

Full text
Abstract:
Alexithymia is a personality trait associated with impairments in emotional processing. This study investigated the influence of alexithymia and sex on the ability to recognize emotional expressions presented in faces, voices, and their combination. Alexithymia was assessed with the Toronto Alexithymia Scale (TAS-20), and participants (n = 122) judged 12 emotions displayed uni- or bimodally in two sensory modalities, as measured by the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). According to their scores, participants were grouped into low, average, and high alexithymia. The results showed that sex did not moderate the relationship between alexithymia and emotion recognition. The low alexithymia group recognized emotions more accurately than the other two subgroups, at least in the visual modality; no group differences were found in the voice and bimodal tasks. These findings illustrate the importance of accounting for how different modalities influence the presentation of emotional cues, and suggest the use of dynamic instruments such as the GEMEP-CS, which increase ecological validity and are more sensitive in detecting individual differences than posed techniques such as still pictures.
Genetic and neural factors underlying individual differences in emotion recognition ability
APA, Harvard, Vancouver, ISO, and other styles
37

Arruda, Beatriz Bettencourt. "Emoções e perturbação emocional: reconhecimento de expressões faciais." Master's thesis, [s.n.], 2015. http://hdl.handle.net/10284/4741.

Full text
Abstract:
Dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Psychology, Clinical and Health Psychology branch.
The interest in and relevance of studying emotions is well recognised, given the role they play in human life as both a biological and a social being. Emotions serve a social and communicative function, shaping interpersonal relationships and social networks, as well as an intrapersonal, psychological, and biological function that ensures the survival of the species. The human face, in turn, plays a key role in the communication of emotions, and the recognition of facial expressions offers an immediate means of obtaining information about another person's emotions. This study examines differences in the recognition of basic emotions from facial expressions according to the presence or absence of emotional disturbance, as well as by sex and age. A total of 85 individuals living in the Azores archipelago, aged 18 to 57 years, participated. Data were collected through a brief sociodemographic questionnaire, the paper version of the i-Emotions (i-E) software platform, and the Portuguese version of the Brief Symptom Inventory (BSI). The results showed no significant differences in the overall recognition of facial expressions of the basic emotions. However, significant differences were found for specific emotions: participants without emotional disturbance outperformed those with emotional disturbance in recognising the facial expression of disgust; females outperformed males in recognising fear; and participants older than 30 outperformed those aged 18 to 30 in recognising fear and sadness.
APA, Harvard, Vancouver, ISO, and other styles
38

Guerrero, Razuri Javier Francisco. "Decisional-Emotional Support System for a Synthetic Agent : Influence of Emotions in Decision-Making Toward the Participation of Automata in Society." Doctoral thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-122084.

Full text
Abstract:
Emotion influences our actions, which means that emotion has subjective decision value. The emotions of those affected by decisions, properly interpreted and understood, provide feedback on actions and, as such, serve as a basis for decisions. Accordingly, "affective computing" represents a wide range of technological opportunities for implementing emotions to improve human-computer interaction, including insights from across the computational sciences into how computer systems can be designed to communicate with humans and recognize their emotional states. Today, emotional systems such as software-only agents and embodied robots seem to improve every day at managing large volumes of information, yet they remain incapable of reading our feelings and reacting to them. From a computational viewpoint, technology has made significant steps toward determining how an emotional behavior model could be built; such a model is intended to be used for intelligent assistance and support to humans. Human emotions are engines that allow people to generate useful responses to the current situation, taking into account the emotional states of others. Recovering the emotional cues that emanate from natural human behavior, such as facial expressions and bodily kinetics, could help to develop systems that recognize, interpret, process, and simulate human emotions, and that base decisions on them. There is a need to create emotional systems able to develop an emotional bond with users, reacting emotionally to encountered situations and assisting users to make their daily lives easier. Handling emotions and their influence on decisions can broaden human-machine communication. The present thesis strives to provide an emotional architecture for an agent, based on a group of decision-making models influenced by external emotional information provided by humans and acquired through a set of machine-learning classification techniques. The system can form positive bonds with the people it encounters when it proceeds according to their emotional behavior. An agent embodied in the emotional architecture will interact with a user, facilitating its adoption in application areas such as caregiving, for example to provide emotional support to the elderly. The agent's architecture uses an adversarial structure based on an Adversarial Risk Analysis framework with a decision-analytic flavor that includes models forecasting a human's behavior and its impact on the surrounding environment. The agent perceives its environment and the actions performed by an individual, which constitute the resources needed to execute the agent's decision during the interaction. The decision the agent derives from the adversarial structure is also affected by emotional-state information provided by a classifier-ensemble system, giving rise to a "decision with emotional connotation" belonging to the group of affective decisions. The performance of several well-known classifiers was compared in order to select the best performers and build the ensemble system, together with feature-selection methods introduced to predict emotion from facial expressions, bodily gestures, and speech with satisfactory accuracy.
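The abstract names a classifier ensemble without detailing it; below is a minimal sketch of one plausible realization, a soft-voting ensemble in scikit-learn (the base classifiers, features, and labels are hypothetical, not the thesis's configuration):

```python
# Minimal sketch of a classifier ensemble for emotion prediction,
# assuming scikit-learn; classifiers and features are hypothetical.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))    # fused face/gesture/speech features, hypothetical
y = rng.integers(0, 4, size=300)  # emotional-state labels

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=1)),
        ("rf", RandomForestClassifier(random_state=1)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average class probabilities across the base classifiers
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```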

At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 8: Accepted.

APA, Harvard, Vancouver, ISO, and other styles
39

葉天恒 and Tin-hang James Yip. "Emotion recognition in patients with Parkinson's disease: contribution of the substantia nigra." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31227016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Bui, Kim-Kim. "Face Processing in Schizophrenia: Deficit in Face Perception or in Recognition of Facial Emotions?" Thesis, University of Skövde, School of Humanities and Informatics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-3349.

Full text
Abstract:

Schizophrenia is a psychiatric disorder characterized by social dysfunction. People with schizophrenia misinterpret social information, and it has been suggested that this difficulty may result from visual processing deficits. As faces are one of the most important sources of social information, it is hypothesized that people suffering from the disorder have impairments in the visual face processing system. It is unclear which mechanism of the face processing system is impaired, but two types of deficit are most often proposed: a deficit in face perception in general (i.e., processing of facial features as such) and a deficit in facial emotion processing (i.e., recognition of emotional facial expressions). Given the contradictory evidence from behavioural, electrophysiological, and neuroimaging studies supporting one or the other deficit in schizophrenia, it is too early to make any conclusive statements about the nature and level of impairment. Further studies are needed for a better understanding of the key mechanisms and abnormalities underlying social dysfunction in schizophrenia.

APA, Harvard, Vancouver, ISO, and other styles
41

Tehan, Jennifer R. "Age-related differences in deceit detection: the role of emotion recognition." Thesis, Georgia Institute of Technology, 2006. http://etd.gatech.edu/theses/available/etd-04102006-110201/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Zakharov, Konstantin. "Affect Recognition and Support in Intelligent Tutoring Systems." Thesis, University of Canterbury. Computer Science and Software Engineering, 2007. http://hdl.handle.net/10092/1216.

Full text
Abstract:
Empirical research provides evidence of strong interaction between cognitive and affective processes in the human mind. Education research proposes a model of constructive learning that relates cognitive and affective processes in an evolving cycle of affective states. Intelligent Tutoring Systems (ITSs) are capable of providing comprehensive cognitive support; affective support in ITSs, however, is lagging behind, and an in-depth exploration of cognitive and affective processes in ITSs is yet to be seen. Our research focuses on the integration of affective support in an ITS enhanced with an affective pedagogical agent. In our work we adopt the dimensional (versus categorical) view of emotions for modelling the affective states of the agent and of the ITS users. In two stages we develop and evaluate an affective pedagogical agent. The affective response of the first agent version is based on the appraisal of the interaction state and is displayed as affective facial expressions. The pilot study at the end of the first stage of the project confirmed the viability of our approach, which combines the dimensional view of emotions with the appraisal of interaction state. In the second stage of the project we developed a facial feature tracking application for real-time emotion recognition in a video stream. The affective awareness of the second version of the agent is based on the output of the facial feature tracking application and the appraisal of the interaction state; this agent's response takes the form of affect-oriented messages designed to interrupt a state of negative flow. The evaluation of the affect-aware agent against an unemotional, affect-unaware agent provided positive results, confirming the superiority of the affect-aware agent. Although the uptake of the agent was not unanimous, the agent established and maintained good rapport with users in the role of a caring tutor. The results of the pilot study and the final evaluation validate our choices in the design of affective interaction. In both experiments, the participants appreciated the addition of audible feedback messages, describing them as an enhancement which helped them save time and maintain their focus. Finally, we offer directions for future research on affective support that can be conducted within the framework developed in the course of this project.
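As a minimal illustration of the dimensional view of emotions adopted above, the sketch below models an agent's affective state as a point in valence-arousal space updated by appraisals of interaction events; the appraisal values and the decay rule are assumptions for illustration, not taken from the thesis:

```python
# Minimal sketch of a dimensional (valence/arousal) affect model: the agent's
# state is a point in 2-D affect space nudged by appraisals of interaction events.
from dataclasses import dataclass

@dataclass
class AffectState:
    valence: float = 0.0  # displeasure (-1) .. pleasure (+1)
    arousal: float = 0.0  # calm (-1) .. excited (+1)

    def appraise(self, d_valence, d_arousal, decay=0.9):
        # Blend the event appraisal into the current state with simple decay,
        # clamping both dimensions to [-1, 1].
        self.valence = max(-1.0, min(1.0, decay * self.valence + d_valence))
        self.arousal = max(-1.0, min(1.0, decay * self.arousal + d_arousal))

agent = AffectState()
agent.appraise(d_valence=0.4, d_arousal=0.2)   # learner answers correctly
agent.appraise(d_valence=-0.6, d_arousal=0.3)  # repeated errors: negative flow
print(agent)
```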
APA, Harvard, Vancouver, ISO, and other styles
43

Eriksson, Erik J. "That voice sounds familiar: factors in speaker recognition." Doctoral thesis, Umeå: Department of Philosophy and Linguistics, Umeå University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Beer, Jenay Michelle. "Recognizing facial expression of virtual agents, synthetic faces, and human faces: the effects of age and character type on emotion recognition." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33984.

Full text
Abstract:
An agent's facial expression may communicate emotive state to users both young and old. The ability to recognize emotions has been shown to differ with age, with older adults more commonly misidentifying the facial emotions of anger, fear, and sadness. This study examined whether emotion recognition of facial expressions differed between different types of on-screen agents and between age groups. Three on-screen characters were compared: a human, a synthetic human, and a virtual agent. In this study, 42 younger (age 18-28) and 42 older (age 65-85) adults completed an emotion recognition task with static pictures of the characters demonstrating four basic emotions (anger, fear, happiness, and sadness) and neutral. The human face resulted in the highest proportion match, followed by the synthetic human, with the virtual agent showing the lowest proportion match. Both the human and synthetic human faces showed age-related differences for the emotions anger, fear, sadness, and neutral, with younger adults achieving a higher proportion match. The virtual agent showed age-related differences for the emotions anger, fear, happiness, and neutral, again with younger adults achieving a higher proportion match. The data analysis and interpretation of the present study differed from previous work in two ways. First, the misattributions participants made when identifying emotion were investigated. Second, a similarity index of the feature placement between any two virtual agent emotions was calculated; this suggested that emotions were commonly misattributed to other emotions similar in appearance. Overall, these results suggest that age-related differences transcend human faces to other types of on-screen characters, and that differences between older and younger adults in emotion recognition may be further explained by perceptual discrimination between two emotions of similar feature appearance.
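The thesis's similarity index is not reproduced here; the sketch below shows one way a feature-placement similarity between two expressions could be computed from facial landmark coordinates (the landmark scheme, coordinates, and normalization are illustrative assumptions):

```python
# Minimal sketch of a feature-placement similarity index between two
# expressions, computed from facial landmark coordinates (hypothetical).
import numpy as np

def placement_similarity(landmarks_a, landmarks_b):
    """Similarity in [0, 1]: 1 means identical feature placement."""
    a = np.asarray(landmarks_a, dtype=float).ravel()
    b = np.asarray(landmarks_b, dtype=float).ravel()
    dist = np.linalg.norm(a - b)
    scale = np.linalg.norm(a) + np.linalg.norm(b)
    return 1.0 - dist / scale if scale else 1.0

# (x, y) positions of brow/eye/mouth points for two expressions, hypothetical:
anger = [(30, 40), (70, 40), (50, 80), (35, 95), (65, 95)]
fear  = [(28, 35), (72, 35), (50, 78), (33, 99), (67, 99)]
print(f"anger-fear similarity: {placement_similarity(anger, fear):.3f}")
```

On this kind of index, two emotions with near-identical landmark placements (e.g., anger and fear above) score close to 1, which is consistent with the finding that visually similar expressions are the ones most often confused.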
APA, Harvard, Vancouver, ISO, and other styles
45

Solli, Martin. "Topics in Content Based Image Retrieval: Fonts and Color Emotions." Licentiate thesis, Norrköping: Department of Science and Technology, Linköping University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16941.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ali, Afiya. "Recognition of facial affect in individuals scoring high and low in psychopathic personality characteristics." The University of Waikato, 2007. http://adt.waikato.ac.nz/public/adt-uow20070129.190938/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Fletcher, Jennifer M. "Effects of Teaching Emotions to Students with High Functioning Autism Spectrum Disorders Through Picture Books." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2245.

Full text
Abstract:
Individuals with autism spectrum disorders (ASD) struggle with identifying others' emotions, which impacts their ability to successfully interact in social situations. Because of the increasing number of children identified with ASD, effective techniques are needed to help children identify emotions in others. The use of technology is being researched as a way to help children with emotion identification. However, technology is not always available for teachers to use in classrooms, whereas picture books are much easier to access and have been successfully used to improve students' social skills. Picture books are naturally used in classroom, home, and therapy settings. This study investigated the effectiveness of using picture books as a teaching tool with students with ASD, helping them learn how to identify emotions. A multiple baseline across three male subjects between the ages of six and ten was employed. Each picture book focused on teaching one specific emotion: scared, sad, and furious. Following intervention, when shown novel photographs, two of the participants identified three target emotions. One participant successfully identified one target emotion and showed marked improvement in identifying the other two target emotions. Using picture books is an easy, inexpensive way to teach emotions and can be naturally included in a classroom. Parents and other professionals can use picture books in a home or therapy setting to help children with ASD learn emotions and improve their social understanding.
APA, Harvard, Vancouver, ISO, and other styles
48

Visser, Naomi Aletta. "The ability of four-year-old children to recognize basic emotions represented by graphic symbols." Diss., University of Pretoria, 2007. http://hdl.handle.net/2263/29503.

Full text
Abstract:
Emotions are an essential part of development. There is evidence that young children understand and express emotions through facial expressions. Correct identification and recognition of facial expressions is important to facilitate communication and social interaction. Emotions are represented in a wide variety of symbol sets and systems in Augmentative and Alternative Communication (AAC) to enable a person with little or no functional speech to express emotion. These symbols consist of a facial expression with facial features that distinguish between emotions. In spite of the importance of expressing and understanding emotions to facilitate communication, there is limited research on young children's ability to recognize emotions represented by graphic symbols. The purpose of this study was to investigate the ability of typically developing four-year-old children to recognize basic emotions as represented by graphic symbols. In order to determine their ability to recognize emotions in graphic symbols, their ability to understand emotions first had to be established. Participants were then required to recognize four basic emotions (happy, sad, afraid, angry) represented by various graphic symbols taken from PCS (Johnson, 1981), PICSYMS (Carlson, 1985) and Makaton (Grove & Walker, 1990). The purpose was to determine which graphic symbol the children recognized as a representation of each emotion. Results showed that the emotion happy was easier to recognize, which might be because it was the only emotion in the pleasure dimension; sad, afraid, and angry were more difficult to recognize, which might be because they fall in the displeasure dimension. It is also evident from the findings that the facial features in a graphic symbol play an important part in conveying a specific emotion. The results obtained are discussed in relation to previous findings, and recommendations for future use are made.
Dissertation (MA (Augmentative and Alternative Communication))--University of Pretoria, 2008.
Centre for Augmentative and Alternative Communication (CAAC)
APA, Harvard, Vancouver, ISO, and other styles
49

Tian, Leimin. "Recognizing emotions in spoken dialogue with acoustic and lexical cues." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31284.

Full text
Abstract:
Automatic emotion recognition has long been a focus of Affective Computing. It has become increasingly apparent that awareness of human emotions in Human-Computer Interaction (HCI) is crucial for advancing related technologies, such as dialogue systems. However, performance of current automatic emotion recognition is disappointing compared to human performance. Current research on emotion recognition in spoken dialogue focuses on identifying better feature representations and recognition models from a data-driven point of view. The goal of this thesis is to explore how incorporating prior knowledge of human emotion recognition in the automatic model can improve state-of-the-art performance of automatic emotion recognition in spoken dialogue. Specifically, we study this by proposing knowledge-inspired features representing occurrences of disfluency and non-verbal vocalisation in speech, and by building a multimodal recognition model that combines acoustic and lexical features in a knowledge-inspired hierarchical structure. In our study, emotions are represented with the Arousal, Expectancy, Power, and Valence emotion dimensions. We build unimodal and multimodal emotion recognition models to study the proposed features and modelling approach, and perform emotion recognition on both spontaneous and acted dialogue. Psycholinguistic studies have suggested that DISfluency and Non-verbal Vocalisation (DIS-NV) in dialogue is related to emotions. However, these affective cues in spoken dialogue are overlooked by current automatic emotion recognition research. Thus, we propose features for recognizing emotions in spoken dialogue which describe five types of DIS-NV in utterances, namely filled pause, filler, stutter, laughter, and audible breath. Our experiments show that this small set of features is predictive of emotions. Our DIS-NV features achieve better performance than benchmark acoustic and lexical features for recognizing all emotion dimensions in spontaneous dialogue. Consistent with Psycholinguistic studies, the DIS-NV features are especially predictive of the Expectancy dimension of emotion, which relates to speaker uncertainty. Our study illustrates the relationship between DIS-NVs and emotions in dialogue, which contributes to Psycholinguistic understanding of them as well. Note that our DIS-NV features are based on manual annotations, yet our long-term goal is to apply our emotion recognition model to HCI systems. Thus, we conduct preliminary experiments on automatic detection of DIS-NVs, and on using automatically detected DIS-NV features for emotion recognition. Our results show that DIS-NVs can be automatically detected from speech with stable accuracy, and auto-detected DIS-NV features remain predictive of emotions in spontaneous dialogue. This suggests that our emotion recognition model can be applied to a fully automatic system in the future, and holds the potential to improve the quality of emotional interaction in current HCI systems. To study the robustness of the DIS-NV features, we conduct cross-corpora experiments on both spontaneous and acted dialogue. We identify how dialogue type influences the performance of DIS-NV features and emotion recognition models. DIS-NVs contain additional information beyond acoustic characteristics or lexical contents. Thus, we study the gain of modality fusion for emotion recognition with the DIS-NV features. 
Previous work combines different feature sets by fusing modalities at the same level using two types of fusion strategies: Feature-Level (FL) fusion, which concatenates feature sets before recognition; and Decision-Level (DL) fusion, which makes the final decision based on outputs of all unimodal models. However, features from different modalities may describe data at different time scales or levels of abstraction. Moreover, Cognitive Science research indicates that when perceiving emotions, humans make use of information from different modalities at different cognitive levels and time steps. Therefore, we propose a HierarchicaL (HL) fusion strategy for multimodal emotion recognition, which incorporates features that describe data at a longer time interval or which are more abstract at higher levels of its knowledge-inspired hierarchy. Compared to FL and DL fusion, HL fusion incorporates both inter- and intra-modality differences. Our experiments show that HL fusion consistently outperforms FL and DL fusion on multimodal emotion recognition in both spontaneous and acted dialogue. The HL model combining our DIS-NV features with benchmark acoustic and lexical features improves current performance of multimodal emotion recognition in spoken dialogue. To study how other emotion-related tasks of spoken dialogue can benefit from the proposed approaches, we apply the DIS-NV features and the HL fusion strategy to recognize movie-induced emotions. Our experiments show that although designed for recognizing emotions in spoken dialogue, DIS-NV features and HL fusion remain effective for recognizing movie-induced emotions. This suggests that other emotion-related tasks can also benefit from the proposed features and model structure.
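To make the three fusion strategies concrete, the sketch below contrasts feature-level, decision-level, and a simple two-level hierarchical fusion on synthetic data; the models, feature dimensions, and the particular hierarchy are illustrative assumptions, not the thesis's implementation:

```python
# Minimal sketch contrasting Feature-Level (FL), Decision-Level (DL), and
# Hierarchical (HL) fusion, assuming scikit-learn; all data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
acoustic = rng.normal(size=(400, 20))   # short-time acoustic features
lexical = rng.normal(size=(400, 30))    # utterance-level lexical features
dis_nv = rng.integers(0, 3, size=(400, 5)).astype(float)  # DIS-NV counts
y = rng.integers(0, 2, size=400)        # e.g., high/low Arousal

# FL fusion: concatenate all feature sets, then train one classifier.
fl_input = np.hstack([acoustic, lexical, dis_nv])
fl = LogisticRegression(max_iter=1000).fit(fl_input, y)

# DL fusion: one classifier per modality, then average their outputs.
probs = [
    LogisticRegression(max_iter=1000).fit(m, y).predict_proba(m)[:, 1]
    for m in (acoustic, lexical, dis_nv)
]
dl_pred = (np.mean(probs, axis=0) > 0.5).astype(int)

# HL fusion: model the short-time acoustic features first, then feed that
# evidence, together with the longer-interval / more abstract lexical and
# DIS-NV features, into a higher level of the hierarchy.
low = LogisticRegression(max_iter=1000).fit(acoustic, y)
hl_input = np.hstack([low.predict_proba(acoustic)[:, 1:], lexical, dis_nv])
hl = LogisticRegression(max_iter=1000).fit(hl_input, y)

# Scores below are on the training data, for brevity only.
print("FL acc:", fl.score(fl_input, y))
print("DL acc:", (dl_pred == y).mean())
print("HL acc:", hl.score(hl_input, y))
```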
APA, Harvard, Vancouver, ISO, and other styles
50

Choy, Grace. "Emotional competence of Chinese and Australian children: The recognition of facial expressions of emotion and the understanding of display rules." Thesis, Queensland University of Technology, 2000. https://eprints.qut.edu.au/36632/1/36632_Digitised%20Thesis.pdf.

Full text
Abstract:
Children's sensitivity to the emotions expressed by their peers, and their knowledge of the display rules that govern the manifestation of facial expressions, are crucial for their social interactions and development. In compliance with display rules, the facial expressions displayed (i.e., apparent emotion) may be incongruent with the emotion experienced (i.e., real emotion). This dissertation investigated Chinese and Australian children's abilities to recognise facial expressions of emotion and to understand display rules in the two cultures. Children's acquisition of these two skills demonstrates emotional competence (Saarni, 1999). Participants were 144 Chinese children living in Hong Kong (49 percent were boys and 51 percent were girls; 82 four-year-olds and 62 six-year-olds), and 176 Caucasian children living in Australia (56 percent were boys and 44 percent were girls; 80 four-year-olds and 96 six-year-olds). The children were recruited from 17 kindergartens, preschools, child-care centres, and primary schools in Hong Kong and Brisbane, Australia. All children were tested individually. In Study One, all children were presented with a set of facial stimuli displayed by Chinese children (C-FACE) and an equivalent set displayed by Caucasian children (A-FACE). Each set of facial stimuli consisted of seven photographs depicting facial expressions of happiness, sadness, anger, fear, surprise, shame and neutrality. The two sets were presented in random order and children were asked to select the photograph depicting each emotion as it was requested by the experimenter. This permits the examination of both in-group perception (i.e., the observer and the displayer of the same culture) and out-group perception (i.e., the observer and the displayer of different cultures). The Chinese set of children's facial expressions of emotion (C-FACE) was constructed specifically for this research. The Caucasian set of children's facial expressions of emotion (A-FACE) was developed by Field and Walden (1982). In Study Two, hypothetical stories that elicit the application of display rules were presented to both Chinese and Australian children. The stories were audio-taped and varied in terms of cultural contexts (i.e., Chinese versus Australian contexts), appropriateness for emotional regulation (i.e., non-regulation versus regulation), emotional valence (i.e., negative versus positive), and the explicitness of motivation for emotional regulation (i.e., implicit versus explicit). Children were asked to select from an array of five different facial expressions both the real emotion experienced and the apparent emotion shown by the story character. These photographs were from the C-FACE and A-FACE sets used in Study One. Chinese and Australian 6-year-olds were significantly more accurate than 4-year-olds in the recognition of facial expressions of emotion displayed by both in-group and out-group peers. Six-year-old children also had a significantly better understanding of display rules than the 4-year-olds. It seems likely that cognitive factors such as improved perceptual skills and the development of a theory of mind, and socialisation factors such as exposure to and the acquisition of emotional scripts, may account for the age differences. Both cultural similarities and differences were found in children's understanding of emotional expressions and display rules.
In Study One, Australian 4-year-olds were more accurate than Chinese 4-year-olds in out-group perception, possibly because of the multicultural experience of Australian children. However, increasing the amount of exposure to Chinese peers did not increase the Australian children's accuracy of out-group perception. In Study Two, Chinese children gave more dissembled responses (i.e., selected different real and apparent emotions) than Australian children, who most often indicated the expression of genuine emotion (i.e., selected the same real and apparent emotion). Chinese and Australian children also had different interpretations of the emotion experienced by the story character in the Chinese context, and they used different regulation strategies in a positive context. The provision of an explicit statement about emotional regulation in the story enhanced Australian children's performance without making any difference for Chinese children. These results are consistent with the strength of different cultural demands for emotional inhibition across the two cultures. There was also evidence of cultural similarities. Both Chinese and Australian children demonstrated that happiness, sadness and anger were more frequently recognised than neutrality and shame when they were displayed by in-group peers. Fear and surprise were least frequently identified and reciprocally confused by the two cultural groups. In addition, 6-year-old girls from both cultures were more accurate than their boy counterparts in out-group perception. Moreover, both Chinese and Australian children had a better understanding of non-regulation and negative contexts than regulation and positive contexts. The present research also found that both Chinese and Australian children were more accurate in recognising facial expressions of emotion displayed by in-group members than out-group members. Both Chinese and Australian children also applied their own cultural display rules in the interpretation of emotional behaviour in another cultural context. These two factors may account for some of the misunderstandings that arise in inter-cultural communications. Overall, the results suggest that the abilities to recognise facial expressions of emotion and understand display rules can be influenced by the age and culture of the subjects, and the culture of the stimuli. In assessing children's ability to understand facial expressions of emotion and the application of display rules, it is therefore important to use stimuli from the same ethnic group.
APA, Harvard, Vancouver, ISO, and other styles