Scientific literature on the topic "Recognition of emotions"

Create a correct reference in APA, MLA, Chicago, Harvard, and various other styles.

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Recognition of emotions".

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these details are included in the metadata.

Journal articles on the topic "Recognition of emotions"

1

Liao, Songyang, Katsuaki Sakata, and Galina V. Paramei. "Color Affects Recognition of Emoticon Expressions." i-Perception 13, no. 1 (January 2022): 204166952210807. http://dx.doi.org/10.1177/20416695221080778.

Abstract:
In computer-mediated communication, emoticons are conventionally rendered in yellow. Previous studies demonstrated that colors evoke certain affective meanings, and face color modulates perceived emotion. We investigated whether color variation affects the recognition of emoticon expressions. Japanese participants were presented with emoticons depicting four basic emotions (Happy, Sad, Angry, Surprised) and a Neutral expression, each rendered in eight colors. Four conditions (E1–E4) were employed in the lab-based experiment; E5, with an additional participant sample, was an online replication of the critical E4. In E1, colored emoticons were categorized in a 5AFC task. In E2–E5, stimulus affective meaning was assessed using visual scales with anchors corresponding to each emotion. The conditions varied in stimulus arrays: E2: light gray emoticons; E3: colored circles; E4 and E5: colored emoticons. The affective meaning of Angry and Sad emoticons was found to be stronger when conferred in warm and cool colors, respectively, the pattern highly consistent between E4 and E5. The affective meaning of colored emoticons is regressed to that of achromatic expression counterparts and decontextualized color. The findings provide evidence that affective congruency of the emoticon expression and the color it is rendered in facilitates recognition of the depicted emotion, augmenting the conveyed emotional message.
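The regression step mentioned above can be sketched in a few lines: the affective rating of a colored emoticon (E4) is modeled from the rating of its achromatic counterpart (E2) and of the decontextualized color (E3). A minimal sketch, assuming hypothetical rating values rather than the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical mean ratings on one affective scale (e.g., "Angry"),
# one row per emotion x color combination; values are placeholders.
achromatic = np.array([[0.62], [0.10], [0.75], [0.20]])  # E2: light gray emoticons
color_only = np.array([[0.55], [0.30], [0.80], [0.15]])  # E3: colored circles
colored = np.array([0.70, 0.18, 0.85, 0.12])             # E4: colored emoticons

X = np.hstack([achromatic, color_only])
model = LinearRegression().fit(X, colored)
print("weights (expression, color):", model.coef_, "R^2:", model.score(X, colored))
```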
2

Mallikarjuna, Basetty, M. Sethu Ram, and Supriya Addanke. "An Improved Face-Emotion Recognition to Automatically Generate Human Expression With Emoticons." International Journal of Reliable and Quality E-Healthcare 11, no. 1 (January 1, 2022): 1–18. http://dx.doi.org/10.4018/ijrqeh.314945.

Abstract:
A human face naturally expresses emotions such as happiness or sadness; sometimes facial expression recognition is complex because an expression combines two emotions. The existing literature provides face emotion classification and image recognition, and work on deep learning using convolutional neural networks (CNNs) makes face emotion recognition most useful for healthcare, with the most complex of the existing algorithms. This paper improves human face emotion recognition and lets users generate emoticons of interest on their smartphones. Face emotion recognition plays a major role, using convolutional neural networks in the areas of deep learning and artificial intelligence, for healthcare services. Automatic facial emotion recognition consists of two methods: face detection with an AdaBoost classifier algorithm, and emotion classification, which performs feature extraction using deep learning methods such as CNNs to identify the seven emotions and generate emoticons.
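The two-step pipeline the abstract outlines, AdaBoost-based face detection followed by CNN emotion classification, might look roughly like the sketch below. It uses OpenCV's Haar cascade (an AdaBoost detector) and a small Keras CNN; the layer sizes, input resolution, and label order are illustrative assumptions, not the paper's exact architecture.

```python
import cv2
import numpy as np
from tensorflow.keras import layers, models

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# AdaBoost-based face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def build_cnn(input_shape=(48, 48, 1), n_classes=7):
    """Small illustrative CNN for seven-emotion classification."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

def recognize(gray_image, cnn):
    """Detect faces, then classify each crop; yields emotion names."""
    faces = detector.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = cv2.resize(gray_image[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = cnn.predict(face[None, ..., None], verbose=0)[0]
        yield EMOTIONS[int(np.argmax(probs))]  # map to an emoticon downstream
```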
3

Kamińska, Dorota, Kadir Aktas, Davit Rizhinashvili, Danila Kuklyanov, Abdallah Hussein Sham, Sergio Escalera, Kamal Nasrollahi, Thomas B. Moeslund, and Gholamreza Anbarjafari. "Two-Stage Recognition and beyond for Compound Facial Emotion Recognition." Electronics 10, no. 22 (November 19, 2021): 2847. http://dx.doi.org/10.3390/electronics10222847.

Abstract:
Facial emotion recognition is an inherently complex problem due to individual diversity in facial features and racial and cultural differences. Moreover, facial expressions typically reflect a mixture of people's emotional states, which can be expressed using compound emotions. Compound facial emotion recognition makes the problem even more difficult because the discrimination between dominant and complementary emotions is usually weak. To address compound emotion recognition, we have created a database of 31,250 facial images, with different emotions, of 115 subjects whose gender distribution is almost uniform. In addition, we have organized a competition based on the proposed dataset, held at the FG 2020 workshop. This paper analyzes the winner's approach, a two-stage recognition method (1st stage, coarse recognition; 2nd stage, fine recognition), which enhances the classification of symmetrical emotion labels.
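The coarse-to-fine structure can be expressed schematically: a first-stage model predicts the dominant emotion, and a second-stage model, trained separately for each dominant class, refines the complementary emotion. The sketch below assumes generic random-forest sub-models and placeholder features; it illustrates the two-stage structure only, not the winner's actual models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class TwoStageClassifier:
    """Stage 1: coarse (dominant) emotion; stage 2: fine (complementary) emotion."""

    def fit(self, X, y_dominant, y_complementary):
        self.coarse = RandomForestClassifier().fit(X, y_dominant)
        self.fine = {}
        for dom in np.unique(y_dominant):
            mask = y_dominant == dom
            self.fine[dom] = RandomForestClassifier().fit(X[mask], y_complementary[mask])
        return self

    def predict(self, X):
        dom = self.coarse.predict(X)
        comp = np.array([self.fine[d].predict(x[None, :])[0] for d, x in zip(dom, X)])
        return dom, comp  # a compound label is the pair (dominant, complementary)
```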
4

Werner, S., and G. N. Petrenko. "Speech Emotion Recognition: Humans vs Machines." Discourse 5, no. 5 (December 18, 2019): 136–52. http://dx.doi.org/10.32603/2412-8562-2019-5-5-136-152.

Abstract:
Introduction. The study focuses on emotional speech perception and speech emotion recognition using prosodic cues alone. Theoretical problems of defining prosody, intonation and emotion, along with the challenges of emotion classification, are discussed. An overview of acoustic and perceptual correlates of emotions found in speech is provided. Technical approaches to speech emotion recognition are also considered in the light of the latest emotional speech automatic classification experiments. Methodology and sources. The typical "big six" classification commonly used in technical applications is chosen and modified to include such emotions as disgust and shame. A database of emotional speech in Russian is created under sound laboratory conditions. A perception experiment is run using Praat software's experimental environment. Results and discussion. Cross-cultural emotion recognition possibilities are revealed, as the Finnish and international participants recognised about half of the samples correctly. Nonetheless, native speakers of Russian appear to distinguish a larger proportion of emotions correctly. The effects of foreign language knowledge, musical training and gender on performance in the experiment were insufficiently prominent. The most commonly confused pairs of emotions, such as shame and sadness, surprise and fear, anger and disgust, as well as confusions with the neutral emotion, were also given due attention. Conclusion. The work can contribute to psychological studies, clarifying emotion classification and the gender aspect of emotionality; linguistic research, providing new evidence for prosodic and comparative language studies; and language technology, deepening the understanding of possible challenges for SER systems.
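Since the study relies on prosodic cues alone, a minimal sketch of extracting such cues (pitch and energy statistics) from a speech recording is given below; the file name and parameter choices are assumptions for illustration.

```python
import numpy as np
import librosa

y, sr = librosa.load("sample.wav", sr=None)  # hypothetical emotional speech sample
f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                 fmax=librosa.note_to_hz("C7"), sr=sr)  # fundamental frequency track
rms = librosa.feature.rms(y=y)[0]                       # frame-wise energy

prosody = {
    "f0_mean": float(np.nanmean(f0)),
    "f0_range": float(np.nanmax(f0) - np.nanmin(f0)),
    "energy_mean": float(rms.mean()),
    "energy_std": float(rms.std()),
    "duration_s": float(librosa.get_duration(y=y, sr=sr)),
}
print(prosody)
```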
5

Hatem, Ahmed Samit, and Abbas M. Al-Bakry. "The Information Channels of Emotion Recognition: A Review." Webology 19, no. 1 (January 20, 2022): 927–41. http://dx.doi.org/10.14704/web/v19i1/web19064.

Abstract:
Humans are emotional beings. When we express emotions, we frequently use several modalities, whether overtly (e.g., speech, facial expressions) or implicitly (e.g., body language, text). Emotion recognition has lately piqued the interest of many researchers, and various techniques have been studied. A review of emotion recognition is given in this article. The survey considers the single and multiple sources of data, or information channels, that may be utilized to identify emotions, and includes a literature analysis of current studies published for each information channel, as well as the techniques employed and the findings obtained. Finally, some of the present emotion recognition problems and recommendations for future work are mentioned.
6

Morgan, Shae D. "Comparing Emotion Recognition and Word Recognition in Background Noise." Journal of Speech, Language, and Hearing Research 64, no. 5 (May 11, 2021): 1758–72. http://dx.doi.org/10.1044/2021_jslhr-20-00153.

Abstract:
Purpose: Word recognition in quiet and in background noise has been thoroughly investigated in previous research to establish segmental speech recognition performance as a function of stimulus characteristics (e.g., audibility). Similar methods to investigate recognition performance for suprasegmental information (e.g., acoustic cues used to make judgments of talker age, sex, or emotional state) have not been performed. In this work, we directly compared emotion and word recognition performance in different levels of background noise to identify psychoacoustic properties of emotion recognition (globally and for specific emotion categories) relative to word recognition. Method: Twenty young adult listeners with normal hearing listened to sentences and either reported a target word in each sentence or selected the emotion of the talker from a list of options (angry, calm, happy, and sad) at four signal-to-noise ratios in a background of white noise. Psychometric functions were fit to the recognition data and used to estimate thresholds (midway points on the function) and slopes for word and emotion recognition. Results: Thresholds for emotion recognition were approximately 10 dB better than word recognition thresholds, and slopes for emotion recognition were half of those measured for word recognition. Low-arousal emotions had poorer thresholds and shallower slopes than high-arousal emotions, suggesting greater confusion when distinguishing low-arousal emotional speech content. Conclusions: Communication of a talker's emotional state continues to be perceptible to listeners in competitive listening environments, even after words are rendered inaudible. The arousal of emotional speech affects listeners' ability to discriminate between emotion categories.
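The threshold and slope estimation described here can be sketched by fitting a logistic psychometric function to recognition scores as a function of SNR; the SNR levels and proportions below are invented for illustration, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, threshold, slope):
    # Logistic function rising from 0 to 1; threshold = SNR at the midpoint.
    return 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))

snr = np.array([-12.0, -8.0, -4.0, 0.0])        # signal-to-noise ratios (dB)
p_correct = np.array([0.15, 0.40, 0.80, 0.95])  # hypothetical proportions correct

(threshold, slope), _ = curve_fit(psychometric, snr, p_correct, p0=(-6.0, 1.0))
print(f"threshold = {threshold:.1f} dB, slope = {slope:.2f} per dB")
```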
7

Israelashvili, Jacob, Lisanne S. Pauw, Disa A. Sauter, and Agneta H. Fischer. "Emotion Recognition from Realistic Dynamic Emotional Expressions Cohere with Established Emotion Recognition Tests: A Proof-of-Concept Validation of the Emotional Accuracy Test." Journal of Intelligence 9, no. 2 (May 7, 2021): 25. http://dx.doi.org/10.3390/jintelligence9020025.

Abstract:
Individual differences in understanding other people's emotions have typically been studied with recognition tests using prototypical emotional expressions. These tests have been criticized for the use of posed, prototypical displays, raising the question of whether such tests tell us anything about the ability to understand spontaneous, non-prototypical emotional expressions. Here, we employ the Emotional Accuracy Test (EAT), which uses natural emotional expressions and defines the recognition as the match between the emotion ratings of a target and a perceiver. In two preregistered studies (total N = 231), we compared the performance on the EAT with two well-established tests of emotion recognition ability: the Geneva Emotion Recognition Test (GERT) and the Reading the Mind in the Eyes Test (RMET). We found significant overlap (r > 0.20) between individuals' performance in recognizing spontaneous emotions in naturalistic settings (EAT) and posed (or enacted) non-verbal measures of emotion recognition (GERT, RMET), even when controlling for individual differences in verbal IQ. On average, however, participants reported enjoying the EAT more than the other tasks. Thus, the current research provides a proof-of-concept validation of the EAT as a useful measure for testing the understanding of others' emotions, a crucial feature of emotional intelligence. Further, our findings indicate that emotion recognition tests using prototypical expressions are valid proxies for measuring the understanding of others' emotions in more realistic everyday contexts.
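The reported overlap while controlling for verbal IQ corresponds to a partial correlation, which can be computed by residualizing both test scores on IQ; the sketch below uses simulated stand-in data rather than the study's variables.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
iq = rng.normal(100, 15, 200)             # verbal IQ (simulated)
eat = 0.3 * iq + rng.normal(0, 10, 200)   # EAT scores (simulated)
gert = 0.3 * iq + rng.normal(0, 10, 200)  # GERT scores (simulated)

def residualize(y, x):
    slope, intercept, *_ = stats.linregress(x, y)
    return y - (intercept + slope * x)

r, p = stats.pearsonr(residualize(eat, iq), residualize(gert, iq))
print(f"partial r = {r:.2f} (p = {p:.3f})")
```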
8

Ekberg, Mattias, Josefine Andin, Stefan Stenfelt, and Örjan Dahlström. "Effects of mild-to-moderate sensorineural hearing loss and signal amplification on vocal emotion recognition in middle-aged–older individuals." PLOS ONE 17, no. 1 (January 7, 2022): e0261354. http://dx.doi.org/10.1371/journal.pone.0261354.

Abstract:
Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with sensorineural mild-to-moderate hearing loss are better explained by deficits in vocal emotion recognition specifically, or deficits in prosody recognition generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies which have concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which others, and at what rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together these analyses will provide clues to the effects of amplification on the perception of different emotions. For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
9

Lim, Myung-Jin, Moung-Ho Yi, and Ju-Hyun Shin. "Intrinsic Emotion Recognition Considering the Emotional Association in Dialogues." Electronics 12, no. 2 (January 8, 2023): 326. http://dx.doi.org/10.3390/electronics12020326.

Abstract:
Computer communication via text messaging or Social Networking Services (SNS) has become increasingly popular. Many studies are now being conducted to analyze user information or opinions and to recognize emotions using large amounts of data. Current methods for emotion recognition in dialogues require an analysis of emotion keywords or vocabulary, and dialogue data are mostly classified with a single emotion. Recently, datasets classified with multiple emotions have emerged, but most of them are English datasets. For accurate emotion recognition, a method for recognizing various emotions in one sentence is required, and multi-emotion recognition research on Korean dialogue datasets is also needed. Since dialogues are exchanges between speakers, one's feelings may be changed by the words of others, and feelings, once generated, may last for a long period of time. Emotions are expressed not only through vocabulary, but also indirectly through dialogues. In order to improve the performance of emotion recognition, it is necessary to analyze Emotional Association in Dialogues (EAD) to effectively reflect the various factors that induce emotions. Therefore, in this paper, we propose a more accurate emotion recognition method to overcome the limitations of single emotion recognition. We implement Intrinsic Emotion Recognition (IER) to understand the meaning of dialogue and recognize complex emotions. In addition, conversations are classified according to their characteristics, and the correlation between IER results is analyzed to derive Emotional Association in Dialogues (EAD) and apply it. To verify the usefulness of the proposed technique, IER applied with EAD is tested and evaluated. In this evaluation, the Micro-F1 of the proposed method exhibited the best performance, 74.8%. Using IER to assess the EAD proposed in this paper can improve the accuracy and performance of emotion recognition in dialogues.
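The Micro-F1 score used for evaluation can be computed from multi-hot emotion labels as sketched below; the label set and predictions are invented examples, not the paper's data.

```python
import numpy as np
from sklearn.metrics import f1_score

EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise"]  # assumed label set

# One row per utterance, one column per emotion (multi-label, multi-hot).
y_true = np.array([[1, 0, 0, 0, 1],
                   [0, 1, 0, 1, 0],
                   [1, 0, 1, 0, 0]])
y_pred = np.array([[1, 0, 0, 0, 0],
                   [0, 1, 0, 1, 0],
                   [1, 0, 0, 0, 0]])

print("Micro-F1:", f1_score(y_true, y_pred, average="micro"))
```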
10

Jaratrotkamjorn, Apichart. "Bimodal Emotion Recognition Using Deep Belief Network." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 15, no. 1 (January 14, 2021): 73–81. http://dx.doi.org/10.37936/ecti-cit.2021151.226446.

Abstract:
Emotions are very important in human daily life. Enabling a machine to recognize the human emotional state, and to respond to it intelligently, is very important in human-computer interaction. The majority of existing work concentrates on the classification of the six basic emotions only. This research work proposes an emotion recognition system based on a multimodal approach, which integrates information from both facial and speech expressions. The database has eight basic emotions (neutral, calm, happy, sad, angry, fearful, disgust, and surprised). Emotions are classified using the deep belief network method. The experimental results show that the bimodal emotion recognition system achieves a clear improvement, with an overall accuracy rate of 97.92%.
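A deep-belief-style pipeline can be approximated with stacked restricted Boltzmann machines and a logistic readout, as in the scikit-learn sketch below; the fused features, layer sizes, and labels are placeholders, not the paper's exact network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fearful", "disgust", "surprised"]

X = np.random.rand(100, 64)       # placeholder fused facial + speech features in [0, 1]
y = np.random.randint(0, 8, 100)  # placeholder labels, one of the eight emotions

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
]).fit(X, y)
print("train accuracy:", dbn.score(X, y))
```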

Theses on the topic "Recognition of emotions"

1

Stanley, Jennifer Tehan. "Emotion recognition in context." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24617.

Abstract:
Thesis (Ph.D.)--Psychology, Georgia Institute of Technology, 2008.
Committee Chair: Blanchard-Fields, Fredda; Committee Member: Corballis, Paul; Committee Member: Hertzog, Christopher; Committee Member: Isaacowitz, Derek; Committee Member: Kanfer, Ruth
2

Zhang, Jiaming. "Contextual recognition of robot emotions." Thesis, University of Sheffield, 2013. http://etheses.whiterose.ac.uk/3809/.

Abstract:
In the field of human-robot interaction, socially interactive robots are often equipped with the ability to detect the affective states of users, the ability to express emotions through the use of synthetic facial expressions, speech and textual content, and the ability for imitating and social learning. Past work on creating robots that can make convincing emotional expressions has concentrated on the quality of those expressions, and on assessing people’s ability to recognize them. Previous recognition studies presented the facial expressions of the robots in neutral contexts, without any strong emotional valence (e.g., emotionally valenced music or video). It is therefore worth empirically exploring whether observers’ judgments of the facial cues of a robot would be affected by a surrounding emotional context. This thesis takes its inspiration from the contextual effects found on the interpretation of the expressions on human faces and computer avatars, and looks at the extent to which they also apply to the interpretation of the facial expressions of a mechanical robot head. The kinds of contexts that affect the recognition of robot emotional expressions, the circumstances under which such contextual effects occur, and the relationship between emotions and the surrounding situation, are observed and analyzed in a series of 11 experiments. In these experiments, the FACS (Facial Action Coding System) (Ekman and Friesen, 2002) was applied to set up the parameters of the servos to make the robot head produce sequences of facial expressions. Four different emotional surrounding or preceding contexts were used (i.e., recorded BBC News pieces, selected affective pictures, classical music pieces and film clips). This thesis provides evidence that observers’ judgments about the facial expressions of a robot can be affected by a surrounding emotional context. From a psychological perspective, the contextual effects found on the robotic facial expressions based on the FACS, indirectly support the claims that human emotions are both biologically based and socially constructed. From a robotics perspective, it is argued that the results obtained from the analyses will be useful for guiding researchers to enhance the expressive skills of emotional robots in a surrounding emotional context. This thesis also analyzes the possible factors contributing to the contextual effects found in the original 11 experiments. Some future work, including four new experiments (a preliminary experiment designed to identify appropriate contextual materials and three further experiments in which factors likely to affect a context effect are controlled one by one) is also proposed in this thesis.
3

Xiao, Zhongzhe. "Recognition of emotions in audio signals." Ecully: Ecole centrale de Lyon, 2008. http://www.theses.fr/2008ECDL0002.

Abstract:
This Ph.D. thesis work is dedicated to automatic emotion/mood recognition in audio signals. Indeed, audio emotion is high-level semantic information, and its automatic analysis may have many applications, such as smart human-computer interactions or multimedia indexing. The purpose of this thesis is thus to investigate machine-based audio emotion analysis solutions for both speech and music signals. Our work makes use of a discrete emotional model combined with the dimensional one, and relies upon existing studies on acoustic correlates of emotional speech and music mood. The key contributions are the following. First, we have proposed, in complement to popular frequency-based and energy-based features, some new audio features, namely harmonic and Zipf features, to better characterize timbre and prosodic properties of emotional speech. Second, as there exist very few emotional resources, either for speech or music, for machine learning as compared to the number of audio features that one can extract, an evidence theory-based feature selection scheme named Embedded Sequential Forward Selection (ESFS) is proposed to deal with the classic "curse of dimensionality" problem and thus overfitting. Third, using a manually built dimensional emotion model-based hierarchical classifier to deal with the fuzzy borders of emotional states, we demonstrated that a hierarchical classification scheme performs better than the single global classifier mostly used in the literature. Furthermore, as there does not exist any universal agreement on the definition of basic emotions, and as emotional states are typically application dependent, we also proposed an ESFS-based algorithm for automatically building a hierarchical classification scheme (HCS) which is best adapted to a specific set of application-dependent emotional states. The HCS divides a complex classification problem into simpler and smaller problems by combining several binary sub-classifiers in the structure of a binary tree in several stages, and gives as result the type of emotional state of the audio sample. Finally, to deal with the subjective nature of emotions, we also proposed an evidence theory-based ambiguous classifier allowing multiple emotion labeling, as humans often do. The effectiveness of all these recognition techniques was evaluated on the Berlin and DES datasets for emotional speech recognition and on a music mood dataset that we collected in our laboratory, as no public dataset existed so far.
Keywords: audio signal, emotion classification, music mood analysis, audio features, feature selection, hierarchical classification, ambiguous classification, evidence theory
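The binary-tree hierarchical classification scheme (HCS) described above can be skeletonized as follows. The fixed halfway split and the SVM sub-classifiers are simplifying assumptions (in the thesis the grouping is learned via ESFS), so this shows the tree structure only.

```python
import numpy as np
from sklearn.svm import SVC

class HCSNode:
    """One node of a binary classification tree over emotional states."""

    def __init__(self, states):
        self.states = list(states)
        self.left = self.right = self.clf = None

    def fit(self, X, y):
        if len(self.states) == 1:
            return self                 # leaf: a single emotional state
        mid = len(self.states) // 2     # fixed split here; learned via ESFS in the thesis
        left_states = set(self.states[:mid])
        side = np.array([0 if lab in left_states else 1 for lab in y])
        self.clf = SVC().fit(X, side)   # binary sub-classifier for this node
        self.left = HCSNode(self.states[:mid]).fit(
            X[side == 0], [lab for lab in y if lab in left_states])
        self.right = HCSNode(self.states[mid:]).fit(
            X[side == 1], [lab for lab in y if lab not in left_states])
        return self

    def predict_one(self, x):
        if len(self.states) == 1:
            return self.states[0]
        branch = self.left if self.clf.predict(x[None, :])[0] == 0 else self.right
        return branch.predict_one(x)

# Usage sketch: tree = HCSNode(["anger", "fear", "joy", "sadness"]).fit(X, y)
```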
4

Xiao, Zhongzhe, and Liming Chen. "Recognition of emotions in audio signals." Ecully: Ecole Centrale de Lyon, 2008. http://bibli.ec-lyon.fr/exl-doc/zxiao.pdf.

5

Golan, Ofer. "Systemising emotions: teaching emotion recognition to people with autism using interactive multimedia." Thesis, University of Cambridge, 2007. https://www.repository.cam.ac.uk/handle/1810/252028.

Abstract:
Recognition of emotions and mental states (ER) in others is a core difficulty for individuals with autism spectrum conditions (ASC). In contrast, they show good skills in ‘systemizing’- understanding non-agentive systems. This thesis evaluated the effectiveness of Mind Reading, a computer program teaching ER from a wide range of facial expression videos and recorded speech segments, systematically presented. Three different experiments tested the effectiveness of a minimum of 10 hours of software use over a period of 10-15 weeks among individuals with ASC. Experiments included evaluation of independent use of the software by adults and by 8-11 year olds with ASC, and tutor and group supported use of the software in adults with ASC. ER skills were assessed on four levels of generalisation before and after the training period, and compared to matched ASC and typically developing control groups. Results showed improved ER for software users from faces and voices, compared to the ASC control groups. Improvement was mostly limited to faces and voices which were included in the software. Generalisation to stimuli not included in the software was found in the children experiment, in the vocal and visual channels separately. Follow up assessment after a year showed greater improvement on general socio-emotional functioning measures among child and adult software users, compared to ASC controls. These results suggest that individuals with ASC can improve their ability to recognise emotions using systematic computer-based training with long term effects, but may need further tutoring to prevent hyper-systemising, and to enhance generalisation to other situations and stimuli. The reasons behind generalisation difficulties and the study’s limitations are discussed, and suggestions for future work are offered.
6

Cheung, Ching-ying Crystal. "Cognition of emotion recognition." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B29740277.

7

Reichert, Nils. "Correlation between Computer Recognized Facial Emotions and Informed Emotions during a Casino Computer Game." Thesis, Fredericton: University of New Brunswick, 2012. http://hdl.handle.net/1882/44596.

Abstract:
Emotions play an important role in everyday communication. Different methods allow computers to recognize emotions. Most are trained with acted emotions, and it is unknown whether such a model would work for recognizing naturally occurring emotions. An experiment was set up to estimate the recognition accuracy of the emotion recognition software SHORE, which could detect the emotions angry, happy, sad, and surprised. Subjects played a casino game while being recorded. The software recognition was correlated with the recognition of ten human observers. The results showed strong recognition for happy, medium recognition for surprised, and weak recognition for sad and angry faces. In addition, questionnaires containing self-reported emotions were compared with the computer recognition, but only weak correlations were found. SHORE was able to recognize emotions almost as well as humans, but when humans had problems recognizing an emotion, the accuracy of the software was much lower.
8

Gohar, Kadar Navit. "Diagnostic colours of emotions." University of Sydney, 2008. http://hdl.handle.net/2123/2298.

Abstract:
Doctor of Philosophy
This thesis investigates the role of colour in the cognitive processing of emotional information. The research is guided by the effect of colour diagnosticity, which has been shown previously to influence recognition performance for several types of objects as well as natural scenes. The research presented in Experiment 1 examined whether colour information is considered a diagnostic perceptual feature of seven emotional categories: happiness, sadness, anger, fear, disgust, surprise and neutral. Participants (N = 119), who were naïve to the specific purpose and expectations of the experiment, chose colour more than any other perceptual quality (e.g. shape and tactile information) as a feature that describes the seven emotional categories. The specific colour features given for the six basic emotions were consistently different from those given to the non-emotional neutral category. While emotional categories were often described by chromatic colour features (e.g. red, blue, orange), the neutral category was often ascribed achromatic colour features (e.g. white, grey, transparent) as the most symptomatic perceptual qualities for its description. The emotion 'anger' was unique in being the only emotion showing an agreement higher than 50% of the total given colour features for one particular colour: red. Confirming that colour is a diagnostic feature of emotions led to the examination of the effect of diagnostic colours of emotion on recognition memory for emotional words and faces: the effect, if any, of appropriate and inappropriate colours (matched with emotion) on the strength of memory for later recognition of faces and words (Experiments 2 & 3). The two experiments used retention intervals of 15 minutes and one week respectively, and the colour-emotion associations were determined for each individual participant. Results showed that regardless of the subject's consistency level in associating colours with emotions, and compared with the individual inappropriate or random colours, individual appropriate colours of emotions significantly enhance recognition memory for six basic emotional faces and words. This difference between the individual inappropriate colours or random colours and the individual appropriate colours of emotions was not found to be significant for non-emotional neutral stimuli. Post hoc findings from both experiments further show that appropriate colours of emotion are associated more consistently than inappropriate colours of emotions. This suggests that appropriate colour-emotion associations are unique both in their strength of association and in the form of their representation. Experiment 4 therefore aimed to investigate whether appropriate colour-emotion associations also trigger an implicit automatic cognitive system that allows faster naming times for appropriate versus inappropriate colours of emotional word carriers. Results from the combined Emotional-Semantic Stroop task confirm the above hypothesis and therefore imply that colour plays a substantial role not only in our conceptual representations of objects but also in our conceptual representations of basic emotions. The resemblance of the present findings, collectively, to those found previously for objects and natural scenes suggests a common cognitive mechanism for the processing of emotional diagnostic colours and the processing of diagnostic colours of objects or natural scenes.
Overall, this thesis provides the foundation for many future directions of research in the area of colour and emotion as well as a few possible immediate practical implications.
9

Lau, Yuet-han Jasmine. "Ageing-related effect on emotion recognition." E-thesis, The University of Hong Kong (HKUTO), 2006. http://sunzi.lib.hku.hk/hkuto/record/B37101730.

10

Gohar, Kadar Navit. "Diagnostic colours of emotions." Thesis, The University of Sydney, 2007. http://hdl.handle.net/2123/2298.


Books on the topic "Recognition of emotions"

1

Yang, Yi-Hsuan. Music emotion recognition. Boca Raton, Fla.: CRC, 2011.

2

My mixed emotions: Help your kids handle their feelings. New York, NY: Dorling Kindersley, 2018.

3

Chen, Homer H., ed. Music emotion recognition. Boca Raton, Fla.: CRC, 2011.

4

Tsihrintzis, George A., ed. Visual affect recognition. Amsterdam: IOS Press, 2010.

5

Gevarter, William B. MoCogl: A computer simulation of recognition-primed human decision making, considering emotions. [Washington, DC?]: National Aeronautics and Space Administration, 1992.

6

Mao, Qirong, Qing Lin, and Keyang Cheng, eds. Shi jue yu yin qing gan shi bie. Beijing: Ke xue chu ban she, 2013.

7

Emde, Robert N., Joy D. Osofsky, and Perry M. Butterfield, eds. The IFEEL pictures: A new instrument for interpreting emotions. Madison, Conn.: International Universities Press, 1993.

8

Lambelet, Clément. Happiness is the only true emotion: Clément Lambelet. [Paris]: RVB Books, 2019.

9

Balconi, Michela, ed. Neuropsychology and cognition of emotional face comprehension. Trivandrum, India: Research Signpost, 2006.

10

Konar, Amit, and Aruna Chakraborty, eds. Emotion Recognition. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2015. http://dx.doi.org/10.1002/9781118910566.


Book chapters on the topic "Recognition of emotions"

1

Rao, K. Sreenivasa, and Shashidhar G. Koolagudi. "Emotion Recognition on Real Life Emotions." In SpringerBriefs in Electrical and Computer Engineering, 95–100. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-6360-3_6.

2

Bonomi, Alberto G. "Physical Activity Recognition Using a Wearable Accelerometer." In Sensing Emotions, 41–51. Dordrecht: Springer Netherlands, 2010. http://dx.doi.org/10.1007/978-90-481-3258-4_3.

3

Haines, Simon. "Recognition in Shakespeare and Hegel." In Shakespeare and Emotions, 218–30. London: Palgrave Macmillan UK, 2015. http://dx.doi.org/10.1057/9781137464750_20.

4

Hupont, Isabelle, Sergio Ballano, Eva Cerezo, and Sandra Baldassarri. "From a Discrete Perspective of Emotions to Continuous, Dynamic, and Multimodal Affect Sensing." In Emotion Recognition, 461–91. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2015. http://dx.doi.org/10.1002/9781118910566.ch18.

5

Zhang, Jiaming, and Amanda J. C. Sharkey. "Contextual Recognition of Robot Emotions." In Towards Autonomous Robotic Systems, 78–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23232-9_8.

6

Laitinen, Arto. "Collective Intentionality and Recognition from Others." In Institutions, Emotions, and Group Agents, 213–27. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-94-007-6934-2_13.

7

Singh, Rajiv, Swati Nigam, Amit Kumar Singh, and Mohamed Elhoseny. "Biometric Recognition of Emotions Using Wavelets." In Intelligent Wavelet Based Techniques for Advanced Multimedia Applications, 123–35. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-31873-4_9.

8

González-Meneses, Yesenia N., Josefina Guerrero-García, Carlos Alberto Reyes-García, and Ramón Zatarain-Cabada. "Automatic Recognition of Learning-Centered Emotions." In Lecture Notes in Computer Science, 33–43. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77004-4_4.

9

Grekow, Jacek. "Representations of Emotions." In From Content-based Music Emotion Recognition to Emotion Maps of Musical Pieces, 7–11. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70609-2_2.

10

Pereira, Lara, Susana Brás, and Raquel Sebastião. "Characterization of Emotions Through Facial Electromyogram Signals." In Pattern Recognition and Image Analysis, 230–41. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04881-4_19.


Conference papers on the topic "Recognition of emotions"

1

Sinha, Arryan, and G. Suseela. "Deep Learning-Based Speech Emotion Recognition." In International Research Conference on IOT, Cloud and Data Science. Switzerland: Trans Tech Publications Ltd, 2023. http://dx.doi.org/10.4028/p-0892re.

Abstract:
Speech Emotion Recognition (SER), as described in this study, uses neural networks to classify the emotions expressed in speech. It is centered upon the idea that voice tone and pitch frequently reflect the underlying emotion, and it aids in the classification of elicited emotions. An MLP classifier is used to classify the emotions from the wave signal, allowing for flexible learning-rate selection. The RAVDESS data (Ryerson Audio-Visual Database of Emotional Speech and Song) is used. To extract the characteristics from a particular audio input, features such as spectral contrast, MFCCs, Mel spectrogram frequencies, and chroma may be employed. To facilitate feature extraction from the audio script, the dataset is labelled using decimal encoding. Using the input audio samples, precision was found to be 80.28%. Additional testing confirmed this result.
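A hedged sketch of the pipeline the abstract outlines: MFCC, chroma, Mel-spectrogram, and spectral-contrast features are pooled per clip and fed to an sklearn MLPClassifier with an adaptive learning rate. Paths and hyperparameters are placeholders, not the authors' exact setup.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def extract_features(path):
    """Mean-pool MFCC, chroma, Mel and spectral-contrast features over time."""
    y, sr = librosa.load(path, sr=None)
    stft = np.abs(librosa.stft(y))
    feats = [
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40),
        librosa.feature.chroma_stft(S=stft, sr=sr),
        librosa.feature.melspectrogram(y=y, sr=sr),
        librosa.feature.spectral_contrast(S=stft, sr=sr),
    ]
    return np.hstack([f.mean(axis=1) for f in feats])  # one vector per clip

# X: stacked feature vectors; y: emotion labels decoded from RAVDESS file names.
# X = np.vstack([extract_features(p) for p in wav_paths]); y = [...]
clf = MLPClassifier(hidden_layer_sizes=(300,), solver="sgd",
                    learning_rate="adaptive", max_iter=500)
# clf.fit(X_train, y_train); print(clf.score(X_test, y_test))
```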
2

Schmid, Ramona, Sophia Maria Saat, Knut Möller, and Verena Wagner-Hartl. "Induction method influence on emotion recognition based on psychophysiological parameters." In Intelligent Human Systems Integration (IHSI 2023): Integrating People and Intelligent Systems. AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1002851.

Abstract:
Recognizing emotions is an essential ability in our daily social interactions. However, there are individuals who have difficulties interpreting emotions, such as patients with autism spectrum disorders (ASD). In order to cope better with everyday life, emotion training can be a supporting factor for them. However, studies show that emotion training is not only helpful for patients with ASD, but also in the working environment, for example in trainings for managers or teams. In recent research, there are already approaches to use new technologies such as virtual reality to train emotional and social skills. For the evaluation of these new concepts, it is important to make the emotional state of a person measurable. Therefore, a measurement environment has already been developed at Furtwangen University. This is based on a multidimensional approach combining subjective and objective psychophysiological measures. Moreover, the development of facial emotion recognition (FER) systems based on machine learning techniques are also increasing for measuring a person's emotional state. Often, they focus on the recognition of Ekman’s basic emotions. To train and evaluate such FER systems, these basic emotions have to be induced in an individual. Therefore, a number of methods for emotion induction can be found in research, e.g. visual stimuli or mental methods. However, in most studies, only a few selected emotions, such as anger and happiness, were induced. Thus, there is a lack of studies that examined the induction of all six basic emotions.For that reason, the aim of the presented experimental study was to investigate two different methods of emotion induction for the six basic emotions anger, disgust, fear, happiness, sadness, surprise, and a neutral category. Overall, 14 women and 10 men (N = 24) aged between 19 and 59 years (M = 29.25, SD = 11.46) participated in the study. For the first induction method, affective visual stimuli from common emotional picture databases (EmoPicS, OASIS and IAPS) were used. For the second induction method, emotions were induced by a so-called autobiographical recall. Therefore, the participants had to imagine autobiographical situations that evoked the required emotion in them in the past. After each different induction of one of the six emotions or the neutral category, the participants’ emotional state was assessed using the two dimensions valence and arousal of the Self-Assessment Manikin (SAM). Furthermore, cardiovascular (ECG) and electrodermal (EDA) activity were recorded. The results show a significant interaction induction method x emotional category for both subjective assessments valence and arousal. Furthermore, based on the results of the psychophysiological responses of the participants (ECG and EDA), it is shown that the second method to induce emotions (autobiographical recall) was significantly more arousing than the first induction method using visual stimuli. To sum it up, the results of the experimental study show an influence of the induction method that is evident in both the subjective and the psychophysiological parameters.
3

Esau, Natascha, Lisa Kleinjohann, and Bernd Kleinjohann. "Emotional Competence in Human-Robot Communication." In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49409.

Abstract:
Since emotional competence is an important factor in human communication, it will certainly also improve communication between humans and robots or other machines. Emotional competence is defined by the aspects of emotion recognition, emotion representation, emotion regulation and emotional behavior. In this paper we present how these aspects are integrated into the architecture of the robot head MEXI. MEXI is able to recognize emotions from facial expressions and the prosody of natural speech, and represents its internal state, made up of emotions and drives, with corresponding facial expressions, head movements and speech utterances. For its emotions and drives, internal and external regulation mechanisms are realized. Furthermore, this internal state and its perceptions, including the emotions recognized in its human counterpart, are used by MEXI to control its actions. Thereby MEXI can react adequately in an emotional communication.
4

Hou, Tianyu, Nicoletta Adamo, and Nicholas J. Villani. "Micro-expressions in Animated Agents." In Intelligent Human Systems Integration (IHSI 2022): Integrating People and Intelligent Systems. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1001081.

Abstract:
The purpose of this research was to examine the perception of micro-expressions in animated agents with different visual styles. Specifically, the work reported in the paper sought to examine: (1) whether people can recognize micro-expressions in animated agents, (2) the extent to which the degree of exaggeration of micro-expressions affects recognition, perceived naturalness and intensity of the animated agents' emotions, and (3) whether there are differences in recognition and perception based on the agent's visual style (realistic vs stylized). The research work involved two experiments: a recognition study and an emotion rating study; 275 participants participated in each experiment. In the recognition study, the participants watched eight micro-expression animations representing four different emotions. Four animations featured a stylized character and four a realistic character. For each animation, subjects were asked to identify the character's emotion conveyed by the micro-expression. Results showed that all four emotions for both characters were recognized with an acceptable degree of accuracy. In the emotion rating study, participants watched two sets of eight animation clips. Eight animations in each set featured the characters performing both macro- and micro-expressions; the difference between these two sets was the exaggeration degree of the micro-expressions (normal vs exaggerated). Participants were asked to recognize the character's true emotion (conveyed by the micro-expressions) and rate the naturalness and intensity of the character's emotion in each clip using a 5-point Likert scale. Findings showed that the degree of exaggeration of the micro-expressions had a significant effect on the emotion's naturalness rating, the emotion's intensity rating, and true emotion recognition, and the character's visual style had a significant effect on the emotion's intensity rating.
5

Schmid, Ramona, Linn Braunmiller, Lena Hansen, Christopher Schonert, Knut Möller, and Verena Wagner-Hartl. "Emotion recognition - Validation of a measurement environment based on psychophysiological parameters." In Intelligent Human Systems Integration (IHSI 2022): Integrating People and Intelligent Systems. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1001065.

Abstract:
Emotions are a fundamental part of our social interaction. A person who finds it difficult or impossible to interpret emotions, e.g., a patient with an autism spectrum disorder, may face major problems in everyday life. However, understanding emotions is of great importance not only in private social interactions but also in the working environment, e.g., for managers or collaborative work. Hence, there is great interest in emotion research, including how emotions can be measured. For this purpose, a measurement environment was developed. The aim of the presented study was to validate this measurement environment by evoking different emotions in the participants. A multidimensional approach combining subjective and objective measurements was chosen. Participants assessed their emotional state subjectively. Additionally, psychophysiological responses (cardiovascular and electrodermal activity, electromyogram) were recorded. The results support a successful validation of the measurement environment. Furthermore, first results of the subjective and psychophysiological data are presented.
6

Veltmeijer, Emmeke, Charlotte Gerritsen, and Koen Hindriks. "Automatic Recognition of Emotional Subgroups in Images." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/190.

Abstract:
Both social group detection and group emotion recognition in images are growing fields of interest, but never before have they been combined. In this work we aim to detect emotional subgroups in images, which can be of great importance for crowd surveillance or event analysis. To this end, human annotators are instructed to label a set of 171 images, and their recognition strategies are analysed. Three main strategies for labeling images are identified, with each strategy assigning either 1) more weight to emotions (emotion-based fusion), 2) more weight to spatial structures (group-based fusion), or 3) equal weight to both (summation strategy). Based on these strategies, algorithms are developed to automatically recognize emotional subgroups. In particular, K-means and hierarchical clustering are used with location and emotion features derived from a fine-tuned VGG network. Additionally, we experiment with face size and gaze direction as extra input features. The best performance comes from hierarchical clustering with emotion, location and gaze direction as input.
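The clustering step can be sketched by concatenating per-person location, emotion, and gaze features and grouping people with K-means or hierarchical clustering; the feature values below are random placeholders standing in for fine-tuned VGG outputs.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.preprocessing import StandardScaler

n_people = 12
locations = np.random.rand(n_people, 2)  # (x, y) positions in the image
emotions = np.random.rand(n_people, 7)   # per-person emotion scores (e.g., from VGG)
gaze = np.random.rand(n_people, 2)       # optional extra cue, as in the paper

X = StandardScaler().fit_transform(np.hstack([locations, emotions, gaze]))

kmeans_labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
hier_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
print(kmeans_labels, hier_labels)  # subgroup assignment per person
```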
7

Tivatansakul, Somchanok, Gantaphon Chalumporn, and Supadchaya Puangpontip. "Healthcare System Focusing on Emotional Aspect Using Augmented Reality: Emotion Detection by Facial Expression." In Applied Human Factors and Ergonomics Conference. AHFE International, 2021. http://dx.doi.org/10.54941/ahfe100521.

Abstract:
Current research includes many proposals for systems that provide assistance and services to people in the healthcare field; however, these systems emphasize support for physical rather than emotional aspects. Emotional health is as important as physical health, and negative emotional health can lead to social or mental health problems. To cope with negative emotional health in daily life, we propose a healthcare system that focuses on emotional aspects and provides services to improve user emotion. To improve user emotion, we need to recognize the user's current emotional state. Therefore, our system integrates emotion detection to suggest the appropriate service. The system is designed as a web-based system. While users use the system, facial expression and speech are detected and analyzed to determine the users' emotions. When negative emotions are detected, our system suggests that the users take a break by providing services (designed to provide relaxation, amusement and excitement) with augmented reality and Kinect to improve their emotional state. This paper focuses on feature extraction and classification for emotion detection by facial expression recognition.
APA, Harvard, Vancouver, ISO, and other styles
8

Goodarzi, Farhad, Fakhrul Zaman Rokhani, M. Iqbal Saripan and Mohammad Hamiruce Marhaban. « Mixed emotions in multi view face emotion recognition ». In 2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA). IEEE, 2017. http://dx.doi.org/10.1109/icsipa.2017.8120643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Iskra, Andrej. « Analysis of emotion expression on frontal and profile facial images ». In 11th International Symposium on Graphic Engineering and Design. University of Novi Sad, Faculty of technical sciences, Department of graphic engineering and design, 2022. http://dx.doi.org/10.24867/grid-2022-p22.

Full text
Abstract:
Expressions of emotions are often found in facial images. In addition to the neutral facial expression, there are six basic emotion expressions: joy, anger, sadness, fear, surprise, and disgust. The similarity of some emotion expressions sometimes leads to incorrect recognition, i.e., confusion of two emotions. In our study, we investigated how these substitutions manifest in the recognition of emotions on frontal and profile face images. The substitutions in emotion recognition were presented with a substitution matrix. The second part of the study focused on confirming these results through analysis of facial feature observation and fixation duration. In the analysis of facial features, the three main features that attract the most attention (eyes, mouth, and forehead with the nasal area) were considered, and fixation duration was measured for each of them. The core of the research equipment was an eye tracker, which we used to define the areas of interest (AOI) for the analysis. The observation proportions of facial features confirmed a relatively large share of substitutions between the emotions fear and surprise, anger and disgust, and partly fear and disgust in frontal facial images. In profile facial images, the most frequently confused emotions were happiness and surprise, anger and disgust, fear and disgust, and anger and sadness. Since a profile facial image carries less information about the face than a frontal one, the results also confirmed a higher proportion of incorrect recognition in profile face images and thus more difficult emotion recognition in them. The greater extent of incorrect recognition was also confirmed by the fixation duration results. Both results (observation proportions of facial features and fixation duration) were also presented graphically.
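The substitution matrix mentioned in this abstract is, in effect, a confusion matrix over shown versus reported emotions. A minimal sketch of how such a matrix could be computed follows; the response data are illustrative, not the study's results.

```python
# A minimal sketch of computing a substitution (confusion) matrix from
# emotion-recognition responses; the data below are illustrative only.
from sklearn.metrics import confusion_matrix

EMOTIONS = ["joy", "anger", "sadness", "fear", "surprise", "disgust"]

# Emotion actually shown vs. emotion the participant reported.
shown    = ["fear", "fear", "anger", "surprise", "disgust", "joy"]
reported = ["surprise", "fear", "disgust", "surprise", "anger", "joy"]

# Rows: shown emotion; columns: reported emotion. Off-diagonal cells
# count substitutions, e.g. fear mistaken for surprise.
matrix = confusion_matrix(shown, reported, labels=EMOTIONS)
for emotion, row in zip(EMOTIONS, matrix):
    print(f"{emotion:>9}: {row}")
```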
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Taiao, Yajun Du and Qiaoyu Zhou. « Text Emotion Recognition Using GRU Neural Network with Attention Mechanism and Emoticon Emotions ». In RICAI 2020 : 2020 2nd International Conference on Robotics, Intelligent Control and Artificial Intelligence. New York, NY, USA : ACM, 2020. http://dx.doi.org/10.1145/3438872.3439094.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports of organizations on the topic « Recognition of emotions »

1

Ivanova, E. S. Performance indicators of the volume of active vocabulary of emotions and accuracy of recognition of facial expressions in students. LJournal, 2017. http://dx.doi.org/10.18411/a-2017-002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Metaxas, D. Human Identification and Recognition of Emotional State from Visual Input. Fort Belvoir, VA : Defense Technical Information Center, December 2005. http://dx.doi.org/10.21236/ada448621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gou, Xinyun, Jiaxi Huang, Liuxue Guo, Jin Zhao, Dongling Zhong, Yuxi Li, Xiaobo Liu et al. The conscious recognition of emotion in depression disorder : A meta-analysis of neuroimaging studies. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, November 2022. http://dx.doi.org/10.37766/inplasy2022.11.0057.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Тарасова, Олена Юріївна, and Ірина Сергіївна Мінтій. Web application for facial wrinkle recognition. Кривий Ріг, КДПУ, 2022. http://dx.doi.org/10.31812/123456789/7012.

Full text
Abstract:
Facial recognition technology has been named one of the main trends of recent years. It has a wide range of applications, such as access control, biometrics, video surveillance, and many other interactive human-machine systems. Facial landmarks can be described as key characteristics of the human face; commonly used landmarks are, for example, the eyes, nose, or mouth corners. Analyzing these key points is useful for a variety of computer vision use cases, including biometrics, face tracking, and emotion detection. Different methods produce different facial landmarks: some use only basic landmarks, while others bring out more detail. We use the 68-point facial markup, which is a common format for many datasets. Cloud computing creates all the necessary conditions for the successful implementation of even the most complex tasks. We created a web application using the Django framework, the Python language, and the OpenCV and Dlib libraries to recognize faces in images. The purpose of our work is to create a software system that recognizes faces in photos and identifies wrinkles on the face. An algorithm was implemented to determine the presence, location, and geometry of various types of wrinkles on the face.
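As a rough sketch of the face-and-landmark step this abstract describes, the snippet below uses Dlib's frontal face detector with the standard 68-point shape predictor; the model file path and input image are assumptions (the predictor file must be downloaded separately), and the wrinkle-localization logic itself is not reproduced.

```python
# A minimal sketch of 68-point facial landmark detection with OpenCV
# and Dlib; file paths are placeholders.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("face.jpg")  # hypothetical input photo
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    landmarks = predictor(gray, face)
    # 68 (x, y) points: jaw 0-16, brows 17-26, nose 27-35,
    # eyes 36-47, mouth 48-67 -- regions such as the forehead or
    # eye corners can then be examined for wrinkles.
    points = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(68)]
    print(f"detected face with {len(points)} landmarks")
```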
APA, Harvard, Vancouver, ISO, and other styles
5

Lin, XiaoGuang, XueLing Zhang, QinQin Liu, PanWen Zhao, Hui Zhang, HongSheng Wang and ZhongQuan Yi. Facial emotion recognition in adult with traumatic brain injury : a protocol for systematic review and meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, May 2020. http://dx.doi.org/10.37766/inplasy2020.5.0109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sun, Yang, Jing Zhao, PanWen Zhao, Hui Zhang, JianGuo Zhong, PingLei Pan, GenDi Wang, ZhongQuan Yi and LILI Xie. Social cognition in children and adolescents with epilepsy : a meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, March 2022. http://dx.doi.org/10.37766/inplasy2022.3.0011.

Full text
Abstract:
Review question / Objective: To our knowledge, no meta-analysis has summarized social cognitive performance in children and adolescents with epilepsy as independent groups. Therefore, we conducted this meta-analysis to examine differences between children and adolescents with epilepsy and healthy controls (HCs) in terms of theory of mind (ToM) and facial emotion recognition (FER) performance. Condition being studied: Epilepsy, characterized by chronic, unprovoked, and recurrent seizures, is the most frequent neurological disease in childhood and usually emerges early in development. Worldwide, an estimated 50 million people suffer from epileptic seizures, with more than half of the cases beginning in childhood and adolescence, so a comprehensive understanding of children and adolescents with epilepsy has become the focus of widespread attention. Recently, a number of studies have assessed ToM or FER deficits in children and adolescents with epilepsy, but the conclusions are inconsistent. These inconsistent findings might be related to the small sample sizes in most studies; additionally, the methods used to evaluate ToM or FER performance varied across studies. A meta-analysis can increase statistical power, estimate the severity of these deficits, and help resolve conflicting findings.
APA, Harvard, Vancouver, ISO, and other styles
7

Clarke, Alison, Sherry Hutchinson and Ellen Weiss. Psychosocial support for children. Population Council, 2005. http://dx.doi.org/10.31899/hiv14.1003.

Full text
Abstract:
Masiye Camp in Matopos National Park, and Kids’ Clubs in downtown Bulawayo, Zimbabwe, are examples of a growing number of programs in Africa and elsewhere that focus on the psychological and social needs of AIDS-affected children. Given the traumatic effects of grief, loss, and other hardships faced by these children, there is increasing recognition of the importance of programs to help them strengthen their social and emotional support systems. This Horizons Report describes findings from operations research in Zimbabwe and Rwanda that examines the psychosocial well-being of orphans and vulnerable children and ways to increase their ability to adapt and cope in the face of adversity. In these studies, a person’s psychosocial well-being refers to his/her emotional and mental state and his/her network of human relationships and connections. A total of 1,258 youth were interviewed. All were deemed vulnerable by their communities because they had been affected by HIV/AIDS and/or other factors such as severe poverty.
APA, Harvard, Vancouver, ISO, and other styles
8

Comorbid anxiety disorder has a protective effect in conduct disorder. ACAMH, March 2019. http://dx.doi.org/10.13056/acamh.10622.

Full text
APA, Harvard, Vancouver, ISO, and other styles