A ready-made bibliography on the topic "Face emotion recognition"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Face emotion recognition".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if the relevant parameters are available in the metadata.

Journal articles on the topic "Face emotion recognition"

1

Mallikarjuna, Basetty, M. Sethu Ram, and Supriya Addanke. "An Improved Face-Emotion Recognition to Automatically Generate Human Expression With Emoticons". International Journal of Reliable and Quality E-Healthcare 11, no. 1 (January 1, 2022): 1–18. http://dx.doi.org/10.4018/ijrqeh.314945.

Full text of the source
Abstract:
Any human face image naturally expresses emotions such as happiness or sadness; sometimes facial expression recognition is complex because an expression combines two emotions. The existing literature covers face emotion classification and image recognition, and work on deep learning using convolutional neural networks (CNNs) has made face emotion recognition especially useful for healthcare. This paper improves human face emotion recognition and generates matching emoticons on the user's smartphone. Face emotion recognition with convolutional neural networks plays a major role in deep learning and artificial intelligence for healthcare services. Automatic facial emotion recognition consists of two stages: face detection with the AdaBoost classifier algorithm, and emotion classification, which uses deep-learning feature extraction (CNNs) to identify the seven emotions and generate emoticons.
APA, Harvard, Vancouver, ISO and other styles
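The pipeline described in the abstract above, face detection followed by CNN feature extraction and classification into seven emotions, ends in a softmax over the emotion classes. The following is a minimal illustrative sketch of that final step only, with hypothetical feature values and weights (not the paper's implementation; a real system would take the features and weights from a trained CNN):

```python
import math

# The seven basic emotion labels commonly used in face-emotion recognition.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def softmax(scores):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_emotion(features, weights):
    # One linear score per emotion class over the feature vector,
    # then softmax to probabilities; returns the top label and the probabilities.
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[best], probs

# Hypothetical 4-dimensional feature vector and 7x4 weight matrix.
features = [0.9, 0.1, 0.4, 0.2]
weights = [[0.1 * (i + 1 - j) for j in range(4)] for i in range(7)]
label, probs = classify_emotion(features, weights)
```

The softmax output is a probability distribution over the seven labels, which is why the probabilities always sum to one regardless of the raw scores.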
2

Iqbal, Muhammad, Bhakti Yudho Suprapto, Hera Hikmarika, Hermawati Hermawati, and Suci Dwijayanti. "Design of Real-Time Face Recognition and Emotion Recognition on Humanoid Robot Using Deep Learning". Jurnal Ecotipe (Electronic, Control, Telecommunication, Information, and Power Engineering) 9, no. 2 (October 6, 2022): 149–58. http://dx.doi.org/10.33019/jurnalecotipe.v9i2.3044.

Full text of the source
Abstract:
A robot is capable of mimicking human beings, including recognizing their faces and emotions. However, current studies of the humanoid robot have not been implemented in the real-time system. In addition, face recognition and emotion recognition have been treated as separate problems. Thus, for real-time application on a humanoid robot, this study proposed a combination of face recognition and emotion recognition. Face and emotion recognition systems were developed concurrently in this study using convolutional neural network architectures. The proposed architecture was compared to the well-known architecture, AlexNet, to determine which architecture would be better suited for implementation on a humanoid robot. Primary data from 30 respondents was used for face recognition. Meanwhile, emotional data were collected from the same respondents and combined with secondary data from a 2500-person dataset. Surprise, anger, neutral, smile, and sadness were among the emotions. The experiment was carried out in real-time on a humanoid robot using the two architectures. Using the AlexNet model, the accuracy of face and emotion recognition was 87% and 70%, respectively. Meanwhile, the proposed architecture achieved accuracy rates of 95% for face recognition and 75% for emotion recognition. Thus, the proposed method performs better in terms of recognizing faces and emotions, and it can be implemented on a humanoid robot.
APA, Harvard, Vancouver, ISO and other styles
3

Sondawale, Shweta. "Face and Speech Emotion Recognition System". International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (April 30, 2024): 5621–28. http://dx.doi.org/10.22214/ijraset.2024.61278.

Full text of the source
Abstract:
Emotions serve as the cornerstone of human communication, facilitating the expression of one's inner thoughts and feelings to others. Speech Emotion Recognition (SER) represents a pivotal endeavour aimed at deciphering the emotional nuances embedded within a speaker's voice signal. Universal emotions such as neutrality, anger, happiness, and sadness form the basis of this recognition process, allowing for the identification of fundamental emotional states. To achieve this, spectral and prosodic features are leveraged, each offering unique insights into the emotional content of speech. Spectral features, exemplified by the Mel Frequency Cepstral Coefficient (MFCC), provide a detailed analysis of the frequency distribution within speech signals, while prosodic features encompass elements like fundamental frequency, volume, pitch, speech intensity, and glottal parameters, capturing the rhythmic and tonal variations indicative of different emotional states. Through the integration of these features, SER systems can effectively simulate and classify a diverse range of emotional expressions, paving the way for enhanced human-computer interaction and communication technologies.
APA, Harvard, Vancouver, ISO and other styles
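The abstract above separates spectral features (MFCCs) from prosodic cues such as volume and fundamental frequency. As an illustrative sketch only, two of the simplest prosodic-style measures, short-term energy and zero-crossing rate, can be computed directly from raw samples; the signal here is a hypothetical pure tone, and real MFCC extraction is normally delegated to a signal-processing library:

```python
import math

def frame_features(samples):
    # Short-term energy and zero-crossing rate of one analysis frame:
    # two simple cues related to loudness and to pitch/voicing.
    n = len(samples)
    energy = sum(s * s for s in samples) / n              # mean squared amplitude
    zcr = sum(1 for a, b in zip(samples, samples[1:])     # sign changes between
              if a * b < 0) / (n - 1)                     # neighbouring samples
    return energy, zcr

# A 100 Hz sine at an 8 kHz sampling rate, 0.1 s long (hypothetical signal).
sr = 8000
tone = [math.sin(2 * math.pi * 100 * t / sr) for t in range(sr // 10)]
energy, zcr = frame_features(tone)
# A unit-amplitude tone has mean energy ~0.5; a 100 Hz tone crosses zero
# about 200 times per second, so zcr comes out near 200/sr = 0.025 here.
```

An emotion classifier would compute such features per frame and feed the resulting sequence, usually alongside MFCCs, to the classifier.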
4

Mareeswari V. "Face Emotion Recognition based Recommendation System". ACS Journal for Science and Engineering 2, no. 1 (March 1, 2022): 73–80. http://dx.doi.org/10.34293/acsjse.v2i1.29.

Full text of the source
Abstract:
Face recognition technology has gotten a lot of press because of its wide range of applications and market potential. It is used in a variety of fields, including surveillance systems, digital video editing, and other technical advancements. In the fields of tourism, music, video, and film, these systems have overcome the burden of irrelevant knowledge by taking into account user desires and emotional states. Advice systems, emotion recognition, and machine learning are proposed as thematic categories in the analysis. Our vision is to develop a method for recommending new content that is based on the emotional reactions of the viewers. Music is a form of art that is thought to have a stronger connection to a person's emotions. It has the unique potential to boost one's mood, and video streaming services are becoming more prevalent in people's lives, necessitating the development of better video recommendation systems that respond to their users in a customised manner. Furthermore, many users will believe that travel would be a method to help them cope with their ongoing emotions. Our project aims to create a smart travel recommendation system based on the user's emotional state. This project focuses on developing an efficient music, video, movie, and tourism recommendation system that uses Facial Recognition techniques to assess the emotion of users. The system's overall concept is to identify facial expression and provide music, video, and movie recommendations based on the user's mood.
APA, Harvard, Vancouver, ISO and other styles
5

Levitan, Carmel A., Isabelle Rusk, Danielle Jonas-Delson, Hanyun Lou, Lennon Kuzniar, Gray Davidson, and Aleksandra Sherman. "Mask wearing affects emotion perception". i-Perception 13, no. 3 (May 2022): 204166952211073. http://dx.doi.org/10.1177/20416695221107391.

Full text of the source
Abstract:
To reduce the spread of COVID-19, mask wearing has become ubiquitous in much of the world. We studied the extent to which masks impair emotion recognition and dampen the perceived intensity of facial expressions by naturalistically inducing positive, neutral, and negative emotions in individuals while they were masked and unmasked. Two groups of online participants rated the emotional intensity of each presented image. One group rated full faces (N=104); the other (N=102) rated cropped images where only the upper face was visible. We found that masks impaired the recognition of and rated intensity of positive emotions. This happened even when the faces were cropped and the lower part of the face was not visible. Masks may thus reduce positive emotion and/or expressivity of positive emotion. However, perception of negativity was unaffected by masking, perhaps because unlike positive emotions like happiness which are signaled more in the mouth, negative emotions like anger rely more on the upper face.
APA, Harvard, Vancouver, ISO and other styles
6

Liao, Songyang, Katsuaki Sakata, and Galina V. Paramei. "Color Affects Recognition of Emoticon Expressions". i-Perception 13, no. 1 (January 2022): 204166952210807. http://dx.doi.org/10.1177/20416695221080778.

Full text of the source
Abstract:
In computer-mediated communication, emoticons are conventionally rendered in yellow. Previous studies demonstrated that colors evoke certain affective meanings, and face color modulates perceived emotion. We investigated whether color variation affects the recognition of emoticon expressions. Japanese participants were presented with emoticons depicting four basic emotions (Happy, Sad, Angry, Surprised) and a Neutral expression, each rendered in eight colors. Four conditions (E1–E4) were employed in the lab-based experiment; E5, with an additional participant sample, was an online replication of the critical E4. In E1, colored emoticons were categorized in a 5AFC task. In E2–E5, stimulus affective meaning was assessed using visual scales with anchors corresponding to each emotion. The conditions varied in stimulus arrays: E2: light gray emoticons; E3: colored circles; E4 and E5: colored emoticons. The affective meaning of Angry and Sad emoticons was found to be stronger when conferred in warm and cool colors, respectively, the pattern highly consistent between E4 and E5. The affective meaning of colored emoticons is regressed to that of achromatic expression counterparts and decontextualized color. The findings provide evidence that affective congruency of the emoticon expression and the color it is rendered in facilitates recognition of the depicted emotion, augmenting the conveyed emotional message.
APA, Harvard, Vancouver, ISO and other styles
7

Wyman, Austin, and Zhiyong Zhang. "API Face Value". Journal of Behavioral Data Science 3, no. 1 (July 13, 2023): 1–11. http://dx.doi.org/10.35566/jbds/v3n1/wyman.

Full text of the source
Abstract:
Emotion recognition application programming interface (API) is a recent advancement in computing technology that synthesizes computer vision, machine-learning algorithms, deep-learning neural networks, and other information to detect and label human emotions. The strongest iterations of this technology are produced by technology giants with large cloud infrastructure (e.g., Google and Microsoft), bolstering high true positive rates. We review the current status of applications of emotion recognition API in psychological research and find that, despite evidence of spatial, age, and race bias effects, API is improving the accessibility of clinical and educational research. Specifically, emotion detection software can assist individuals with emotion-related deficits (e.g., Autism Spectrum Disorder, Attention Deficit-Hyperactivity Disorder, Alexithymia). API has been incorporated in various computer-assisted interventions for Autism, where it has been used to diagnose, train, and monitor emotional responses to one's environment. We identify API's potential to enhance interventions in other emotional dysfunction populations and to address various professional needs. Future work should aim to address the bias limitations of API software and expand its utility in subfields of clinical, educational, neurocognitive, and industrial-organizational psychology.
APA, Harvard, Vancouver, ISO and other styles
8

Zhang, Zhiqin. "Deep Face Emotion Recognition". Journal of Physics: Conference Series 1087 (September 2018): 062036. http://dx.doi.org/10.1088/1742-6596/1087/6/062036.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
9

Lawrence, Louise, and Deborah Abdel Nabi. "The Compilation and Validation of a Collection of Emotional Expression Images Communicated by Synthetic and Human Faces". International Journal of Synthetic Emotions 4, no. 2 (July 2013): 34–62. http://dx.doi.org/10.4018/ijse.2013070104.

Full text of the source
Abstract:
The BARTA (Bolton Affect Recognition Tri-Stimulus Approach) is a unique database comprising over 400 colour images of the universally recognised basic emotional expressions and is the first compilation to include three different classes of validated face stimuli: emoticon, computer-generated cartoon and photographs of human faces. The validated tri-stimulus collection (all images received ≥70% inter-rater (child and adult) consensus) has been developed to promote pioneering research into the differential effects of synthetic emotion representation on atypical emotion perception, processing and recognition in autism spectrum disorders (ASD) and, given the recent evidence for an ASD synthetic-face processing advantage (Rosset et al., 2008), provides a means of investigating the benefits associated with the recruitment of synthetic face images in ASD emotion recognition training contexts.
APA, Harvard, Vancouver, ISO and other styles
10

Homorogan, C., R. Adam, R. Barboianu, Z. Popovici, C. Bredicean, and M. Ienciu. "Emotional Face Recognition in Bipolar Disorder". European Psychiatry 41, S1 (April 2017): S117. http://dx.doi.org/10.1016/j.eurpsy.2017.01.1904.

Full text of the source
Abstract:
Introduction: Emotional face recognition is significant for social communication. This is impaired in mood disorders, such as bipolar disorder. Individuals with bipolar disorder lack the ability to perceive facial expressions. Objectives: To analyse the capacity for emotional face recognition in subjects diagnosed with bipolar disorder. Aims: To establish a correlation between emotion recognition ability and the evolution of bipolar disease. Methods: A sample of 24 subjects diagnosed with bipolar disorder (according to ICD-10 criteria), hospitalised in the Psychiatry Clinic of Timisoara and monitored in the outpatient clinic, was analysed in this trial. Subjects were included in the trial based on inclusion/exclusion criteria. The analysed parameters were: socio-demographic (age, gender, education level), the number of relapses, the predominance of manic or depressive episodes, and the ability to identify emotions (Reading the Mind in the Eyes Test). Results: Most of the subjects (79.16%) had a low ability to identify emotions, 20.83% had a normal capacity to recognise emotions, and none had a high emotion recognition capacity. The positive emotions (love, joy, surprise) were recognised more easily, by 75% of the subjects, than the negative ones (anger, sadness, fear). There was no evident difference in emotional face recognition between individuals with a predominance of manic episodes and those with mostly depressive episodes, or by number of relapses. Conclusions: Individuals with bipolar disorder have difficulties in identifying facial emotions, but with no obvious correlation between the analysed parameters. Disclosure of interest: The authors have not supplied their declaration of competing interest.
APA, Harvard, Vancouver, ISO and other styles

Doctoral dissertations on the topic "Face emotion recognition"

1

Bate, Sarah. "The role of emotion in face recognition". Thesis, University of Exeter, 2008. http://hdl.handle.net/10036/51993.

Full text of the source
Abstract:
This thesis examines the role of emotion in face recognition, using measures of the visual scanpath as indicators of recognition. There are two key influences of emotion in face recognition: the emotional expression displayed upon a face, and the emotional feelings evoked within a perceiver in response to a familiar person. An initial set of studies examined these processes in healthy participants. First, positive emotional expressions were found to facilitate the processing of famous faces, and negative expressions facilitated the processing of novel faces. A second set of studies examined the role of emotional feelings in recognition. Positive feelings towards a face were also found to facilitate processing, in both an experimental study using newly learned faces and in the recognition of famous faces. A third set of studies using healthy participants examined the relative influences of emotional expression and emotional feelings in face recognition. For newly learned faces, positive expressions and positive feelings had a similar influence in recognition, with no presiding role of either dimension. However, emotional feelings had an influence over and above that of expression in the recognition of famous faces. A final study examined whether emotional valence could influence covert recognition in developmental prosopagnosia, and results suggested the patients process faces according to emotional valence rather than familiarity per se. Specifically, processing was facilitated for studied-positive faces compared to studied-neutral and novel faces, but impeded for studied-negative faces. This pattern of findings extends existing reports of a positive-facilitation effect in face recognition, and suggests there may be a closer relationship between facial familiarity and emotional valence than previously envisaged. The implications of these findings are discussed in relation to models of normal face recognition and theories of covert recognition in prosopagnosia.
APA, Harvard, Vancouver, ISO and other styles
2

Tomlinson, Eleanor Katharine. "Face-processing and emotion recognition in schizophrenia". Thesis, University of Birmingham, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433700.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
3

Kuhn, Lisa Katharina. "Emotion recognition in the human face and voice". Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11216.

Full text of the source
Abstract:
At a perceptual level, faces and voices consist of very different sensory inputs and therefore, information processing from one modality can be independent of information processing from another modality (Adolphs & Tranel, 1999). However, there may also be a shared neural emotion network that processes stimuli independent of modality (Peelen, Atkinson, & Vuilleumier, 2010) or emotions may be processed on a more abstract cognitive level, based on meaning rather than on perceptual signals. This thesis therefore aimed to examine emotion recognition across two separate modalities in a within-subject design, including a cognitive Chapter 1 with 45 British adults, a developmental Chapter 2 with 54 British children as well as a cross-cultural Chapter 3 with 98 German and British children, and 78 German and British adults. Intensity ratings as well as choice reaction times and correlations of confusion analyses of emotions across modalities were analysed throughout. Further, an ERP Chapter investigated the time-course of emotion recognition across two modalities. Highly correlated rating profiles of emotions in faces and voices were found which suggests a similarity in emotion recognition across modalities. Emotion recognition in primary-school children improved with age for both modalities although young children relied mainly on faces. British as well as German participants showed comparable patterns for rating basic emotions, but subtle differences were also noted and Germans perceived emotions as less intense than British. Overall, behavioural results reported in the present thesis are consistent with the idea of a general, more abstract level of emotion processing which may act independently of modality. This could be based, for example, on a shared emotion brain network or some more general, higher-level cognitive processes which are activated across a range of modalities. 
Although emotion recognition abilities are already evident during childhood, this thesis argued for a contribution of ‘nurture’ to emotion mechanisms as recognition was influenced by external factors such as development and culture.
APA, Harvard, Vancouver, ISO and other styles
4

Durrani, Sophia J. "Studies of emotion recognition from multiple communication channels". Thesis, University of St Andrews, 2005. http://hdl.handle.net/10023/13140.

Full text of the source
Abstract:
Crucial to human interaction and development, emotions have long fascinated psychologists. Current thinking suggests that specific emotions, regardless of the channel in which they are communicated, are processed by separable neural mechanisms. Yet much research has focused only on the interpretation of facial expressions of emotion. The present research addressed this oversight by exploring recognition of emotion from facial, vocal, and gestural tasks. Happiness and disgust were best conveyed by the face, yet other emotions were equally well communicated by voices and gestures. A novel method for exploring emotion perception, by contrasting errors, is proposed. Studies often fail to consider whether the status of the perceiver affects emotion recognition abilities. Experiments presented here revealed an impact of mood, sex, and age of participants. Dysphoric mood was associated with difficulty in interpreting disgust from vocal and gestural channels. To some extent, this supports the concept that neural regions are specialised for the perception of disgust. Older participants showed decreased emotion recognition accuracy but no specific pattern of recognition difficulty. Sex of participant and of actor affected emotion recognition from voices. In order to examine neural mechanisms underlying emotion recognition, an exploration was undertaken using emotion tasks with Parkinson's patients. Patients showed no clear pattern of recognition impairment across channels of communication. In this study, the exclusion of surprise as a stimulus and response option in a facial emotion recognition task yielded results contrary to those achieved without this modification. Implications for this are discussed. Finally, this thesis gives rise to three caveats for neuropsychological research. First, the impact of the observers' status, in terms of mood, age, and sex, should not be neglected. 
Second, exploring multiple channels of communication is important for understanding emotion perception. Third, task design should be appraised before conclusions regarding impairments in emotion perception are presumed.
APA, Harvard, Vancouver, ISO and other styles
5

Alashkar, Taleb. "3D dynamic facial sequences analysis for face recognition and emotion detection". Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10109/document.

Full text of the source
Abstract:
In this thesis, we have investigated the problems of identity recognition and emotion detection from animated 3D facial shapes (called 4D faces). In particular, we have studied the role of facial (shape) dynamics in revealing human identity and spontaneous emotion. To this end, we have adopted a comprehensive geometric framework for analyzing 3D faces and their dynamics across time. A sequence of 3D faces is first split into an indexed collection of short-term sub-sequences, each represented as a matrix (subspace); these subspaces are points on a special matrix manifold called the Grassmann manifold (the set of k-dimensional linear subspaces). The geometry of the underlying space is used to effectively compare the 3D sub-sequences, compute statistical summaries (e.g. the sample mean) and densely quantify the divergence between subspaces. Two different representations have been proposed to address the problems of face recognition and emotion detection: (1) a dictionary (of subspaces) representation associated with Dictionary Learning and Sparse Coding techniques, and (2) a time-parameterized curve (trajectory) representation on the underlying space, associated with a Structured-Output SVM classifier for early emotion detection. Experimental evaluations conducted on the publicly available BU-4DFE, BP4D-Spontaneous and Cam3D Kinect datasets illustrate the effectiveness of these representations and of the algorithmic solutions for identity recognition and emotion detection proposed in this thesis.
APA, Harvard, Vancouver, ISO and other styles
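The geometric framework summarized in the abstract above represents each short 3D sub-sequence as a linear subspace and compares subspaces through principal angles on a Grassmann manifold. The following toy sketch illustrates that idea in the simplest, 1-dimensional case, using hypothetical 3-D frame vectors (the thesis works with k-dimensional subspaces of much higher-dimensional shape data):

```python
import math

def leading_direction(frames):
    # Dominant direction of a set of frame vectors via power iteration
    # on the scatter matrix: a 1-D subspace summarizing the sub-sequence.
    d = len(frames[0])
    scatter = [[sum(f[i] * f[j] for f in frames) for j in range(d)]
               for i in range(d)]
    v = [1.0] * d
    for _ in range(50):
        w = [sum(scatter[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def principal_angle(frames_a, frames_b):
    # For 1-D subspaces, the principal angle is the angle between the
    # leading directions, invariant to the sign of either direction.
    u, v = leading_direction(frames_a), leading_direction(frames_b)
    dot = abs(sum(a * b for a, b in zip(u, v)))
    return math.acos(min(1.0, dot))

# Two hypothetical sub-sequences of 3-D frames spanning orthogonal directions.
seq_a = [[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.5, 0.0, 0.0]]
seq_b = [[0.0, 1.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.5, 0.0]]
angle = principal_angle(seq_a, seq_b)   # ~ pi/2 for orthogonal subspaces
```

A small principal angle means the two sub-sequences of shapes vary along similar directions; the thesis generalizes this to k principal angles between k-dimensional subspaces.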
6

Bui, Kim-Kim. "Face Processing in Schizophrenia: Deficit in Face Perception or in Recognition of Facial Emotions?" Thesis, University of Skövde, School of Humanities and Informatics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-3349.

Full text of the source
Abstract:

Schizophrenia is a psychiatric disorder characterized by social dysfunction. People with schizophrenia misinterpret social information, and it is suggested that this difficulty may result from visual processing deficits. As faces are one of the most important sources of social information, it is hypothesized that people suffering from the disorder have impairments in the visual face processing system. It is unclear which mechanism of the face processing system is impaired, but two types of deficits are most often proposed: a deficit in face perception in general (i.e., processing of facial features as such) and a deficit in facial emotion processing (i.e., recognition of emotional facial expressions). Due to the contradictory evidence from behavioural, electrophysiological and neuroimaging studies offering support for the involvement of one or the other deficit in schizophrenia, it is too early to make any conclusive statements as to the nature and level of impairment. Further studies are needed for a better understanding of the key mechanisms and abnormalities underlying social dysfunction in schizophrenia.

APA, Harvard, Vancouver, ISO and other styles
7

Bellegarde, Lucille Gabrielle Anna. "Perception of emotions in small ruminants". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25915.

Full text of the source
Abstract:
Animals are sentient beings, capable of experiencing emotions. Being able to assess emotional states in farm animals is crucial to improving their welfare. Although the function of emotion is not primarily for communication, the outward expression of an emotional state involves changes in posture, vocalisations, odours and facial expressions. These changes can be perceived and used as indicators of emotional state by other animals. Since emotions can be perceived between conspecifics, understanding how emotions are identified and how they can spread within a social group could have a major impact on improving the welfare of farmed species, which are mostly reared in groups. A recently developed method for the evaluation of emotions in animals is based on cognitive biases such as judgment biases, i.e. an individual in a negative emotional state will show pessimistic judgments while an individual in a positive emotional state will show optimistic judgments. The aims of this project were to (A) establish whether sheep and goats can discriminate between images of faces of familiar conspecifics taken in different positive and negative situations, (B) establish whether sheep and goats perceive the valence (positive or negative) of the emotion expressed by the animal on the image, (C) validate the use of images of faces in cognitive bias studies. The use of images of faces of conspecifics as emotional stimuli was first validated, using a discrimination task in a two-armed maze. A new methodology was then developed across a series of experiments to assess spontaneous reactions of animals exposed to video clips or to images of faces of familiar conspecifics. Detailed observations of ear postures were used as the main behavioural indicator. Individual characteristics (dominance status within the herd, dominance pairwise relationships and human-animal relationship) were also recorded during preliminary tests and included in the analyses.
The impact of a low-mood state on the perception of emotions was assessed in sheep after subjecting half of the animals to unpredictable negative housing conditions and keeping the other half in good standard housing conditions. Sheep were then presented with videos of conspecifics filmed in situations of varying valence. Reactions to ambiguous stimuli were evaluated by presenting goats with images of morphed faces. Goats were also presented with images of faces of familiar conspecifics taken in situations of varying emotional intensity. Sheep could discriminate images of faces of conspecifics taken either in a negative or in a neutral situation and their learning process of the discrimination task was affected by the type of emotion displayed. Sheep reacted differently depending on the valence of the video clips (P < 0.05); however, there was no difference between the control and the low-mood groups (P > 0.05). Goats also showed different behavioural reactions to images of faces photographed in different situations (P < 0.05), indicating that they perceived the images as different. Responses to morphed images were not necessarily intermediate to responses to negative and positive images and not gradual either, which poses a major problem to the potential use of facial images in cognitive bias experiments. Overall, animals were more attentive towards images or videos of conspecifics in negative situations, i.e., presumably, in a negative emotional state. This suggests that sheep and goats are able to perceive the valence of the emotional state. The identity of the individual on the photo also affected the animals' spontaneous reaction to the images. Social relationships such as dominance, but also affinity between the tested and photographed individual, seem to influence emotion perception.
APA, Harvard, Vancouver, ISO and other styles
8

Chiller-Glaus, Sarah. "Testing the limits of face recognition : identification from photographs in travel documents and dynamic aspects of emotion recognition /". [S.l.] : [s.n.], 2009. http://opac.nebis.ch/cgi-bin/showAbstract.pl?sys=000281129.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
9

Merz, Sabine. "Face emotion recognition in children and adolescents; effects of puberty and callous unemotional traits in a community sample". Thesis, University of New South Wales, Psychology, 2008. http://handle.unsw.edu.au/1959.4/41247.

Abstract:
Previous research suggests that, as well as behavioural difficulties, a small subset of aggressive and antisocial children show callous unemotional (CU) personality traits (i.e., lack of remorse and absence of empathy) that set them apart from their low-CU peers. These children have been identified as being most at risk of following a path of severe and persistent antisocial behaviour, show distinct behavioural patterns, and have been found to respond less to traditional treatment programs. A particular focus of this thesis is the emerging evidence of emotion recognition deficits in both groups. Whereas children who show only behavioural difficulties (in the absence of CU traits) have been found to misclassify vague and neutral expressions as anger, the presence of CU traits has been associated with an inability to correctly identify fear and, to a lesser extent, sadness. Furthermore, emotion recognition competence varies with age and development. In general, emotion recognition improves with age, but interestingly there is some evidence that it may become less efficient during puberty. No research could be located, however, that assessed emotion recognition through childhood and adolescence for children high and low on CU traits and antisocial behaviour. The primary focus of this study was to investigate the impact of these personality traits and pubertal development on emotion recognition competence, in isolation and in combination. A specific aim was to assess whether puberty would exacerbate deficits in children with pre-existing deficits in emotion recognition. The effect of gender, emotion type and measure characteristics, in particular the age of the target face, was also examined.
A community sample of 703 children and adolescents aged 7-17 were administered the Strengths and Difficulties Questionnaire to assess adjustment, the Antisocial Process Screening Device to assess antisocial traits, and the Pubertal Development Scale to evaluate pubertal stage. Empathy was assessed using the Bryant Index of Empathy for Children and Adolescents. Parents or caregivers completed parent versions of these measures for their children. Emotion recognition ability was measured using the newly developed UNSW FACES task (Dadds, Hawes & Merz, 2004); a description of the development and validation of this measure is included. Contrary to expectations, emotion recognition accuracy was not negatively affected by puberty. In addition, no overall differences in emotion recognition ability were found due to participants' gender or target face age group characteristics. The hypothesis that participants would be better at recognising emotions expressed by their own age group was therefore not supported. In line with expectations, significant negative associations between CU traits and fear recognition were found. However, these were small and, contrary to expectations, were found for girls rather than boys. Also, puberty did not exacerbate emotion recognition deficits in high-CU children; however, the relationship between CU traits and emotion recognition was affected differently by pubertal status. The implications of these results are discussed in relation to future research into emotion recognition deficits within this population. In addition, theoretical and practical implications of these findings for the development of antisocial behaviour and the treatment of children showing CU traits are explored.
10

Bloom, Elana. "Recognition, expression, and understanding facial expressions of emotion in adolescents with nonverbal and general learning disabilities". Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100323.

Abstract:
Students with learning disabilities (LD) have been found to exhibit social difficulties compared to those without LD (Wong, 2004). Recognition, expression, and understanding of facial expressions of emotions have been shown to be important for social functioning (Custrini & Feldman, 1989; Philippot & Feldman, 1990). LD subtypes have been studied (Rourke, 1999) and children with nonverbal learning disabilities (NVLD) have been observed to be worse at recognizing facial expressions compared to children with verbal learning disabilities (VLD), no learning disability (NLD; Dimitrovsky, Spector, Levy-Shiff, & Vakil, 1998; Dimitrovsky, Spector, & Levy-Shiff, 2000), and those with psychiatric difficulties without LD controls (Petti, Voelker, Shore, & Hyman-Abello, 2003). However, little has been done in this area with adolescents with NVLD. Recognition, expression and understanding facial expressions of emotion, as well as general social functioning have yet to be studied simultaneously among adolescents with NVLD, NLD, and general learning disabilities (GLD). The purpose of this study was to examine abilities of adolescents with NVLD, GLD, and without LD to recognize, express, and understand facial expressions of emotion, in addition to their general social functioning.
Adolescents aged 12 to 15 were screened for LD and NLD using the Wechsler Intelligence Scale for Children---Third Edition (WISC-III; Wechsler, 1991) and the Wide Range Achievement Test---Third Edition (WRAT3; Wilkinson, 1993) and subtyped into NVLD and GLD groups based on the WRAT3. The NVLD (n = 23), matched NLD (n = 23), and a comparable GLD (n = 23) group completed attention, mood, and neuropsychological measures. The adolescents' ability to recognize (Pictures of Facial Affect; Ekman & Friesen, 1976), express, and understand facial expressions of emotion, and their general social functioning, was assessed. Results indicated that the GLD group was significantly less accurate at recognizing and understanding facial expressions of emotion compared to the NVLD and NLD groups, who did not differ from each other. No differences emerged between the NVLD, NLD, and GLD groups on the expression or social functioning tasks. The neuropsychological measures did not account for a significant portion of the variance on the emotion tasks. Implications regarding severity of LD are discussed.

Books on the topic "Face emotion recognition"

1

Dutta, Paramartha, and Asit Barman. Human Emotion Recognition from Face Images. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3883-4.

2

Lambelet, Clément. Happiness is the only true emotion: Clément Lambelet. Paris]: RVB books, 2019.

3

Tsihrintzis, George A., ed. Visual affect recognition. Amsterdam: IOS Press, 2010.

4

Balconi, Michela, ed. Neuropsychology and cognition of emotional face comprehension. Trivandrum, India: Research Signpost, 2006.

5

Emde, Robert N., Joy D. Osofsky, and Perry M. Butterfield, eds. The IFEEL pictures: A new instrument for interpreting emotions. Madison, Conn: International Universities Press, 1993.

6

Konar, Amit, and Aruna Chakraborty. Emotion Recognition. John Wiley & Sons, Incorporated, 2015.

7

Dutta, Paramartha, and Asit Barman. Human Emotion Recognition from Face Images. Springer Singapore Pte. Limited, 2021.

8

Dutta, Paramartha, and Asit Barman. Human Emotion Recognition from Face Images. Springer, 2020.

9

Anbarjafari, Gholamreza, Pejman Rasti, Fatemeh Noroozi, Jelena Gorbova, and Rain E. Haamer. Machine Learning for Face, Emotion, and Pain Recognition. SPIE, 2018. http://dx.doi.org/10.1117/3.2322572.

10

Konar, Amit, and Aruna Chakraborty. Emotion Recognition: A Pattern Analysis Approach. John Wiley & Sons, Limited, 2015.


Book chapters on the topic "Face emotion recognition"

1

Sati, Vishwani, Sergio Márquez Sánchez, Niloufar Shoeibi, Ashish Arora, and Juan M. Corchado. "Face Detection and Recognition, Face Emotion Recognition Through NVIDIA Jetson Nano". In Advances in Intelligent Systems and Computing, 177–85. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58356-9_18.

2

Alharbi, Alhasan Ali, Mukta Dhopeshwarkar, and Shubhashree Savant. "Detection of Emotion Intensity Using Face Recognition". In Communications in Computer and Information Science, 207–13. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-0507-9_18.

3

Ramsaran, Pallavi, and Leckraj Nagowah. "Music Recommendation Based on Face Emotion Recognition". In Smart Mobile Communication & Artificial Intelligence, 180–91. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-56075-0_18.

4

Wallbott, Harald G. "Recognition of Emotion in Specific Populations: Compensation, Deficit or Specific (Dis)Abilities?" In The Human Face, 169–87. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-1063-5_9.

5

Lo Presti, Liliana, and Marco La Cascia. "Ensemble of Hankel Matrices for Face Emotion Recognition". In Image Analysis and Processing — ICIAP 2015, 586–97. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23234-8_54.

6

Kala, Vallu Sri Satya, Vanka Bhavyasri, and S. Vigneshwari. "Face Recognition Based Attendance System and Emotion Classification". In Lecture Notes in Electrical Engineering, 625–34. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-7511-2_63.

7

Khandelwal, Sudarshan, Shridhar Sharma, Suyash Agrawal, Gayatri Kalshetti, Bindu Garg, and Rachna Jain. "Depression Level Analysis Using Face Emotion Recognition Method". In Proceedings of Data Analytics and Management, 265–78. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-6550-2_21.

8

Tahmid, Marjana, Md Samiul Alam, Abhigna Bangalore Shreedhar, and Mohammad Kalim Akram. "Face Masks Use and Face Perception: Social Judgments and Emotion Recognition". In 12th International Conference on Information Systems and Advanced Technologies “ICISAT 2022”, 39–53. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-25344-7_5.

9

Xia, Qingxin, Jiakang Li, and Aoqi Dong. "Road Rage Recognition System Based on Face Detection Emotion". In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 174–81. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93479-8_11.

10

Ballihi, Lahoucine, Adel Lablack, Boulbaba Ben Amor, Ioan Marius Bilasco, and Mohamed Daoudi. "Positive/Negative Emotion Detection from RGB-D Upper Body Images". In Face and Facial Expression Recognition from Real World Videos, 109–20. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13737-7_10.


Conference papers on the topic "Face emotion recognition"

1

Panchal, S. S., Anand Hiremath, and Netra R. Toravi. "Automatic Face Emotion Recognition". In 2018 International Conference on Smart Systems and Inventive Technology (ICSSIT). IEEE, 2018. http://dx.doi.org/10.1109/icssit.2018.8748847.

2

De Silva, Liyanage C., and Kho G. P. Esther. "Emotion-independent face recognition". In Photonics West 2001 - Electronic Imaging, edited by Bernd Girod, Charles A. Bouman, and Eckehard G. Steinbach. SPIE, 2000. http://dx.doi.org/10.1117/12.411839.

3

Esau, Natascha, Lisa Kleinjohann, Bernd Kleinjohann, and Evgenija Wetzel. "A Fuzzy Emotion Model and Its Application in Facial Expression Recognition". In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-35206.

Abstract:
This paper presents a fuzzy emotion model and its use in a fuzzy emotion recognition system for analyzing facial expressions in video sequences. In order to process images in real time, a tracking mechanism is used for face localization. The fuzzy classification itself works on single images: it analyzes the deformation of a face using a size-invariant, feature-based representation built from a set of typical angles. Automatic adaptation to the characteristics of individual human faces is achieved by a short training phase that can be completed before emotion recognition starts. In contrast to most existing approaches, blended emotions with varying intensities, as proposed by many psychologists, can also be recognized and represented by the fuzzy emotion model. The model is generally applicable to other emotion recognition solutions as well.
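The blended-emotion classification this abstract describes can be illustrated with a small sketch. The triangular membership function, the single mouth-corner angle feature, and all angle ranges below are illustrative assumptions, not the parameters of the paper's trained, face-adapted model:

```python
# Illustrative sketch of fuzzy emotion classification over a facial angle.
# The membership parameters are invented for demonstration; the actual
# system adapts them to individual faces in a short training phase.

def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises on [a, b], falls on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical mapping from a mouth-corner angle (degrees) to emotions.
EMOTION_MF = {
    "happiness": (5.0, 20.0, 35.0),    # upturned mouth corners
    "neutral":   (-10.0, 0.0, 10.0),
    "sadness":   (-35.0, -20.0, -5.0), # downturned mouth corners
}

def classify(angle):
    """Return every emotion with non-zero intensity (a blended emotion)."""
    memberships = {e: triangular(angle, *p) for e, p in EMOTION_MF.items()}
    return {e: round(m, 3) for e, m in memberships.items() if m > 0.0}

print(classify(8.0))  # → {'happiness': 0.2, 'neutral': 0.2}
```

Because memberships are returned per emotion rather than as a single winner, an input between two prototypical angles yields a blend of two non-zero intensities, which is the behaviour the fuzzy model is designed to capture.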
4

Goodarzi, Farhad, Fakhrul Zaman Rokhani, M. Iqbal Saripan, and Mohammad Hamiruce Marhaban. "Mixed emotions in multi view face emotion recognition". In 2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA). IEEE, 2017. http://dx.doi.org/10.1109/icsipa.2017.8120643.

5

Nedkov, Svetoslav, and Dimo Dimov. "Emotion recognition by face dynamics". In the 14th International Conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2516775.2516794.

6

Li, Jiequan, and M. Oussalah. "Automatic face emotion recognition system". In 2010 IEEE 9th International Conference on Cybernetic Intelligent Systems (CIS). IEEE, 2010. http://dx.doi.org/10.1109/ukricis.2010.5898118.

7

Torres, Juan M. Mayor, and Evgeny A. Stepanov. "Enhanced face/audio emotion recognition". In WI '17: International Conference on Web Intelligence 2017. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3106426.3109423.

8

Shahabinejad, Mostafa, Yang Wang, Yuanhao Yu, Jin Tang, and Jiani Li. "Toward Personalized Emotion Recognition: A Face Recognition Based Attention Method for Facial Emotion Recognition". In 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021). IEEE, 2021. http://dx.doi.org/10.1109/fg52635.2021.9666982.

9

Veltmeijer, Emmeke, Charlotte Gerritsen, and Koen Hindriks. "Automatic Recognition of Emotional Subgroups in Images". In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/190.

Abstract:
Both social group detection and group emotion recognition in images are growing fields of interest, but never before have they been combined. In this work we aim to detect emotional subgroups in images, which can be of great importance for crowd surveillance or event analysis. To this end, human annotators are instructed to label a set of 171 images, and their recognition strategies are analysed. Three main strategies for labeling images are identified, with each strategy assigning either 1) more weight to emotions (emotion-based fusion), 2) more weight to spatial structures (group-based fusion), or 3) equal weight to both (summation strategy). Based on these strategies, algorithms are developed to automatically recognize emotional subgroups. In particular, K-means and hierarchical clustering are used with location and emotion features derived from a fine-tuned VGG network. Additionally, we experiment with face size and gaze direction as extra input features. The best performance comes from hierarchical clustering with emotion, location and gaze direction as input.
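The summation strategy this abstract identifies (equal weight to emotions and to spatial structure) can be sketched as a toy single-linkage clustering over per-person features. The scalar valence feature, the equal weights, and the merge threshold below are illustrative assumptions; the actual system derives emotion features from a fine-tuned VGG network and also experiments with K-means:

```python
# Toy sketch of the "summation strategy" for emotional subgroup detection:
# spatial distance and emotion distance are weighted equally, then simple
# single-linkage agglomeration merges people into subgroups.
import math

def distance(p, q, w_space=0.5, w_emotion=0.5):
    """Combined distance between two people, each (x, y, valence)."""
    d_space = math.hypot(p[0] - q[0], p[1] - q[1])
    d_emotion = abs(p[2] - q[2])
    return w_space * d_space + w_emotion * d_emotion

def cluster(people, threshold):
    """Single-linkage agglomeration: merge two clusters while their
    closest cross-cluster pair of members is within `threshold`."""
    clusters = [[i] for i in range(len(people))]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                gap = min(distance(people[a], people[b])
                          for a in clusters[i] for b in clusters[j])
                if gap <= threshold:
                    clusters[i] += clusters[j]
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
    return [sorted(c) for c in clusters]

# (x, y, valence): two happy people standing together, one sad person apart.
people = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (8.0, 0.0, -1.0)]
print(cluster(people, threshold=1.0))  # → [[0, 1], [2]]
```

Shifting the weights toward `w_emotion` or `w_space` reproduces the other two annotator strategies (emotion-based and group-based fusion) within the same skeleton.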
10

Chang, Xin, and Wladyslaw Skarbek. "From face identification to emotion recognition". In Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2019, edited by Ryszard S. Romaniuk and Maciej Linczuk. SPIE, 2019. http://dx.doi.org/10.1117/12.2536735.


Reports on the topic "Face emotion recognition"

1

Тарасова, Олена Юріївна, and Ірина Сергіївна Мінтій. Web application for facial wrinkle recognition. Кривий Ріг, КДПУ, 2022. http://dx.doi.org/10.31812/123456789/7012.

Abstract:
Facial recognition technology has been named one of the main trends of recent years. It has a wide range of applications, such as access control, biometrics, video surveillance, and many other interactive human-machine systems. Facial landmarks can be described as key characteristics of the human face. Commonly used landmarks are, for example, the eyes, the nose, or the mouth corners. Analyzing these key points is useful for a variety of computer vision use cases, including biometrics, face tracking, and emotion detection. Different methods produce different facial landmarks: some use only basic landmarks, while others bring out more detail. We use the 68-point facial markup, which is a common format for many datasets. Cloud computing creates all the necessary conditions for the successful implementation of even the most complex tasks. We created a web application using the Django framework, the Python language, and the OpenCV and Dlib libraries to recognize faces in images. The purpose of our work is to create a software system that recognizes faces in a photo and identifies wrinkles on the face. An algorithm that determines the presence, location, and geometry of various types of wrinkles on the face is implemented.
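As a sketch of how the 68-point markup mentioned in this abstract can drive wrinkle localization: in the standard 68-landmark scheme (the one used by Dlib's shape predictor), points 17-26 trace the eyebrows, so a forehead band above them is a natural search region for horizontal wrinkles. The band-height heuristic and the function name below are illustrative assumptions, not the authors' published algorithm:

```python
# Sketch of locating a wrinkle search region from 68-point facial landmarks.
# In the described system the points would come from dlib's shape_predictor;
# here the geometric step is shown on plain (x, y) tuples.
# Indexing assumption (standard 68-point markup): points 17-26 are the brows.

def forehead_region(landmarks):
    """Bounding box of the forehead band above the eyebrows, where
    horizontal wrinkles are searched. Returns (left, top, right, bottom)
    in image coordinates, with y growing downward."""
    brows = landmarks[17:27]
    left = min(x for x, _ in brows)
    right = max(x for x, _ in brows)
    bottom = min(y for _, y in brows)   # topmost brow point
    height = right - left               # heuristic: band height ~ 1/3 of brow span
    top = max(0, bottom - height // 3)
    return (left, top, right, bottom)

# Example with synthetic landmarks: 10 brow points at y = 50.
landmarks = [(0, 0)] * 17 + [(100 + i * 10, 50) for i in range(10)] + [(0, 0)] * 41
print(forehead_region(landmarks))  # → (100, 20, 190, 50)
```

With OpenCV, an edge or ridge filter applied inside the returned box would then highlight wrinkle candidates; the same pattern extends to regions beside the eye landmarks (points 36-47) for crow's-feet.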
2

Clarke, Alison, Sherry Hutchinson, and Ellen Weiss. Psychosocial support for children. Population Council, 2005. http://dx.doi.org/10.31899/hiv14.1003.

Abstract:
Masiye Camp in Matopos National Park, and Kids’ Clubs in downtown Bulawayo, Zimbabwe, are examples of a growing number of programs in Africa and elsewhere that focus on the psychological and social needs of AIDS-affected children. Given the traumatic effects of grief, loss, and other hardships faced by these children, there is increasing recognition of the importance of programs to help them strengthen their social and emotional support systems. This Horizons Report describes findings from operations research in Zimbabwe and Rwanda that examines the psychosocial well-being of orphans and vulnerable children and ways to increase their ability to adapt and cope in the face of adversity. In these studies, a person’s psychosocial well-being refers to his/her emotional and mental state and his/her network of human relationships and connections. A total of 1,258 youth were interviewed. All were deemed vulnerable by their communities because they had been affected by HIV/AIDS and/or other factors such as severe poverty.