Academic literature on the topic 'Facial expression – Evaluation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Facial expression – Evaluation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Facial expression – Evaluation"

1

Liang, Yanqiu. "Intelligent Emotion Evaluation Method of Classroom Teaching Based on Expression Recognition." International Journal of Emerging Technologies in Learning (iJET) 14, no. 04 (February 27, 2019): 127. http://dx.doi.org/10.3991/ijet.v14i04.10130.

Full text
Abstract:
To solve the problem of emotional loss in teaching and improve the teaching effect, an intelligent teaching method based on facial expression recognition was studied. The traditional active shape model (ASM) was improved to extract facial feature points. Facial expressions were then identified from the geometric features of those points using a support vector machine (SVM): in the recognition process, the geometric features and the SVM were used to generate expression classifiers. Results showed that the SVM method based on the geometric characteristics of facial feature points effectively realized the automatic recognition of facial expressions. The automatic classification of facial expressions was thus achieved, effectively addressing the problem of emotional deficiency in intelligent teaching.
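As a rough illustration of the pipeline this abstract describes, the sketch below trains an SVM on simple geometric features derived from facial landmarks. It is a minimal sketch, not the paper's implementation: landmark extraction (e.g., by an ASM) is assumed to happen elsewhere, the 68-point layout and eye-corner indices are conventions borrowed from common landmark schemes, and the data are random placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def geometric_features(landmarks):
    """Convert 2D landmarks to pose-robust geometric features:
    all pairwise distances, normalized by the inter-ocular distance."""
    # Indices 36 and 45 are the outer eye corners in the common
    # 68-point layout (an assumption, not necessarily the paper's scheme).
    iod = np.linalg.norm(landmarks[36] - landmarks[45]) + 1e-8
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)
    return dists[iu] / iod

# Placeholder data: in practice each row would come from ASM landmark
# extraction on a face image, with y the labeled expression class.
rng = np.random.default_rng(0)
X_raw = rng.random((200, 68, 2))
y = rng.integers(0, 7, size=200)

X = np.stack([geometric_features(lm) for lm in X_raw])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=1.0)  # an RBF-kernel SVM as the expression classifier
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```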
APA, Harvard, Vancouver, ISO, and other styles
2

Silvey, Brian A. "The Role of Conductor Facial Expression in Students’ Evaluation of Ensemble Expressivity." Journal of Research in Music Education 60, no. 4 (October 19, 2012): 419–29. http://dx.doi.org/10.1177/0022429412462580.

Full text
Abstract:
The purpose of this study was to explore whether conductor facial expression affected the expressivity ratings assigned to music excerpts by high school band students. Three actors were videotaped while portraying approving, neutral, and disapproving facial expressions. Each video was duplicated twice and then synchronized with one of three professional wind ensemble recordings. Participants (N = 133) viewed nine 1-min videos of varying facial expressions, actors, and excerpts and rated each ensemble's expressivity on a 10-point rating scale. Results of a one-way repeated measures ANOVA indicated that conductor facial expression significantly affected ratings of ensemble expressivity (p < .001, partial η² = .15). Post hoc comparisons revealed that participants' ensemble expressivity ratings were significantly higher for excerpts featuring approving facial expressions than for either neutral or disapproving expressions. Participants' mean ratings were lowest for neutral facial expression excerpts, indicating that an absence of facial affect influenced evaluations of ensemble expressivity most negatively.
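For readers who want to reproduce this kind of analysis, below is a minimal sketch of a one-way repeated-measures ANOVA in statsmodels. It is simplified relative to the study (one rating per participant per expression condition rather than nine videos, with random placeholder ratings), and the column names are hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
# Long format: one expressivity rating per participant per facial
# expression condition (ratings here are random placeholders).
data = pd.DataFrame({
    "participant": np.repeat(np.arange(133), 3),
    "expression": np.tile(["approving", "neutral", "disapproving"], 133),
    "rating": rng.integers(1, 11, size=133 * 3).astype(float),
})

# One-way repeated-measures ANOVA: expression is the within-subject factor.
res = AnovaRM(data, depvar="rating", subject="participant",
              within=["expression"]).fit()
print(res)  # F statistic, degrees of freedom, and p-value
```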
APA, Harvard, Vancouver, ISO, and other styles
3

Hong, Yu-Jin, Sung Eun Choi, Gi Pyo Nam, Heeseung Choi, Junghyun Cho, and Ig-Jae Kim. "Adaptive 3D Model-Based Facial Expression Synthesis and Pose Frontalization." Sensors 20, no. 9 (May 1, 2020): 2578. http://dx.doi.org/10.3390/s20092578.

Full text
Abstract:
Facial expressions are one of the important non-verbal ways used to understand human emotions during communication. Thus, acquiring and reproducing facial expressions is helpful in analyzing human emotional states. However, owing to complex and subtle facial muscle movements, facial expression modeling from images with face poses is difficult to achieve. To handle this issue, we present a method for acquiring facial expressions from a non-frontal single photograph using a 3D-aided approach. In addition, we propose a contour-fitting method that improves the modeling accuracy by automatically rearranging 3D contour landmarks corresponding to fixed 2D image landmarks. The acquired facial expression input can be parametrically manipulated to create various facial expressions through a blendshape or expression transfer based on the FACS (Facial Action Coding System). To achieve a realistic facial expression synthesis, we propose an exemplar-texture wrinkle synthesis method that extracts and synthesizes appropriate expression wrinkles according to the target expression. To do so, we constructed a wrinkle table of various facial expressions from 400 people. As one of the applications, we proved that the expression-pose synthesis method is suitable for expression-invariant face recognition through a quantitative evaluation, and showed the effectiveness based on a qualitative evaluation. We expect our system to be a benefit to various fields such as face recognition, HCI, and data augmentation for deep learning.
APA, Harvard, Vancouver, ISO, and other styles
4

Mao, Jun. "Evaluation of Classroom Teaching Effect Based on Facial Expression Recognition." Journal of Contemporary Educational Research 5, no. 12 (December 23, 2021): 63–68. http://dx.doi.org/10.26689/jcer.v5i12.2855.

Full text
Abstract:
The classroom is an important environment for communication in teaching events, and both schools and society should pay more attention to it. In the traditional teaching classroom, however, there is a relative lack of communication and exchange. Facial expression recognition is a high-precision branch of facial recognition technology: even in large teaching scenes, it can capture changes in students' facial expressions and accurately analyze their concentration. This paper expounds the concept of this technology and studies the evaluation of classroom teaching effects based on facial expression recognition.
APA, Harvard, Vancouver, ISO, and other styles
5

Zecca, M., T. Chaminade, M. A. Umilta, K. Itoh, M. Saito, N. Endo, Y. Mizoguchi, et al. "2A1-O10 Emotional Expression Humanoid Robot WE-4RII: Evaluation of the perception of facial emotional expressions by using fMRI." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2007 (2007): _2A1-O10_1-_2A1-O10_4. http://dx.doi.org/10.1299/jsmermd.2007._2a1-o10_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ramis, Silvia, Jose Maria Buades, and Francisco J. Perales. "Using a Social Robot to Evaluate Facial Expressions in the Wild." Sensors 20, no. 23 (November 24, 2020): 6716. http://dx.doi.org/10.3390/s20236716.

Full text
Abstract:
In this work an affective computing approach is used to study human-robot interaction, using a social robot to validate facial expressions in the wild. Our overall goal is to evaluate whether a social robot can interact in a convincing manner with human users to recognize their potential emotions through facial expressions, contextual cues and bio-signals. In particular, this work focuses on analyzing facial expression. A social robot is used to validate a pre-trained convolutional neural network (CNN) that recognizes facial expressions. Facial expression recognition plays an important role in the recognition and understanding of human emotion by robots, and robots equipped with expression recognition capabilities can also be a useful tool for getting feedback from users. The designed experiment allows a trained facial expression network to be evaluated with a social robot in a real environment. The paper compares the CNN's accuracy against human experts, and analyzes the interaction, attention and difficulty of performing particular expressions for 29 non-expert users. In the experiment, the robot leads the users to perform different facial expressions in a motivating and entertaining way, and at the end of the experiment the users are quizzed about their experience with the robot. Finally, a set of experts and the CNN classify the expressions. The results obtained support the claim that a social robot is an adequate interaction paradigm for the evaluation of facial expressions.
APA, Harvard, Vancouver, ISO, and other styles
7

Asano, Hirotoshi, and Hideto Ide. "Facial-Expression-Based Arousal Evaluation by NST." Journal of Robotics and Mechatronics 22, no. 1 (February 20, 2010): 76–81. http://dx.doi.org/10.20965/jrm.2010.p0076.

Full text
Abstract:
Fatigue accumulation and poor attention can cause accidents in situations such as flight control and automobile operation. This has contributed to international interest in intelligent transport system (ITS) research and development. We evaluated human sleepiness and arousal based on facial thermal image analysis, using nasal skin temperature at different levels of sleepiness during vehicle driving, and found that nasal skin temperature can replace facial expression in evaluating sleep transitions.
APA, Harvard, Vancouver, ISO, and other styles
8

Mayer, C., M. Eggers, and B. Radig. "Cross-database evaluation for facial expression recognition." Pattern Recognition and Image Analysis 24, no. 1 (March 2014): 124–32. http://dx.doi.org/10.1134/s1054661814010106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Santra, Arpita, Vivek Rai, Debasree Das, and Sunistha Kundu. "Facial Expression Recognition Using Convolutional Neural Network." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 1081–92. http://dx.doi.org/10.22214/ijraset.2022.42439.

Full text
Abstract:
Human-computer interaction has been an important field of study for ages. Humans share a universal and fundamental set of emotions which are exhibited through consistent facial expressions. If a computer could understand the feelings of humans, it could provide appropriate services based on the feedback received. An algorithm that performs detection, extraction, and evaluation of these facial expressions allows for automatic recognition of human emotion in images and videos. Automatic recognition of facial expressions can be an important component of natural human-machine interfaces; it may also be used in behavioural science and in clinical practice. In this work we give an overview of past research on emotion recognition using facial expressions, along with our approach to solving the problem. Approaches to facial expression recognition use classifiers such as the Support Vector Machine (SVM) and the Convolutional Neural Network (CNN) to classify emotions based on certain regions of interest on the face, such as the lips, lower jaw, eyebrows, and cheeks. The Kaggle facial expression dataset with seven facial expression labels (happy, sad, surprise, fear, anger, disgust, and neutral) is used in this project. The system achieved 56.77% accuracy and 0.57 precision on the testing dataset. Keywords: Facial Expression Recognition, Convolutional Neural Network, Deep Learning.
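As an illustration of the kind of model this abstract describes, below is a minimal Keras sketch of a CNN for seven-class expression recognition on 48x48 grayscale inputs (the format of the Kaggle FER-2013 data). The architecture and hyperparameters are illustrative assumptions, not the authors' network.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A small CNN: two conv/pool stages, then a dense classifier head.
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),                      # 48x48 grayscale face crop
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(128, activation="relu"),
    # 7 classes: happy, sad, surprise, fear, anger, disgust, neutral
    layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training data assumed given as integer-labeled arrays:
# model.fit(x_train, y_train, validation_split=0.1, epochs=30)
```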
APA, Harvard, Vancouver, ISO, and other styles
10

Mahmood, Mayyadah R., Maiwan B. Abdulrazaq, Subhi R. M. Zeebaree, Abbas Kh Ibrahim, Rizgar Ramadhan Zebari, and Hivi Ismat Dino. "Classification techniques’ performance evaluation for facial expression recognition." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 2 (February 1, 2020): 1176. http://dx.doi.org/10.11591/ijeecs.v21.i2.pp1176-1184.

Full text
Abstract:
Facial expression recognition, a recently developed method in computer vision, is founded upon the idea of analyzing the facial changes that are witnessed due to emotional impacts on an individual. This paper provides a performance evaluation of a set of supervised classifiers used for facial expression recognition based on a minimum of features selected by chi-square. These features are the most iconic and influential ones that have tangible value for result determination. The six highest-ranked features are applied to six classifiers, including the multi-layer perceptron, support vector machine, decision tree, random forest, radial basis function, and k-nearest neighbor, to identify the most accurate one when the minimum number of features is utilized. This is done by analyzing and appraising the classifiers' performance. CK+ is used as the research's dataset. Random forest, with a total accuracy ratio of 94.23%, is shown to be the most accurate classifier.
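To make the evaluation protocol concrete, here is a minimal scikit-learn sketch: chi-square selection of the six top-ranked features followed by a cross-validated comparison of several of the listed classifiers. The feature matrix and labels are random placeholders (chi-square requires non-negative features), and scikit-learn has no direct radial-basis-function network, so that classifier is omitted from the sketch.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X = rng.random((300, 50))               # placeholder features (chi2 needs >= 0)
y = rng.integers(0, 7, size=300)        # placeholder expression labels

# Keep only the six highest-ranked features by chi-square score.
X6 = SelectKBest(chi2, k=6).fit_transform(X, y)

classifiers = {
    "MLP": MLPClassifier(max_iter=1000),
    "SVM": SVC(),
    "Decision tree": DecisionTreeClassifier(),
    "Random forest": RandomForestClassifier(),
    "k-NN": KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X6, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f}")
```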
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Facial expression – Evaluation"

1

Van der Heide, Ewoud. "Using games as educational tools : An evaluation of a game for children to train facial expression recognition." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231478.

Full text
Abstract:
Facial expressions play a large role in non-verbal communication. Research shows promising results for using games to improve facial expression recognition in children with autism spectrum disorder. Games are effective educational tools and are successful in motivating students. Using a game to improve facial expression recognition could be beneficial for all children, as it reduces the risk of problematic behavior and mental health issues. For this study a game to train facial expression recognition in children was developed and evaluated; 54 children aged 8 and 11 tested the game on two occasions. The goal of the evaluation was to determine which factors influence performance and engagement in the game, and whether there are expressions that are often identified incorrectly. Additionally, the children's attitude towards the game was evaluated. The results show that performance is affected by difficulty, context and intensity. The children who showed the most engagement also performed better in the beginning of the game; however, the relationship between performance and engagement is complex, and no linear correlation between the two was found. Unfortunately it was not possible to evaluate the effect of rewards on the children's engagement, but the children were generally positive about the rewards. The confusion of expressions was in line with earlier research, but not as symmetrical. The players were generally positive about the game. Further research is needed to determine the long-term learning effects of the game and to assess ways to engage players more.
APA, Harvard, Vancouver, ISO, and other styles
2

Ferreira, Bárbara Carvalho. "Expressões faciais de emoções de crianças com deficiência visual e videntes : avaliação e intervenção sob a perspectiva das Habilidades Sociais." Universidade Federal de São Carlos, 2012. https://repositorio.ufscar.br/handle/ufscar/5970.

Full text
Abstract:
The ability to express emotions via facial expressions is an indispensable component of several required childhood social skills. Facial expressions are therefore crucial for successful social relations and for the quality of life of both typically developing children and persons with special educational needs, such as visually impaired children. As facial expression of emotions and social skills are profoundly connected, there is a demand for interventions directed at maintaining, modulating and enhancing facial expressions topographically and functionally. In order to produce socially valid and effective interventions, planning programs which produce indicators of external and internal validity is of utmost importance; in other words, interventions must be carried out with reliable measures and well-delimited procedures so that the acquired repertoire may be generalized and maintained. In view of the social, methodological and empirical issues that underlie these areas (facial expressions and social skills), the present study aimed at evaluating the impact of a program which trained the facial expression of emotions on the social skills repertoire of blind, low-vision and sighted children, in terms of (1) acquiring, enhancing and maintaining the discrimination of the characteristic facial signs of each basic emotion; (2) acquiring, enhancing and maintaining facial expression of basic emotions, using photo and video registers; (3) the quality of facial expressions of basic emotions registered in photos; (4) the ability to express themselves emotionally through face, actions and voice, evaluated by parents and teachers; (5) acquiring, enhancing and maintaining social skills, according to self-evaluation as well as parents' and teachers' evaluation. A single-case research design with pretest and posttest, multiple probes and replications within and between subjects was adopted. Participants were 3 blind children, 3 children with low vision and 3 sighted children. The intervention program was carried out individually and lasted for 21 sessions. The evaluation was carried out by 2 judges, the children's parents and teachers, and the children themselves. The instruments used were the Social Skills Rating System (SSRS-BR), the Checklist for Evaluation and Probe of Emotional Expressiveness, the Checklist for Assessing Facial and Emotional Expression, the Inventory of Facial Expression of Emotions by Pictures and Films, the Protocol for Assessing the Quality of Facial Expression of Emotions, and the Protocol for Emotional Expressiveness Assessment by Facial Expressions and Non-Verbal Components of Emotions. Except for the SSRS-BR, all of the instruments were created especially for the present study. Data analysis was carried out as follows: descriptive statistical analysis was performed for each individual (subjects as their own control), and the JT Method (clinical significance and reliable change index) was used to assess the SSRS-BR data. Results indicated that blind children, followed by children with low vision and then typically developing children, presented more difficulties in discriminating the facial signs characteristic of the six basic emotions in the evaluation prior to the intervention. The percentage of correct answers of all children in the probes after the intervention was between 83.3% and 100%.
In addition, parents, teachers and judges evaluated the participants' facial expression repertoire as having improved and been maintained after the intervention, as had the quality of facial expressiveness. All participants improved their general score in social skills, with some reliable positive changes (improvement) and clinically significant changes, evidencing the enhancement of the participants' repertoire observed after the intervention. In summary, the intervention program was effective for improving and maintaining the facial expression of emotions and some classes of social skills, especially those related to emotional expressiveness.
APA, Harvard, Vancouver, ISO, and other styles
3

Grossard, Charline. "Evaluation et rééducation des expressions faciales émotionnelles chez l’enfant avec TSA : le projet JEMImE Serious games to teach social interactions and emotions to individuals with autism spectrum disorders (ASD) Children facial expression production : influence of age, gender, emotion subtype, elicitation condition and culture." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS625.

Full text
Abstract:
Autism spectrum disorder (ASD) is characterized by difficulties in social skills, such as emotion recognition and production. Several studies have focused on emotional facial expression (EFE) recognition, but few have worked on its production, either in typical children or in children with ASD. Nowadays, information and communication technologies are used to work on social skills in ASD, but few studies using these technologies focus on EFE production: a literature review found only 4 games addressing it. Our final goal was to create the serious game JEMImE to work on EFE production with children with ASD using automatic feedback. We first created a dataset of EFE of typical children and children with ASD to train an EFE recognition algorithm and to study their production skills. Several factors modulate them, such as age, type of emotion, and culture. We observed that human judges and the algorithm assess the quality of the EFE of children with ASD as poorer than the EFE of typical children, and that the EFE recognition algorithm needs more features to classify their EFE. We then integrated the algorithm into JEMImE to give the child visual feedback in real time to correct his or her productions. A pilot study including 23 children with ASD showed that children are able to adapt their productions thanks to the feedback given by the algorithm, and illustrated an overall good subjective experience with JEMImE. The beta version of JEMImE shows promising potential and encourages further development of the game in order to offer longer game exposure to children with ASD and so allow a reliable assessment of the effect of this training on their production of EFE.
APA, Harvard, Vancouver, ISO, and other styles
4

Johansson, David. "Design and evaluation of an avatar-mediated system for child interview training." Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-40054.

Full text
Abstract:
There is an apparent problem with children being abused in different ways in their everyday lives, and a lack of education related to these issues among the working adults around these children, for example social workers or teachers. There are formal courses in child interview training that teach participants how to talk to children in a correct manner. Avatar mediation enables new methods of practicing this communication without having to involve a real child or role-play face-to-face with another adult. This study explored how a system could be designed to enable educational practice sessions in which a child interview expert is mediated through avatars in the form of virtual children. Prototypes were developed in order to evaluate the feasibility of the scenario regarding methods for controlling the avatar and how the avatar was perceived by the participants. It was found that there is clear value in the educational approach of using avatar mediation. From the perspective of the interactor, it was found that a circular radial interface for the graphical representation of different emotions made it possible to control a video-based avatar while simultaneously having a conversation with the participant. The results of the study include a proposed design of an interface, a description of the underlying system functionality, and suggestions on how avatar behavior can be characterized in order to achieve a high level of presence for the participant.
APA, Harvard, Vancouver, ISO, and other styles
5

Reverdy, Clément. "Annotation et synthèse basée données des expressions faciales de la Langue des Signes Française." Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS550.

Full text
Abstract:
French Sign Language (LSF) represents part of the identity and culture of the deaf community in France. One way to promote this language is to generate signed content through virtual characters called signing avatars. The system we propose is part of a more general project of gestural synthesis of LSF by concatenation, which generates new sentences by editing a corpus of annotated motion data captured via a marker-based motion capture (MoCap) device. In LSF, facial expressivity is particularly important since it conveys numerous kinds of information (e.g., affective, clausal or adjectival). This thesis aims to integrate the facial aspect of LSF into the concatenative synthesis system described above. Thus, a processing pipeline is proposed, from data capture via a MoCap device to facial animation of the avatar from these data, and to automatic annotation of the corpus thus constituted. The first contribution of this thesis concerns the employed methodology and the representation by blendshapes, both for the synthesis of facial animations and for automatic annotation. It enables the analysis/synthesis scheme to be processed at an abstract level, with homogeneous and meaningful descriptors. The second contribution concerns the development of an automatic annotation method based on the recognition of expressive facial expressions using machine learning techniques. The last contribution lies in the synthesis method, which is expressed as a rather classic optimization problem, but in which we have included a Laplacian-based energy that quantifies the deformations of a surface as a regularization energy.
APA, Harvard, Vancouver, ISO, and other styles
6

Leitch, Kristen Allison. "Evaluating Consumer Emotional Response to Beverage Sweeteners through Facial Expression Analysis." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/73695.

Full text
Abstract:
Emotional processing and characterization of internal and external stimuli is believed to play an integral role in consumer acceptance or rejection of food products. In this research three experiments were completed with the ultimate goal of adding to the growing body of research pertaining to food, emotions and acceptance, using traditional affective sensory methods in combination with implicit (uncontrollable) and explicit (cognitive) emotional measures. Sweetness equivalence of several artificial (acesulfame potassium, saccharin and sucralose) and natural (42% high fructose corn syrup and honey) sweeteners was established relative to a 5% sucrose solution. Differences in consumer acceptability and emotional response to sucrose (control) and four equi-sweet alternatives (acesulfame potassium, high fructose corn syrup, honey, and sucralose) in tea were evaluated using a 9-point hedonic scale, a check-all-that-apply (CATA) emotion term questionnaire (explicit), and automated facial expression analysis (AFEA) (implicit). Facial expression responses and emotion term categorization based on selection frequencies were able to adequately discern differences in emotional response as it related to hedonic liking between sweetener categories (artificial; natural). The potential influence of varying product information on consumer acceptance and emotional responses was then evaluated in relation to three sweeteners (sucrose, acesulfame potassium, and HFCS) in tea solutions. Observed differences in liking and emotional term characterization based on the validity of product information for sweeteners were attributed to cognitive dissonance: false informational cues had an observed dampening effect on the implicit emotional response to alternative sweeteners. Significant moderate correlations between liking and several basic emotions supported the belief that implicit emotions are contextually specific. Limitations pertaining to AFEA data collection and the emotional interpretation of sweeteners include high panelist variability (within and across panelists), calibration techniques, video quality, software sensitivity, and a general lack of consistency concerning methods of analysis. When used in conjunction with traditional affective methodology and cognitive emotional characterization, AFEA provides an additional layer of valued information about the consumer food experience.
Master of Science in Life Sciences
APA, Harvard, Vancouver, ISO, and other styles
7

Schiavenato, Martin. "EVALUATING NEONATAL FACIAL PAIN EXPRESSION: IS THERE A PRIMAL FACE OF PAIN?" Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3010.

Full text
Abstract:
Pain assessment continues to be poorly managed in the clinical arena. A review of the communication process in pain assessment is carried out, and the hierarchical approach often recommended in the literature, with self-report as its "gold standard," is criticized as limited and simplistic. A comprehensive approach to pain assessment is recommended, and a model is proposed that conceptualizes pain assessment as a complex transaction with various patient- and clinician-dependent factors. Attention is then focused on the pediatric patient, whose pain assessment is often dependent on nonverbal communicative action. The clinical approaches to pain assessment in this population, mainly the use of behavioral/observational pain scales and facial pain scales, are explored. The primal face of pain (PFP) is identified and proposed theoretically as an important link in the function of facial pain scales. Finally, the existence of the PFP is investigated in a sample of 57 neonates across differences in sex and ethnic origin, while controlling for potentially confounding factors. Facial expression in response to a painful stimulus is measured based on the Neonatal Facial Coding System (NFCS) and an innovative computer-based methodology. No statistically significant differences in infant facial display were found, thereby supporting the existence of the PFP.
Ph.D., School of Nursing (Nursing PhD program)
APA, Harvard, Vancouver, ISO, and other styles
8

Likowski, Katja U. [Verfasser], and Paul [Akademischer Betreuer] Pauli. "Facial mimicry, valence evaluation or emotional reaction? : mechanisms underlying the modulation of congruent and incongruent facial reactions to emotional facial expressions / Katja U. Likowski. Betreuer: Paul Pauli." Würzburg : Universitätsbibliothek der Universität Würzburg, 2011. http://d-nb.info/1014891884/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Merchak, Rachel J. "Recognition of Facial Expressions of Emotion: The Effects of Anxiety, Depression, and Fear of Negative Evaluation." Wittenberg University Honors Theses / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=wuhonors1398956266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Isabella, Giuliana. "The influence of emotional contagion on products evaluation." Repositório Institucional do FGV, 2011. http://hdl.handle.net/10438/8195.

Full text
Abstract:
Emotional contagion is the mechanism that includes mimicking and the automatic synchronization of facial expressions, vocalizations, postures, and movements with another person and, consequently, convergence of emotions between sender and receiver. Research on this mechanism, usually conducted in the fields of psychology and marketing, tends to investigate face-to-face interactions. However, the question remains to what extent, if any, emotional contagion may occur with facial expressions in photos, since many purchase situations arise from catalogues or websites. This thesis aims to address this gap and, in addition, to verify whether emotional contagion is more common in females than in males, as stated in previous studies. Emotions have been studied because it is intuitively apparent that they affect the dynamics of the interaction between a salesperson and customers (Verbeke, 1997); in other words, emotions may significantly affect consumer behavior. Therefore, this thesis also verified whether the facial expressions that transmit emotions could be associated with product evaluations. To investigate these questions, an experiment was conducted with 171 participants, who were exposed to either smiling (positive emotion) or neutral advertising. The differences between the individual advertisements were limited to the facial expressions of the figures in the advertisements (either smiling or neutral/without smiling). One specialist and two students analyzed videotaped records of the participants' responses and found that participants who saw the positive stimulus mimicked the picture (smiling back), confirming emotional contagion from photos (the first hypothesis). The second hypothesis concerned differences based on gender: the results demonstrated no significant difference between genders, with females and males equally subject to emotional contagion. The third hypothesis was whether the positive (versus neutral) emotions acquired from the facial expression in the photo are associated with a positive evaluation of the product also displayed in the photo. The evidence shows that the advertisement with a positive expression improved participants' attitude, sympathy, perceived reliability, and purchase intention compared with those exposed to the neutral condition. The analysis therefore concludes that facial expressions displayed in photos produce emotional contagion and may influence product evaluation. A discussion of the theoretical and practical implications and limitations of these findings is presented.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Facial expression – Evaluation"

1

Ichino, Anna, and Greg Currie. Truth and Trust in Fiction. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198805403.003.0004.

Full text
Abstract:
This chapter examines two pathways through which fictions may affect beliefs: by invading readers’ cognitive system via heuristics and other sub-rational devices, and by expressing authorial beliefs that readers take to be reliable. Focusing mostly on the latter pathway, the chapter distinguishes fiction as a mechanism for the transmission of uncontroversial factual information from fiction as a means of expressing distinctive perspectives on evaluative propositions. In both cases, the inferences on which readers rely are precarious, and especially so with evaluative cases where there is little hope of independent verification. Moreover, trust, which in other contexts can increase the reliability of beliefs transmitted from person to person, cannot be much depended on when it comes to belief transmission from author to reader.
APA, Harvard, Vancouver, ISO, and other styles
2

Bucy, Erik P., and Patrick Stewart. The Personalization of Campaigns: Nonverbal Cues in Presidential Debates. Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190228637.013.52.

Full text
Abstract:
Nonverbal cues are important elements of persuasive communication whose influence in political debates is receiving renewed attention. Recent advances in political debate research have been driven by biologically grounded explanations of behavior that draw on evolutionary theory and view televised debates as contests for social dominance. The application of biobehavioral coding to televised presidential debates opens new vistas for investigating this time-honored campaign tradition by introducing a systematic and readily replicated analytical framework for documenting the unspoken signals that are a continuous feature of competitive candidate encounters. As research utilizing biobehavioral measures of presidential debates and other political communication progresses, studies are becoming increasingly characterized by the use of multiple methodologies and the merging of disparate data into combined systems of coding that support predictive modeling.

Key elements of nonverbal persuasion include candidate appearance, communication style and behavior, as well as gender dynamics that regulate candidate interactions. Together, the use of facial expressions, voice tone, and bodily gestures form uniquely identifiable display repertoires that candidates perform within televised debate settings. Also at play are social and political norms that govern candidate encounters. From an evaluative standpoint, the visual equivalent of a verbal gaffe is the commission of a nonverbal expectancy violation, which draws viewer attention and interferes with information intake. Through second screens, viewers are able to register their reactions to candidate behavior in real time, and merging biobehavioral and social media approaches to debate effects is showing how such activity can be used as an outcome measure to assess the efficacy of candidate nonverbal communication during televised presidential debates.

Methodological approaches employed to investigate nonverbal cues in presidential debates have expanded well beyond the time-honored technique of content analysis to include lab experiments, focus groups, continuous response measurement, eye tracking, vocalic analysis, biobehavioral coding, and use of the Facial Action Coding System to document the muscle movements that comprise leader expressions. Given the tradeoffs and myriad considerations involved in analyzing nonverbal cues, critical issues in measurement and methodology must be addressed when conducting research in this evolving area. With automated coding of nonverbal behavior just around the corner, future research should be designed to take advantage of the growing number of methodological advances in this rapidly evolving area of political communication research.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Facial expression – Evaluation"

1

Poria, Swarup, Ananya Mondal, and Pritha Mukhopadhyay. "Evaluation of the Intricacies of Emotional Facial Expression of Psychiatric Patients Using Computational Models." In Understanding Facial Expressions in Communication, 199–226. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lampropoulos, Aristomenis S., Ioanna-Ourania Stathopoulou, and George A. Tsihrintzis. "Comparative performance evaluation of classifiers for Facial Expression Recognition." In New Directions in Intelligent Interactive Multimedia Systems and Services - 2, 253–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02937-0_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hori, Maiya, Shogo Kawai, Hiroki Yoshimura, and Yoshio Iwai. "Local Feature Evaluation for a Constrained Local Model Framework." In Face and Facial Expression Recognition from Real World Videos, 11–19. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13737-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fernández, Carles, Ivan Huerta, and Andrea Prati. "A Comparative Evaluation of Regression Learning Algorithms for Facial Age Estimation." In Face and Facial Expression Recognition from Real World Videos, 133–44. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13737-7_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shirazi, Mohammad Shokrolah, and Sagun Bati. "Evaluation of the Off-the-Shelf CNNs for Facial Expression Recognition." In Lecture Notes in Networks and Systems, 466–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98015-3_32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gaikwad, Prajwal, Sanskruti Pardeshi, Shreya Sawant, Shrushti Rudrawar, and Ketaki Upare. "Intelligent Facial Expression Evaluation to Assess Mental Health Through Deep Learning." In Soft Computing and its Engineering Applications, 290–301. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05767-0_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Xing, Lijun Yin, Daniel Hipp, and Peter Gerhardstein. "Evaluation of Perceptual Biases in Facial Expression Recognition by Humans and Machines." In Advances in Visual Computing, 809–19. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-14364-4_78.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Siddiqi, Muhammad Hameed, Maqbool Ali, Muhammad Idris, Oresti Banos, Sungyoung Lee, and Hyunseung Choo. "A Novel Dataset for Real-Life Evaluation of Facial Expression Recognition Methodologies." In Advances in Artificial Intelligence, 89–95. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-34111-8_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Patil, Manasi N., Brijesh Iyer, and Rajeev Arya. "Performance Evaluation of PCA and ICA Algorithm for Facial Expression Recognition Application." In Advances in Intelligent Systems and Computing, 965–76. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0448-3_81.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Okubo, Masashi, and Shun Tamura. "A Proposal of Video Evaluation Method Using Facial Expression for Video Recommendation System." In Human Interface and the Management of Information. Information in Intelligent Systems, 254–68. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22649-7_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Facial expression – Evaluation"

1

Botley, T. I., and D. Makris. "Evaluation of facial expression recognition." In 3rd European Conference on Visual Media Production (CVMP 2006). Part of the 2nd Multimedia Conference 2006. IEE, 2006. http://dx.doi.org/10.1049/cp:20061960.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sajjanhar, Atul, ZhaoQi Wu, Juan Chen, Quan Wen, and Reziwanguli Xiamixiding. "Experimental evaluation of facial expression recognition." In 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI). IEEE, 2017. http://dx.doi.org/10.1109/cisp-bmei.2017.8302001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Yuzhe, and Jian Zhu. "Dielectric elastomer actuators for facial expression." In SPIE Smart Structures and Materials + Nondestructive Evaluation and Health Monitoring, edited by Yoseph Bar-Cohen and Frédéric Vidal. SPIE, 2016. http://dx.doi.org/10.1117/12.2218838.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tsai, Pohsiang, Tich Phuoc Tran, Tom Hintz, and Tony Jan. "An evaluation of bi-modal facial appearance+facial expression face biometrics." In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wu, Ying, Qiang Huang, Xuechao Chen, Zhangguo Yu, Libo Meng, Gan Ma, Peisen Zhang, and Weimin Zhang. "Design and similarity evaluation on humanoid facial expression." In 2015 IEEE International Conference on Mechatronics and Automation (ICMA). IEEE, 2015. http://dx.doi.org/10.1109/icma.2015.7237648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tang, Xiao-Yu, Wang-Yue Peng, Si-Rui Liu, and Jian-Wen Xiong. "Classroom Teaching Evaluation Based on Facial Expression Recognition." In ICEIT 2020: 2020 9th International Conference on Educational and Information Technology. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3383923.3383949.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Alomar, Antonia, Araceli Morales, Antonio R. Porras, Marius G. Linguraru, Gemma Piella, and Federico Sukno. "Transferring 3D facial expressions from adults to children." In WSCG'2022 - 30. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2022. Západočeská univerzita, 2022. http://dx.doi.org/10.24132/csrn.3201.14.

Full text
Abstract:
Diagnosis of craniofacial conditions is shifting towards pre- and peri-natal stages, since early assessment has been shown to be crucial for the effective treatment of functional and developmental aspects in children. 3D Morphable Models are a valuable tool for such evaluation. However, limited data availability on 3D newborn geometry and highly variable imaging environments challenge the construction of 3D baby face models. Our hypothesis is that constructing a bi-linear baby face model that decouples identity and expression enables improved craniofacial and brain function assessments. Given that adults' and infants' facial expression configurations are very similar, and that 3D facial expressions in babies are difficult to scan in a controlled manner, we propose transferring the facial expressions from the available FaceWarehouse (FW) database to baby scans in order to construct a baby-specific bi-linear expression model. First, we defined a spatial mapping between the BabyFM and the FW. Then, we propose an automatic neutralization to remove the expressions from the facial scans. Finally, we apply expression transfer to obtain a complete data tensor. We test the performance and generalization of the resulting bi-linear model with a test set. Results show that the obtained model allows us to successfully and realistically manipulate the facial expressions of babies while keeping them decoupled from identity variations.
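As a sketch of what a bi-linear (identity x expression) face model makes possible, the snippet below synthesizes meshes by contracting a core tensor with separate identity and expression weight vectors, so that expression can be varied while identity is held fixed. Everything here (tensor shapes, weight dimensions, random data) is an illustrative assumption, not the BabyFM's actual construction.

```python
import numpy as np

n_verts, n_id, n_expr = 1000, 10, 5              # illustrative dimensions
rng = np.random.default_rng(3)
core = rng.random((3 * n_verts, n_id, n_expr))   # placeholder core tensor

def synthesize(core, w_id, w_expr):
    """Contract the core tensor with identity and expression weights
    to obtain a 3D face mesh of shape (n_verts, 3)."""
    verts = np.einsum("vie,i,e->v", core, w_id, w_expr)
    return verts.reshape(-1, 3)

w_id = rng.random(n_id)
w_id /= w_id.sum()                                   # one fixed identity
neutral = synthesize(core, w_id, np.eye(n_expr)[0])  # expression 0 as "neutral"
smile = synthesize(core, w_id, np.eye(n_expr)[1])    # expression 1 as "smile"
# Same identity, different expression: the meshes differ vertex-wise.
print(neutral.shape, float(np.abs(smile - neutral).max()))
```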
APA, Harvard, Vancouver, ISO, and other styles
8

Takahashi, Naoki, Yuri Hamada, and Hiroko Shoji. "Analysis of an actors’ emotions and audience's impression of facial expression." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1001774.

Full text
Abstract:
1. Introduction
Non-verbal information is very important in all communication. According to Mehrabian's study, facial cues carry 55%, vocal cues 38%, and verbal content 7% of the information people receive. Technologies for computer-mediated communication promoted communication that does not require being face-to-face (e.g., e-mail). Recently, however, video communication systems that enable conversations while looking at each other's faces are spreading, and understanding the impressions conveyed by facial expressions is becoming important again.
Our purpose is to examine whether a facial expression can convey the expresser's emotion to another person. In the experiment, subjects evaluated their impressions of images of facial expressions, and we compared these evaluations with the actor's own evaluation of the expressions she created.
2. Methods
In the experiment, we made images of the actor's face corresponding to emotional keywords and showed them to subjects as an audience. The actor was one female volunteer in her 20s. The audience comprised fifteen male and fifteen female volunteers in their 20s; fifteen audience members were acquainted with the actor and the rest saw her for the first time in this experiment.
The actor was instructed to create facial expressions for eight emotions: "surprising," "frustrating," "exciting," "guarding," "relaxing," "angry," "fear," and "boring." Stimulus images were bust shots (photographs of the actor from the bust up) of the actor creating each expression. After the photography, she was instructed to evaluate her own emotions in the images on two eleven-point Likert scales, from unpleasant (0) to pleasant (10) and from deactivated (0) to activated (10). Similarly, the audience evaluated her emotion after looking at the images on the same two scales. All questionnaires were built with Google Forms and administered via an online survey.
In the analysis, we assumed that differences between the evaluations of the actor and the audience indicate gaps between the emotion the actor expressed and the emotion the audience perceived. We examined the significance of these differences using a two-sided t-test (significance level = 0.05).
3. Results and discussion
Evaluations by the actor herself (N=1) and by the audience (averages of N=30) were generally similar, but there were significant differences (p<0.05) in frustrating, guarding, relaxing, angry, comfortable, fear, and boring for valence, and in all emotions for arousal. These results show that the actor's emotion was conveyed to the audience roughly, but that its degree was not impressed on the audience accurately.
We assumed an emotional plane consisting of the two axes of valence and arousal, using Russell's circumplex model as a reference, and calculated the distances between the actor's emotion point and the audience's impression point on the plane to compare differences by sex and acquaintance. Male audience members' evaluations were relatively closer to the actor's emotion than female members', but no significant difference by sex (p>0.05) was found for any image. On the other hand, acquainted audience members' evaluations were relatively closer to the actor's emotion than unacquainted members', and there were significant differences by acquaintance in frustrating (p=0.029), angry (p=0.029), and comfortable (p=0.040).
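The comparison of the actor's self-rating with the audience's ratings lends itself to a short worked example. The sketch below is a minimal illustration, not the study's actual analysis: it tests hypothetical valence ratings for a single image against the actor's self-rating with a two-sided one-sample t-test at the 0.05 level; all numbers are invented placeholders.

```python
import numpy as np
from scipy import stats

# Actor's self-rating of valence for one stimulus image (0-10 scale).
actor_valence = 3.0

# Thirty hypothetical audience ratings of the same image.
rng = np.random.default_rng(2)
audience_valence = rng.integers(2, 8, size=30).astype(float)

# Two-sided one-sample t-test of audience ratings against the actor's value.
t, p = stats.ttest_1samp(audience_valence, popmean=actor_valence)
print(f"t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```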
APA, Harvard, Vancouver, ISO, and other styles
9

Sagbas, Ensar Arif, Aybars Ugur, and Serdar Korukoglu. "Performance Evaluation of Ensemble Learning Methods for Facial Expression Recognition." In 2019 Innovations in Intelligent Systems and Applications Conference (ASYU). IEEE, 2019. http://dx.doi.org/10.1109/asyu48272.2019.8946428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Ligang, Dian Tjondronegoro, and Vinod Chandran. "Evaluation of Texture and Geometry for Dimensional Facial Expression Recognition." In 2011 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2011. http://dx.doi.org/10.1109/dicta.2011.110.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Facial expression – Evaluation"

1

Czosnek, Henryk Hanokh, Dani Zamir, Robert L. Gilbertson, and Lucas J. William. Resistance to Tomato Yellow Leaf Curl Virus by Combining Expression of a Natural Tolerance Gene and a Dysfunctional Movement Protein in a Single Cultivar. United States Department of Agriculture, June 2000. http://dx.doi.org/10.32747/2000.7573079.bard.

Full text
Abstract:
Background: The tomato yellow leaf curl disease (TYLCV) has been a major deterrent to tomato production in Israel for the last 20 years. This whitefly-transmitted viral disease was found in the Caribbean Islands in the early 1990s, probably as an import from the Middle East. In the late 1990s, the virus spread to the US and is now conspicuous in Florida and Georgia.

Objectives: Because of the urgency of the TYLCV epidemics, there was a compelling need to mobilize scientists to develop a tomato variety resistant to TYLCV. The major goal was to identify the virus movement protein (MP) and to express a defective form of MP in a cultivar that contained the natural Ty-1 resistance gene. The research included: 1. cloning the TYLCV isolate from the Dominican Republic (DR), which (or a close variant) is also present in the continental USA; 2. defining the role of the MP; 3. mutating the putative MP gene; 4. introducing the modified gene into an advanced Ty-1 line; 5. testing the transgenic plants in the field. The pressing threat to tomato production in the US resulted in an extension of the objectives: more emphasis was placed on characterization of TYLCV in the DR, on determination of the epidemiology of the virus in the DR, and on using new TYLCV resistance sources for tomato breeding.

Achievements and significance: 1. The characterization of TYLCV-DR allowed for more effective TYLCV management strategies that are now implemented in the DR. 2. The identification of the TYLCV MPs and, more importantly, insight into their function has provided a model for how these proteins function in TYLCV movement and supports the targeting of one or more of these proteins in a dominant lethal strategy to engineer plants for TYLCV resistance. 3. The transgenic plants being generated with wild-type and mutated TYLCV MPs will serve to test the hypothesis that interference with one or more of the TYLCV movement proteins is a strategy for generating TYLCV-resistant plants. 4. The fine mapping of the resistance gene Ty-1 allowed deleterious chromosome segments from the wild tomato genitor L. chilense to be eliminated; it may in the near future allow the cloning of the first geminivirus resistance gene. 5. Another resistance source, from the wild tomato species L. hirsutum, was introgressed into the domesticated tomato, resulting in the production of resistant breeding lines.

Implications: 1. The monitoring of TYLCV in whiteflies has been applied in the DR. These tools are presently being used to assist in the evaluation of the host-free period and to help select appropriate locations for growing tomatoes in the DR. 2. An overall strategy to obtain resistance against TYLCV has been used: the expression of wild-type or mutated TYLCV MPs in transgenic tomato is another addition to the arsenal used to fight TYLCV, together with marker-assisted breeding and mobilization of additional resistance genes from the wild.
APA, Harvard, Vancouver, ISO, and other styles