Dissertations on the topic "Face Expression Recognition"


Review the top 50 dissertations for your research on the topic "Face Expression Recognition".

Next to each entry in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its online abstract, when these are available in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Zhou, Yun. "Embedded Face Detection and Facial Expression Recognition." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/583.

Full text of the source
Abstract:
Face detection has been applied in many fields such as surveillance, human-machine interaction, entertainment and health care. Two main reasons for the extensive attention on this research domain are: 1) the widespread use of security systems creates a strong need for face recognition, and 2) face recognition is user-friendly and fast, since it requires almost nothing of the user. The system is based on an ARM Cortex-A8 development board and includes porting the Linux operating system, developing device drivers, and detecting faces using Haar-like features and the Viola-Jones algorithm. The face detection system uses the AdaBoost algorithm to detect human faces in frames captured by the camera. The thesis discusses the pros and cons of several popular image processing algorithms. The facial expression recognition system involves face detection and emotion feature interpretation, and consists of an offline training part and an online test part. Active shape models (ASM) for facial feature point detection, optical flow for face tracking, and support vector machines (SVM) for classification are applied in this research.
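The Haar-feature and integral-image machinery underlying the Viola-Jones detector mentioned in this abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the building block, not code from the thesis; the toy image and feature placement are invented.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended,
    so any rectangle sum needs only four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of pixels in the h x w rectangle with top-left corner (r, c)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_vertical(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: upper half minus lower half
    (h must be even). Large responses indicate horizontal intensity
    edges of the kind cascades exploit for face detection."""
    half = h // 2
    return rect_sum(ii, r, c, half, w) - rect_sum(ii, r + half, c, half, w)

# Toy 4x4 "image": bright top rows, dark bottom rows.
img = np.array([[9, 9, 9, 9],
                [9, 9, 9, 9],
                [1, 1, 1, 1],
                [1, 1, 1, 1]])
ii = integral_image(img)
print(haar_two_rect_vertical(ii, 0, 0, 4, 4))  # 72 - 8 = 64
```

AdaBoost then selects and weights thousands of such features to form the cascade's weak classifiers.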
2

Ener, Emrah. "Recognition Of Human Face Expressions." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607521/index.pdf.

Full text of the source
Abstract:
In this study a fully automatic and scale-invariant feature extractor which does not require manual initialization or special equipment is proposed. Face location and size are extracted using skin segmentation and ellipse fitting. The extracted face region is scaled to a predefined size; upper and lower facial templates are then used for feature extraction. Template localization and template parameter calculations are carried out using Principal Component Analysis. Changes in facial feature coordinates between the analyzed image and a neutral-expression image are used for expression classification. The performances of different classifiers are evaluated. The performance of the proposed feature extractor is also tested on sample video sequences. Facial features are extracted in the first frame and a KLT tracker is used for tracking the extracted features. Lost features are detected using face geometry rules and relocated using the feature extractor. As an alternative to the feature-based technique, an available holistic method which analyses the face without partitioning is implemented. Face images are filtered using Gabor filters tuned to different scales and orientations. The filtered images are combined to form Gabor jets, whose dimensionality is reduced using Principal Component Analysis. The performances of different classifiers on low-dimensional Gabor jets are compared. Feature-based and holistic classifier performances are compared using the JAFFE and AF facial expression databases.
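The PCA step used here (and in several other entries) to compress high-dimensional Gabor jets is a generic projection onto principal components. A minimal NumPy sketch, with random data standing in for the jets; this is not the thesis' implementation:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components.
    X: (n_samples, n_features), e.g. flattened Gabor jets."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centred data; the rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mu

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))          # 20 jets, 50-dim each (made-up sizes)
Z, components, mu = pca_reduce(X, 5)
print(Z.shape)  # (20, 5)
```

A classifier is then trained on the low-dimensional Z rather than on the raw jets.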
3

Zhao, Xi. "3D face analysis : landmarking, expression recognition and beyond." PhD thesis, Ecole Centrale de Lyon, 2010. http://tel.archives-ouvertes.fr/tel-00599660.

Full text of the source
Abstract:
This Ph.D. thesis is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Facial expression plays an important role both in verbal and non-verbal communication and in expressing emotions; automatic facial expression recognition therefore has many applications and is at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides a priori knowledge on the location of face landmarks, which is required by many face analysis methods such as face segmentation and the feature extraction used, for instance, for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, finally proposing an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model learns both the global variations in face landmark configuration and the local ones in terms of texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying the model parameters. Second, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Third, we have designed a Bayesian Belief Network whose structure describes the causal relationships among subjects, expressions and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the maximum of the beliefs over all states. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, which characterizes the geometric property of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE and Bosphorus datasets for facial landmarking, as well as the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
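The "maximum of beliefs over the expression node's states" idea can be illustrated with a tiny discrete Bayes computation. The network below is a two-feature toy, not the thesis' BBN; all probability tables are invented for illustration:

```python
import numpy as np

expressions = ["happy", "sad", "neutral"]
prior = np.array([1/3, 1/3, 1/3])
# rows: expression; cols: P(feature_j = present | expression), made up.
p_feat = np.array([[0.9, 0.2],   # happy: mouth-corner raise likely
                   [0.1, 0.8],   # sad: brow lowering likely
                   [0.3, 0.3]])  # neutral

def recognize(observed):
    """observed: tuple of 0/1 feature states. Multiply the prior by the
    likelihood of each observed feature, normalize, take the argmax."""
    lik = np.ones(len(expressions))
    for j, f in enumerate(observed):
        lik *= p_feat[:, j] if f else (1 - p_feat[:, j])
    belief = prior * lik
    belief /= belief.sum()
    return expressions[int(np.argmax(belief))], belief

label, belief = recognize((1, 0))   # mouth raise present, brow lowering absent
print(label)  # 'happy' under these made-up tables
```

The real model conditions on many more features (and on subject identity), but the decision rule is the same maximum-of-beliefs step.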
4

Munasinghe, Kankanamge Sarasi Madushika. "Facial analysis models for face and facial expression recognition." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118197/1/Sarasi%20Madushika_Munasinghe%20Kankanamge_Thesis.pdf.

Full text of the source
Abstract:
This thesis examines the research and development of new approaches for face and facial expression recognition within the fields of computer vision and biometrics. Expression variation is a challenging issue in current face recognition systems and current approaches are not capable of recognizing facial variations effectively within human-computer interfaces, security and access control applications. This thesis presents new contributions for performing face and expression recognition simultaneously; face recognition in the wild; and facial expression recognition in challenging environments. The research findings include the development of new factor analysis and deep learning approaches which can better handle different facial variations.
5

Minoi, Jacey-Lynn. "Geometric expression invariant 3D face recognition using statistical discriminant models." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/4648.

Full text of the source
Abstract:
Currently there is no complete face recognition system that is invariant to all facial expressions. Although humans find it easy to identify and recognise faces regardless of changes in illumination, pose and expression, producing a computer system with a similar capability has proved particularly difficult. Three-dimensional face models are geometric in nature and therefore have the advantage of being invariant to head pose and lighting; however, they are still susceptible to facial expressions, as can be seen in the decrease in recognition results using principal component analysis when expressions are added to a data set. In order to achieve expression-invariant face recognition, we have employed a tensor algebra framework to represent 3D face data with facial expressions in a parsimonious space. Face variation factors are organised into subject and facial expression modes. We manipulate this representation using singular value decomposition on sub-tensors representing one variation mode. This framework addresses the shortcomings of PCA in less constrained environments while preserving the integrity of the 3D data. The results show improved recognition rates for faces and facial expressions, even recognising high-intensity expressions that are not in the training datasets. We have determined, experimentally, a set of anatomical landmarks that describe facial expressions effectively. We found that the best placement of landmarks to distinguish different facial expressions is in the areas around prominent features, such as the cheeks and eyebrows; recognition results using landmark-based face recognition could be improved with better placement. We also looked into the possibility of achieving expression-invariant face recognition by reconstructing and manipulating realistic facial expressions. We proposed a tensor-based statistical discriminant analysis method to reconstruct facial expressions, and in particular to neutralise them. The synthesised facial expressions are visually more realistic than facial expressions generated using conventional active shape modelling (ASM). We then used the reconstructed neutral faces in the sub-tensor framework for recognition purposes; the recognition results showed slight improvement. Besides biometric recognition, this novel tensor-based synthesis approach could be used in computer games and real-time animation applications.
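Tensor frameworks of this kind typically work by unfolding the data tensor along one mode (subject, expression, ...) and applying an SVD to that matricization. A generic NumPy sketch with invented dimensions, shown only to make the mode-unfolding step concrete:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes the rows and all other
    axes are flattened into the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Toy data tensor: subjects x expressions x features (sizes made up).
rng = np.random.default_rng(1)
T = rng.normal(size=(5, 3, 8))

# SVD of the subject-mode unfolding yields an orthonormal subject
# subspace, analogous to the per-mode factors in tensor face models.
U, S, Vt = np.linalg.svd(unfold(T, 0), full_matrices=False)
print(U.shape)  # (5, 5)
```

Repeating this per mode gives the factor matrices of a higher-order SVD.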
6

Zhan, Ce. "Facial expression recognition for multi-player on-line games." School of Computer Science and Software Engineering, 2008. http://ro.uow.edu.au/theses/100.

Full text of the source
Abstract:
Multi-player on-line games (MOGs) have become increasingly popular because of the opportunity they provide for collaboration, communication and interaction. However, compared with ordinary human communication, MOGs still have several limitations, especially in communication using facial expressions. Although detailed facial animation has already been achieved in a number of MOGs, players have to use text commands to control the expressions of their avatars. This thesis proposes an automatic expression recognition system that can be integrated into a MOG to control the facial expressions of avatars. To meet the specific requirements of such a system, a number of algorithms are studied, tailored and extended. In particular, the Viola-Jones face detection method is modified in several aspects to detect small-scale key facial components with wide shape variations. In addition, a new coarse-to-fine method is proposed for extracting 20 facial landmarks from image sequences. The proposed system has been evaluated on a number of databases different from the training database and achieved an 83% recognition rate for four emotional-state expressions. During real-time tests, the system achieved an average frame rate of 13 fps for 320 x 240 images on a PC with a 2.80 GHz Intel Pentium processor. Testing results show that the system has a practical range of working distances (from user to camera) and is robust against variations in lighting and backgrounds.
7

Bloom, Elana. "Recognition, expression, and understanding facial expressions of emotion in adolescents with nonverbal and general learning disabilities." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100323.

Full text of the source
Abstract:
Students with learning disabilities (LD) have been found to exhibit social difficulties compared to those without LD (Wong, 2004). Recognition, expression, and understanding of facial expressions of emotions have been shown to be important for social functioning (Custrini & Feldman, 1989; Philippot & Feldman, 1990). LD subtypes have been studied (Rourke, 1999), and children with nonverbal learning disabilities (NVLD) have been observed to be worse at recognizing facial expressions compared to children with verbal learning disabilities (VLD), children with no learning disability (NLD; Dimitrovsky, Spector, Levy-Shiff, & Vakil, 1998; Dimitrovsky, Spector, & Levy-Shiff, 2000), and children with psychiatric difficulties without LD (Petti, Voelker, Shore, & Hyman-Abello, 2003). However, little has been done in this area with adolescents with NVLD. Recognition, expression, and understanding of facial expressions of emotion, as well as general social functioning, have yet to be studied simultaneously among adolescents with NVLD, NLD, and general learning disabilities (GLD). The purpose of this study was to examine the abilities of adolescents with NVLD, GLD, and without LD to recognize, express, and understand facial expressions of emotion, in addition to their general social functioning.
Adolescents aged 12 to 15 were screened for LD and NLD using the Wechsler Intelligence Scale for Children---Third Edition (WISC-III; Wechsler, 1991) and the Wide Range Achievement Test---Third Edition (WRAT3; Wilkinson, 1993) and subtyped into NVLD and GLD groups based on the WRAT3. The NVLD (n = 23), matched NLD (n = 23), and a comparable GLD (n = 23) group completed attention, mood, and neuropsychological measures. The adolescents' ability to recognize (Pictures of Facial Affect; Ekman & Friesen, 1976), express, and understand facial expressions of emotion, and their general social functioning, were assessed. Results indicated that the GLD group was significantly less accurate at recognizing and understanding facial expressions of emotion compared to the NVLD and NLD groups, who did not differ from each other. No differences emerged between the NVLD, NLD, and GLD groups on the expression or social functioning tasks. The neuropsychological measures did not account for a significant portion of the variance on the emotion tasks. Implications regarding severity of LD are discussed.
8

Durrani, Sophia J. "Studies of emotion recognition from multiple communication channels." Thesis, University of St Andrews, 2005. http://hdl.handle.net/10023/13140.

Full text of the source
Abstract:
Crucial to human interaction and development, emotions have long fascinated psychologists. Current thinking suggests that specific emotions, regardless of the channel in which they are communicated, are processed by separable neural mechanisms. Yet much research has focused only on the interpretation of facial expressions of emotion. The present research addressed this oversight by exploring recognition of emotion from facial, vocal, and gestural tasks. Happiness and disgust were best conveyed by the face, yet other emotions were equally well communicated by voices and gestures. A novel method for exploring emotion perception, by contrasting errors, is proposed. Studies often fail to consider whether the status of the perceiver affects emotion recognition abilities. Experiments presented here revealed an impact of mood, sex, and age of participants. Dysphoric mood was associated with difficulty in interpreting disgust from vocal and gestural channels. To some extent, this supports the concept that neural regions are specialised for the perception of disgust. Older participants showed decreased emotion recognition accuracy but no specific pattern of recognition difficulty. Sex of participant and of actor affected emotion recognition from voices. In order to examine neural mechanisms underlying emotion recognition, an exploration was undertaken using emotion tasks with Parkinson's patients. Patients showed no clear pattern of recognition impairment across channels of communication. In this study, the exclusion of surprise as a stimulus and response option in a facial emotion recognition task yielded results contrary to those achieved without this modification. Implications for this are discussed. Finally, this thesis gives rise to three caveats for neuropsychological research. First, the impact of the observers' status, in terms of mood, age, and sex, should not be neglected. Second, exploring multiple channels of communication is important for understanding emotion perception. Third, task design should be appraised before conclusions regarding impairments in emotion perception are presumed.
9

Wei, Xiaozhou. "3D facial expression modeling and analysis with topographic information." Diss., online access via UMI, 2008.

Find the full text of the source
10

Beall, Paula M. "Automaticity and Hemispheric Specialization in Emotional Expression Recognition: Examined using a modified Stroop Task." Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3267/.

Full text of the source
Abstract:
The main focus of this investigation was to examine the automaticity of facial expression recognition through valence judgments in a modified photo-word Stroop paradigm. Positive and negative words were superimposed across male and female faces expressing positive (happy) and negative (angry, sad) emotions. Subjects categorized the valence of each stimulus. Gender biases in judgments of expressions (better recognition for male angry and female sad expressions) and the valence hypothesis of hemispheric advantages for emotions (left hemisphere: positive; right hemisphere: negative) were also examined. Four major findings emerged. First, the valence of expressions was processed automatically (robust interference effects). Second, male faces interfered with processing the valence of words. Third, no posers' gender biases were indicated. Finally, the emotionality of facial expressions and words was processed similarly by both hemispheres.
11

Cui, Chen. "Adaptive weighted local textural features for illumination, expression and occlusion invariant face recognition." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1374782158.

Full text of the source
12

De la Cruz, Nathan. "Autonomous facial expression recognition using the facial action coding system." University of the Western Cape, 2016. http://hdl.handle.net/11394/5121.

Full text of the source
Abstract:
Magister Scientiae - MSc
The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six (anger, disgust, fear, happiness, sadness, surprise) plus the neutral expression; or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, combinations of which are used to form any facial expression. This research investigates a hybrid approach that combines whole facial expression recognition with Action Unit recognition to achieve an enhanced classification approach.
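The link between Action Units and whole expressions can be sketched as a prototype-matching step. The AU sets below loosely follow common FACS/EMFACS summaries (e.g. happiness as AU6 + AU12); treat both the sets and the overlap score as illustrative, not as the thesis' definitions:

```python
# Prototype AU combinations per expression (illustrative only).
PROTOTYPES = {
    "happiness": {6, 12},
    "surprise":  {1, 2, 5, 26},
    "sadness":   {1, 4, 15},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15, 16},
    "fear":      {1, 2, 4, 5, 7, 20, 26},
}

def classify(detected_aus):
    """Score each whole expression by the fraction of its prototype
    Action Units found among the detected ones; return the best match."""
    def score(expr):
        proto = PROTOTYPES[expr]
        return len(proto & detected_aus) / len(proto)
    return max(PROTOTYPES, key=score)

print(classify({6, 12}))     # happiness
print(classify({1, 4, 15}))  # sadness
```

A hybrid system would fuse such AU-derived scores with the output of a direct whole-expression classifier.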
13

Dagnes, Nicole. "3D human face analysis for recognition applications and motion capture." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2542.

Full text of the source
Abstract:
This thesis is intended as a geometrical study of the three-dimensional facial surface. Its aim is to provide a set of entities from the context of differential geometry to use as facial descriptors in face analysis applications, such as face recognition (FR) and facial expression recognition (FER). Indeed, although every face is unique, all faces are similar and their morphological features are the same for all mankind; hence, extracting suitable facial features is essential for face analysis. All the facial features proposed in this study are based only on the geometrical properties of the facial surface. These geometrical descriptors and the related entities have then been applied to the description of the facial surface in pattern recognition contexts. The final goal of this research is to show that differential geometry is a comprehensive tool for face analysis, and that geometrical features are suitable for describing and comparing faces and, in general, for extracting information relevant to human face analysis in different practical application fields. Finally, since face analysis has also gained great attention for clinical applications in recent decades, this work focuses on the analysis of musculoskeletal disorders by proposing an objective quantification of facial movements to assist maxillofacial surgery and facial motion rehabilitation. This research work investigates a 3D motion capture system, adopting the Technology, Sport and Health platform located in the Innovation Centre of the University of Technology of Compiègne, within the Biomechanics and Bioengineering Laboratory (BMBI).
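Typical differential-geometry descriptors of the kind this abstract refers to are the Gaussian and mean curvature of the facial surface. A minimal finite-difference sketch for a surface given as a depth map, using standard curvature formulas; the paraboloid test surface is invented and this is not the thesis' code:

```python
import numpy as np

def curvatures(z, h):
    """Gaussian (K) and mean (H) curvature of a surface z = f(x, y)
    sampled with grid spacing h, via finite differences."""
    zy, zx = np.gradient(z, h, h)        # first derivatives
    zxy, zxx = np.gradient(zx, h, h)     # mixed and second derivatives
    zyy, _ = np.gradient(zy, h, h)
    g = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / g**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * g**1.5)
    return K, H

# Paraboloid z = x^2 + y^2: at the apex K = 4 and H = 2 analytically.
h = 0.05
y, x = np.mgrid[-1:1:41j, -1:1:41j]
K, H = curvatures(x**2 + y**2, h)
print(round(K[20, 20], 6), round(H[20, 20], 6))  # 4.0 2.0
```

Maps of K and H over a scanned face highlight convex regions (nose tip), saddle regions (nasion) and so on, which is what makes them useful landmarking descriptors.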
14

Chen, Xiaochen. "Tracking vertex flow on 3D dynamic facial models." Diss., online access via UMI, 2008.

Find the full text of the source
15

Dietrich, Jonas. "Gaze Behaviour and Its Functional Role During Facial Expression Recognition." Doctoral thesis, Humboldt-Universität zu Berlin, 2019. http://dx.doi.org/10.18452/19783.

Full text of the source
Abstract:
Processes that underlie the visual encoding of facial expressions still pose a conundrum. This dissertation therefore set out to provide new insights into these processes by investigating gaze behaviour and its functional role during the recognition of facial expressions. Four experimental studies were conducted to examine whether general face processing strategies are already reflected at the visual encoding stage of facial expression recognition, as indicated by specific fixation patterns, and whether differences in the initial uptake of visual information as a consequence of varying fixation positions affect facial expression recognition. Gaze behaviour was recorded while participants categorised angry, disgusted, happy, sad, and neutral facial expressions in static and dynamic displays. Results revealed that gaze behaviour for static facial expressions was characterised by only a few fixations, mainly directed to the centre of the face and to expression-specific diagnostic facial features, suggesting a combined holistic and featural encoding strategy. For less intense and for dynamic facial expressions, results indicated a more configural encoding strategy with multiple fixations to a greater number of different facial features. In addition, differences in gaze strategy were relevant for facial expression recognition. Fixating diagnostic facial features accelerated the recognition of static facial expressions. In contrast, a central fixation position was beneficial for recognizing dynamic facial expressions, presumably by facilitating holistic face processing and change detection. Overall, the findings demonstrate that general face processing strategies are already reflected at the visual encoding stage of facial expression recognition and that variations in these early processes affect recognition performance.
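Eye-tracking analyses of this kind are commonly summarised as the proportion of fixations landing in areas of interest (AOIs) over facial features. A minimal sketch; the AOI boxes and fixation coordinates are invented for the example:

```python
# Rectangular AOIs over facial features, as (x0, y0, x1, y1) boxes
# in image coordinates (values made up for illustration).
AOIS = {
    "left_eye":  (20, 40, 35, 55),
    "right_eye": (60, 40, 75, 55),
    "mouth":     (35, 75, 65, 90),
}

def aoi_proportions(fixations):
    """Fraction of fixation points falling inside each AOI."""
    counts = {name: 0 for name in AOIS}
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                break
    n = len(fixations)
    return {name: c / n for name, c in counts.items()}

fix = [(25, 45), (65, 50), (50, 80), (50, 80)]
print(aoi_proportions(fix))  # {'left_eye': 0.25, 'right_eye': 0.25, 'mouth': 0.5}
```

Comparing such proportions across expressions and display conditions is how fixation patterns like "mainly central vs. feature-directed" are quantified.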
16

Cheng, Xin. "Nonrigid face alignment for unknown subject in video." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/65338/1/Xin_Cheng_Thesis.pdf.

Full text of the source
Abstract:
Non-rigid face alignment is a very important task in a wide range of applications, but existing tracking-based non-rigid face alignment methods are either inaccurate or require a person-specific model. This dissertation develops simultaneous alignment algorithms that overcome these constraints and provide alignment with high accuracy, efficiency and robustness to varying image conditions, while requiring only a generic model.
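Non-rigid alignment pipelines typically start from a rigid similarity (Procrustes) alignment of landmark sets before modelling the remaining non-rigid deformation. A minimal NumPy sketch of that rigid core, with a toy square as the shape; this is a generic method, not the dissertation's algorithm:

```python
import numpy as np

def procrustes_align(src, dst):
    """Align src landmarks onto dst with the best similarity transform
    (scale, rotation, translation), via orthogonal Procrustes analysis."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt                        # optimal rotation (reflection check omitted)
    s = S.sum() / (A**2).sum()        # optimal isotropic scale
    return s * A @ R + mu_d

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
# Scale by 2, rotate 90 degrees, shift by 5: a known similarity transform.
rotated = 2.0 * square @ np.array([[0., 1.], [-1., 0.]]) + 5.0
aligned = procrustes_align(rotated, square)
print(np.allclose(aligned, square))  # True
```

The residual after this step is what non-rigid (e.g. statistical shape) models then account for.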
17

Shreve, Matthew Adam. "Automatic Macro- and Micro-Facial Expression Spotting and Applications." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4770.

Full text of the source
Abstract:
Automatically determining the temporal characteristics of facial expressions has extensive application domains such as human-machine interfaces for emotion recognition, face identification, and medical analysis. However, many papers in the literature have not addressed the step of determining when such expressions occur. This dissertation is focused on the problem of automatically segmenting macro- and micro-expression frames (or retrieving the expression intervals) in video sequences, without the need for training a model on a specific subset of such expressions. The proposed method exploits the non-rigid facial motion that occurs during facial expressions by modeling the strain observed during the elastic deformation of facial skin tissue. The method is capable of spotting both macro-expressions, which are typically associated with emotions such as happiness, sadness, anger, disgust, and surprise, and rapid micro-expressions, which are typically, but not always, associated with semi-suppressed macro-expressions. Additionally, we have used this method to automatically retrieve strain maps generated from peak expressions for human identification. This dissertation also contributes a novel 3-D surface strain estimation algorithm using commodity 3-D sensors aligned with an HD camera. We demonstrate the feasibility of the method, as well as the improvements gained when using 3-D, by providing empirical and quantitative comparisons between 2-D and 3-D strain estimation.
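The strain modelling referred to here rests on the standard infinitesimal strain tensor computed from a dense displacement (optical flow) field. A minimal NumPy sketch with a synthetic field; this illustrates the general formula, not the dissertation's implementation:

```python
import numpy as np

def strain_magnitude(u, v):
    """Infinitesimal strain from a displacement field (u, v), e.g.
    optical flow between frames: exx = du/dx, eyy = dv/dy,
    exy = (du/dy + dv/dx) / 2. The returned magnitude highlights
    regions of non-rigid (elastic) deformation."""
    uy, ux = np.gradient(u)
    vy, vx = np.gradient(v)
    exx, eyy, exy = ux, vy, 0.5 * (uy + vx)
    return np.sqrt(exx**2 + eyy**2 + 2 * exy**2)

# Synthetic field: rigid translation everywhere (zero strain), plus a
# horizontal stretch confined to the right half.
u = np.tile(np.concatenate([np.zeros(10), np.linspace(0, 2, 10)]), (20, 1))
v = np.full((20, 20), 3.0)          # constant vertical shift
mag = strain_magnitude(u, v)
print(mag[:, :8].max() < mag[:, 12:].max())  # True: stretch region strains more
```

Rigid head motion produces near-zero strain, while skin deformation during an expression does not, which is what makes strain useful for spotting expression intervals.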
18

Sergerie, Karine. "A face to remember : an fMRI study of the effects of emotional expression on recognition memory." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82422.

Full text of the source
Abstract:
Emotion can exert a modulatory role on declarative memory. Several studies have shown that emotional stimuli (e.g., words, pictures) are better remembered than neutral ones. Although facial expressions are powerful emotional stimuli and have been shown to influence perception and attention processes, little is known about their effect on memory. We conducted an event-related fMRI study in 18 healthy individuals (9 men) to investigate the effects of expression on recognition memory for faces. During the encoding phase, participants viewed 84 faces of different individuals, depicting happy, fearful or neutral expressions. Subjects were asked to perform a gender discrimination task and remember the faces for later. In the recognition part subjects performed an old/new decision task on 168 faces (84 new). Both runs were scanned. Our findings highlight the importance of the amygdala, hippocampus and prefrontal cortex on the formation and retrieval of memories with emotional content.
APA, Harvard, Vancouver, ISO and other styles
19

Mushfieldt, Diego. "Robust facial expression recognition in the presence of rotation and partial occlusion." Thesis, University of Western Cape, 2014. http://hdl.handle.net/11394/3367.

Full text of the source
Abstract:
Magister Scientiae - MSc
This research proposes an approach to recognizing facial expressions in the presence of rotations and partial occlusions of the face. The research is in the context of automatic machine translation of South African Sign Language (SASL) to English. The proposed method accurately recognizes frontal facial images at an average accuracy of 75%. It also achieves a high recognition accuracy of 70% for faces rotated up to 60°. It was also shown that the method continues to recognize facial expressions even in the presence of full occlusions of the eyes, mouth and left/right sides of the face; the accuracy was as high as 70% for occlusion of some areas. An additional finding was that both the left and the right sides of the face are required for recognition. In addition, the foundation was laid for a fully automatic facial expression recognition system that can accurately segment frontal or rotated faces in a video sequence.
APA, Harvard, Vancouver, ISO and other styles
20

Biswas, Ajanta. "Investigating facial expression production and inner outer face recognition in children with autism and typically developing children." Thesis, University of Sheffield, 2010. http://etheses.whiterose.ac.uk/14973/.

Full text of the source
Abstract:
Behavioural and neuroimaging evidence suggests that autism is characterised, in part, by deficits in social intelligence. Impairments in face and eye gaze processing and facial expression recognition are often used to explain this deficit. Although the general consensus is that children with autism are impaired in face and facial expression processing, the actual seat of the impairment is unknown. Furthermore, face recognition using only inner face information and facial expression production without any visual cues have never been investigated in children with autism. Research on the development of face recognition abilities has provided mixed results with regard to how children identify unfamiliar faces, both in typical and atypical populations. Recognising an unfamiliar face from only the inner face has not been investigated during development or in children with autism. This thesis investigated unfamiliar face recognition from inner-face-only information, firstly during the developmental period of 5-10 years of age, and secondly with children with autism and individually matched controls. 5-10-year-olds were exceptionally good at face recognition from only inner face information. Children with autism were as good as the matched controls in recognising unfamiliar faces from only inner face information. These findings are discussed with reference to holistic face processing ability and the perceptual sameness of the stimuli. Research on the development of facial expression recognition indicates a differential pathway for different expressions, both in typical and atypical populations. This thesis investigated facial expression production ability with and without context in children with autism and individually matched controls. Children with autism were atypical in fear facial expression production and failed to use context to enhance performance.
These findings are discussed with reference to social intelligence and the role of experience in early childhood in development of face expertise.
APA, Harvard, Vancouver, ISO and other styles
21

Derkach, Dmytro. "Spectrum analysis methods for 3D facial expression recognition and head pose estimation." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/664578.

Full text of the source
Abstract:
Facial analysis has attracted considerable research efforts over the last decades, with a growing interest in improving the interaction and cooperation between people and computers. This makes it necessary that automatic systems are able to react to things such as the head movements of a user or his/her emotions. Further, this should be done accurately and in unconstrained environments, which highlights the need for algorithms that can take full advantage of 3D data. These systems could be useful in multiple domains such as human-computer interaction, tutoring, interviewing, health care, marketing, etc. In this thesis, we focus on two aspects of facial analysis: expression recognition and head pose estimation. In both cases, we specifically target the use of 3D data and present contributions that aim to identify meaningful representations of the facial geometry based on spectral decomposition methods: 1. We propose a spectral representation framework for facial expression recognition using exclusively 3D geometry, which allows a complete description of the underlying surface that can be further tuned to the desired level of detail. It is based on the decomposition of local surface patches into their spatial frequency components, much like a Fourier transform, which are related to intrinsic characteristics of the surface. We propose the use of Graph Laplacian Features (GLFs), which result from the projection of local surface patches into a common basis obtained from the Graph Laplacian eigenspace. The proposed approach is tested in terms of expression and Action Unit recognition, and the results confirm that the proposed GLFs produce state-of-the-art recognition rates. 2. We propose an approach for head pose estimation that allows modeling the underlying manifold that results from general rotations in 3D. 
We start by building a fully automatic system based on the combination of landmark detection and dictionary-based features, which obtained the best results in the FG2017 Head Pose Estimation Challenge. Then, we use a tensor representation and higher-order singular value decomposition to separate the subspaces that correspond to each rotation factor and show that each of them has a clear structure that can be modeled with trigonometric functions. Such a representation provides a deep understanding of data behavior and can be used to further improve the estimation of the head pose angles.
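The spectral projection behind the Graph Laplacian Features of point 1 can be sketched as follows. In the thesis, patches from all faces are projected onto one *common* eigenbasis; this toy version builds the Laplacian of a single patch purely for illustration.

```python
import numpy as np

def graph_laplacian_features(adjacency, signal, k=4):
    """Project a per-vertex signal from a local surface patch onto the
    first k eigenvectors of the combinatorial graph Laplacian L = D - A.

    adjacency: (n, n) symmetric 0/1 connectivity matrix of the patch mesh.
    signal:    (n,) per-vertex values (e.g., depth over the patch).
    Returns k spectral coefficients, low spatial frequencies first.
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues sorted ascending
    return eigvecs[:, :k].T @ signal
```

For a constant signal, all energy lands in the first (constant, zero-eigenvalue) coefficient, mirroring how low-frequency components capture the coarse shape of a patch.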
APA, Harvard, Vancouver, ISO and other styles
22

St-Hilaire, Annie. "Are paranoid schizophrenia patients really more accurate than other people at recognizing spontaneous expressions of negative emotion? : a study of the putative association between emotion recognition and thinking errors in paranoia." [Kent, Ohio] : Kent State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=kent1215450307.

Full text of the source
Abstract:
Thesis (Ph.D.)--Kent State University, 2008.
Title from PDF t.p. (viewed Nov. 10, 2009). Advisor: Nancy Docherty. Keywords: schizophrenia, paranoia, emotion recognition, posed expressions, spontaneous expressions, cognition. Includes bibliographical references (p. 122-144).
APA, Harvard, Vancouver, ISO and other styles
23

Darborg, Alex. "Real-time face recognition using one-shot learning : A deep learning and machine learning project." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-40069.

Full text of the source
Abstract:
Face recognition is often described as the process of identifying and verifying people in a photograph by their face. Researchers have recently given this field increased attention, continuously improving the underlying models. The objective of this study is to implement a real-time face recognition system using one-shot learning. “One shot” means learning from one or few training samples. This paper evaluates different methods to solve this problem. Convolutional neural networks are known to require large datasets to reach an acceptable accuracy. This project proposes a method to solve this problem by reducing the number of training instances to one and still achieving an accuracy close to 100%, utilizing the concept of transfer learning.
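With one-shot learning plus transfer learning, recognition reduces to nearest-neighbour matching in an embedding space. The sketch below assumes a pretrained network has already mapped each face image to a vector; the embedding function itself is not shown, and the distance threshold is illustrative, not taken from the thesis.

```python
import numpy as np

def enroll(gallery, name, embedding):
    """Store a single L2-normalised reference embedding per person
    (one-shot enrolment: one training sample per identity)."""
    gallery[name] = embedding / np.linalg.norm(embedding)

def identify(gallery, embedding, threshold=0.7):
    """Return the closest enrolled identity, or None if no enrolled face
    is close enough. Distances are Euclidean between unit vectors."""
    probe = embedding / np.linalg.norm(embedding)
    best, best_dist = None, np.inf
    for name, ref in gallery.items():
        dist = np.linalg.norm(probe - ref)
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist < threshold else None
```

Because the heavy lifting is done by the transferred embedding network, adding a new person requires only one call to `enroll`, with no retraining.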
APA, Harvard, Vancouver, ISO and other styles
24

Mayer, Christoph [Verfasser], Bernd [Akademischer Betreuer] Radig, and Gudrun Johanna [Akademischer Betreuer] Klinker. "Facial Expression Recognition With A Three-Dimensional Face Model / Christoph Mayer. Gutachter: Gudrun Johanna Klinker. Betreuer: Bernd Radig." München : Universitätsbibliothek der TU München, 2012. http://d-nb.info/1019854405/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
25

Szeptycki, Przemyslaw. "Processing and analysis of 2.5D face models for non-rigid mapping based face recognition using differential geometry tools." Phd thesis, Ecole Centrale de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00675988.

Full text of the source
Abstract:
This Ph.D. thesis is dedicated to 3D facial surface analysis and processing, as well as to a newly proposed 3D face recognition modality based on mapping techniques. Facial surface processing and analysis is one of the most important steps for 3D face recognition algorithms. Automatic localization of anthropometric facial features also plays an important role in face localization, facial expression recognition, face registration, etc.; it is thus a crucial step for 3D face processing algorithms. In this work we focused on precise and rotation-invariant landmark localization, where the landmarks are later used directly for face recognition. The landmarks are localized by combining local surface properties, expressed in terms of differential geometry tools, with a global facial generic model used for face validation. Since curvatures, which are differential geometry properties, are sensitive to surface noise, one of the main contributions of this thesis is a modification of the curvature calculation method. The modification incorporates the surface noise into the calculation and helps to control the smoothness of the curvatures. Therefore the main facial points can be reliably and precisely localized (100% nose tip localization at 8 mm precision) under the influence of rotations and surface noise. The modified curvature calculation method was also tested under different face model resolutions, resulting in stable curvature values. Finally, since curvature analysis leads to many facial landmark candidates, whose validation is time consuming, a facial landmark localization method based on a learning technique was proposed. The learning technique helps to reject incorrect landmark candidates with high probability, thus accelerating landmark localization. Face recognition using 3D models is a relatively new subject, proposed to overcome the shortcomings of the 2D face recognition modality. 
However, 3D face recognition algorithms tend to be more complicated. Additionally, since 3D face models describe facial surface geometry, they are more sensitive to facial expression changes. Our contribution is reducing the dimensionality of the input data by mapping 3D facial models onto a 2D domain using non-rigid, conformal mapping techniques. Given 2D images that represent the facial models, all previously developed 2D face recognition algorithms can be used. In our work, conformal shape images of 3D facial surfaces were fed into 2D2PCA, achieving a rank-one recognition rate of more than 86% on the FRGC data set. The effectiveness of all the methods has been evaluated using the FRGC and Bosphorus datasets.
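A standard noise-free baseline for the curvature computation that the thesis modifies is a local quadric fit. The sketch below fits z ≈ ax² + bxy + cy² + dx + ey + f over a neighbourhood and evaluates Gaussian (K) and mean (H) curvature at the patch centre; the thesis's noise-aware weighting of this fit is not reproduced here.

```python
import numpy as np

def surface_curvatures(points):
    """Gaussian (K) and mean (H) curvature at the origin of a local patch.

    points: (n, 3) array of neighbourhood points centred at the query
    vertex. A quadric z = f(x, y) is least-squares fitted, then curvatures
    are computed from its first and second derivatives at (0, 0).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    # Derivatives of the fitted quadric at the origin:
    fx, fy, fxx, fxy, fyy = d, e, 2 * a, b, 2 * c
    denom = 1 + fx**2 + fy**2
    K = (fxx * fyy - fxy**2) / denom**2
    H = ((1 + fy**2) * fxx - 2 * fx * fy * fxy + (1 + fx**2) * fyy) / (2 * denom**1.5)
    return K, H
```

For landmarking, a nose-tip candidate is a point whose K and H indicate an elliptic convex cap; enlarging the fitted neighbourhood is one simple way to trade detail for robustness to surface noise.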
APA, Harvard, Vancouver, ISO and other styles
26

Hariri, Walid. "Contribution à la reconnaissance/authentification de visages 2D/3D." Thesis, Cergy-Pontoise, 2017. http://www.theses.fr/2017CERG0905/document.

Full text of the source
Abstract:
3D face analysis, including 3D face recognition and 3D facial expression recognition, has become a very active area of research in recent years. Various methods using 2D image analysis have been presented to tackle these problems. 2D image-based methods are inherently limited by variability in imaging factors such as illumination and pose. The recent development of 3D acquisition sensors has made 3D data more and more available. Such data is relatively invariant to illumination and pose, but it is still sensitive to expression variation. The principal objective of this thesis is to propose efficient methods for 3D face recognition/verification and 3D facial expression recognition. First, a new covariance-based method for 3D face recognition is presented. Our method includes the following steps: first, the 3D facial surface is preprocessed and aligned. A uniform sampling is then applied to localize a set of feature points; around each point, we extract a covariance matrix as a local region descriptor. Two matching strategies are then proposed, and various distances (geodesic and non-geodesic) are applied to compare faces. The proposed method is assessed on three datasets: GAVAB, FRGCv2 and BU-3DFE. A hierarchical description using three levels of covariances is then proposed and validated. In the second part of this thesis, we present an efficient approach for 3D facial expression recognition using kernel methods with covariance matrices. In this contribution, we propose to use a Gaussian kernel which maps covariance matrices into a high-dimensional Hilbert space. This enables the use of conventional algorithms developed for Euclidean-valued data, such as SVM, on such non-linear-valued data. The proposed method has been assessed on two well-known datasets, BU-3DFE and Bosphorus, to recognize the six prototypical expressions.
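The kernel construction in the second part can be sketched as follows. The thesis uses a Gaussian kernel on covariance matrices; the log-Euclidean distance used below is one common choice of distance between symmetric positive-definite (SPD) matrices, stated here as an illustrative assumption rather than the thesis's exact formulation.

```python
import numpy as np

def region_covariance(features):
    """Covariance descriptor of a facial region. features is (n, d): one
    row of d per-point features (e.g., coordinates, normals, curvatures).
    A small ridge keeps the matrix positive definite."""
    return np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])

def spd_log(C):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def gaussian_spd_kernel(C1, C2, gamma=0.5):
    """Gaussian kernel on SPD matrices using the log-Euclidean distance.
    The resulting kernel matrix can be fed to a standard kernel SVM."""
    d = np.linalg.norm(spd_log(C1) - spd_log(C2), ord='fro')
    return np.exp(-gamma * d * d)
```

Because `spd_log` flattens the curved SPD manifold into a vector space, the exponential of the squared distance behaves like an ordinary RBF kernel and is usable by off-the-shelf SVM solvers.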
APA, Harvard, Vancouver, ISO and other styles
27

Ali, Afiya. "Recognition of facial affect in individuals scoring high and low in psychopathic personality characteristics." The University of Waikato, 2007. http://adt.waikato.ac.nz/public/adt-uow20070129.190938/index.html.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
28

Wild-Wall, Nele. "Is there an interaction between facial expression and facial familiarity?" Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2004. http://dx.doi.org/10.18452/15042.

Full text of the source
Abstract:
Contrasting with traditional face recognition models, previous research has revealed that the recognition of facial expressions and familiarity may not be independent. This dissertation attempts to localize this interaction within the information processing system by means of performance data and event-related potentials. Part I elucidated the question of whether there is an interaction between facial familiarity and the discrimination of facial expression. Participants had to discriminate two expressions which were displayed on familiar and unfamiliar faces. The discrimination was faster and less error-prone for personally familiar faces displaying happiness. Results revealed a shorter peak latency for the P300 component (trend), reflecting stimulus categorization time, and for the onset of the lateralized readiness potential (S-LRP), reflecting the duration of pre-motor processes. A facilitation of perceptual stimulus categorization for personally familiar faces displaying happiness is suggested. The discrimination of expressions was not facilitated in further experiments using famous or experimentally familiarized, and unfamiliar faces. Part II raises the question of whether there is an interaction between facial expression and the discrimination of facial familiarity. In this task a facilitation was only observable for personally familiar faces displaying a neutral or happy expression, but not for experimentally familiarized or unfamiliar faces. Event-related potentials reveal a shorter S-LRP interval for personally familiar faces, hence suggesting a facilitated response selection stage. In summary, the results suggest that an interaction of facial familiarity and facial expression might be possible under some circumstances. Finally, the results are discussed in the context of possible interpretations, previous results, and face recognition models.
APA, Harvard, Vancouver, ISO and other styles
29

Chu, Baptiste. "Neutralisation des expressions faciales pour améliorer la reconnaissance du visage." Thesis, Ecully, Ecole centrale de Lyon, 2015. http://www.theses.fr/2015ECDL0005/document.

Full text of the source
Abstract:
Expression and pose variations are major challenges for reliable face recognition (FR) in 2D. In this thesis, we aim to endow state-of-the-art face recognition SDKs with robustness to simultaneous facial expression variations and pose changes by using an extended 3D Morphable Model (3DMM) which isolates identity variations from those due to facial expressions. Specifically, given a probe with expression, a novel view of the face is generated where the pose is rectified and the expression neutralized. We present two methods of expression neutralization. The first one uses prior knowledge to infer the neutral expression from an input image. The second method, specifically designed for verification, is based on the transfer of the gallery face expression to the probe. Experiments using the rectified and neutralized views with a standard commercial FR SDK on two 2D face databases show significant performance improvement and demonstrate the effectiveness of the proposed approach. We then aim to endow the state-of-the-art FR SDKs with the capability to recognize faces in videos. Finally, we present different methods for improving biometric performance in specific cases.
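The separation of identity and expression in a 3DMM can be sketched as below. The basis matrices and coefficients are illustrative stand-ins for a model fitted to a probe image; neutralization amounts to keeping the fitted identity coefficients and zeroing the expression ones.

```python
import numpy as np

def reconstruct(mean_shape, id_basis, expr_basis, alpha, beta):
    """3D Morphable Model shape: mean + id_basis @ alpha + expr_basis @ beta,
    where alpha holds identity coefficients and beta expression coefficients."""
    return mean_shape + id_basis @ alpha + expr_basis @ beta

def neutralize(mean_shape, id_basis, expr_basis, alpha, beta):
    """Expression neutralization: keep the fitted identity, zero the
    expression part, yielding a neutral-expression version of the face."""
    return reconstruct(mean_shape, id_basis, expr_basis,
                       alpha, np.zeros_like(beta))
```

The same decomposition supports the thesis's second, verification-oriented method: instead of zeroing `beta`, the expression coefficients fitted on the gallery image would be transferred to the probe.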
APA, Harvard, Vancouver, ISO and other styles
30

Bezerra, Giuliana Silva. "A framework for investigating the use of face features to identify spontaneous emotions." Universidade Federal do Rio Grande do Norte, 2014. http://repositorio.ufrn.br/handle/123456789/19595.

Full text of the source
Abstract:
Emotion-based analysis has raised a lot of interest, particularly in areas such as forensics, medicine, music, psychology, and human-machine interfaces. Following this trend, the use of facial analysis (either automatic or human-based) is the most commonly investigated subject, since this type of data can easily be collected and is well accepted in the literature as a metric for inference of emotional states. Despite this popularity, due to several constraints found in real-world scenarios (e.g. lighting, complex backgrounds, facial hair and so on), automatically and accurately obtaining affective information from the face is a very challenging accomplishment. This work presents a framework that aims to analyse emotional experiences through naturally generated facial expressions. Our main contribution is a new 4-dimensional model to describe emotional experiences in terms of appraisal, facial expressions, mood, and subjective experiences. In addition, we present an experiment using a new protocol proposed to obtain spontaneous emotional reactions. The results suggest that the initial emotional state described by the participants of the experiment was different from that described after the exposure to the eliciting stimulus, showing that the stimuli used were capable of inducing the expected emotional states in most individuals. Moreover, our results point out that spontaneous facial reactions to emotions are very different from prototypic expressions, due to the lack of expressiveness in the latter.
APA, Harvard, Vancouver, ISO and other styles
31

Al-Nuaimi, Tufool. "Face recognition and computer graphics for modelling expressive faces in 3D." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/38333.

Full text of the source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2007.
Includes bibliographical references (leaves 47-48).
This thesis addresses the problem of the lack of verisimilitude in animation. Since computer vision has been aimed at creating photo-realistic representations of environments and face recognition creates replicas of faces for recognition purposes, we research face recognition techniques to produce photo-realistic models of expressive faces that could be further developed and applied in animation. We use two methods that are commonly used in face recognition to gather information about the subject: 3D scanners and multiple 2D images. For the latter method, Maya is used for modeling. Both methods produced accurate 3D models for a neutral face, but Maya allowed us to manually build 3D models and was therefore more successful in creating exaggerated facial expressions.
by Tufool Al-Nuaimi.
M.Eng.
APA, Harvard, Vancouver, ISO and other styles
32

Alves, Cláudia Daniela Andrade Carvalho. "Transplantação da Face Humana: estudo de caso com Carmen Tarleton - efeitos neuropsicofisiológicos na exibição e no reconhecimento das emoções básicas." Doctoral thesis, [s.n.], 2015. http://hdl.handle.net/10284/5201.

Full text of the source
Abstract:
Thesis presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Doctor in Social Sciences, specialty in Psychology
Face transplantation is considered an experimental procedure that has given rise to intense debate about the risks and benefits of performing this type of surgery. The results show that, from a clinical, technical, aesthetic, functional, immunological and psychological point of view, face transplantation has achieved functional, aesthetic, social and psychological rehabilitation in patients with severe facial disfigurement, as described in publications by transplant teams all over the world. Clinical experience demonstrates the feasibility of the face transplant as a valuable reconstruction option, yet it is still considered an experimental procedure with unresolved questions. The functional and aesthetic results have been very encouraging, with good motor and sensory recovery and observed improvements in facial function. As expected, episodes of acute rejection have been common, but easily controlled by increasing systemic immunosuppression. Mortality and complications of immunosuppression in patients were also observed. Psychological improvements have been remarkable and have resulted in the reintegration of patients into the outside world, social networks and even the workplace. Face transplant teams have highlighted the rigorous selection of patients as the key indicator of success. The first overall results of face transplant programs have generally been more positive than expected. This initial success, the dissemination of results and the ongoing refinement of the procedure may make facial transplantation, in the future, a major reconstruction option for those with extensive facial deformities. Thus, it is of paramount importance to understand the neuropsychophysiological process in the display and recognition of basic emotions after transplantation of the face.
The results indicate that musculoskeletal injuries to the face impair the ability to display emotional expressions and consequently hinder their recognition by others, undermining the effectiveness of communication. This study aims to contribute to the development of scientific research on the facial expression of emotion, applicable in a pioneering and unique context in Portugal, such as the face transplant.
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Coelho-Moreira, Ana Cristina Gonçalves. "As falas da face: processo Casa Pia - aplicação da análise da expressão facial à luz do Direito Penal Português." Doctoral thesis, [s.n.], 2015. http://hdl.handle.net/10284/4950.

Full text of the source
Abstract:
Thesis presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Doctor in Social Sciences, specialty in Psychophysiology of the Facial Expression of Emotion
The Casa Pia child sexual abuse scandal had a devastating impact on Portuguese society, publicly scrutinizing the state institutions that sheltered children. The repercussions were so intense that not only did the methodologies and state protection policies for disadvantaged children change, but the criminal law itself was amended as a direct result of its implications. Emotion and its facial expression play a key role in the development of the individual and in their interaction with society. The study of the facial expression of emotion of some of the participants in the PrCP sought answers about the manifestation and display of guilt on the face and its processing at the neuropsychological level. The concept of guilt within the facial expression of emotion, although today the subject of heated debate within the scientific community, is, in the light of criminal law, one of the main instruments used to determine the reprehensibility of agents and their actions. Although guilt is considered by criminal law as something intrinsic to the agent and their actions, whether intentional or merely negligent, establishing it allows the criminal law to uphold and apply sanctions, deterring identical behaviours and thus maintaining peace, social order, and respect for state institutions and their representative agents.
Thus, combining the ultimate goal of criminal law with the contribution of facial expression analysis in a forensic context, a case study was carried out using a comparative qualitative methodology. The main objective was, in seeking answers to the hypotheses, to develop matrices for analysing and measuring guilt, given the different types and levels of influence it exerts on individuals' processes of adaptation to society and circumstances. The results obtained indicate and support the evidence of a specific configuration of AUs (action units) in the Upper Face associated with the expression of guilt on the face, regardless of the circumstances (denial or admission) that underlie it. Therefore, the present study may represent the beginning of a necessary collaboration between the analysis of the facial expression of emotion and the application of the law in all its aspects and institutions, as it reinforces the principle of guilt and, consequently, its legal-criminal and ethical dimensions.
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Orvoen, Hadrien. "Expressions faciales émotionnelles et Prise de décisions coopératives." Thesis, Paris, EHESS, 2017. http://www.theses.fr/2017EHES0032/document.

Full text of the source
Abstract:
For a few decades, rational choice theories failed to properly account for cooperative behaviors. This was illustrated by social dilemmas, games where a self-interested individual is tempted to exploit others' cooperative behavior, harming them for personal profit. I first detail how cooperation may arise as a reasonable, if not rational, behavior, provided that we consider social interactions in a more realistic context than rational choice theories initially did. From anthropology to neurobiology, cooperation is understood as an efficient adaptation to this natural environment rather than a quirky, self-defeating behavior. Because pertinent information is often lacking or overwhelming, too complex or ambiguous to deal with, it is essential to communicate, to share, and to trust others. Emotions, and their expression, are a cornerstone of humans' natural and effortless navigation of their social environment. Smiles, for instance, are universally known as a signal of satisfaction, approbation and cooperation. Like other emotional expressions, they are automatically and preferentially processed. They elicit trust and cooperative behaviors in observers, and are ubiquitous in successful collaborative interactions. Beyond that, however, little is known about how others' expressions are integrated into decision making. That was the focus of the experimental study reported in this manuscript. More specifically, I investigated how decisions in a trust-based social dilemma are influenced by smiles displayed along with either a cooperative or a defective behavior ("congruently" and "incongruently", respectively). I carried out two experiments where participants played an investment game with different computerized virtual partners playing the role of trustees. Virtual trustees, personalised with a facial avatar, could either take and keep the participant's investment, or reciprocate it with interest.
Moreover, they also displayed facial reactions that were either congruent or incongruent with their computerized "decision" to reciprocate or not. Even though the two experiments presented some methodological differences, they were coherent in that both showed that incongruent emotional expressions impaired participants' memory of a virtual trustee's behavior. This was observed both in participants' in-game investments and in their post-experimental explicit reports. While many improvements to my experimental approach remain to be made, I believe it already complements the existing literature with original results. Many interesting perspectives remain open, calling for a deeper investigation of face-to-face decision making. I think this constitutes a theoretical and practical necessity, for which researchers will need to unify the wide knowledge of the major cognitive functions gathered over the last decades.
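The investment game at the core of both experiments can be sketched in a few lines. This is a hypothetical simplification: the multiplier, the interest rate and the smile/frown pairing are illustrative assumptions, not the exact parameters used in the thesis.

```python
def play_round(investment, reciprocates, congruent=True):
    """One round of a trust-based investment game.

    The participant's investment is tripled and handed to the trustee,
    who either reciprocates it with interest or keeps the whole pot.
    The trustee's displayed emotion is congruent (smile with reciprocation,
    frown with defection) or incongruent (the reverse pairing).
    """
    pot = investment * 3                 # invested amount is tripled
    if reciprocates:
        back = investment * 1.5          # returned with interest
    else:
        back = 0.0                       # trustee keeps the whole pot
    trustee_keeps = pot - back
    shows_smile = reciprocates if congruent else not reciprocates
    return back, trustee_keeps, "smile" if shows_smile else "frown"
```

With `congruent=False` the trustee smiles while defecting (or frowns while reciprocating), which is the incongruent condition that impaired participants' memory of the trustee's behavior.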
Styles: APA, Harvard, Vancouver, ISO, etc.
35

Domingues, Daniel Chinen. "Reconhecimento automático de expressões faciais por dispositivos móveis." reponame:Repositório Institucional da UFABC, 2014.

Find the full text of the source
Abstract:
Advisor: Prof. Dr. Guiou Kobayashi
Dissertation (Master's) - Universidade Federal do ABC, Graduate Program in Information Engineering, 2014.
Modern computing increasingly demands advanced forms of interaction with computers. The interface between humans and their mobile devices lacks more advanced methods, and automatic facial expression recognition would be one way to reach higher levels on this evolutionary scale. The way humans recognize emotions, and what facial expressions represent in face-to-face communication, has been the reference for the development of these computer systems; three major challenges can thus be listed for implementing an expression-analysis algorithm: locating the face in the image, extracting the relevant facial features, and classifying emotional states. The best method for solving each of these strongly related sub-challenges determines the feasibility, efficiency and relevance of a new expression-analysis system embedded in portable devices. To evaluate the feasibility of implementing automatic recognition of facial expressions in images, we implemented a mobile system model on the iOS platform, integrated with an open-source library widely used in the computer vision community: OpenCV. The Local Binary Pattern algorithm, implemented by OpenCV, was chosen as the face-tracking logic; AdaBoost and Eigenface, both also supported by the library, were adopted for feature extraction and emotion classification, respectively. The Eigenface classification module was trained in a more powerful environment external to the mobile platform; subsequently, only the training file was exported and consumed by the model application. The experiment showed that Local Binary Patterns are very robust to lighting variations and very efficient for tracking the face; AdaBoost and Eigenface achieved approximately 65% accuracy in emotion classification when only peak-emotion images were used to train the module, a condition necessary to keep the training file at a size compatible with the storage available on devices of this category.
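The Eigenface classification step described above reduces to principal component analysis followed by nearest-neighbour matching in the projected space. The sketch below illustrates the idea with synthetic 8x8 "faces" standing in for real training images; OpenCV's EigenFaceRecognizer (in the contrib face module) implements the same pipeline.

```python
import numpy as np

def train_eigenfaces(X, k):
    """X: (n_samples, n_pixels) training images. Returns mean face and top-k basis."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                          # k principal components (eigenfaces)

def project(x, mean, basis):
    """Project one image into eigenface space."""
    return basis @ (x - mean)

def predict(x, mean, basis, proj_train, labels):
    """Nearest neighbour in eigenface space."""
    d = np.linalg.norm(proj_train - project(x, mean, basis), axis=1)
    return labels[int(np.argmin(d))]

# Synthetic data: two well-separated "expression" classes of 8x8 images.
rng = np.random.default_rng(0)
happy = rng.normal(0.8, 0.05, (10, 64))
sad = rng.normal(0.2, 0.05, (10, 64))
X = np.vstack([happy, sad])
y = np.array([0] * 10 + [1] * 10)
mean, basis = train_eigenfaces(X, k=5)
proj = (X - mean) @ basis.T                      # projected training set
```

The offline/online split in the thesis maps directly onto this sketch: `train_eigenfaces` runs offline, while `project` and `predict` are cheap enough for the device.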
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Li, Huibin. "Towards three-dimensional face recognition in the real." Phd thesis, Ecole Centrale de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00998798.

Full text of the source
Abstract:
Due to its natural, non-intrusive and easily collectible nature and its widespread applicability, machine-based face recognition has received significant attention from the biometrics community over the past three decades. Compared with traditional appearance-based (2D) face recognition, shape-based (3D) face recognition is more stable under illumination variations, small head pose changes, and varying facial cosmetics. However, 3D face scans captured in unconstrained conditions present various difficulties, such as non-rigid deformations caused by varying expressions, missing data due to self-occlusions and external occlusions, as well as low-quality data resulting from imperfections in the scanning technology. In order to deal with those difficulties and to be useful in real-world applications, this thesis proposes two 3D face recognition approaches: one focuses on handling various expression changes, while the other can recognize people in the presence of large facial expressions, occlusions and large pose variations. In addition, we provide a provable and practical surface-meshing algorithm for data-quality improvement. To deal with the expression issue, we assume that different local facial regions (e.g. nose, eyes) have different intra-expression/inter-expression shape variability, and thus different importance. Based on this assumption, we design a learning strategy to quantify the importance of local facial regions in terms of their discriminating power. For facial description, we propose a novel shape descriptor that encodes the micro-structure of multi-channel facial normal information at multiple scales, namely Multi-Scale and Multi-Component Local Normal Patterns (MSMC-LNP). It comprehensively describes the local shape changes of 3D facial surfaces by a set of LNP histograms including both global and local cues.
For face matching, a Weighted Sparse Representation-based Classifier (W-SRC) is formulated based on the learned importance weights and the LNP histograms. The proposed approach is evaluated on four databases: FRGC v2.0, Bosphorus, BU-3DFE and 3D-TEC, which include face scans in the presence of diverse expressions and action units, several prototypical expressions at different intensities, and facial expression variations combined with strong facial similarities (i.e. identical twins). Extensive experimental results show that the proposed 3D face recognition approach, using discriminative facial descriptors, is able to deal with expression variations, performs accurately over all databases, and thereby has good generalization ability. To deal with the expression and missing-data issues in a uniform framework, we propose a mesh-based, registration-free 3D face recognition approach based on a novel local facial shape descriptor and a multi-task sparse representation-based face matching process. [...]
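The LNP descriptor generalizes the classic Local Binary Pattern operator from intensity images to the channels of facial normal maps. As a point of reference, the minimal single-channel, single-scale LBP histogram, the building block that the multi-scale, multi-component descriptor extends, can be computed as follows (plain LBP shown; the thesis' exact encoding may differ):

```python
import numpy as np

def lbp_histogram(channel):
    """8-neighbour local binary pattern codes of the interior pixels of one
    channel, pooled into a normalised 256-bin histogram."""
    c = channel[1:-1, 1:-1]                       # centre pixels (interior)
    # 8 neighbours in a fixed clockwise order, one bit each
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = channel.shape
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = channel[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit  # set bit if neighbour >= centre
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()                       # normalised histogram
```

In the MSMC-LNP setting this operator would be applied per normal component (x, y, z channels) and per scale, and the resulting histograms concatenated into the facial descriptor.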
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Bozed, Kenz Amhmed. "Detection of facial expressions based on time dependent morphological features." Thesis, University of Bedfordshire, 2011. http://hdl.handle.net/10547/145618.

Full text of the source
Abstract:
Facial expression detection by machine is a valuable topic for human-computer interaction and has been a study issue in the behavioural sciences for some time. Recently, significant progress has been achieved in machine analysis of facial expressions, but there is still interest in studying the area in order to extend its applications. This work investigates the theoretical concepts behind facial expressions and leads to the proposal of new algorithms for face detection and facial feature localisation, and to the design and construction of a prototype system to test these algorithms. The overall goal and motivation of this work is to introduce vision-based techniques able to detect and recognise facial expressions. In this context, a facial expression prototype system is developed that accomplishes facial segmentation (i.e. face detection and facial feature localisation), facial feature extraction and feature classification. To detect a face, a new simplified algorithm is developed to detect and locate its presence against the background by exploiting skin colour properties, which are used to distinguish between face and non-face regions. This allows facial parts to be extracted from a face using elliptical and box regions whose geometrical relationships are then utilised to determine the positions of the eyes and mouth through morphological operations. The means and standard deviations of the segmented facial parts are then computed and used as features for the face. For images belonging to the same class, these features are fed to the K-means algorithm to compute the centroid point of each expression class. This is repeated for images in the same expression class. The Euclidean distance is computed between each feature point and its cluster centre in the same expression class. This determines how close a facial expression is to a particular class and can be used as an observation vector for a Hidden Markov Model (HMM) classifier.
Thus, an HMM is built to evaluate an expression of a subject as belonging to one of the six expression classes, which are Joy, Anger, Surprise, Sadness, Fear and Disgust, using these distance features. To evaluate the proposed classifier, experiments are conducted on new subjects using 100 video clips that contain a mixture of expressions. An average successful detection rate of 95.6% is measured over a total of 9142 frames contained in the video clips. The proposed prototype system processes facial feature parts and yields improved facial expression detection results compared with using whole facial features, as proposed by previous authors. This work has resulted in four contributions: the Ellipse Box Face Detection Algorithm (EBFDA), the Facial Features Distance Algorithm (FFDA), the facial feature extraction process, and facial feature classification. These were tested and verified using the prototype system.
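The distance-feature construction described above, class centroids followed by Euclidean distances that serve as HMM observation vectors, can be sketched as follows. With the class labels known, running K-means with one cluster per class reduces to taking the class mean, which is what this simplified sketch does:

```python
import numpy as np

def class_centroids(features, labels, n_classes):
    """Centroid of the feature points of each expression class
    (K-means with one cluster per labelled class reduces to the class mean)."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(n_classes)])

def distance_vector(x, centroids):
    """Euclidean distance from one frame's features to every class centroid;
    this is the kind of observation vector fed to the HMM classifier."""
    return np.linalg.norm(centroids - x, axis=1)

# Toy example: 2-D features for two expression classes.
feats = np.array([[0.0, 0.0], [0.0, 2.0], [4.0, 0.0], [4.0, 2.0]])
labels = np.array([0, 0, 1, 1])
cents = class_centroids(feats, labels, 2)
```

A frame's distance vector is small in the component of the class it resembles, which is what lets the HMM score how close an expression sequence is to each of the six classes.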
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Aly, Sherin Fathy Mohammed Gaber. "Techniques for Facial Expression Recognition Using the Kinect." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/89220.

Full text of the source
Abstract:
Facial expressions convey non-verbal cues. Humans use facial expressions to show emotions, which play an important role in interpersonal relations and can be of use in many applications involving psychology, human-computer interaction, health care, e-commerce, and many others. Although humans recognize facial expressions in a scene with little or no effort, reliable expression recognition by machine is still a challenging problem. Automatic facial expression recognition (FER) involves several related problems: face detection, face representation, extraction of the facial expression information, and classification of expressions, particularly under conditions of input-data variability such as illumination and pose variation. A system that performs these operations accurately and in real time would be a major step forward in achieving human-like interaction between man and machine. This document introduces novel approaches for the automatic recognition of the basic facial expressions, namely happiness, surprise, sadness, fear, disgust, anger, and neutral, using a relatively low-resolution, noisy sensor such as the Microsoft Kinect. Such sensors are capable of fast data collection, but the low-resolution noisy data present unique challenges when identifying subtle changes in appearance. This dissertation presents the work that has been done to address these challenges and the corresponding results. The lack of Kinect-based FER datasets motivated this work to build two Kinect-based RGBD+time FER datasets that include facial expressions of adults and children. To the best of our knowledge, they are the first FER-oriented datasets that include children. The availability of children's data is important for research focused on children (e.g., psychology studies on facial expressions of children with autism), and also allows researchers to conduct deeper studies on automatic FER by analyzing possible differences between data coming from adults and children.
The key contributions of this dissertation are both empirical and theoretical. The empirical contributions include the design and successful testing of three FER systems that outperform existing FER systems either when tested on public datasets or in real time. One proposed approach automatically tunes itself to the given 3D data by identifying the best distance metric that maximizes the system accuracy. Compared to traditional approaches where a fixed distance metric is employed for all classes, the presented adaptive approach achieved better recognition accuracy, especially in non-frontal poses. Another proposed system combines high-dimensional feature vectors extracted from 2D and 3D modalities via a novel fusion technique. This system achieved 80% accuracy, which outperforms the state of the art on the public VT-KFER dataset by more than 13%. The third proposed system was designed and successfully tested to recognize the six basic expressions plus neutral in real time using only 3D data captured by the Kinect. When tested on a public FER dataset, it achieved 67% (7% higher than other 3D-based FER systems) in multi-class mode and 89% (i.e., 9% higher than the state of the art) in binary mode. When the system was tested in real time on 20 children, it achieved over 73% on a reduced set of expressions. To the best of our knowledge, this is the first known system that has been tested on a relatively large dataset of children in real time. The theoretical contributions include 1) the development of a novel feature selection approach that ranks the features based on their class separability, and 2) the development of the Dual Kernel Discriminant Analysis (DKDA) feature fusion algorithm. This latter approach addresses the problem of fusing high-dimensional noisy data that are highly nonlinearly distributed.
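The feature-ranking idea, scoring features by class separability, is commonly realized as a Fisher score: between-class variance divided by within-class variance, computed per feature. The sketch below shows this standard criterion; the dissertation's exact formulation may differ:

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature class-separability scores: between-class variance over
    within-class variance (a standard Fisher-score criterion)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)   # epsilon guards against zero variance

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = np.array([[0.0, 5.0], [0.1, 1.0], [1.0, 4.0], [1.1, 0.0]])
y = np.array([0, 0, 1, 1])
scores = fisher_scores(X, y)
```

Ranking features by `scores` and keeping the top-scoring ones is one way to realize selection by class separability before training the classifier.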
Ph.D.
APA, Harvard, Vancouver, ISO and other styles
39

Han, Xia. "Towards the Development of an Efficient Integrated 3D Face Recognition System. Enhanced Face Recognition Based on Techniques Relating to Curvature Analysis, Gender Classification and Facial Expressions." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5347.

Full text of the source
Abstract:
The purpose of this research was to enhance methods for the development of an efficient three-dimensional face recognition system. More specifically, one of our aims was to investigate how the curvature of diagonal profiles extracted from 3D facial geometry models can help neutral face recognition processes. Another aim was to use a gender classifier applied to 3D facial geometry in order to reduce the search space of the database on which facial recognition is performed. 3D facial geometry with facial expressions poses considerable challenges for face recognition, as identified by the communities involved in face recognition research. Thus, one aim of this study was to investigate the effects of the curvature-based method on face recognition under expression variations. Another aim was to develop techniques that can discriminate both expression-sensitive and expression-insensitive regions for face recognition based on non-neutral face geometry models. In the case of neutral face recognition, we developed a gender classification method using support vector machines based on measurements of the area and volume of selected regions of the face. This method reduces the initial search range of the database for a given image and hence reduces computational time. Subsequently, in the characterisation of the face images, a minimum feature set of diagonal profiles, which we call T-shape profiles, containing distinctive information was determined and extracted to characterise face models. We then used a method based on computing the curvatures of selected facial regions to describe this feature set. In addition to neutral face recognition, to solve the problem arising from data with facial expressions, the curvature-based T-shape profiles were initially employed and investigated for this purpose.
Then the feature sets of the expression-invariant and expression-variant regions were determined and described by geodesic and Euclidean distances, respectively. Using regression models, the correlations between expression and neutral feature sets were identified. This enabled us to discriminate expression-variant features, and there was a gain in face recognition rate. The results of the study indicate that our proposed curvature-based recognition, 3D gender classification of facial geometry, and analysis of facial expressions were capable of supporting face recognition using a minimum set of features, improving efficiency and computation.
40

Maas, Casey. "Decoding Faces: The Contribution of Self-Expressiveness Level and Mimicry Processes to Emotional Understanding." Scholarship @ Claremont, 2014. http://scholarship.claremont.edu/scripps_theses/406.

Full text of the source
Abstract:
Facial expressions provide valuable information in making judgments about internal emotional states. Evaluation of facial expressions can occur through mimicry processes via the mirror neuron system (MNS) pathway, where a decoder mimics a target’s facial expression and proprioceptive perception prompts emotion recognition. Female participants rated emotional facial expressions when mimicry was inhibited by immobilization of facial muscles and when mimicry was uncontrolled, and were evaluated for self-expressiveness level. A mixed ANOVA was conducted to determine how self-expressiveness level and manipulation of facial muscles impacted recognition accuracy for facial expressions. Main effects of self-expressiveness level and facial muscle manipulation were not found to be significant (p > .05), nor did these variables appear to interact (p > .05). The results of this study suggest that an individual’s self-expressiveness level and use of mimicry processes may not play a central role in emotion recognition.
41

Ding, Huaxiong. "Combining 2D facial texture and 3D face morphology for estimating people's soft biometrics and recognizing facial expressions." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEC061/document.

Full text of the source
Abstract:
Since soft biometric traits can provide supplementary evidence to help precisely determine a person's identity, face-based soft biometrics identification has received increasing attention in recent years. Among face-based soft biometrics, gender and ethnicity are both key demographic attributes of human beings, and they play a fundamental and important role in automatic machine-based face analysis. Meanwhile, facial expression recognition is another challenging problem in face analysis because of the diversity and hybridity of human expressions across subjects in different cultures, genders, and contexts. This Ph.D. thesis is dedicated to combining 2D facial texture and 3D face morphology to estimate people's soft biometrics (gender, ethnicity, etc.) and to recognize facial expressions. For gender and ethnicity recognition, we present an effective and efficient approach that combines boosted local texture and shape features extracted from 3D face models, in contrast to existing methods that depend only on either 2D texture or 3D shape of faces. To comprehensively represent the differences between gender or ethnic groups, we propose a novel local descriptor, namely local circular patterns (LCP). LCP improves the widely used local binary patterns (LBP) and its variants by replacing the binary quantization with a clustering-based one, resulting in higher discriminative power as well as better robustness to noise. Meanwhile, a subsequent AdaBoost-based feature selection finds the most discriminative gender- and ethnicity-related features and assigns them different weights to highlight their importance in classification, which not only further raises performance but also reduces time and memory cost. Experimental results achieved on the FRGC v2.0 and BU-3DFE datasets clearly demonstrate the advantages of the proposed method.
For facial expression recognition, we present a fully automatic multi-modal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm, namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second-Order Gradients (HSOG) local image descriptor, in conjunction with the widely used first-order-gradient-based SIFT descriptor, is employed to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using first-order and second-order surface differential-geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) recognition results of all 2D and 3D descriptors are fused at both feature level and score level to further improve accuracy. Comprehensive experimental results demonstrate that there are impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach with state-of-the-art methods; our multi-modal feature-based approach outperforms the others, achieving an average recognition accuracy of 86.32%. Moreover, good generalization ability is shown on the Bosphorus database.
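The LCP idea of replacing LBP's sign-based binary quantization with a clustering-based one can be illustrated schematically. This is not the authors' implementation: the k-means quantizer, the 8-neighbor layout, and the toy ramp image below are simplifying assumptions made only to contrast the two quantization schemes:

```python
import numpy as np
from sklearn.cluster import KMeans

def neighbor_differences(img):
    """Differences between each interior pixel and its 8 neighbors."""
    h, w = img.shape
    c = img[1:-1, 1:-1].astype(float)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    return np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] - c
                     for dy, dx in shifts], axis=-1)

def lbp_codes(diffs):
    """Classic LBP: binary-quantize each neighbor difference by its sign."""
    bits = (diffs >= 0).astype(int)
    return (bits * 2 ** np.arange(8)).sum(axis=-1)

def lcp_style_codes(diffs, k=3):
    """LCP-style variant: quantize the differences with k-means instead of
    a sign threshold, yielding base-k codes with finer gray-level detail
    and less sensitivity to small near-zero fluctuations."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    labels = km.fit_predict(diffs.reshape(-1, 1)).reshape(diffs.shape)
    return (labels * k ** np.arange(8)).sum(axis=-1)

img = np.arange(25).reshape(5, 5)          # toy intensity ramp
d = neighbor_differences(img)
hist_lbp = np.bincount(lbp_codes(d).ravel(), minlength=256)
codes_lcp = lcp_style_codes(d)
```

In a real descriptor, per-region histograms of such codes would be concatenated into the feature vector fed to the boosting stage.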
42

BRENNA, VIOLA. "Positive and negative facial emotional expressions: the effect on infants' and children's facial identity recognition." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/46845.

Full text of the source
Abstract:
The aim of the present study was to investigate the origin and development of the interdependence between identity recognition and facial emotional expression processing, suggested by recent models of face processing (Calder & Young, 2005) and supported by findings in adults (e.g., Baudouin, Gilibert, Sansone, & Tiberghien, 2000; Schweinberger & Soukup, 1998). In particular, the effect of facial emotional expressions on infants' and children's ability to recognize the identity of a face was explored. Studies on adults describe different roles for positive and negative emotional expressions in identity recognition (e.g., Lander & Metcalfe, 2007): positive expressions have a catalytic effect, increasing the rated familiarity of a face, whereas negative expressions reduce familiarity judgments, producing an interference effect. Using a familiarization paradigm and a delayed two-alternative forced-choice matching-to-sample task, respectively, 3-month-old infants (Experiments 1, 2, 3) and 4- and 5-year-old children (Experiments 4, 5) were tested. Results of Experiments 1 and 2 suggested an adult-like pattern at 3 months of age. Infants familiarized with a smiling face recognized the new identity in the test phase, but when they were shown a woman's face conveying a negative expression, either anger or fear, they were not able to discriminate between the new and the familiar face stimulus during the test. Moreover, evidence from Experiment 3 demonstrated that a single feature of a happy face (i.e., a smiling mouth or "happy eyes") is sufficient to drive the observed facilitatory effect on identity recognition. Conversely, the outcomes of the experiments with pre-school-aged children suggested that both positive and negative emotions have a distracting effect on children's identity recognition. A decrement in children's performance was observed when faces displayed an emotional expression (i.e., happiness, anger, or fear) rather than a neutral expression (Experiment 4).
This detrimental effect of a happy expression on face identity recognition emerged independently of the processing stage (i.e., encoding, recognition, or encoding and recognition) at which the emotional information was provided (Experiment 5). Overall, these findings suggest that, both in infancy and in childhood, facial emotional processing interacts with identity recognition. Moreover, the observed outcomes seem to describe a U-shaped developmental trend in the relation between identity recognition and facial emotional expression processing. The results are discussed with reference to Karmiloff-Smith's Representational Redescription model (1992).
43

Angeli, Valentina. "Infants' early representation of faces: the role of dynamic cues." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3427123.

Full text of the source
Abstract:
The general aim of the current dissertation is to investigate whether the semi-rigid movement of a face affects the encoding and processing of socially relevant information retrievable from faces, such as identity and emotions, in the first year of life. In particular, the research project aims, on one hand, to test whether facial motion promotes the construction of a face representation, which in turn might facilitate identity recognition in newborns and the categorization of facial expressions in young infants; on the other hand, the current work investigates whether infants are able to process facial motion information alone, when other pictorial cues, such as forms and colors, are unavailable. In the first study, I investigated how the movement of a happy facial expression could affect few-day-old infants' identity recognition. Previous studies have shown that, when newborns have to recognize a face that has changed in some characteristics (such as profile view), identity recognition is inhibited (e.g., Turati et al., 2008). It has been demonstrated that both rigid and non-rigid facial motion can promote face recognition at birth (Bulf & Turati, 2010; Leo et al., in prep.). Four experiments were carried out to test whether the beneficial effect of facial motion is due to a facial representation that is more robust and less tied to the image stored in newborns' memory. Results demonstrated that the benefit disappears when the perceptual distance between the memorized face and the face newborns have to recognize increases (Experiment 1). Accordingly, when the perceptual distance is minimized, newborns are able to recognize the same identity despite subtle changes even when habituated to a static face (Experiment 2). The third experiment showed that a biologically impossible facial motion hinders newborns' face recognition (Experiment 3).
Finally, when the quantity of pictorial information is equated, static presentation does not lead to successful recognition (Experiment 4). Overall, it seems that non-rigid facial motion can promote a face representation that is less image-constrained, but only when the degree of visual discrepancy between the habituated and test face images is minimized. The second study investigated whether emotions expressed dynamically might facilitate the ability to categorize facial expressions at 3 months of age. According to the infant literature on the perception of static emotional expressions, categorization starts to appear only between 5 and 7 months of age (e.g., deHaan & Nelson, 1998). Findings from naturalistic studies of mother-infant interactions (e.g., Nadel et al., 2005), as well as from intermodal preference tasks (e.g., Kahana-Kalman & Walker-Andrews, 2001), suggest that infants' ability to process facial expressions may have been underestimated. In a within-subject design, 3-month-old infants were familiarized with four different identities posing four different intensities of a happy and a fearful expression, presented sequentially in a loop to convey the dynamic information. Results showed that 3-month-old infants are able to categorize the emotion of happiness, whereas they do not show this ability when familiarized with the emotion of fear. This difference is likely due to the different degrees of familiarity of happy and fearful expressions (Malatesta & Haviland, 1982). Thus, the presentation of dynamic emotional expressions enhances infants' ability to categorize facial expressions. The purpose of the third study was to analyze infants' ability to process the dynamic information embedded in a face when other pictorial cues are unavailable, as demonstrated in adults (e.g., Bassili, 1978). To this end, point-light displays (Johansson, 1973) of happy and fearful expressions were created.
In Experiment 1, using a habituation procedure, the ability to discriminate between happy and fearful expressions on the basis of motion cues alone was investigated in 3-, 6-, and 9-month-old infants. Point-light displays of a face were presented both upright and inverted, to test whether infants were able to organize the motion pattern according to a face schema. Results showed an inversion effect in all three age groups, suggesting that infants process the motion patterns as facial motions. Importantly, when habituated to the happy expression, all three age groups showed successful discrimination. In contrast, when habituated to the fearful point-light display, only 3-month-olds showed successful discrimination, whereas 6- and 9-month-olds seem to lose this capability. Experiment 2 ruled out the possibility that a spontaneous preference for the fearful face might have affected infants' looking behavior. These results indicate that the ability to process facial expressions by relying on motion cues follows a developmental trajectory that starts with early processing of lower-level facial attributes, in which motion patterns are processed in a face-related way, and then evolves into the capacity to process higher-level facial attributes, in which face movements are processed as facial expressions. Overall, the results of the present dissertation suggest that, already within the first months of life, semi-rigid facial motion may promote the processing of the socially relevant information conveyed by faces by means of an enhanced facial representation. Moreover, the current data reveal that infants become able to process facial expressions from facial motion cues alone between 6 and 9 months of age.
44

Marshall, Amy D. "Violent husbands' recognition of emotional expressions among the faces of strangers and their wives." [Bloomington, Ind.] : Indiana University, 2004. http://wwwlib.umi.com/dissertations/fullcit/3162247.

Full text of the source
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Psychology, 2004.
Title from PDF t.p. (viewed Dec. 1, 2008). Source: Dissertation Abstracts International, Volume: 66-01, Section: B, page: 0564. Chair: Amy Holtzworth-Munroe.
45

Beer, Jenay Michelle. "Recognizing facial expression of virtual agents, synthetic faces, and human faces: the effects of age and character type on emotion recognition." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33984.

Full text of the source
Abstract:
An agent's facial expression may communicate emotive state to users both young and old. The ability to recognize emotions has been shown to differ with age, with older adults more commonly misidentifying the facial emotions of anger, fear, and sadness. This research study examined whether emotion recognition of facial expressions differed between different types of on-screen agents and between age groups. Three on-screen characters were compared: a human, a synthetic human, and a virtual agent. In this study, 42 younger (age 18-28) and 42 older (age 65-85) adults completed an emotion recognition task with static pictures of the characters demonstrating four basic emotions (anger, fear, happiness, and sadness) and neutral. The human face resulted in the highest proportion match, followed by the synthetic human, then the virtual agent with the lowest proportion match. Both the human and synthetic human faces showed age-related differences for the emotions anger, fear, sadness, and neutral, with younger adults achieving a higher proportion match. The virtual agent showed age-related differences for the emotions anger, fear, happiness, and neutral, with younger adults achieving a higher proportion match. The data analysis and interpretation of the present study differed from previous work in utilizing two unique approaches to understanding emotion recognition. First, the misattributions participants made when identifying emotions were investigated. Second, a similarity index of the feature placement between any two virtual agent emotions was calculated, suggesting that emotions were commonly misattributed as other emotions similar in appearance. Overall, these results suggest that age-related differences extend beyond human faces to other types of on-screen characters, and that differences between older and younger adults in emotion recognition may be further explained by perceptual discrimination between two emotions of similar feature appearance.
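The similarity index over feature placement is only summarized in the abstract; a simplified stand-in, assuming hypothetical 2D landmark coordinates per emotion (the landmark sets and the distance-to-similarity mapping below are invented for illustration), might look like:

```python
import numpy as np

def similarity_index(landmarks_a, landmarks_b):
    """Similarity of two expressions' feature placements: mean Euclidean
    distance between corresponding landmarks, mapped to (0, 1], where 1
    means identical placement. A simplified stand-in for the thesis metric."""
    d = np.linalg.norm(landmarks_a - landmarks_b, axis=1).mean()
    return 1.0 / (1.0 + d)

# Hypothetical 2D landmark sets (e.g., brow corners, mouth center) per emotion.
neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, -1.0]])
anger = np.array([[0.0, -0.1], [1.0, -0.1], [0.5, -1.0]])    # brows lowered
happiness = np.array([[0.0, 0.1], [1.0, 0.1], [0.5, -1.4]])  # mouth displaced

sim_anger = similarity_index(neutral, anger)
sim_happy = similarity_index(neutral, happiness)
```

Pairs of emotions with a high index would be the ones predicted to be confused with each other most often.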
46

Neto, Wolme Cardoso Alves. "Efeitos do escitalopram sobre a identificação de expressões faciais." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/17/17148/tde-25032009-210215/.

Full text of the source
Abstract:
ALVES NETO, W.C. Efeitos do escitalopram sobre a identificação de expressões faciais. Ribeirão Preto, SP: Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo; 2008. Os inibidores seletivos da recaptura de serotonina (ISRS) têm sido utilizados com sucesso para o tratamento de diversas patologias psiquiátricas. Sua eficácia clínica é atribuída a uma potencialização da neurotransmissão serotoninérgica, mas pouco ainda é conhecido sobre os mecanismos neuropsicológicos envolvidos nesse processo. Várias evidências sugerem que a serotonina estaria envolvida, entre outras funções, na regulação do comportamento social, nos processos de aprendizagem e memória e no processamento de emoções. O reconhecimento de expressões faciais de emoções básicas representa um valioso paradigma para o estudo do processamento de emoções, pois são estímulos condensados, uniformes e de grande relevância para o funcionamento social. O objetivo do estudo foi avaliar os efeitos da administração aguda e por via oral do escitalopram, um ISRS, no reconhecimento de expressões faciais de emoções básicas. Uma dose oral de 10 mg de escitalopram foi administrada a doze voluntários saudáveis do sexo masculino, em modelo duplo-cego, controlado por placebo, em delineamento cruzado, ordem randômica, 3 horas antes de realizarem a tarefa de reconhecimento de expressões faciais, com seis emoções básicas raiva, medo, tristeza, asco, alegria e surpresa mais a expressão neutra. As faces foram digitalmente modificadas de forma a criar um gradiente de intensidade entre 10 e 100% de cada emoção, com incrementos sucessivos de 10%. Foram registrados os estados subjetivos de humor e ansiedade ao longo da tarefa e o desempenho foi avaliado pela medida de acurácia (número de acertos sobre o total de estímulos apresentados). De forma geral, o escitalopram interferiu no reconhecimento de todas as expressões faciais, à exceção de medo. 
Especificamente, facilitou a identificação das faces de tristeza e prejudicou o reconhecimento de alegria. Quando considerado o gênero das faces, esse efeito foi observado para as faces masculinas, enquanto que para as faces femininas o escitalopram não interferiu com o reconhecimento de tristeza e aumentou o de alegria. Além disso, aumentou o reconhecimento das faces de raiva e asco quando administrado na segunda sessão e prejudicou a identificação das faces de surpresa nas intensidades intermediárias de gradação. Também apresentou um efeito positivo global sobre o desempenho na tarefa quando administrado na segunda sessão. Os resultados sugerem uma modulação serotoninérgica sobre o reconhecimento de expressões faciais emocionais e sobre a evocação de material previamente aprendido.
ALVES NETO, W.C. Effects of escitalopram on the processing of emotional faces. Ribeirão Preto, SP: Faculty of Medicine of Ribeirão Preto, University of São Paulo; 2008. Selective serotonin reuptake inhibitors (SSRIs) have been used successfully to treat various psychiatric disorders. Their clinical efficacy is attributed to an enhancement of serotonergic neurotransmission, but little is known about the neuropsychological mechanisms underlying this process. Several lines of evidence suggest that serotonin is involved in the regulation of social behavior, in learning and memory processes, and in emotional processing. The recognition of basic emotions in facial expressions is a useful paradigm for studying emotional processing, since faces are condensed, uniform stimuli of great relevance to social functioning. The aim of the study was to assess the effects of the SSRI escitalopram on the recognition of facial expressions of basic emotions. Twelve healthy males each completed two experimental sessions (crossover design) in a randomized, balanced, double-blind, placebo-controlled order. An oral dose of 10 mg of escitalopram was administered 3 hours before they performed an emotion recognition task with six basic emotions (anger, fear, sadness, disgust, happiness, and surprise) plus a neutral expression. The faces were digitally morphed between 10% and 100% of each emotional standard, creating a gradient in 10% steps. Subjective mood and anxiety states were recorded throughout the task, and performance was measured by accuracy (number of correct answers divided by the total number of stimuli presented). In general, escitalopram interfered with the recognition of all the emotions tested except fear. Specifically, it facilitated the recognition of sadness while impairing the identification of happiness. When the gender of the faces was analyzed, this effect was seen in male but not in female faces, for which escitalopram instead improved the recognition of happiness.
In addition, escitalopram improved the recognition of angry and disgusted faces when administered in the second session and impaired the identification of surprised faces at intermediate intensity levels. It also showed a global positive effect on task performance when administered in the second session. The results indicate a serotonergic modulation of the recognition of emotional faces and of the recall of previously learned material.
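The accuracy measure defined in the abstract is simply correct responses over total stimuli, scored per trial of the morphed-faces task. A minimal sketch (the trial data below are invented for illustration, not the study's):

```python
# Hypothetical trial log for the morphed-faces task: each tuple is
# (true emotion, morph intensity in %, participant's response).
trials = [
    ("sadness", 10, "neutral"),
    ("sadness", 50, "sadness"),
    ("sadness", 100, "sadness"),
    ("happiness", 50, "happiness"),
]

def accuracy(trials):
    """Accuracy as defined in the abstract: correct answers / total stimuli."""
    correct = sum(1 for true_emotion, _, response in trials
                  if response == true_emotion)
    return correct / len(trials)

print(accuracy(trials))  # 3 of the 4 responses match the true emotion -> 0.75
```

The same tally, restricted to trials of one emotion or one intensity band, gives the per-emotion and per-gradient comparisons reported in the results.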
APA, Harvard, Vancouver, ISO, and other styles
47

Julin, Fredrik. "Vision based facial emotion detection using deep convolutional neural networks." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-42622.

Full text of the source
Abstract:
Emotion detection, also known as facial expression recognition, is the task of mapping an emotion to some form of input data taken from a human. It is a powerful tool for extracting valuable information from individuals, with uses ranging from medical conditions such as depression to customer feedback. Solving facial expression recognition requires smaller subtasks that together form the complete system; breaking down the larger task, these subtasks can be thought of as a pipeline that implements the steps needed to classify some input and produce an emotion as output. With the recent rise of computer vision, images are often used as input for such systems and have shown great promise, since the human face conveys the subject's emotional state and contains more information than other inputs such as text or audio. Many current state-of-the-art systems combine computer vision with another rising field, AI, or more specifically deep learning. These methods often use a special form of neural network, the convolutional neural network, which specializes in extracting information from images, with classification performed by the SoftMax function as the last stage of the pipeline before the output. This thesis explores these methods of using convolutional neural networks to extract information from images and builds on them by evaluating a set of machine learning algorithms that replace the more commonly used SoftMax function as the classifier, in an attempt to increase accuracy while also optimizing the use of computational resources.
The work also compares two approaches to the face detection subtask of the pipeline. One, the Viola-Jones algorithm, is frequently used in the state of the art and is considered more viable for real-time applications; the other is a deep learning approach that performs detection with a state-of-the-art convolutional neural network, often speculated to be too computationally intensive to run in real time. A newly developed convolutional neural network inspired by the state of the art, combined with the SoftMax classifier, did not reach state-of-the-art accuracy. However, the machine learning classifiers show promise and outperform the SoftMax function in several cases when given a massively smaller number of training samples. Furthermore, the results of implementing and testing a pure deep learning approach, using deep learning algorithms for both the detection and classification stages of the pipeline, show that deep learning may outperform the classic Viola-Jones algorithm in terms of both detection rate and frames per second.
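The classifier-swap idea described above — take the features from a CNN's penultimate layer and feed them to a conventional classifier instead of the SoftMax function — can be sketched with plain NumPy. The feature vectors, labels, and the k-NN choice below are invented for illustration; the thesis's actual networks and classifiers are not reproduced here:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: the usual final layer of a CNN classifier.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical penultimate-layer features for four training faces
# (2-D for readability) and their emotion labels: 0 = happy, 1 = sad.
train_feats = np.array([[1.0, 0.1], [0.9, 0.0], [0.1, 1.0], [0.0, 0.9]])
train_labels = np.array([0, 0, 1, 1])

def knn_predict(x, k=3):
    # A drop-in replacement for the SoftMax stage: classify a feature
    # vector by majority vote among its k nearest training features.
    dists = np.linalg.norm(train_feats - x, axis=1)
    nearest = train_labels[np.argsort(dists)[:k]]
    return int(np.bincount(nearest).argmax())
```

With `k=3`, a query feature near the "happy" cluster, e.g. `[0.95, 0.05]`, is voted into class 0; the appeal of such classifiers in the small-sample setting is that they need no gradient training at all, only the stored features.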
APA, Harvard, Vancouver, ISO, and other styles
48

Silva, Jadiel Caparrós da [UNESP]. "Aplicação de sistemas imunológicos artificiais para biometria facial: Reconhecimento de identidade baseado nas características de padrões binários." Universidade Estadual Paulista (UNESP), 2015. http://hdl.handle.net/11449/127901.

Full text of the source
Abstract:
This work aims to perform identity recognition with a method based on Artificial Immune Systems, the Negative Selection Algorithm. To this end, suitable resources and alternatives for analyzing 3D facial expressions were explored, building on the Binary Pattern technique that has been applied successfully to the 2D problem. First, the 3D facial geometry was converted into two 2D representations, the Depth Map and the Azimuthal Projection Distance Image, which were combined with feature types such as Local Phase Quantisers, Gabor Filters, and Monogenic Filters to produce descriptors for facial expression analysis. Afterwards, the Negative Selection Algorithm is applied, comparing the images against previously created detectors; if there is affinity with an image, the image is classified. This classification is called matching. Finally, to validate and evaluate the performance of the method, tests were performed with images taken directly from the database and then with ten descriptors developed from the binary patterns. These tests had three goals: to evaluate which descriptors and which expressions are best for identity recognition, and to validate the performance of the new identity recognition solution based on Artificial Immune Systems. The results show efficiency, robustness, and precision in recognizing facial identity.
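The Negative Selection Algorithm at the heart of this method can be sketched in a few lines. The toy below uses real-valued vectors and made-up parameters (`radius`, `dim`, the clustered self set); the thesis's detectors instead operate on binary-pattern descriptors of faces:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_detectors(self_set, n_detectors=50, radius=0.15, dim=4):
    # Negative selection: randomly generate candidate detectors and keep
    # only those that do NOT match (lie within `radius` of) any "self"
    # sample, so the surviving detectors cover non-self space.
    detectors = []
    while len(detectors) < n_detectors:
        candidate = rng.random(dim)
        if all(np.linalg.norm(candidate - s) > radius for s in self_set):
            detectors.append(candidate)
    return detectors

def matches(sample, detectors, radius=0.15):
    # A "matching" occurs when any detector covers the sample.
    return any(np.linalg.norm(d - sample) <= radius for d in detectors)

# Toy self set: descriptors of one known identity, clustered near the centre.
self_set = [np.full(4, 0.5) + rng.normal(0, 0.02, 4) for _ in range(10)]
detectors = train_detectors(self_set)
```

By construction no detector can fire on a self sample, so `matches` flags only descriptors that fall outside the trained identity's region.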
APA, Harvard, Vancouver, ISO, and other styles
49

Silva, Jadiel Caparrós da. "Aplicação de sistemas imunológicos artificiais para biometria facial: Reconhecimento de identidade baseado nas características de padrões binários /." Ilha Solteira, 2015. http://hdl.handle.net/11449/127901.

Full text of the source
Abstract:
Advisor: Anna Diva Plasencia Lotufo
Co-advisor: Jorge Manuel M. C. Pereira Batista
Examining committee: Carlos Roberto Minussi
Examining committee: Ricardo Luiz Barros de Freitas
Examining committee: Díbio Leandro Borges
Examining committee: Gelson da Cruz Junior
Abstract: This work aims to perform identity recognition with a method based on Artificial Immune Systems, the Negative Selection Algorithm. To this end, suitable resources and alternatives for analyzing 3D facial expressions were explored, building on the Binary Pattern technique that has been applied successfully to the 2D problem. First, the 3D facial geometry was converted into two 2D representations, the Depth Map and the Azimuthal Projection Distance Image, which were combined with feature types such as Local Phase Quantisers, Gabor Filters, and Monogenic Filters to produce descriptors for facial expression analysis. Afterwards, the Negative Selection Algorithm is applied, comparing the images against previously created detectors; if there is affinity with an image, the image is classified. This classification is called matching. Finally, to validate and evaluate the performance of the method, tests were performed with images taken directly from the database and then with ten descriptors developed from the binary patterns. These tests had three goals: to evaluate which descriptors and which expressions are best for identity recognition, and to validate the performance of the new identity recognition solution based on Artificial Immune Systems. The results show efficiency, robustness, and precision in recognizing facial identity.
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
50

Löfdahl, Tomas, and Mattias Wretman. "Långsammare igenkänning av emotioner i ansiktsuttryck hos individer med utmattningssyndrom : En pilotstudie." Thesis, Mittuniversitetet, Institutionen för samhällsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17590.

Full text of the source
Abstract:
The aim of this pilot study was to generate hypotheses about whether and how exhaustion disorder (utmattningssyndrom) affects the ability to recognize emotions in facial expressions. A group of patients with exhaustion disorder was compared with a matched healthy control group (N=14). The groups were tested with a computer-based task consisting of color images of authentic facial expressions that gradually, in steps of 10%, changed from a neutral expression into one of the five basic emotions anger, disgust, fear, happiness, and sadness. Performance was measured in terms of recognition accuracy and response speed. The results showed that the patient group responded significantly more slowly than the control group across all emotions in the test. No emotion-specific differences, and no differences in recognition accuracy, could be demonstrated between the groups. The reasons for the discrepancy in response speed were discussed in terms of four possible explanations: face-perceptual function, visual attention, self-focused attention, and carefulness/worry. Future research was recommended to explore these areas further.
APA, Harvard, Vancouver, ISO, and other styles