Theses on the topic "Face Expression Recognition"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Face Expression Recognition".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Zhou, Yun. "Embedded Face Detection and Facial Expression Recognition". Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/583.
Ener, Emrah. "Recognition Of Human Face Expressions". Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607521/index.pdf.
Zhao, Xi. "3D face analysis : landmarking, expression recognition and beyond". PhD thesis, Ecole Centrale de Lyon, 2010. http://tel.archives-ouvertes.fr/tel-00599660.
Munasinghe, Kankanamge Sarasi Madushika. "Facial analysis models for face and facial expression recognition". Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118197/1/Sarasi%20Madushika_Munasinghe%20Kankanamge_Thesis.pdf.
Minoi, Jacey-Lynn. "Geometric expression invariant 3D face recognition using statistical discriminant models". Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/4648.
Zhan, Ce. "Facial expression recognition for multi-player on-line games". School of Computer Science and Software Engineering, 2008. http://ro.uow.edu.au/theses/100.
Bloom, Elana. "Recognition, expression, and understanding facial expressions of emotion in adolescents with nonverbal and general learning disabilities". Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100323.
Texto completoAdolescents aged 12 to 15 were screened for LD and NLD using the Wechsler Intelligence Scale for Children---Third Edition (WISC-III; Weschler, 1991) and the Wide Range Achievement Test---Third Edition (WRAT3; Wilkinson, 1993) and subtyped into NVLD and GLD groups based on the WRAT3. The NVLD ( n = 23), matched NLD (n = 23), and a comparable GLD (n = 23) group completed attention, mood, and neuropsychological measures. The adolescent's ability to recognize (Pictures of Facial Affect; Ekman & Friesen, 1976), express, and understand facial expressions of emotion, and their general social functioning was assessed. Results indicated that the GLD group was significantly less accurate at recognizing and understanding facial expressions of emotion compared to the NVLD and NLD groups, who did not differ from each other. No differences emerged between the NVLD, NLD, and GLD groups on the expression or social functioning tasks. The neuropsychological measures did not account for a significant portion of the variance on the emotion tasks. Implications regarding severity of LD are discussed.
Durrani, Sophia J. "Studies of emotion recognition from multiple communication channels". Thesis, University of St Andrews, 2005. http://hdl.handle.net/10023/13140.
Wei, Xiaozhou. "3D facial expression modeling and analysis with topographic information". Diss., online access via UMI, 2008.
Beall, Paula M. "Automaticity and Hemispheric Specialization in Emotional Expression Recognition: Examined using a modified Stroop Task". Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3267/.
Cui, Chen. "Adaptive weighted local textural features for illumination, expression and occlusion invariant face recognition". University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1374782158.
De la Cruz, Nathan. "Autonomous facial expression recognition using the facial action coding system". University of the Western Cape, 2016. http://hdl.handle.net/11394/5121.
The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six (anger, disgust, fear, happiness, sadness, surprise), as well as the neutral expression; or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, combinations of which are used to form any facial expression. This research investigates enhanced recognition of whole facial expressions by means of a hybrid approach that combines traditional whole facial expression recognition with Action Unit recognition to achieve an enhanced classification approach.
Dagnes, Nicole. "3D human face analysis for recognition applications and motion capture". Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2542.
This thesis is intended as a geometrical study of the three-dimensional facial surface, whose aim is to provide an application framework of entities from the context of differential geometry to use as facial descriptors in face analysis applications such as face recognition (FR) and facial expression recognition (FER). Indeed, although every visage is unique, all faces are similar and their morphological features are the same for all mankind. Hence, it is essential for face analysis to extract suitable features. All the facial features proposed in this study are based only on the geometrical properties of the facial surface. These geometrical descriptors and the related entities proposed have then been applied to the description of the facial surface in pattern recognition contexts. Indeed, the final goal of this research is to prove that differential geometry is a comprehensive tool for face analysis and that geometrical features are suitable to describe and compare faces and, generally, to extract relevant information for human face analysis in different practical application fields. Finally, since in the last decades face analysis has also gained great attention for clinical applications, this work focuses on the analysis of musculoskeletal disorders by proposing an objective quantification of facial movements to aid maxillofacial surgery and facial motion rehabilitation. Currently, different methods are employed for evaluating facial muscle function. This research work investigates the 3D motion capture system, adopting the Technology, Sport and Health platform located in the Innovation Centre of the University of Technology of Compiègne, in the Biomechanics and Bioengineering Laboratory (BMBI).
Chen, Xiaochen. "Tracking vertex flow on 3D dynamic facial models". Diss., Online access via UMI:, 2008.
Dietrich, Jonas. "Gaze Behaviour and Its Functional Role During Facial Expression Recognition". Doctoral thesis, Humboldt-Universität zu Berlin, 2019. http://dx.doi.org/10.18452/19783.
Texto completoProcesses that underlie the visual encoding of facial expressions still pose a conundrum. Therefore, this dissertation set out to provide new insights into these processes by investigating gaze behaviour and its functional role during the recognition of facial expressions. Four experimental studies were conducted to examine whether general face processing strategies are already reflected on the visual encoding stage of facial expression recognition indicated by specific fixation patterns and whether differences at the initial uptake of visual information as a consequence of varying fixation positions affect facial expression recognition. Gaze behaviour was recorded while participants were asked to categorise angry, disgusted, happy, sad, and neutral facial expressions in static and dynamic displays. Results revealed that gaze behaviour for static facial expressions was characterised by only a few fixations mainly directed to the centre and to expression-specific diagnostic facial features of the face, suggesting a combined holistic and featural encoding strategy. For less intense and dynamic facial expressions, results indicated a more configural encoding strategy with multiple fixations to a greater number of different facial features. In addition, differences in gaze strategy were relevant for facial expression recognition. Fixating diagnostic facial features accelerated the recognition of static facial expressions. In contrast, a central fixation position was beneficial for recognizing dynamic facial expressions, presumably by facilitating holistic face processing and change detection. Overall, findings demonstrated that general face processing strategies are already reflected on the visual encoding stage of facial expression recognition and that variations in these early processes affect recognition performance.
Cheng, Xin. "Nonrigid face alignment for unknown subject in video". Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/65338/1/Xin_Cheng_Thesis.pdf.
Shreve, Matthew Adam. "Automatic Macro- and Micro-Facial Expression Spotting and Applications". Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4770.
Sergerie, Karine. "A face to remember : an fMRI study of the effects of emotional expression on recognition memory". Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82422.
Mushfieldt, Diego. "Robust facial expression recognition in the presence of rotation and partial occlusion". Thesis, University of the Western Cape, 2014. http://hdl.handle.net/11394/3367.
This research proposes an approach to recognizing facial expressions in the presence of rotations and partial occlusions of the face. The research is in the context of automatic machine translation of South African Sign Language (SASL) to English. The proposed method is able to accurately recognize frontal facial images at an average accuracy of 75%. It also achieves a high recognition accuracy of 70% for faces rotated to 60°. It was also shown that the method is able to continue to recognize facial expressions even in the presence of full occlusions of the eyes, mouth and left/right sides of the face. The accuracy was as high as 70% for occlusion of some areas. An additional finding was that both the left and the right sides of the face are required for recognition. In addition, the foundation was laid for a fully automatic facial expression recognition system that can accurately segment frontal or rotated faces in a video sequence.
Biswas, Ajanta. "Investigating facial expression production and inner outer face recognition in children with autism and typically developing children". Thesis, University of Sheffield, 2010. http://etheses.whiterose.ac.uk/14973/.
Derkach, Dmytro. "Spectrum analysis methods for 3D facial expression recognition and head pose estimation". Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/664578.
Facial analysis has attracted considerable research efforts over the last decades, with a growing interest in improving the interaction and cooperation between people and computers. This makes it necessary that automatic systems are able to react to things such as the head movements of a user or his/her emotions. Further, this should be done accurately and in unconstrained environments, which highlights the need for algorithms that can take full advantage of 3D data. These systems could be useful in multiple domains such as human-computer interaction, tutoring, interviewing, health care, marketing, etc. In this thesis, we focus on two aspects of facial analysis: expression recognition and head pose estimation. In both cases, we specifically target the use of 3D data and present contributions that aim to identify meaningful representations of the facial geometry based on spectral decomposition methods: 1. We propose a spectral representation framework for facial expression recognition using exclusively 3D geometry, which allows a complete description of the underlying surface that can be further tuned to the desired level of detail. It is based on the decomposition of local surface patches in their spatial frequency components, much like a Fourier transform, which are related to intrinsic characteristics of the surface. We propose the use of Graph Laplacian Features (GLFs), which result from the projection of local surface patches into a common basis obtained from the Graph Laplacian eigenspace. The proposed approach is tested in terms of expression and Action Unit recognition and results confirm that the proposed GLFs produce state-of-the-art recognition rates. 2. We propose an approach for head pose estimation that allows modeling the underlying manifold that results from general rotations in 3D. 
We start by building a fully-automatic system based on the combination of landmark detection and dictionary-based features, which obtained the best results in the FG2017 Head Pose Estimation Challenge. Then, we use tensor representation and higher order singular value decomposition to separate the subspaces that correspond to each rotation factor and show that each of them has a clear structure that can be modeled with trigonometric functions. Such representation provides a deep understanding of data behavior, and can be used to further improve the estimation of the head pose angles.
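The spectral projection behind the Graph Laplacian Features described in this abstract can be sketched in a few lines. This is a minimal illustration with an invented toy patch graph and per-vertex signal, not code from the thesis: the graph Laplacian's eigenvectors act as a Fourier-like basis on the patch, and the feature vector is the projection of a surface signal onto the smoothest modes.

```python
import numpy as np

def graph_laplacian_features(adjacency, signal, k=2):
    """Project a per-vertex signal of a surface patch onto the first k
    eigenvectors of the unnormalized graph Laplacian (GLF-style sketch).

    adjacency : (n, n) symmetric 0/1 mesh connectivity of the patch
    signal    : (n,) scalar value per vertex (e.g. depth or curvature)
    k         : number of low-frequency components to keep
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency          # L = D - A
    # eigh returns eigenvalues in ascending order, so the first columns
    # are the smoothest (lowest-frequency) modes of the patch graph.
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    return eigvecs[:, :k].T @ signal        # spectral coefficients

# Toy example: a 4-vertex path graph with a linearly varying signal.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
coeffs = graph_laplacian_features(A, np.array([0.0, 1.0, 2.0, 3.0]), k=2)
print(coeffs.shape)  # (2,)
```

Because patches from different faces are projected into a common basis, the resulting coefficient vectors are directly comparable, which is what makes them usable as classifier inputs.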
St-Hilaire, Annie. "Are paranoid schizophrenia patients really more accurate than other people at recognizing spontaneous expressions of negative emotion? : a study of the putative association between emotion recognition and thinking errors in paranoia". [Kent, Ohio] : Kent State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=kent1215450307.
Title from PDF t.p. (viewed Nov. 10, 2009). Advisor: Nancy Docherty. Keywords: schizophrenia, paranoia, emotion recognition, posed expressions, spontaneous expressions, cognition. Includes bibliographical references (p. 122-144).
Darborg, Alex. "Real-time face recognition using one-shot learning : A deep learning and machine learning project". Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-40069.
Mayer, Christoph [Verfasser], Bernd [Akademischer Betreuer] Radig and Gudrun Johanna [Akademischer Betreuer] Klinker. "Facial Expression Recognition With A Three-Dimensional Face Model / Christoph Mayer. Gutachter: Gudrun Johanna Klinker. Betreuer: Bernd Radig". München : Universitätsbibliothek der TU München, 2012. http://d-nb.info/1019854405/34.
Szeptycki, Przemyslaw. "Processing and analysis of 2.5D face models for non-rigid mapping based face recognition using differential geometry tools". PhD thesis, Ecole Centrale de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00675988.
Texto completoHariri, Walid. "Contribution à la reconnaissance/authentification de visages 2D/3D". Thesis, Cergy-Pontoise, 2017. http://www.theses.fr/2017CERG0905/document.
3D face analysis, including 3D face recognition and 3D facial expression recognition, has become a very active area of research in recent years. Various methods using 2D image analysis have been presented to tackle these problems. 2D image-based methods are inherently limited by variability in imaging factors such as illumination and pose. The recent development of 3D acquisition sensors has made 3D data more and more available. Such data is relatively invariant to illumination and pose, but it is still sensitive to expression variation. The principal objective of this thesis is to propose efficient methods for 3D face recognition/verification and 3D facial expression recognition. First, a new covariance-based method for 3D face recognition is presented. Our method includes the following steps: first, the 3D facial surface is preprocessed and aligned. A uniform sampling is then applied to localize a set of feature points; around each point, we extract a matrix as a local region descriptor. Two matching strategies are then proposed, and various distances (geodesic and non-geodesic) are applied to compare faces. The proposed method is assessed on three datasets, including GAVAB, FRGCv2 and BU-3DFE. A hierarchical description using three levels of covariances is then proposed and validated. In the second part of this thesis, we present an efficient approach for 3D facial expression recognition using kernel methods with covariance matrices. In this contribution, we propose to use a Gaussian kernel which maps covariance matrices into a high-dimensional Hilbert space. This makes it possible to use conventional algorithms developed for Euclidean-valued data, such as SVM, on such non-linear-valued data. The proposed method has been assessed on two known datasets, BU-3DFE and Bosphorus, to recognize the six prototypical expressions.
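The kernel construction mentioned in this abstract can be sketched as follows. The abstract does not spell out the exact kernel, so this sketch assumes the common log-Euclidean form: covariance descriptors are mapped through the matrix logarithm and compared with a Gaussian kernel, whose Gram matrix could then feed a precomputed-kernel SVM. All names and sizes are illustrative.

```python
import numpy as np

def spd_log(c):
    """Matrix logarithm of a symmetric positive-definite matrix,
    computed via its eigendecomposition."""
    w, v = np.linalg.eigh(c)
    return (v * np.log(w)) @ v.T  # V diag(log w) V^T

def log_euclidean_gaussian_kernel(covs, gamma=0.5):
    """Gaussian kernel between covariance descriptors using the
    log-Euclidean distance (an assumed, standard construction)."""
    logs = [spd_log(c) for c in covs]
    n = len(logs)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            d2 = np.linalg.norm(logs[i] - logs[j], 'fro') ** 2
            K[i, j] = np.exp(-gamma * d2)
    return K

# Toy 4x4 covariance descriptors built from random feature samples.
rng = np.random.default_rng(0)
covs = [np.cov(rng.normal(size=(4, 50))) for _ in range(3)]
K = log_euclidean_gaussian_kernel(covs)
print(K.shape)  # (3, 3)
```

Because the matrix logarithm flattens the curved manifold of SPD matrices into a vector space, the Frobenius distance between log-matrices is a valid metric, and the resulting Gram matrix is positive definite, which is what a kernel SVM requires.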
Ali, Afiya. "Recognition of facial affect in individuals scoring high and low in psychopathic personality characteristics". The University of Waikato, 2007. http://adt.waikato.ac.nz/public/adt-uow20070129.190938/index.html.
Texto completoWild-Wall, Nele. "Is there an interaction between facial expression and facial familiarity?" Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2004. http://dx.doi.org/10.18452/15042.
In contrast to traditional face recognition models, previous research has revealed that the recognition of facial expressions and familiarity may not be independent. This dissertation attempts to localize this interaction within the information processing system by means of performance data and event-related potentials. Part I elucidated the question of whether there is an interaction between facial familiarity and the discrimination of facial expression. Participants had to discriminate two expressions which were displayed on familiar and unfamiliar faces. The discrimination was faster and less error-prone for personally familiar faces displaying happiness. Results revealed a shorter peak latency for the P300 component (trend), reflecting stimulus categorization time, and for the onset of the lateralized readiness potential (S-LRP), reflecting the duration of pre-motor processes. A facilitation of perceptual stimulus categorization for personally familiar faces displaying happiness is suggested. The discrimination of expressions was not facilitated in further experiments using famous or experimentally familiarized, and unfamiliar faces. Part II raises the question of whether there is an interaction between facial expression and the discrimination of facial familiarity. In this task a facilitation was only observable for personally familiar faces displaying a neutral or happy expression, but not for experimentally familiarized or unfamiliar faces. Event-related potentials reveal a shorter S-LRP interval for personally familiar faces, suggesting a facilitated response selection stage. In summary, the results suggest that an interaction of facial familiarity and facial expression might be possible under some circumstances. Finally, the results are discussed in the context of possible interpretations, previous results, and face recognition models.
Chu, Baptiste. "Neutralisation des expressions faciales pour améliorer la reconnaissance du visage". Thesis, Ecully, Ecole centrale de Lyon, 2015. http://www.theses.fr/2015ECDL0005/document.
Expression and pose variations are major challenges for reliable face recognition (FR) in 2D. In this thesis, we aim to endow state-of-the-art face recognition SDKs with robustness to simultaneous facial expression variations and pose changes by using an extended 3D Morphable Model (3DMM) which isolates identity variations from those due to facial expressions. Specifically, given a probe with expression, a novel view of the face is generated where the pose is rectified and the expression neutralized. We present two methods of expression neutralization. The first one uses prior knowledge to infer the neutral expression from an input image. The second method, specifically designed for verification, is based on the transfer of the gallery face expression to the probe. Experiments using rectified and neutralized views with a standard commercial FR SDK on two 2D face databases show significant performance improvement and demonstrate the effectiveness of the proposed approach. We then aim to endow state-of-the-art FR SDKs with the capability to recognize faces in videos. Finally, we present different methods for improving biometric performance in specific cases.
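The identity/expression separation that such a 3DMM provides can be illustrated with a toy linear model. The dimensions, bases, and coefficients below are invented for illustration; the thesis's actual model and fitting procedure are far more involved. The point is only that when identity and expression live in separate linear subspaces, neutralization amounts to zeroing the expression coefficients.

```python
import numpy as np

# Toy linear morphable model: face = mean + identity part + expression part.
rng = np.random.default_rng(0)
n_vertices = 100                                    # toy mesh size
mean_shape = rng.normal(size=3 * n_vertices)        # stacked x,y,z coords
id_basis = rng.normal(size=(3 * n_vertices, 5))     # identity components
expr_basis = rng.normal(size=(3 * n_vertices, 3))   # expression components

alpha = rng.normal(size=5)   # identity coefficients (who the person is)
beta = rng.normal(size=3)    # expression coefficients (smile, frown, ...)

observed = mean_shape + id_basis @ alpha + expr_basis @ beta
# Expression neutralization: keep the identity part, drop the expression.
neutralized = mean_shape + id_basis @ alpha
residual = observed - neutralized   # exactly the expression deformation
print(np.allclose(residual, expr_basis @ beta))  # True
```

In practice the coefficients must first be estimated from an input image or scan, which is where the two neutralization methods in the abstract (prior-based inference vs. gallery expression transfer) differ.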
Bezerra, Giuliana Silva. "A framework for investigating the use of face features to identify spontaneous emotions". Universidade Federal do Rio Grande do Norte, 2014. http://repositorio.ufrn.br/handle/123456789/19595.
Emotion-based analysis has raised a lot of interest, particularly in areas such as forensics, medicine, music, psychology, and human-machine interfaces. Following this trend, the use of facial analysis (either automatic or human-based) is the most common subject to be investigated, since this type of data can easily be collected and is well accepted in the literature as a metric for inference of emotional states. Despite this popularity, due to several constraints found in real-world scenarios (e.g. lighting, complex backgrounds, facial hair and so on), automatically obtaining affective information from the face accurately is a very challenging accomplishment. This work presents a framework which aims to analyse emotional experiences through naturally generated facial expressions. Our main contribution is a new 4-dimensional model to describe emotional experiences in terms of appraisal, facial expressions, mood, and subjective experiences. In addition, we present an experiment using a new protocol proposed to obtain spontaneous emotional reactions. The results suggest that the initial emotional state described by the participants of the experiment was different from that described after exposure to the eliciting stimulus, thus showing that the stimuli used were capable of inducing the expected emotional states in most individuals. Moreover, our results point out that spontaneous facial reactions to emotions are very different from those in prototypic expressions, due to the lack of expressiveness in the latter.
Al-Nuaimi, Tufool. "Face recognition and computer graphics for modelling expressive faces in 3D". Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/38333.
Includes bibliographical references (leaves 47-48).
This thesis addresses the problem of the lack of verisimilitude in animation. Since computer vision has been aimed at creating photo-realistic representations of environments and face recognition creates replicas of faces for recognition purposes, we research face recognition techniques to produce photo-realistic models of expressive faces that could be further developed and applied in animation. We use two methods that are commonly used in face recognition to gather information about the subject: 3D scanners and multiple 2D images. For the latter method, Maya is used for modeling. Both methods produced accurate 3D models for a neutral face, but Maya allowed us to manually build 3D models and was therefore more successful in creating exaggerated facial expressions.
by Tufool Al-Nuaimi.
M.Eng.
Alves, Cláudia Daniela Andrade Carvalho. "Transplantação da Face Humana: estudo de caso com Carmen Tarleton - efeitos neuropsicofisiológicos na exibição e no reconhecimento das emoções básicas". Doctoral thesis, [s.n.], 2015. http://hdl.handle.net/10284/5201.
Face transplantation is considered an experimental procedure that has given rise to intense debate about the risks and benefits of performing this type of surgery. The results show that, from a clinical, technical, aesthetic, functional, immunological, and psychological point of view, face transplantation has achieved functional, aesthetic, social, and psychological rehabilitation in patients with severe facial disfigurement, as described in publications by transplant teams all over the world. Clinical experience demonstrates the feasibility of face transplantation as a valuable reconstruction option, yet it is still considered an experimental procedure with unresolved questions. The functional and aesthetic results have been very encouraging, with good motor and sensory recovery and observed improvements in facial function. As expected, episodes of acute rejection have been common, but easily controlled by increasing systemic immunosuppression. Mortality and complications of immunosuppression in patients were also observed. Psychological improvements have been remarkable and have resulted in the reintegration of patients into the outside world, social networks, and even the workplace. Face transplant teams have highlighted rigorous patient selection as the key indicator of success. The first overall results of face transplant programs have generally been more positive than expected. This initial success, the dissemination of results, and the ongoing refinement of the procedure may make facial transplantation a major reconstruction option for those with extensive facial deformities in the future. Thus, it is of paramount importance to understand the neuropsychophysiological processes involved in the display and recognition of basic emotions after face transplantation.
The results indicate that musculoskeletal injuries to the face affect the ability to display emotional expressions and therefore hinder their recognition by others, undermining the effectiveness of communication. This study aims to contribute to the development of scientific research on the facial expression of emotion, applicable in a pioneering and unique context in Portugal, namely face transplantation.
La greffe du visage est considéré comme procédure expérimentale qui a suscité un intense débat sur les risques et les avantages de l'exécution de ce type de chirurgie. Les résultats montrent que, d'un point de vue clinique, technique, esthétique, fonctionnel, immunologique et psychologique de la greffe du visage a pris la réadaptation fonctionnelle, esthétique, psychologique et social chez les patients atteints souffrant de grave défigurement du visage, et sont décrits dans les publications de les équipes de transplantation du monde entier. L'expérience clinique démontre la faisabilité de la greffe comme une option valable de reconstruction faciale, mais toujours considéré comme une procédure expérimentale avec des problèmes non résolus. Les résultats fonctionnels et esthétiques ont été très encourageants, avec une bonne récupération moteur et sensorielle et l'amélioration caractéristiques faciales observées. Comme prévu, il a été le rejet aigu communs, mais facilement contrôlé avec une augmentation d'immunosuppression systémique. La mortalité et les complications de l'immunosuppression chez les patients atteints ont également été observées. Améliorations psychologiques ont été remarquables et ont abouti à la réintégration des patients vers le monde extérieur, les réseaux sociaux et même dans le lieu de travail. Les équipes de transplantation du visage ont mis en évidence la sélection rigoureuse des patients comme l'indicateur clé du succès. Les première résultat du programme mondial de la greffe du visage ont été généralement plus positive que prévu. Ce succès initial, la diffusion des résultats et de l'amélioration permanente de la procédure peut permettre à la greffe du visage être, avenir, une majeure option de reconstruction pour ceux qui ont de vastes malformations faciales. 
It is therefore of primary importance to understand the neuropsychophysiological processes involved in the display and recognition of basic emotions after face transplantation.
Coelho-Moreira, Ana Cristina Gonçalves. "As falas da face: processo Casa Pia - aplicação da análise da expressão facial à luz do Direito Penal Português". Doctoral thesis, [s.n.], 2015. http://hdl.handle.net/10284/4950.
The Casa Pia child sexual abuse scandal had a devastating impact on Portuguese society, publicly scrutinizing the state institutions that sheltered children. The repercussions were so intense that not only did the methodologies and concepts of state protection for disadvantaged children change, but the criminal law itself was amended as a direct result of its implications. Emotion and its facial expression play a key role in the development of the individual and in their interaction with society. The study of the facial expression of emotion of some of the participants in the Casa Pia case sought answers about the manifestation and display of guilt on the face and about its processing at the neurological and psychological level. The concept of guilt within the facial expression of emotion, although today the subject of heated debate within the scientific community, is, in the light of criminal law, one of the main instruments used to determine the reprehensibility of agents and their actions. Although guilt is considered by criminal law as something intrinsic to the agent and their actions, whether intentional or merely negligent, its determination allows criminal law to uphold and apply a sanction, deterring identical behaviors and thus maintaining peace, social order, and respect for state institutions and their representative agents.
Thus, combining the ultimate goal of criminal law with the contribution of facial expression analysis in a forensic context, a case study was carried out using a comparative qualitative methodology. The main objective, in seeking answers to the hypotheses posed, was to develop matrices for the analysis and measurement of guilt, given the different types and levels of influence it exerts on individuals' processes of adaptation to society and circumstances. The results obtained indicate and support the evidence of a specific configuration of action units (AUs) in the upper face associated with the expression of guilt on the face, regardless of the circumstances (denial or assumption) that underlie it. Therefore, the present study may represent the beginning of a close collaboration between the application of the analysis of the facial expression of emotion and the application of the law in all its aspects and institutions, as it strengthens the principle of guilt and, consequently, its legal, criminal and ethical dimensions.
Orvoen, Hadrien. "Expressions faciales émotionnelles et Prise de décisions coopératives". Thesis, Paris, EHESS, 2017. http://www.theses.fr/2017EHES0032/document.
For a few decades, rational choice theories have failed to properly account for cooperative behaviors. This is illustrated by social dilemmas, games in which a self-interested individual is tempted to exploit others' cooperative behavior, harming them for personal profit. I first detail how cooperation may arise as a reasonable, if not rational, behavior, provided that we consider social interactions in a more realistic context than rational choice theories initially did. From anthropology to neurobiology, cooperation is understood as an efficient adaptation to this natural environment rather than a quirky, self-defeating behavior. Because pertinent information is often lacking or overwhelming, too complex or ambiguous to deal with, it is essential to communicate, to share, and to trust others. Emotions, and their expression, are a cornerstone of humans' natural and effortless navigation of their social environment. Smiles, for instance, are universally known as a signal of satisfaction, approbation and cooperation. Like other emotional expressions, they are automatically and preferentially processed. They elicit trust and cooperative behaviors in observers, and are ubiquitous in successful collaborative interactions. Beyond that, however, little is known about how others' expressions are integrated into decision making. That was the focus of the experimental study reported in this manuscript. More specifically, I investigated how decisions in a trust-based social dilemma are influenced by smiles displayed alongside either a cooperative or a defecting behavior ("congruently" and "incongruently", respectively). I carried out two experiments in which participants played an investment game with different computerized virtual partners playing the role of trustees. Virtual trustees, personalised with a facial avatar, could either take and keep the participant's investment, or reciprocate it with interest.
Moreover, they also displayed facial reactions that were either congruent or incongruent with their computerized "decision" to reciprocate or not. Even though the two experiments presented some methodological differences, they were consistent in showing that participants' memory of a virtual trustee's behavior was impaired when the latter expressed incongruent emotions. This was observed both in participants' in-game investments and in their post-experimental explicit reports. While many improvements to my experimental approach remain to be made, I believe it already complements the existing literature with original results. Many interesting perspectives are left open, calling for a deeper investigation of face-to-face decision making. I believe this constitutes a theoretical and practical necessity, which will require researchers to unify the broad knowledge of the major cognitive functions gathered over the last decades.
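The investment game described above follows the standard trust-game structure. A minimal sketch, with illustrative parameter values (the endowment, multiplier and returned share are assumptions, not the thesis's actual settings):

```python
def trust_game_payoffs(investment, endowment=10.0, multiplier=3.0,
                       reciprocate=True, return_share=0.5):
    """One round of an investment (trust) game: the investor sends part of
    an endowment to the trustee; the amount is multiplied in transit, and
    the trustee either keeps everything or returns a share with interest."""
    transferred = investment * multiplier
    returned = transferred * return_share if reciprocate else 0.0
    investor_payoff = endowment - investment + returned
    trustee_payoff = transferred - returned
    return investor_payoff, trustee_payoff
```

With these illustrative values, a reciprocated investment of 5 yields (12.5, 7.5), while defection yields (5.0, 15.0) — the temptation to exploit that defines the dilemma.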
Domingues, Daniel Chinen. "Reconhecimento automático de expressões faciais por dispositivos móveis". Repositório Institucional da UFABC, 2014.
Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia da Informação, 2014.
Modern computing increasingly demands advanced forms of interaction with computers. The interface between humans and their mobile devices lacks more advanced methods, and automatic facial expression recognition would be a way to reach higher levels on this scale of evolution. The way human emotions are recognized, and what facial expressions represent in face-to-face communication, has guided the development of such computer systems; accordingly, three major challenges can be identified for an expression analysis algorithm: locating the face in the image, extracting the relevant facial features, and classifying emotional states. The best method for solving each of these strongly related sub-challenges determines the feasibility, efficiency and relevance of a new expression analysis system embedded in portable devices. To evaluate the feasibility of automatic recognition of facial expressions in images, we implemented a mobile system model on the iOS platform, integrated with an open-source library widely used in the computer vision community: OpenCV. The Local Binary Pattern algorithm, as implemented by OpenCV, was chosen for face tracking; AdaBoost and Eigenfaces were adopted for feature extraction and emotion classification, respectively, both also supported by the library. The Eigenface classification module was trained in a more powerful environment external to the mobile platform; subsequently, only the training file was exported and consumed by the model application. The experiment showed that Local Binary Patterns are very robust to lighting variations and very efficient for face tracking; AdaBoost and Eigenfaces achieved approximately 65% classification accuracy when only peak-emotion images were used to train the module, a condition necessary to keep the training file at a size compatible with the storage available on this class of device.
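The Local Binary Pattern operator underlying the face-tracking cascade can be sketched in a few lines. This is the basic 3x3 form of the operator, not the thesis's or OpenCV's actual implementation (OpenCV's cascades use extended multi-scale variants):

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours against
    the centre pixel and read them clockwise from the top-left corner as
    an 8-bit code (most significant bit first)."""
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << (7 - i) for i, v in enumerate(neighbours) if v >= center)
```

Because the code depends only on the sign of neighbour-centre differences, a uniform brightness shift leaves it unchanged, which is the source of the robustness to illumination noted above.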
Li, Huibin. "Towards three-dimensional face recognition in the real". Phd thesis, Ecole Centrale de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00998798.
Bozed, Kenz Amhmed. "Detection of facial expressions based on time dependent morphological features". Thesis, University of Bedfordshire, 2011. http://hdl.handle.net/10547/145618.
Aly, Sherin Fathy Mohammed Gaber. "Techniques for Facial Expression Recognition Using the Kinect". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/89220.
Han, Xia. "Towards the Development of an Efficient Integrated 3D Face Recognition System. Enhanced Face Recognition Based on Techniques Relating to Curvature Analysis, Gender Classification and Facial Expressions". Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5347.
Maas, Casey. "Decoding Faces: The Contribution of Self-Expressiveness Level and Mimicry Processes to Emotional Understanding". Scholarship @ Claremont, 2014. http://scholarship.claremont.edu/scripps_theses/406.
Ding, Huaxiong. "Combining 2D facial texture and 3D face morphology for estimating people's soft biometrics and recognizing facial expressions". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEC061/document.
Since soft biometric traits can provide sufficient evidence to determine a person's identity precisely, face-based soft biometric identification has received increasing attention in recent years. Among face-based soft biometrics, gender and ethnicity are both key demographic attributes of human beings, and they play a fundamental role in automatic machine-based face analysis. Meanwhile, facial expression recognition is another challenging problem in face analysis because of the diversity and hybridity of human expressions across subjects, cultures, genders and contexts. This Ph.D. thesis is dedicated to combining 2D facial texture and 3D face morphology for estimating people's soft biometrics (gender, ethnicity, etc.) and recognizing facial expressions. For gender and ethnicity recognition, we present an effective and efficient approach that combines boosted local texture and shape features extracted from 3D face models, in contrast to existing methods that depend only on either 2D texture or 3D shape. To comprehensively represent the differences between genders or ethnic groups, we propose a novel local descriptor, local circular patterns (LCP). LCP improves the widely used local binary patterns (LBP) and its variants by replacing the binary quantization with a clustering-based one, resulting in higher discriminative power as well as better robustness to noise. Subsequent AdaBoost-based feature selection then finds the most discriminative gender- and ethnicity-related features and assigns them different weights to highlight their importance in classification, which not only further raises performance but also reduces time and memory cost. Experimental results on the FRGC v2.0 and BU-3DFE datasets clearly demonstrate the advantages of the proposed method.
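One reading of the clustering-based quantization idea described above (the exact LCP formulation is in the thesis itself) is to replace LBP's 0/1 threshold with the nearest of k learned cluster centres per neighbour-centre difference. A sketch with illustrative, hand-picked centroids:

```python
import numpy as np

def lcp_code(patch, centroids):
    """Sketch of a clustering-quantized local pattern: each of the 8
    neighbour-centre differences is labelled with its nearest centroid
    (instead of LBP's binary threshold), and the 8 labels are packed
    into a single base-k code."""
    center = float(patch[1, 1])
    neighbours = np.array([patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                           patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]],
                          dtype=float)
    diffs = neighbours - center
    labels = np.abs(diffs[:, None] - np.asarray(centroids)[None, :]).argmin(axis=1)
    code = 0
    for label in labels:
        code = code * len(centroids) + int(label)
    return code
```

With k > 2 centroids the code preserves the magnitude of local contrast, not just its sign, which is where the claimed gain in discriminative power over LBP comes from.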
For facial expression recognition, we present a fully automatic multi-modal 2D + 3D feature-based approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors to achieve efficiency and robustness. First, a large set of fiducial facial landmarks on 2D face images, along with their 3D face scans, is localized using a novel algorithm, the incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) local image descriptor, in conjunction with the widely used first-order gradient-based SIFT descriptor, is employed to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed from first-order and second-order surface differential geometry quantities: the Histogram of mesh Gradients (meshHOG) and the Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both the feature level and the score level to further improve accuracy. Comprehensive experimental results demonstrate impressive complementarity between the 2D and 3D descriptors. Using the BU-3DFE benchmark to compare our approach with the state of the art, our multi-modal feature-based approach outperforms the others, achieving an average recognition accuracy of 86.32%. Moreover, good generalization ability is shown on the Bosphorus database.
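Score-level fusion of the per-descriptor SVM outputs can be as simple as a weighted average of class scores followed by an argmax. A minimal sketch (uniform weights are an assumption here; the thesis's fusion scheme may weight descriptors differently):

```python
import numpy as np

def fuse_scores(descriptor_scores, weights=None):
    """Score-level fusion: average the per-class scores produced by each
    descriptor's classifier (optionally weighted) and pick the winning class."""
    scores = np.stack([np.asarray(s, dtype=float) for s in descriptor_scores])
    if weights is None:
        weights = np.ones(len(descriptor_scores))
    fused = np.average(scores, axis=0, weights=np.asarray(weights, dtype=float))
    return int(np.argmax(fused)), fused
```

Fusion at this level only needs each classifier's class scores, which is why complementary 2D and 3D descriptors can be combined without retraining anything jointly.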
BRENNA, VIOLA. "Positive and negative facial emotional expressions: the effect on infants' and children's facial identity recognition". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/46845.
Angeli, Valentina. "Infants' early representation of faces: the role of dynamic cues". Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3427123.
This thesis investigates how the semi-rigid motion of the face influences the encoding and processing of socially relevant information conveyed by the face itself, such as identity and emotional expressions, in infants under one year of age. In particular, the hypothesis is that facial motion promotes the construction of a mental representation that, in turn, facilitates stimulus recognition in visual habituation and familiarization tasks. In addition, we analyzed infants' ability to process the kinetic information of the face when other pictorial information, such as shapes, colors, etc., is absent. The first study investigated how the facial motion conveyed by the happy facial expression influences the construction of face representation in newborns (up to 3 days old). Previous studies at birth have shown that identity recognition is inhibited when some facial characteristics of the face to be recognized change (e.g., Turati et al., 2008). In these cases, both rigid and non-rigid facial motion have been shown to facilitate identity recognition at birth (Bulf & Turati, 2010; Leo et al., in prep.). Across four experiments, we tested the hypothesis that the benefit of semi-rigid motion is linked to the construction of a face representation less tied to the pictorial image stored in memory. First, the data show that facial motion does not aid recognition when the perceptual distance between the memorized face and the face to be recognized is increased (Experiment 1). Consistently, when this perceptual distance is minimal, newborns are able to recognize the same face even under static conditions (Experiment 2).
The third experiment shows that biologically impossible motion hinders identity recognition at birth (Experiment 3). Finally, the same pictorial information presented statically in sequence brings no recognition benefit (Experiment 4). Overall, non-rigid motion seems to promote a face representation resilient to change, but only when the perceptual difference between the different images of the same face is limited. The second study investigated whether dynamic emotional facial stimuli allow the abstraction of common features, enabling the categorization of happy and fearful facial expressions as early as 3 months of age. The literature on infants' categorization abilities indicates that this skill develops only between 5 and 7 months of age (e.g., deHaan & Nelson, 1998). However, almost all studies have used static stimuli. Data from naturalistic observations of mother-infant interactions (e.g., Nadel et al., 2005), as well as from studies using other experimental paradigms, such as intermodal preference (e.g., Kahana-Kalman & Walker-Andrews, 2001), in which the facial stimuli are dynamic, suggest a sensitivity to the emotional tone of facial expressions (particularly happiness) much earlier than indicated by laboratory studies. In a within-subjects design, 3-month-olds were familiarized with 4 different identities showing 4 different intensities of happiness and fear, presented sequentially so as to create a perception of motion. The results showed that the expression of happiness is categorized as early as three months of age, whereas this does not happen for fear. This difference can be traced to the different degrees of familiarity of the two expressions (Malatesta & Haviland, 1982).
These results support the hypothesis that facial motion promotes the abstraction of invariant facial features, facilitating the categorization of facial expressions. The third study analyzed the ability to process the kinetic information of the face alone, separated from other pictorial cues. To this end, point-light facial stimuli (Johansson, 1973) depicting the dynamics of happy and fearful expressions were created. Experiment 1 used visual habituation to investigate the ability of 3-, 6- and 9-month-old infants to discriminate these two facial expressions on the basis of facial motion alone, as previously demonstrated in adults (e.g., Bassili, 1978). The stimuli were presented both upright and inverted, in order to verify that the motion was processed as facial motion. The results showed, first of all, an inversion effect, indicating that the set of moving points is organized according to the face schema. Moreover, when habituated to the happy expression, infants of all three ages showed discrimination abilities. By contrast, when habituated to fear, only 3-month-olds showed discrimination, while at 6 and 9 months this ability seems to disappear. Experiment 2 ruled out the possibility that an a priori preference for the fearful expression caused this pattern. The results seem to indicate that the ability to process facial expressions on a purely kinetic basis follows a developmental trajectory from an initial processing of 'low-level' facial attributes, in which movements are processed as facial movements, toward a more sophisticated processing of 'high-level' facial attributes, in which motion is processed as facial expression.
Overall, the data from this thesis suggest that facial motion can promote the processing of social information conveyed by the face from the first months of life, by strengthening the construction of a face representation. Moreover, the data showed that the ability to process facial expressions on the basis of motion alone emerges between 6 and 9 months of age.
Marshall, Amy D. "Violent husbands' recognition of emotional expressions among the faces of strangers and their wives". [Bloomington, Ind.] : Indiana University, 2004. http://wwwlib.umi.com/dissertations/fullcit/3162247.
Title from PDF t.p. (viewed Dec. 1, 2008). Source: Dissertation Abstracts International, Volume: 66-01, Section: B, page: 0564. Chair: Amy Holtzworth-Munroe.
Beer, Jenay Michelle. "Recognizing facial expression of virtual agents, synthetic faces, and human faces: the effects of age and character type on emotion recognition". Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33984.
Neto, Wolme Cardoso Alves. "Efeitos do escitalopram sobre a identificação de expressões faciais". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/17/17148/tde-25032009-210215/.
ALVES NETO, W.C. Effects of escitalopram on the processing of emotional faces. Ribeirão Preto, SP: Faculty of Medicine of Ribeirão Preto, University of São Paulo; 2008. Selective serotonin reuptake inhibitors (SSRIs) have been used successfully in the treatment of various psychiatric disorders. The clinical efficacy of SSRIs is attributed to an enhancement of serotonergic neurotransmission, but little is known about the neuropsychological mechanisms underlying this process. Several lines of evidence suggest that serotonin is involved in the regulation of social behavior, learning and memory processes, and emotional processing. The recognition of basic emotions in facial expressions is a useful task for studying emotional processing, since such expressions are condensed, uniform and important stimuli for social functioning. The aim of the study was to verify the effects of the SSRI escitalopram on the recognition of emotional facial expressions. Twelve healthy males each completed two experimental sessions (crossover design) in a randomized, balanced-order, double-blind design. An oral dose of 10 mg of escitalopram was administered 3 hours before they performed an emotion recognition task with six basic emotions (anger, fear, sadness, disgust, happiness and surprise) and a neutral expression. The faces were digitally morphed between 10% and 100% of each emotional standard, creating a gradient in 10% steps. Subjective mood and anxiety states during the task were recorded, and performance was defined by an accuracy measure (number of correct answers divided by the total number of stimuli presented). In general, with the exception of fear, escitalopram affected all the emotions tested. Specifically, it facilitated the recognition of sadness while impairing the identification of happiness. When the gender of the faces was analyzed, this effect was seen for male but not female faces, for which it improved the recognition of happiness.
In addition, it improved the recognition of angry and disgusted faces when administered in the second session, and impaired the identification of surprised faces at intermediate levels of intensity. It also showed a global positive effect on task performance when administered in the second session. The results indicate a serotonergic modulation of the recognition of emotional faces and of the recall of previously learned items.
Julin, Fredrik. "Vision based facial emotion detection using deep convolutional neural networks". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-42622.
Silva, Jadiel Caparrós da [UNESP]. "Aplicação de sistemas imunológicos artificiais para biometria facial: Reconhecimento de identidade baseado nas características de padrões binários". Universidade Estadual Paulista (UNESP), 2015. http://hdl.handle.net/11449/127901.
This work aims to perform the identity recognition by a method based on Artificial Immune Systems, the Negative Selection Algorithm. Thus, the resources and adequate alternatives for analyzing 3D facial expressions were explored, exploring the Binary Pattern technique that is successfully applied for the 2D problem. Firstly, the 3D facial geometry was converted in two 2D representations. The Depth Map and the Azimuthal Projection Distance Image were implemented with other resources such as the Local Phase Quantisers, Gabor Filters and Monogenic Filters to produce descriptors to perform the facial expression analysis. Afterwards, the Negative Selection Algorithm is applied, and comparisons and analysis with the images and the detectors previously created are done. If there is affinity with the images, than the image is classified. This classification is called matching. Finally, to validate and evaluate the performance of the method, tests were realized with images from the database and after with ten descriptors developed from the binary patterns. These tests aim to: evaluate which are the best descriptors and the best expressions to recognize the identities, and to validate the performance of the new solution of identity recognition based on Artificial Immune Systems. The results show efficiency, robustness and precision in recognizing facial identity
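The negative-selection scheme summarized in the abstract above can be illustrated with a minimal sketch. This is a hypothetical toy example on binary feature vectors, not the thesis's actual implementation: random detectors are kept only if they fail to match any known "self" pattern, and a new pattern is flagged when its affinity to some detector reaches the operator-chosen threshold (the "matching" step).

```python
import random

def affinity(a, b):
    """Affinity as the fraction of equal bits (1 - normalized Hamming distance)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def generate_detectors(self_set, n_detectors, length, threshold, rng):
    """Censor random candidates: keep a detector only if its affinity to
    every 'self' pattern stays below the matching threshold."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.randint(0, 1) for _ in range(length))
        if all(affinity(cand, s) < threshold for s in self_set):
            detectors.append(cand)
    return detectors

def is_nonself(pattern, detectors, threshold):
    """A pattern 'matches' when some detector's affinity reaches the threshold."""
    return any(affinity(pattern, d) >= threshold for d in detectors)

rng = random.Random(42)
length, threshold = 16, 0.8
# Toy stand-ins for binary-pattern descriptors of the enrolled identity.
self_set = [tuple(rng.randint(0, 1) for _ in range(length)) for _ in range(5)]
detectors = generate_detectors(self_set, 50, length, threshold, rng)
# By construction, no detector matches a self sample.
print(any(is_nonself(s, detectors, threshold) for s in self_set))  # False
```

In the thesis the patterns would instead be binary-pattern descriptors (e.g. computed over the Depth Map or APDI images), and the affinity measure and threshold are design choices left to the operator.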
Silva, Jadiel Caparrós da. "Aplicação de sistemas imunológicos artificiais para biometria facial: Reconhecimento de identidade baseado nas características de padrões binários". Ilha Solteira, 2015. http://hdl.handle.net/11449/127901.
Full text
Co-advisor: Jorge Manuel M. C. Pereira Batista
Committee member: Carlos Roberto Minussi
Committee member: Ricardo Luiz Barros de Freitas
Committee member: Díbio Leandro Borges
Committee member: Gelson da Cruz Junior
Abstract: This work performs identity recognition with a method based on Artificial Immune Systems, specifically the Negative Selection Algorithm. Suitable feature types and alternatives for analyzing 3D facial expressions were explored, adopting the Binary Pattern technique that has been applied successfully to the 2D problem. First, the 3D facial geometry was converted into two 2D representations, the Depth Map and the Azimuthal Projection Distance Image (APDI), which were combined with several feature types, such as Local Phase Quantisers, Gabor Filters and Monogenic Filters, to produce descriptors for facial expression analysis. The Negative Selection Algorithm is then applied, comparing the images against previously created detectors. If an image shows affinity, at a level previously established by the operator, the image is classified; this classification is called matching. Finally, to validate and evaluate the performance of the method, tests were run first on images taken directly from the database and then on ten descriptors built from the binary patterns. These tests pursued three goals: to determine the best descriptors and the best expressions for identity recognition, and to validate the performance of the new identity-recognition solution based on Artificial Immune Systems. The results show that the method recognizes facial identity efficiently, robustly and precisely.
Doctorate
Löfdahl, Tomas and Mattias Wretman. "Långsammare igenkänning av emotioner i ansiktsuttryck hos individer med utmattningssyndrom: En pilotstudie" [Slower recognition of emotions in facial expressions in individuals with exhaustion disorder: a pilot study]. Thesis, Mittuniversitetet, Institutionen för samhällsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17590.
Full text