Doctoral dissertations on the topic "Face emotion recognition"

Follow this link to see other types of publications on this topic: Face emotion recognition.

Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Consult the top 50 doctoral dissertations for your research on the topic "Face emotion recognition".

An "Add to bibliography" button appears next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the metadata.

Browse doctoral dissertations from many fields and build your bibliography accordingly.

1

Bate, Sarah. "The role of emotion in face recognition". Thesis, University of Exeter, 2008. http://hdl.handle.net/10036/51993.

Full text of the source
Abstract:
This thesis examines the role of emotion in face recognition, using measures of the visual scanpath as indicators of recognition. There are two key influences of emotion in face recognition: the emotional expression displayed upon a face, and the emotional feelings evoked within a perceiver in response to a familiar person. An initial set of studies examined these processes in healthy participants. First, positive emotional expressions were found to facilitate the processing of famous faces, and negative expressions facilitated the processing of novel faces. A second set of studies examined the role of emotional feelings in recognition. Positive feelings towards a face were also found to facilitate processing, in both an experimental study using newly learned faces and in the recognition of famous faces. A third set of studies using healthy participants examined the relative influences of emotional expression and emotional feelings in face recognition. For newly learned faces, positive expressions and positive feelings had a similar influence in recognition, with no presiding role of either dimension. However, emotional feelings had an influence over and above that of expression in the recognition of famous faces. A final study examined whether emotional valence could influence covert recognition in developmental prosopagnosia, and results suggested the patients process faces according to emotional valence rather than familiarity per se. Specifically, processing was facilitated for studied-positive faces compared to studied-neutral and novel faces, but impeded for studied-negative faces. This pattern of findings extends existing reports of a positive-facilitation effect in face recognition, and suggests there may be a closer relationship between facial familiarity and emotional valence than previously envisaged. The implications of these findings are discussed in relation to models of normal face recognition and theories of covert recognition in prosopagnosia.
APA, Harvard, Vancouver, ISO, and other styles
2

Tomlinson, Eleanor Katharine. "Face-processing and emotion recognition in schizophrenia". Thesis, University of Birmingham, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433700.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Kuhn, Lisa Katharina. "Emotion recognition in the human face and voice". Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11216.

Full text of the source
Abstract:
At a perceptual level, faces and voices consist of very different sensory inputs; information processing in one modality can therefore be independent of processing in another (Adolphs & Tranel, 1999). However, there may also be a shared neural emotion network that processes stimuli independently of modality (Peelen, Atkinson, & Vuilleumier, 2010), or emotions may be processed at a more abstract cognitive level, based on meaning rather than on perceptual signals. This thesis therefore examined emotion recognition across two separate modalities in a within-subject design, comprising a cognitive study with 45 British adults (Chapter 1), a developmental study with 54 British children (Chapter 2), and a cross-cultural study with 98 German and British children and 78 German and British adults (Chapter 3). Intensity ratings, choice reaction times, and correlations of confusion analyses of emotions across modalities were analysed throughout. A further ERP chapter investigated the time course of emotion recognition across the two modalities. Highly correlated rating profiles of emotions in faces and voices were found, suggesting a similarity in emotion recognition across modalities. Emotion recognition in primary-school children improved with age in both modalities, although young children relied mainly on faces. British and German participants showed comparable patterns when rating basic emotions, but subtle differences were also noted: German participants perceived emotions as less intense than British participants did. Overall, the behavioural results reported in this thesis are consistent with the idea of a general, more abstract level of emotion processing that may act independently of modality. This could be based, for example, on a shared emotion brain network or on more general, higher-level cognitive processes that are activated across a range of modalities.
Although emotion recognition abilities are already evident during childhood, this thesis argued for a contribution of ‘nurture’ to emotion mechanisms as recognition was influenced by external factors such as development and culture.
APA, Harvard, Vancouver, ISO, and other styles
4

Durrani, Sophia J. "Studies of emotion recognition from multiple communication channels". Thesis, University of St Andrews, 2005. http://hdl.handle.net/10023/13140.

Full text of the source
Abstract:
Crucial to human interaction and development, emotions have long fascinated psychologists. Current thinking suggests that specific emotions, regardless of the channel in which they are communicated, are processed by separable neural mechanisms. Yet much research has focused only on the interpretation of facial expressions of emotion. The present research addressed this oversight by exploring recognition of emotion from facial, vocal, and gestural tasks. Happiness and disgust were best conveyed by the face, yet other emotions were equally well communicated by voices and gestures. A novel method for exploring emotion perception, by contrasting errors, is proposed. Studies often fail to consider whether the status of the perceiver affects emotion recognition abilities. Experiments presented here revealed an impact of mood, sex, and age of participants. Dysphoric mood was associated with difficulty in interpreting disgust from vocal and gestural channels. To some extent, this supports the concept that neural regions are specialised for the perception of disgust. Older participants showed decreased emotion recognition accuracy but no specific pattern of recognition difficulty. Sex of participant and of actor affected emotion recognition from voices. In order to examine neural mechanisms underlying emotion recognition, an exploration was undertaken using emotion tasks with Parkinson's patients. Patients showed no clear pattern of recognition impairment across channels of communication. In this study, the exclusion of surprise as a stimulus and response option in a facial emotion recognition task yielded results contrary to those achieved without this modification. Implications for this are discussed. Finally, this thesis gives rise to three caveats for neuropsychological research. First, the impact of the observers' status, in terms of mood, age, and sex, should not be neglected. 
Second, exploring multiple channels of communication is important for understanding emotion perception. Third, task design should be appraised before conclusions regarding impairments in emotion perception are presumed.
APA, Harvard, Vancouver, ISO, and other styles
5

Alashkar, Taleb. "3D dynamic facial sequences analysis for face recognition and emotion detection". Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10109/document.

Full text of the source
Abstract:
In this thesis, we have investigated the problems of identity recognition and emotion detection from 3D facial shape animations (called 4D faces). In particular, we have studied the role of facial shape dynamics in revealing human identity and spontaneously exhibited emotion. To this end, we have adopted a comprehensive geometric framework for analyzing 3D faces and their dynamics across time. A sequence of 3D faces is first split into an indexed collection of short-term sub-sequences, each represented as a matrix (subspace); such subspaces are elements of a special matrix manifold called the Grassmann manifold (the set of k-dimensional linear subspaces). The geometry of this underlying space is used to compare 3D sub-sequences effectively, compute statistical summaries (e.g. the sample mean), and densely quantify the divergence between subspaces. Two different representations have been proposed to address the problems of face recognition and emotion detection: (1) a dictionary (of subspaces) representation associated with Dictionary Learning and Sparse Coding techniques, and (2) a time-parameterized curve (trajectory) representation on the underlying space, associated with a Structured-Output SVM classifier for early detection of spontaneous emotions. Experimental evaluations conducted on the publicly available BU-4DFE, BP4D-Spontaneous, and Cam3D Kinect datasets illustrate the effectiveness of these representations and of the algorithmic solutions for identity recognition and emotion detection proposed in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
6

Bui, Kim-Kim. "Face Processing in Schizophrenia : Deficit in Face Perception or in Recognition of Facial Emotions?" Thesis, University of Skövde, School of Humanities and Informatics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-3349.

Full text of the source
Abstract:

Schizophrenia is a psychiatric disorder characterized by social dysfunction. People with schizophrenia misinterpret social information, and it has been suggested that this difficulty may result from visual processing deficits. As faces are one of the most important sources of social information, it is hypothesized that people suffering from the disorder have impairments in the visual face-processing system. It is unclear which mechanism of the face-processing system is impaired, but two types of deficit are most often proposed: a deficit in face perception in general (i.e., processing of facial features as such) and a deficit in facial emotion processing (i.e., recognition of emotional facial expressions). Given the contradictory evidence from behavioural, electrophysiological, and neuroimaging studies supporting one or the other deficit in schizophrenia, it is too early to make any conclusive statements about the nature and level of impairment. Further studies are needed for a better understanding of the key mechanisms and abnormalities underlying social dysfunction in schizophrenia.

APA, Harvard, Vancouver, ISO, and other styles
7

Bellegarde, Lucille Gabrielle Anna. "Perception of emotions in small ruminants". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25915.

Full text of the source
Abstract:
Animals are sentient beings, capable of experiencing emotions. Being able to assess emotional states in farm animals is crucial to improving their welfare. Although the primary function of emotion is not communication, the outward expression of an emotional state involves changes in posture, vocalisations, odours, and facial expressions. These changes can be perceived and used as indicators of emotional state by other animals. Since emotions can be perceived between conspecifics, understanding how emotions are identified and how they can spread within a social group could have a major impact on improving the welfare of farmed species, which are mostly reared in groups. A recently developed method for evaluating emotions in animals is based on cognitive biases such as judgment biases, i.e. an individual in a negative emotional state will show pessimistic judgments, while an individual in a positive emotional state will show optimistic judgments. The aims of this project were to (A) establish whether sheep and goats can discriminate between images of faces of familiar conspecifics taken in different positive and negative situations, (B) establish whether sheep and goats perceive the valence (positive or negative) of the emotion expressed by the animal in the image, and (C) validate the use of images of faces in cognitive bias studies. The use of images of faces of conspecifics as emotional stimuli was first validated using a discrimination task in a two-armed maze. A new methodology was then developed across a series of experiments to assess the spontaneous reactions of animals exposed to video clips or to images of faces of familiar conspecifics. Detailed observations of ear postures were used as the main behavioural indicator. Individual characteristics (dominance status within the herd, dominance pairwise relationships, and the human-animal relationship) were also recorded during preliminary tests and included in the analyses.
The impact of a low-mood state on the perception of emotions was assessed in sheep after subjecting half of the animals to unpredictable negative housing conditions and keeping the other half in good standard housing conditions. Sheep were then presented with videos of conspecifics filmed in situations of varying valence. Reactions to ambiguous stimuli were evaluated by presenting goats with images of morphed faces. Goats were also presented with images of faces of familiar conspecifics taken in situations of varying emotional intensity. Sheep could discriminate images of faces of conspecifics taken either in a negative or in a neutral situation, and their learning of the discrimination task was affected by the type of emotion displayed. Sheep reacted differently depending on the valence of the video clips (P < 0.05); however, there was no difference between the control and the low-mood groups (P > 0.05). Goats also showed different behavioural reactions to images of faces photographed in different situations (P < 0.05), indicating that they perceived the images as different. Responses to morphed images were neither necessarily intermediate between responses to negative and positive images nor gradual, which poses a major problem for the potential use of facial images in cognitive bias experiments. Overall, animals were more attentive towards images or videos of conspecifics in negative situations, i.e., presumably, in a negative emotional state. This suggests that sheep and goats are able to perceive the valence of an emotional state. The identity of the individual in the photo also affected the animals' spontaneous reactions to the images. Social relationships such as dominance, but also the affinity between the tested and the photographed individuals, seem to influence emotion perception.
APA, Harvard, Vancouver, ISO, and other styles
8

Chiller-Glaus, Sarah. "Testing the limits of face recognition : identification from photographs in travel documents and dynamic aspects of emotion recognition /". [S.l.] : [s.n.], 2009. http://opac.nebis.ch/cgi-bin/showAbstract.pl?sys=000281129.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Merz, Sabine. "Face emotion recognition in children and adolescents; effects of puberty and callous unemotional traits in a community sample". Thesis, University of New South Wales, Faculty of Science, Psychology, 2008. http://handle.unsw.edu.au/1959.4/41247.

Full text of the source
Abstract:
Previous research suggests that, as well as behavioural difficulties, a small subset of aggressive and antisocial children show callous-unemotional (CU) personality traits (i.e., lack of remorse and absence of empathy) that set them apart from their low-CU peers. These children have been identified as most at risk of following a path of severe and persistent antisocial behaviour, show distinct behavioural patterns, and have been found to respond less well to traditional treatment programs. A particular focus of this thesis is the emerging finding of emotion recognition deficits in both groups. Whereas children who show only behavioural difficulties (in the absence of CU traits) have been found to misclassify vague and neutral expressions as anger, the presence of CU traits has been associated with an inability to correctly identify fear and, to a lesser extent, sadness. Furthermore, emotion recognition competence varies with age and development. In general, emotion recognition improves with age, but interestingly there is some evidence that it may become less efficient during puberty. No research could be located, however, that assessed emotion recognition through childhood and adolescence for children high and low on CU traits and antisocial behaviour. The primary focus of this study was to investigate the impact of these personality traits and of pubertal development on emotion recognition competence, in isolation and in combination. A specific aim was to assess whether puberty would exacerbate deficits in children with pre-existing deficits in emotion recognition. The effects of gender, emotion type, and measure characteristics, in particular the age of the target face, were also examined.
A community sample of 703 children and adolescents aged 7-17 was administered the Strengths and Difficulties Questionnaire to assess adjustment and the Antisocial Process Screening Device to assess antisocial traits, and the Pubertal Development Scale was administered to evaluate pubertal stage. Empathy was assessed using the Bryant Index of Empathy for Children and Adolescents. Parents or caregivers completed parent versions of these measures for their children. Emotion recognition ability was measured using the newly developed UNSW FACES task (Dadds, Hawes & Merz, 2004); a description of the development and validation of this measure is included. Contrary to expectations, emotion recognition accuracy was not negatively affected by puberty. In addition, no overall differences in emotion recognition ability were found due to participants' gender or to the age-group characteristics of the target faces. The hypothesis that participants would be better at recognising emotions expressed by their own age group was therefore not supported. In line with expectations, significant negative associations between CU traits and fear recognition were found. However, these were small and, contrary to expectations, were found for girls rather than boys. Also, puberty did not exacerbate emotion recognition deficits in high-CU children. However, the relationship between CU traits and emotion recognition varied with pubertal status. The implications of these results are discussed in relation to future research into emotion recognition deficits within this population. In addition, theoretical and practical implications of these findings for the development of antisocial behaviour and the treatment of children showing CU traits are explored.
APA, Harvard, Vancouver, ISO, and other styles
10

Bloom, Elana. "Recognition, expression, and understanding facial expressions of emotion in adolescents with nonverbal and general learning disabilities". Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100323.

Full text of the source
Abstract:
Students with learning disabilities (LD) have been found to exhibit social difficulties compared to those without LD (Wong, 2004). Recognition, expression, and understanding of facial expressions of emotions have been shown to be important for social functioning (Custrini & Feldman, 1989; Philippot & Feldman, 1990). LD subtypes have been studied (Rourke, 1999) and children with nonverbal learning disabilities (NVLD) have been observed to be worse at recognizing facial expressions compared to children with verbal learning disabilities (VLD), no learning disability (NLD; Dimitrovsky, Spector, Levy-Shiff, & Vakil, 1998; Dimitrovsky, Spector, & Levy-Shiff, 2000), and those with psychiatric difficulties without LD controls (Petti, Voelker, Shore, & Hyman-Abello, 2003). However, little has been done in this area with adolescents with NVLD. Recognition, expression and understanding facial expressions of emotion, as well as general social functioning have yet to be studied simultaneously among adolescents with NVLD, NLD, and general learning disabilities (GLD). The purpose of this study was to examine abilities of adolescents with NVLD, GLD, and without LD to recognize, express, and understand facial expressions of emotion, in addition to their general social functioning.
Adolescents aged 12 to 15 were screened for LD and NLD using the Wechsler Intelligence Scale for Children, Third Edition (WISC-III; Wechsler, 1991) and the Wide Range Achievement Test, Third Edition (WRAT3; Wilkinson, 1993), and were subtyped into NVLD and GLD groups based on the WRAT3. The NVLD (n = 23), matched NLD (n = 23), and comparable GLD (n = 23) groups completed attention, mood, and neuropsychological measures. The adolescents' ability to recognize (Pictures of Facial Affect; Ekman & Friesen, 1976), express, and understand facial expressions of emotion was assessed, along with their general social functioning. Results indicated that the GLD group was significantly less accurate at recognizing and understanding facial expressions of emotion than the NVLD and NLD groups, which did not differ from each other. No differences emerged between the NVLD, NLD, and GLD groups on the expression or social-functioning tasks. The neuropsychological measures did not account for a significant portion of the variance on the emotion tasks. Implications regarding severity of LD are discussed.
APA, Harvard, Vancouver, ISO, and other styles
11

St-Hilaire, Annie. "Are paranoid schizophrenia patients really more accurate than other people at recognizing spontaneous expressions of negative emotion? : a study of the putative association between emotion recognition and thinking errors in paranoia". [Kent, Ohio] : Kent State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=kent1215450307.

Full text of the source
Abstract:
Thesis (Ph.D.)--Kent State University, 2008.
Title from PDF t.p. (viewed Nov. 10, 2009). Advisor: Nancy Docherty. Keywords: schizophrenia, paranoia, emotion recognition, posed expressions, spontaneous expressions, cognition. Includes bibliographical references (p. 122-144).
APA, Harvard, Vancouver, ISO, and other styles
12

Shreve, Matthew Adam. "Automatic Macro- and Micro-Facial Expression Spotting and Applications". Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4770.

Full text of the source
Abstract:
Automatically determining the temporal characteristics of facial expressions has extensive application domains, such as human-machine interfaces for emotion recognition, face identification, and medical analysis. However, many papers in the literature have not addressed the step of determining when such expressions occur. This dissertation focuses on the problem of automatically segmenting macro- and micro-expression frames (or retrieving the expression intervals) in video sequences, without the need to train a model on a specific subset of such expressions. The proposed method exploits the non-rigid facial motion that occurs during facial expressions by modeling the strain observed during the elastic deformation of facial skin tissue. The method is capable of spotting both macro-expressions, which are typically associated with emotions such as happiness, sadness, anger, disgust, and surprise, and rapid micro-expressions, which are typically, but not always, associated with semi-suppressed macro-expressions. Additionally, we have used this method to automatically retrieve strain maps generated from peak expressions for human identification. This dissertation also contributes a novel 3-D surface strain estimation algorithm using commodity 3-D sensors aligned with an HD camera. We demonstrate the feasibility of the method, as well as the improvements gained when using 3-D, by providing empirical and quantitative comparisons between 2-D and 3-D strain estimations.
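The strain idea in this abstract can be illustrated with a minimal sketch (a simplified assumption-laden toy, not the dissertation's algorithm; the function names and the peak-detection rule are hypothetical): given a dense 2-D displacement field between two frames, the infinitesimal strain tensor is built from spatial derivatives of the displacements, so rigid head motion produces zero strain while non-rigid skin deformation does not.

```python
import numpy as np

def strain_magnitude(u, v):
    """Per-pixel magnitude of the infinitesimal strain tensor of a dense
    2-D displacement field (u, v)."""
    du_dy, du_dx = np.gradient(u)       # derivatives along rows (y) and columns (x)
    dv_dy, dv_dx = np.gradient(v)
    exx, eyy = du_dx, dv_dy             # normal strain components
    exy = 0.5 * (du_dy + dv_dx)         # shear strain component
    return np.sqrt(exx**2 + eyy**2 + 2.0 * exy**2)

def spot_expression_frames(strain_per_frame, factor=2.0):
    """Flag frames whose mean strain rises well above the sequence baseline."""
    s = np.asarray(strain_per_frame, dtype=float)
    return np.flatnonzero(s > s.mean() + factor * s.std())

# A rigid shift (constant displacement) yields zero strain everywhere,
# while a non-rigid stretch does not.
u_rigid = np.full((8, 8), 1.5)
v_rigid = np.full((8, 8), -0.5)
print(strain_magnitude(u_rigid, v_rigid).max())   # 0.0

x = np.tile(np.arange(8.0), (8, 1))               # a horizontal stretch field
print(strain_magnitude(0.1 * x, np.zeros((8, 8))).max() > 0)  # True
```

This invariance to rigid motion is exactly why strain, rather than raw optical flow, is a usable signal for spotting subtle micro-expressions.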
APA, Harvard, Vancouver, ISO, and other styles
13

Stein, Jan-Philipp, and Peter Ohler. "Saving Face in Front of the Computer? Culture and Attributions of Human Likeness Influence Users' Experience of Automatic Facial Emotion Recognition". Frontiers Media S.A., 2018. https://monarch.qucosa.de/id/qucosa%3A31524.

Full text of the source
Abstract:
In human-to-human contexts, display rules provide an empirically sound construct to explain intercultural differences in emotional expressivity. A very prominent finding in this regard is that cultures rooted in collectivism—such as China, South Korea, or Japan—uphold norms of emotional suppression, contrasting with ideals of unfiltered self-expression found in several Western societies. However, other studies have shown that collectivistic cultures do not actually disregard the whole spectrum of emotional expression, but simply prefer displays of socially engaging emotions (e.g., trust, shame) over the more disengaging expressions favored by the West (e.g., pride, anger). Inspired by the constant advancement of affective technology, this study investigates if such cultural factors also influence how people experience being read by emotion-sensitive computers. In a laboratory experiment, we introduce 47 Chinese and 42 German participants to emotion recognition software, claiming that it would analyze their facial micro-expressions during a brief cognitive task. As we actually present standardized results (reporting either socially engaging or disengaging emotions), we manipulate participants' impression of having matched or violated culturally established display rules in a between-subject design. First, we observe a main effect of culture on the cardiovascular response to the digital recognition procedure: Whereas Chinese participants quickly return to their initial heart rate, German participants remain longer in an agitated state. A potential explanation for this—East Asians might be less stressed by sophisticated technology than people with a Western socialization—concurs with recent literature, highlighting different human uniqueness concepts across cultural borders. 
Indeed, while we find no cultural difference in subjective evaluations of the emotion-sensitive computer, a mediation analysis reveals a significant indirect effect from culture over perceived human likeness of the technology to its attractiveness. At the same time, violations of cultural display rules remain mostly irrelevant for participants' reaction; thus, we argue that inter-human norms for appropriate facial expressions might be loosened if faces are read by computers, at least in settings that are not associated with any social consequence.
APA, Harvard, Vancouver, ISO, and other styles
14

Lausen, Adi [author], Annekathrin Schacht [academic supervisor and reviewer], and Kurt Hammerschmidt [reviewer]. "Emotion recognition from expressions in voice and face – Behavioral and Endocrinological evidence – / Adi Lausen ; Gutachter: Annekathrin Schacht, Kurt Hammerschmidt ; Betreuer: Annekathrin Schacht". Göttingen: Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2019. http://d-nb.info/1188464817/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
15

Bezerra, Giuliana Silva. "A framework for investigating the use of face features to identify spontaneous emotions". Universidade Federal do Rio Grande do Norte, 2014. http://repositorio.ufrn.br/handle/123456789/19595.

Full text of the source
Abstract:
Emotion-based analysis has attracted a great deal of interest, particularly in areas such as forensics, medicine, music, psychology, and human-machine interfaces. Following this trend, facial analysis (either automatic or human-based) is the most commonly investigated approach, since this type of data can be collected easily and is well accepted in the literature as a metric for inferring emotional states. Despite this popularity, due to several constraints found in real-world scenarios (e.g. lighting, complex backgrounds, facial hair, and so on), automatically and accurately obtaining affective information from faces is very challenging. This work presents a framework that aims to analyse emotional experiences through naturally generated facial expressions. Our main contribution is a new 4-dimensional model that describes emotional experiences in terms of appraisal, facial expressions, mood, and subjective experiences. In addition, we present an experiment using a new protocol proposed to obtain spontaneous emotional reactions. The results suggest that the initial emotional state described by the participants was different from that described after exposure to the eliciting stimulus, showing that the stimuli used were capable of inducing the expected emotional states in most individuals. Moreover, our results point out that spontaneous facial reactions to emotions are very different from those in prototypic expressions, due to the lack of expressiveness in the latter.
APA, Harvard, Vancouver, ISO and other styles
16

Ruivo, João Pedro Prospero. "Um modelo para inferência do estado emocional baseado em superfícies emocionais dinâmicas planares". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-28022018-110833/.

Full text of the source
Abstract:
Emotions have a direct influence on human life and are of great importance in relationships and in the way interactions between individuals develop. Because of this, they are also important for the development of human-machine interfaces that aim to maintain natural and friendly interactions with their users. In the development of social robots, which this work aims at, a suitable interpretation of the emotional state of the person interacting with the robot is indispensable. The focus of this work is the development of a mathematical model for recognizing emotional facial expressions in a sequence of frames. First, a face-tracker algorithm finds and keeps track of a human face in images; relevant features are then extracted from this face and fed into the emotional-state recognition model developed in this work, which consists of an instantaneous emotional expression classifier, a Kalman filter and a dynamic classifier that gives the final output of the model. The model is optimized via a simulated annealing algorithm and is evaluated on relevant datasets, with its performance measured for each of the considered emotional states.
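The temporal part of the pipeline described above, in which frame-wise classifier scores are smoothed before a final dynamic decision, can be sketched with a minimal one-dimensional Kalman filter. This is an illustrative sketch only: the function name, the noise parameters `q` and `r`, and the example scores are assumptions, not the implementation used in the thesis.

```python
def kalman_smooth(scores, q=1e-3, r=1e-1):
    """Smooth a sequence of per-frame classifier scores with a 1-D
    constant-state Kalman filter (process noise q, measurement noise r)."""
    x, p = scores[0], 1.0          # initial state estimate and variance
    out = [x]
    for z in scores[1:]:
        p += q                     # predict: variance grows by process noise
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # update toward the new measurement
        p *= (1 - k)
        out.append(x)
    return out

# Hypothetical noisy frame-wise "happiness" scores from an
# instantaneous classifier
raw = [0.2, 0.8, 0.3, 0.9, 0.85, 0.9, 0.2, 0.88]
smoothed = kalman_smooth(raw)
```

Because each estimate is a convex combination of the previous estimate and the new measurement, the smoothed sequence stays within the range of the raw scores while suppressing frame-to-frame jitter, which is what the downstream dynamic classifier benefits from.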
APA, Harvard, Vancouver, ISO and other styles
17

Schacht, Annekathrin. "Emotions in visual word processing". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2008. http://dx.doi.org/10.18452/15727.

Full text of the source
Abstract:
In recent cognitive and neuroscientific research, the influence of emotion on information processing is of special interest. As several studies on the processing of affective pictures and emotional facial expressions have shown, emotional stimuli tend to draw attentional resources involuntarily and receive preferential and sustained processing, possibly because of their high intrinsic relevance. However, evidence for emotion effects in visual word processing is scant and heterogeneous. As yet, little is known about at which stage, and under what conditions, the specific emotional content of a word is activated. A series of experiments, summarized and discussed here, aimed to localize the effects of emotion in visual word processing by recording event-related potentials (ERPs). Distinct effects of emotional valence on ERPs were found that were distinguishable with regard to their temporal and spatial distribution and might therefore be related to different stages within the processing stream. As a main result, the present findings indicate that the activation of the emotional valence of verbs occurs at a (post-)lexical stage. The underlying neural mechanisms of this early registration appear to be domain-unspecific and largely independent of processing resources and task demands. At later stages, emotional processes are modulated by several different factors. Furthermore, the finding that early, but not late, emotion effects are accelerated by neutral context information and depend on the stimulus domain indicates a flexible dynamic of emotional processing that is hard to account for with strictly serial processing models.
APA, Harvard, Vancouver, ISO and other styles
18

Löfdahl, Tomas, and Mattias Wretman. "Långsammare igenkänning av emotioner i ansiktsuttryck hos individer med utmattningssyndrom : En pilotstudie". Thesis, Mittuniversitetet, Institutionen för samhällsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17590.

Full text of the source
Abstract:
The aim of this pilot study was to generate hypotheses about whether, and how, exhaustion disorder affects the ability to recognize emotions in facial expressions. A group of patients with exhaustion disorder was compared with a matched healthy control group (N=14). The groups were examined with a computer-based test consisting of colour photographs of authentic facial expressions that changed gradually, in steps of 10%, from a neutral expression to one of the five basic emotions anger, disgust, fear, happiness and sadness. Performance was measured in terms of recognition accuracy and response speed. The results showed that the patient group responded significantly more slowly than the control group across all emotions in the test. No emotion-specific differences, and no differences in recognition accuracy, could be demonstrated between the groups. The causes of the discrepancy in response speed were discussed in terms of four possible explanatory domains: face-perceptual function, visual attention, self-focused attention, and conscientiousness/worry. Recommendations were made for future research to explore these areas further.
APA, Harvard, Vancouver, ISO and other styles
19

Julin, Fredrik. "Vision based facial emotion detection using deep convolutional neural networks". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-42622.

Full text of the source
Abstract:
Emotion detection, also known as facial expression recognition, is the task of mapping an emotion to some form of input data taken from a human. It is a powerful tool for extracting valuable information from individuals, with uses ranging from medical conditions such as depression to customer feedback. Solving the facial expression recognition problem requires smaller subtasks, which together form the complete system. Breaking down the larger task, these subtasks can be thought of as a pipeline that implements the steps necessary to classify some input and produce an emotion as output. With the recent rise of computer vision, images are often used as input for such systems and have shown great promise, as the human face conveys the subject's emotional state and contains more information than other inputs such as text or audio. Many current state-of-the-art systems combine computer vision with another rising field, namely AI, or more specifically deep learning. In many cases these deep learning methods use a special form of neural network, the convolutional neural network, which specializes in extracting information from images, with classification performed by the SoftMax function as the last stage of the pipeline before the output. This thesis explores these methods of using convolutional neural networks to extract information from images and builds on them by examining a set of machine learning algorithms that replace the more commonly used SoftMax function as the classifier, in an attempt not only to increase accuracy but also to optimize the use of computational resources.
The work also compares two approaches to the face detection subtask of the pipeline. The first, the Viola-Jones algorithm, is used more frequently in the state of the art and is said to be more viable for real-time applications. The other is a deep learning approach that uses a state-of-the-art convolutional neural network to perform the detection, often speculated to be too computationally intensive to run in real time. A newly developed convolutional neural network inspired by the state of the art, combined with the SoftMax classifier, did not reach state-of-the-art accuracy. However, the machine learning classifiers used show promise and outperform the SoftMax function in several cases when trained on a massively smaller number of samples. Furthermore, the results of implementing and testing a pure deep learning approach, using deep learning algorithms for both the detection and classification stages of the pipeline, show that deep learning may outperform the classic Viola-Jones algorithm in terms of both detection rate and frames per second.
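The classifier swap discussed above can be sketched by contrasting the standard SoftMax output stage with a toy machine-learning classifier applied to penultimate-layer feature vectors. This is an illustrative sketch under assumptions: the 2-D feature values and class names are invented, and nearest-centroid merely stands in for the classifiers actually evaluated in the thesis.

```python
import math

def softmax(logits):
    """The usual SoftMax output stage: raw scores -> class probabilities."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

class NearestCentroid:
    """Toy stand-in for the machine-learning classifiers that replace
    SoftMax: label a feature vector by its closest class centroid."""

    def fit(self, X, y):
        sums, counts = {}, {}
        for vec, label in zip(X, y):
            acc = sums.setdefault(label, [0.0] * len(vec))
            for i, v in enumerate(vec):
                acc[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {c: [v / counts[c] for v in acc]
                          for c, acc in sums.items()}
        return self

    def predict(self, vec):
        # squared Euclidean distance to each centroid; return nearest class
        return min(self.centroids,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(self.centroids[c], vec)))

# Hypothetical 2-D "penultimate-layer" features for two emotion classes
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
y = ["happy", "happy", "sad", "sad"]
clf = NearestCentroid().fit(X, y)
```

The design point is that both stages consume the same feature vector: SoftMax turns it into probabilities, while the replacement classifier makes a hard decision directly, which is one way such a swap can trade probability outputs for lower training-data requirements.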
APA, Harvard, Vancouver, ISO and other styles
20

Wild-Wall, Nele. "Is there an interaction between facial expression and facial familiarity?" Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2004. http://dx.doi.org/10.18452/15042.

Full text of the source
Abstract:
In contrast to traditional face recognition models, previous research has revealed that the recognition of facial expression and facial familiarity may not be independent. This dissertation attempts to localize this interaction within the information-processing system by means of performance data and event-related potentials. Part I addressed the question of whether there is an interaction between facial familiarity and the discrimination of facial expression. Participants had to discriminate two expressions displayed on familiar and unfamiliar faces. Discrimination was faster and less error-prone for personally familiar faces displaying happiness. Results revealed a shorter peak latency for the P300 component (trend), reflecting stimulus categorization time, and an earlier onset of the lateralized readiness potential (S-LRP), reflecting the duration of pre-motor processes. This suggests a facilitation of perceptual stimulus categorization for personally familiar faces displaying happiness. The discrimination of expressions was not facilitated in further experiments using famous, experimentally familiarized, and unfamiliar faces. Part II raised the question of whether there is an interaction between facial expression and the discrimination of facial familiarity. In this task a facilitation was observable only for personally familiar faces displaying a neutral or happy expression, but not for experimentally familiarized or unfamiliar faces. Event-related potentials revealed a shorter S-LRP interval for personally familiar faces, suggesting a facilitated response-selection stage. In summary, the results suggest that an interaction of facial familiarity and facial expression is possible under some circumstances. Finally, the results are discussed in the context of possible interpretations, previous findings, and face recognition models.
APA, Harvard, Vancouver, ISO and other styles
21

Alves, Cláudia Daniela Andrade Carvalho. "Transplantação da Face Humana: estudo de caso com Carmen Tarleton - efeitos neuropsicofisiológicos na exibição e no reconhecimento das emoções básicas". Doctoral thesis, [s.n.], 2015. http://hdl.handle.net/10284/5201.

Full text of the source
Abstract:
Thesis presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Doctor in Social Sciences, specialty in Psychology
Face transplantation is considered an experimental procedure that has given rise to intense debate about the risks and benefits of performing this type of surgery. The results show that, from a clinical, technical, aesthetic, functional, immunological and psychological point of view, face transplantation has achieved functional, aesthetic, social and psychological rehabilitation in patients with severe facial disfigurement, as described in the publications of transplant teams all over the world. Clinical experience has demonstrated the feasibility of face transplantation as a valuable reconstruction option, yet it is still considered an experimental procedure with unresolved questions. The functional and aesthetic results have been very encouraging, with good motor and sensory recovery and improvements in facial function observed. As expected, episodes of acute rejection have been common, but easily controlled with increased systemic immunosuppression. Mortality and complications of immunosuppression were also observed. The psychological improvements have been remarkable and have resulted in the reintegration of patients into the outside world, social networks and even the workplace. Face transplant teams have highlighted the rigorous selection of patients as the key indicator of success. The first global results of face transplantation programmes have generally been more positive than expected. This initial success, the dissemination of results and the ongoing refinement of the procedure may make face transplantation a primary reconstruction option in the future for those with extensive facial deformities. It is therefore of paramount importance to understand the neuropsychophysiological process of displaying and recognizing the basic emotions after transplantation of the face.
The results obtained indicate that musculoskeletal injuries to the face affect the capacity to display emotional expression and therefore make it harder for others to recognize it, undermining the effectiveness of communication. This study aims to contribute to the development of scientific research on the facial expression of emotion, applicable in a pioneering and unique context in Portugal, namely face transplantation.
APA, Harvard, Vancouver, ISO and other styles
22

Coelho-Moreira, Ana Cristina Gonçalves. "As falas da face: processo Casa Pia - aplicação da análise da expressão facial à luz do Direito Penal Português". Doctoral thesis, [s.n.], 2015. http://hdl.handle.net/10284/4950.

Full text of the source
Abstract:
Thesis presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Doctor in Social Sciences, specialty in Psychophysiology of the Facial Expression of Emotion
The Casa Pia child sexual abuse scandal (PrCP) had a devastating impact on Portuguese society, publicly scrutinizing the state institutions that sheltered children. The repercussions were so intense that not only did the methodologies and concepts of state protection for disadvantaged children change, but the criminal law itself was changed as a direct result of its implications. Emotion and its facial expression play a key role in the development of the individual and in their interaction with society. The study of the facial expression of emotion of some of the participants in the PrCP sought answers about the manifestation and display of guilt on the face and about its processing at the neuropsychological level. The concept of guilt within the facial expression of emotion, although today the subject of heated debate within the scientific community, is in the light of criminal law one of the main instruments used to determine the reprehensibility of agents and their actions. Although guilt is considered by criminal law as something intrinsic to the agent and their actions, whether intentional or merely negligent, its determination allows criminal law to uphold and apply a sanction, deterring identical behaviour and thus maintaining peace, social order, and respect for state institutions and their representatives.
Thus, combining the ultimate goal of criminal law with the contribution of facial expression analysis in a forensic context, a case study was carried out using a comparative qualitative methodology. The main objective, in seeking answers to the hypotheses, was to develop matrices for analysing and measuring guilt, given the different types and levels of influence it exerts on individuals' processes of adaptation to society and circumstances. The results obtained indicate and support the evidence of a specific configuration of AUs in the upper face associated with the manifestation of guilt on the face, regardless of the circumstances (denial or admission) that provoke it. The present study may therefore represent the beginning of a necessary collaboration between the analysis of the facial expression of emotion and the application of the law in all its aspects and institutions, as it strengthens the principle of guilt and, consequently, its legal-criminal and ethical dimensions.
APA, Harvard, Vancouver, ISO and other styles
23

ALTIERI, ALEX. "Yacht experience, ricerca e sviluppo di soluzioni basate su intelligenza artificiale per il comfort e la sicurezza in alto mare". Doctoral thesis, Università Politecnica delle Marche, 2021. http://hdl.handle.net/11566/287605.

Full text of the source
Abstract:
La tesi descrive i risultati dell’attività di ricerca e sviluppo di nuove tecnologie basate su tecniche di intelligenza artificiale, capaci di raggiungere un’interazione empatica e una connessione emotiva tra l’uomo e “le macchine” così da migliorare il comfort e la sicurezza a bordo di uno yacht. Tale interazione è ottenuta grazie al riconoscimento di emozioni e comportamenti e alla successiva attivazione di tutti quegli apparati multimediali presenti nell’ambiente a bordo, che si adattano al mood del soggetto all’interno della stanza. Il sistema prototipale sviluppato durante i tre anni di dottorato è oggi in grado di gestire i contenuti multimediali (ad es. brani musicali, video riprodotti nei LED screen) e scenari di luce, basati sull'emozione dell'utente, riconosciute dalle espressioni facciali riprese da una qualsiasi fotocamera installata all’interno dello spazio. Per poter rendere l’interazione adattativa, il sistema sviluppato implementa algoritmi di Deep Learning per riconoscere l’identità degli utenti a bordo (riconoscimento facciale), il grado di attenzione del comandante (Gaze Detection e Drowsiness) e gli oggetti con cui egli interagisce (telefono, auricolari, ecc.). Tali informazioni vengono processate all’interno del sistema per identificare eventuali situazioni di rischio per la sicurezza delle persone presenti a bordo e per controllare l’intero ambiente. L’applicazione di queste tecnologie, in questo settore sempre aperto all’introduzione delle ultime innovazioni a bordo, apre a diverse sfide di ricerca.
The thesis describes the results of the research and development of new technologies based on artificial intelligence techniques, able to achieve an empathic interaction and an emotional connection between man and "the machines" in order to improve comfort and safety on board yachts. This interaction is achieved through the recognition of emotions and behaviors and the subsequent activation of all those multimedia devices available in the environment on board, which adapt to the mood of the subject inside the room. The prototype system developed during the three years of the PhD is now able to manage multimedia content (e.g. music tracks, videos played on LED screens) and light scenarios based on the user's emotion, recognized from facial expressions captured by any camera installed inside the space. In order to make the interaction adaptive, the developed system implements Deep Learning algorithms to recognize the identity of the users on board (Facial Recognition), the degree of attention of the commander (Gaze Detection and Drowsiness) and the objects with which he interacts (phone, earphones, etc.). This information is processed within the system to identify any situations of risk to the safety of people on board and to monitor the entire environment. The application of these technologies, in a domain always open to the introduction of the latest innovations on board, opens up several research challenges.
Style APA, Harvard, Vancouver, ISO itp.
24

Najafi, Modjtaba. "L’espace public solidaire face aux séismes de Bam et d’Azerbaïdjan en 2003 et en 2012 : de l’Iran civil à l’Iran des réseaux". Thesis, Paris 3, 2019. http://www.theses.fr/2019PA030040.

Pełny tekst źródła
Streszczenie:
Etudier la constitution d’un public solidaire lors des séismes de Bam et d’Azerbaïdjan est l’objet de cette recherche. Elle aborde la question selon une approche pragmatiste en vue de montrer comment les évolutions technico- démographiques et politico-sociales dans les années 2000 ont favorisé la formation du public iranien lors des séismes de Bam en 2003 et de celui d’Azerbaïdjan en 2012.A partir de ce constat, il s’agit d’étudier la contribution des citoyens internautes sur les blogs et sur les réseaux sociaux, ce qui a permis le rassemblement des Iraniens qui ont fait l’enquête pour vérifier une situation problématique en partant des effets des séismes pour arriver à leurs causes. Cette thèse examine également la couverture médiatique de la presse iranienne, soit conservatrice, soit réformatrice, pour découvrir les différents aspects de ces événements.En s’appuyant sur l’analyse du discours, cette recherche montre comment l’espace public iranien est plus proche de la lecture pragmatiste et plus éloigné de la lecture habermassienne. Selon l’approche pragmatiste, l’émotion est unificatrice et motrice, elle est prise comme facteur d’unité et de complétude dans l’expérience. Les émotions partagées par les différents acteurs de ces événements ont participé à la création d’un Nous, rassemblé par un objectif central. On y voit une argumentation émotionnelle, caractérisée par l’utilisation massive des témoignages et récits, des poésies, des métaphores et des métonymies et des images au sein du discours politico-social. Cette étude montre comment les citoyens indignés et choqués se sont rassemblés pour la reconstruction de l’Iran. Cette recherche fait apparaître la nouvelle image de l’Iran contemporain : l’Iran civil et l’Iran des réseaux. 
Le premier se caractérise par l’apparition d’une nouvelle société civile apparue particulièrement à la fin des années 1990 et au début des années 2000 et le deuxième se distingue par l’élargissement de l’espace public grâce au développement de l’internet notamment par les réseaux sociaux
Studying the constitution of a united public during the earthquakes of Bam and Azerbaijan is the subject of this research. It tackles the issue with a pragmatist approach to show how techno-demographic and politico-social developments in the 2000s fostered the formation of the Iranian public during the earthquakes of Bam in 2003 and Azerbaijan in 2012. From this observation, this research studies the contribution of citizen Internet users on blogs and social networks, which allowed the gathering of Iranians who carried out an inquiry into a problematic situation, working from the effects of the earthquakes back to their causes. This thesis also analyses the media coverage of the Iranian press, whether conservative or reformist, to uncover the various aspects of these events. Based on discourse analysis, this research shows how the Iranian public sphere is closer to the pragmatist reading and further from the Habermasian one. According to the pragmatist approach, emotion is unifying and motivating; it is conceived as a factor of unity and completeness in experience. The emotions shared by the different actors of these events contributed to the creation of a We, brought together by a central objective. We see an emotional mode of argument, characterized by the massive use of testimonies and stories, poems, metaphors, metonymies, and images within politico-social discourse. This study shows how indignant and shocked citizens gathered for the reconstruction of Iran. From this work emerges the new image of contemporary Iran: civil Iran and networked Iran. The first is characterized by the emergence of a new civil society that appeared particularly in the late 1990s and early 2000s, and the second is distinguished by the expansion of the public sphere through the development of the internet, notably social networks.
Style APA, Harvard, Vancouver, ISO itp.
25

Riley, Helen. "Maternal attachment and recognition of infant emotion". Thesis, University of Exeter, 2014. http://hdl.handle.net/10871/16569.

Pełny tekst źródła
Streszczenie:
Objective: The overall aim of this study was to investigate whether maternal emotion recognition of infant faces in a facial morphing task differed by maternal attachment style, and if this was moderated by a secure attachment prime, such that it would ameliorate the effects of maternal attachment insecurity. Method: 87 mothers of children aged 0-18 months completed measures of global and mother-specific trait attachment, post-natal depression, mood and state attachment alongside 2 sessions of an emotion recognition task. This task was made up of short movies created from photographs of infant faces, changing from neutral to either happy or sad. It was designed to assess sensitivity (accuracy of responses and intensity of emotion required to recognize the emotion) to changes in emotions expressed in the faces of infants. Participants also underwent a prime manipulation that was either attachment-based (experimental group) or neutral (control group). Results: There were no significant effects for global attachment scores (i.e., avoidant, anxious). However, there was a significant interaction effect of condition x maternal avoidant attachment for accuracy of recognition of happy infant faces. Explication of this interaction yielded an unexpected finding: participants reporting avoidant attachment relationships with their own mothers were less accurate in recognizing happy infant faces following the attachment prime than participants with maternal avoidant attachment in the control condition. Conclusions: Future research directions suggest ways to improve strength of effects and variability in attachment insecurity. Clinical implications of the study center on the preliminary evidence presented that supports carefully selected and executed interventions for mothers with attachment problems.
Style APA, Harvard, Vancouver, ISO itp.
26

Beer, Jenay Michelle. "Recognizing facial expression of virtual agents, synthetic faces, and human faces: the effects of age and character type on emotion recognition". Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33984.

Pełny tekst źródła
Streszczenie:
An agent's facial expression may communicate emotive state to users both young and old. The ability to recognize emotions has been shown to differ with age, with older adults more commonly misidentifying the facial emotions of anger, fear, and sadness. This research study examined whether emotion recognition of facial expressions differed between different types of on-screen agents, and between age groups. Three on-screen characters were compared: a human, a synthetic human, and a virtual agent. In this study 42 younger (age 28-28) and 42 older (age 65-85) adults completed an emotion recognition task with static pictures of the characters demonstrating four basic emotions (anger, fear, happiness, and sadness) and neutral. The human face resulted in the highest proportion match, followed by the synthetic human, then the virtual agent with the lowest proportion match. Both the human and synthetic human faces resulted in age-related differences for the emotions anger, fear, sadness, and neutral, with younger adults showing higher proportion match. The virtual agent showed age-related differences for the emotions anger, fear, happiness, and neutral, with younger adults showing higher proportion match. The data analysis and interpretation of the present study differed from previous work by utilizing two unique approaches to understanding emotion recognition. First, misattributions participants made when identifying emotion were investigated. Second, a similarity index of the feature placement between any two virtual agent emotions was calculated, suggesting that emotions were commonly misattributed as other emotions similar in appearance. Overall, these results suggest that age-related differences transcend human faces to other types of on-screen characters, and differences between older and younger adults in emotion recognition may be further explained by perceptual discrimination between two emotions of similar feature appearance.
Style APA, Harvard, Vancouver, ISO itp.
27

Ostmeyer-Kountzman, Katrina. "Emotion Recognition of Dynamic Faces in Children with Autism Spectrum Disorder". Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/32771.

Pełny tekst źródła
Streszczenie:
Studies examining impaired emotion recognition and perceptual processing in autism spectrum disorders (ASD) show inconsistent results (Harms, Martin, & Wallace, 2010; Jemel, Mottron, & Dawson, 2006), and many of these studies include eye tracking data. The current study utilizes a novel task, emotion recognition of a dynamic talking face with sound, to compare children with ASD (n=8; aged 6-10, 7 male) with mental age (MA) and gender matched controls (n=8; aged 4-10, 7 male) on an emotion identification and eye tracking task. Children were asked to watch several short video clips (2.5-5 seconds) portraying the emotions of happy, sad, excited, scared, and angry and to identify the emotion portrayed in each video. A mixed factorial ANOVA was conducted to examine group differences in attention when viewing the stimuli. Differences in emotion identification ability were examined using a t-test and Fisher's exact tests of independence. Findings indicated that children with ASD spent less time looking at faces and the mouth region than controls. Additionally, the amount of time children with ASD spent looking at the mouth region predicted better performance on the emotion identification task. The study was underpowered, however, so these results are preliminary and require replication. Results are discussed in relation to natural processing of emotion and social stimuli.

[revised ETD per Dean DePauw 10/25/12 GMc]
Master of Science

Style APA, Harvard, Vancouver, ISO itp.
28

Kaltwasser, Laura. "Influence of interpersonal abilities on social decisions and their physiological correlates". Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17435.

Pełny tekst źródła
Streszczenie:
Das Konzept der interpersonellen Fähigkeiten bezieht sich auf Leistungsaufgaben der sozialen Kognition. Diese Aufgaben messen die Fähigkeiten Gesichter zu erkennen und sich diese zu merken sowie Emotionen zu erkennen und diese auszudrücken. Ziel dieser Dissertation war die Untersuchung des Einflusses von interpersonellen Fähigkeiten auf soziale Entscheidungen. Ein besonderer Fokus lag auf der Quantifizierung von individuellen Unterschieden in zugrundeliegenden neuronalen Mechanismen. Studie 1 erweiterte bestehende Evidenz zu Beziehungen zwischen psychometrischen Konstrukten der Gesichterkognition und Ereigniskorrelierten Potentialen, welche mit den verschiedenen Stadien der Gesichterverarbeitung (Enkodierung, Wahrnehmung, Gedächtnis) während einer Bekanntheitsentscheidung assoziiert sind. Unsere Ergebnisse bestätigen eine substantielle Beziehung zwischen der N170 Latenz und der Amplitude des frühen Wiederholungseffektes (ERE) mit drei Faktoren der Gesichterkognition. Je kürzer die N170 Latenz und je ausgeprägter die ERE Amplitude, umso genauer und schneller ist die Gesichterkognition. Studie 2 ergab, dass die Fähigkeit ängstliche Gesichter zu erkennen sowie die generelle spontane Expressivität während der sozialen Interaktion mit prosozialen Entscheidungen korreliert. Sensitivität für das Leid anderer sowie emotionale Expressivität scheinen reziproke Interaktionen mit Gleichgesinnten zu fördern. Studie 3 bestätigte das Modell der starken Reziprozität, da Prosozialität die negative Reziprozität im Ultimatum Spiel beeinflusste. Unter der Verwendung von Strukturgleichungsmodellen entdeckten wir, dass Menschen mit ausgeprägter Reziprozität eine größere Amplitude der relativen feedback-negativity auf das Gesicht von Spielpartnern zeigen. Insgesamt sprechen die Ergebnisse dafür, dass die etablierten individuellen Unterschiede in den Verhaltensmaßen der interpersonellen Fähigkeiten zum Teil auf individuelle Unterschiede in neuronalen Mechanismen zurückzuführen sind.
The concept of interpersonal abilities refers to performance measures of social cognition such as the abilities to perceive and remember faces and the abilities to recognize and express emotions. The aim of this dissertation was to examine the influence of interpersonal abilities on social decisions. A particular focus lay on the quantification of individual differences in brain-behavior relationships associated with processing interpersonally relevant stimuli. Study 1 added to existing evidence on brain-behavior relationships, specifically between psychometric constructs of face cognition and event-related potentials associated with different stages of face processing (encoding, perception, and memory) in a familiarity decision. Our findings confirm a substantial relationship between the N170 latency and the early-repetition effect (ERE) amplitude with three established face cognition ability factors. The shorter the N170 latency and the more pronounced the ERE amplitude, the better is the performance in face perception and memory and the faster is the speed of face cognition. Study 2 found that the ability to recognize fearful faces as well as the general spontaneous expressiveness during social interaction are linked to prosocial choices in several socio-economic games. Sensitivity to the distress of others and spontaneous expressiveness foster reciprocal interactions with prosocial others. Study 3 confirmed the model of strong reciprocity in that prosociality drives negative reciprocity in the ultimatum game. Using multilevel structural equation modeling in order to estimate brain-behavior relationships of fairness preferences, we found strong reciprocators to show more pronounced relative feedback-negativity amplitude in response to the faces of bargaining partners. Thus, the results of this dissertation suggest that established individual differences in behavioral measures of interpersonal ability are partly due to individual differences in brain mechanisms.
Style APA, Harvard, Vancouver, ISO itp.
29

Mainsant, Marion. "Apprentissage continu sous divers scénarios d'arrivée de données : vers des applications robustes et éthiques de l'apprentissage profond". Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALS045.

Pełny tekst źródła
Streszczenie:
Le cerveau humain reçoit en continu des informations en provenance de stimuli externes. Il a alors la capacité de s’adapter à de nouvelles connaissances tout en conservant une mémoire précise de la connaissance apprise par le passé. De plus en plus d’algorithmes d’intelligence artificielle visent à apprendre des connaissances à la manière d’un être humain. Ils doivent alors être mis à jour sur des données variées arrivant séquentiellement et disponibles sur un temps limité. Cependant, un des verrous majeurs de l’apprentissage profond réside dans le fait que lors de l’apprentissage de nouvelles connaissances, les anciennes sont quant-à-elles perdues définitivement, c’est ce que l’on appelle « l’oubli catastrophique ». De nombreuses méthodes ont été proposées pour répondre à cette problématique, mais celles-ci ne sont pas toujours applicables à une mise en situation réelle car elles sont construites pour obtenir les meilleures performances possibles sur un seul scénario d’arrivée de données à la fois. Par ailleurs, les meilleures méthodes existant dans l’état de l’art sont la plupart du temps ce que l’on appelle des méthodes à « rejeu de données » qui vont donc conserver une petite mémoire du passé, posant ainsi un problème dans la gestion de la confidentialité des données ainsi que dans la gestion de la taille mémoire disponible.Dans cette thèse, nous proposons d’explorer divers scénarios d’arrivée de données existants dans la littérature avec, pour objectif final, l’application à la reconnaissance faciale d’émotion qui est essentielle pour les interactions humain-machine. Pour cela nous présenterons l’algorithme Dream Net – Data-Free qui est capable de s’adapter à un vaste nombre de scenarii d’arrivée des données sans stocker aucune donnée passée. Cela lui permet donc de préserver la confidentialité des données apprises. 
Après avoir montré la robustesse de cet algorithme comparé aux méthodes existantes de l’état de l’art sur des bases de données classiques de la vision par ordinateur (Mnist, Cifar-10, Cifar-100 et Imagenet-100), nous verrons qu’il fonctionne également sur des bases de données de reconnaissance faciale d’émotions. En s’appuyant sur ces résultats, nous proposons alors un démonstrateur embarquant l’algorithme sur une carte Nvidia Jetson nano. Enfin nous discuterons la pertinence de notre approche pour la réduction des biais en intelligence artificielle ouvrant ainsi des perspectives vers une IA plus robuste et éthique
The human brain continuously receives information from external stimuli. It then has the ability to adapt to new knowledge while retaining past events. Nowadays, more and more artificial intelligence algorithms aim to learn knowledge in the same way as a human being. They therefore have to be able to adapt to a large variety of data arriving sequentially and available over a limited period of time. However, when a deep learning algorithm learns new data, the knowledge contained in the neural network overwrites the old, and the majority of past information is lost, a phenomenon referred to in the literature as catastrophic forgetting. Numerous methods have been proposed to overcome this issue, but because they focused on providing the best performance, studies have moved away from real-life applications where algorithms need to adapt to changing environments and perform no matter how the data arrive. In addition, most of the best state-of-the-art methods are replay methods, which retain a small memory of the past and consequently do not preserve data privacy. In this thesis, we propose to explore the data arrival scenarios existing in the literature, with the aim of applying them to facial emotion recognition, which is essential for human-robot interactions. To this end, we present Dream Net - Data-Free, a privacy-preserving algorithm able to adapt to a large number of data arrival scenarios without storing any past samples. After demonstrating the robustness of this algorithm compared to existing state-of-the-art methods on standard computer vision databases (Mnist, Cifar-10, Cifar-100 and Imagenet-100), we show that it can also adapt to more complex facial emotion recognition databases. We then propose to embed the algorithm on an Nvidia Jetson nano card, creating a demonstrator able to learn and predict emotions in real time.
Finally, we discuss the relevance of our approach for bias mitigation in artificial intelligence, opening up perspectives towards a more robust and ethical AI.
Style APA, Harvard, Vancouver, ISO itp.
30

Debladis, Jimmy. "Traitement des signaux de communication dans le syndrome de Prader-Willi : aspects descriptifs, analytiques et évolutifs". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30036.

Pełny tekst źródła
Streszczenie:
Le syndrome de Prader-Willi (SPW) est un syndrome génétique rare qui touche 1 naissance sur 20 000 en France dont les deux origines génétiques les plus fréquentes sont la délétion de la région 15q11q12 du chromosome 15 paternel et la disomie maternelle. Ce syndrome est marqué par une hypotonie néonatale, puis au cours du développement, apparaissent l'hyperphagie, les troubles de la satiété et des troubles comportementaux. Sur le plan social, ces patients ont des interactions sociales atypiques, faisant référence à celles décrites dans les troubles du spectre de l'autisme (TSA). Dans le SPW, les données concernant les troubles du comportement et les troubles des interactions sociales sont rares. Il est détaillé que ces patients ont des déficits de reconnaissance des émotions et des signatures cérébrales en réponse aux visages atypiques. Néanmoins, beaucoup de processus de traitement des signaux sociaux restent encore inexplorés. Cette thèse permet d'apporter de nouvelles données sur les processus de traitement des voix et des visages qui pourraient être altérés dans le SPW. Nous avons développé un ensemble complet de tests comportementaux simples qui visent à étudier le traitement des voix et des visages. Nous avons démontré que les patients avec un SPW avaient une lenteur motrice et perceptive. De plus, nous relevons un déficit de traitement des visages, mais qui n'est pas généralisable aux voix. Selon nous, les déficits présents sur le traitement des visages, pourraient provenir d'un trouble dans la perception globale et dans l'unification de plusieurs sources d'informations entre elles, faisant référence à la cohérence centrale. Enfin, nous avons montré que globalement, les patients avec une disomie souffrent de troubles sociaux plus sévères que les patients avec une délétion. Par ailleurs, un versant thérapeutique est développé avec l'administration d'ocytocine (OT) chez les enfants et les adultes avec un SPW. 
L'OT a, au cours des dernières années, fait l'objet d'un vif intérêt pour les populations ayant des troubles des interactions sociales. Ce versant thérapeutique permettra d'étudier les effets à long terme de l'OT sur des enfants et les potentiels bénéfices d'un traitement sur les comportements alimentaires et sociaux
Prader-Willi syndrome (PWS) is a rare genetic syndrome affecting around 1 in 20,000 births in France. The two most frequent genetic origins are either a deletion in the 15q11q12 region on the paternal chromosome 15 or maternal uniparental disomy. This syndrome is easily identified through the hypotonia and feeding difficulties observed at birth; hyperphagia, a constant sensation of hunger, and behavioural difficulties then appear over time. From a social point of view, these patients present with atypical social interactions, similar to those reported in autism spectrum disorder (ASD). In PWS, very little research has been done on the behavioural and social interaction difficulties observed. Previous research has shown that these patients have deficits in recognizing emotions as well as atypical cortical signatures in response to faces. Nonetheless, an unexplored gap remains regarding how social signals are processed and analyzed. This thesis brings new data on potentially altered voice and face processing in PWS. We developed a complete battery of behavioural tests aiming to study how voices and faces are processed. We demonstrated that patients with PWS have slower motor and perceptive skills. Furthermore, we identified a face processing deficit that is not present for voices. We suggest that the face processing deficits observed could originate from a deficit in global perception and in the unification of several sources of information, thereby relating to central coherence. Finally, we showed that patients with maternal disomy suffered from more severe social interaction difficulties than patients presenting with a deletion. Additionally, a therapeutic axis was developed with the administration of oxytocin in children and adults with PWS. Oxytocin has, over the past few years, gained renewed interest for individuals with social interaction deficits.
This therapeutic axis will allow us to study the long-term effects of oxytocin on children and the potential benefits of treatment on social and feeding behaviours.
Style APA, Harvard, Vancouver, ISO itp.
31

Shostak, Lisa. "Social information processing, emotional face recognition and emotional response style in offending and non-offending adolescents". Thesis, King's College London (University of London), 2007. https://kclpure.kcl.ac.uk/portal/en/theses/social-information-processing-emotional-face-recognition-and-emotional-response-style-in-offending-and-nonoffending-adolescents(15ff1b2d-1e52-46b7-be1a-736098263ce1).html.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
32

Wei, Xiaozhou. "3D facial expression modeling and analysis with topographic information". Diss., Online access via UMI:, 2008.

Znajdź pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
33

Maas, Casey. "Decoding Faces: The Contribution of Self-Expressiveness Level and Mimicry Processes to Emotional Understanding". Scholarship @ Claremont, 2014. http://scholarship.claremont.edu/scripps_theses/406.

Pełny tekst źródła
Streszczenie:
Facial expressions provide valuable information in making judgments about internal emotional states. Evaluation of facial expressions can occur through mimicry processes via the mirror neuron system (MNS) pathway, where a decoder mimics a target’s facial expression and proprioceptive perception prompts emotion recognition. Female participants rated emotional facial expressions when mimicry was inhibited by immobilization of facial muscles and when mimicry was uncontrolled, and were evaluated for self-expressiveness level. A mixed ANOVA was conducted to determine how self-expressiveness level and manipulation of facial muscles impacted recognition accuracy for facial expressions. Main effects of self-expressiveness level and facial muscle manipulation were not found to be significant (p > .05), nor did these variables appear to interact (p > .05). The results of this study suggest that an individual’s self-expressiveness level and use of mimicry processes may not play a central role in emotion recognition.
Style APA, Harvard, Vancouver, ISO itp.
34

Beall, Paula M. "Automaticity and Hemispheric Specialization in Emotional Expression Recognition: Examined using a modified Stroop Task". Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3267/.

Pełny tekst źródła
Streszczenie:
The main focus of this investigation was to examine the automaticity of facial expression recognition through valence judgments in a modified photo-word Stroop paradigm. Positive and negative words were superimposed across male and female faces expressing positive (happy) and negative (angry, sad) emotions. Subjects categorized the valence of each stimulus. Gender biases in judgments of expressions (better recognition for male angry and female sad expressions) and the valence hypothesis of hemispheric advantages for emotions (left hemisphere: positive; right hemisphere: negative) were also examined. Four major findings emerged. First, the valence of expressions was processed automatically (robust interference effects). Second, male faces interfered with processing the valence of words. Third, no posers' gender biases were indicated. Finally, the emotionality of facial expressions and words was processed similarly by both hemispheres.
Style APA, Harvard, Vancouver, ISO itp.
35

Sun, Luning. "Using the Ekman 60 faces test to detect emotion recognition deficit in brain injury patients". Thesis, University of Cambridge, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708553.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
36

Sergerie, Karine. "A face to remember : an fMRI study of the effects of emotional expression on recognition memory". Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82422.

Pełny tekst źródła
Streszczenie:
Emotion can exert a modulatory role on declarative memory. Several studies have shown that emotional stimuli (e.g., words, pictures) are better remembered than neutral ones. Although facial expressions are powerful emotional stimuli and have been shown to influence perception and attention processes, little is known about their effect on memory. We conducted an event-related fMRI study in 18 healthy individuals (9 men) to investigate the effects of expression on recognition memory for faces. During the encoding phase, participants viewed 84 faces of different individuals, depicting happy, fearful or neutral expressions. Subjects were asked to perform a gender discrimination task and remember the faces for later. In the recognition part subjects performed an old/new decision task on 168 faces (84 new). Both runs were scanned. Our findings highlight the importance of the amygdala, hippocampus and prefrontal cortex on the formation and retrieval of memories with emotional content.
Style APA, Harvard, Vancouver, ISO itp.
37

BRENNA, VIOLA. "Positive and negative facial emotional expressions: the effect on infants' and children's facial identity recognition". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/46845.

Pełny tekst źródła
Streszczenie:
The aim of the present study was to investigate the origin and development of the interdependence between identity recognition and facial emotional expression processing, suggested by recent models of face processing (Calder & Young, 2005) and supported by findings in adults (e.g. Baudouin, Gilibert, Sansone, & Tiberghien, 2000; Schweinberger & Soukup, 1998). In particular, the effect of facial emotional expressions on infants' and children's ability to recognize the identity of a face was explored. Studies on adults describe different roles for positive and negative emotional expressions in identity recognition (e.g. Lander & Metcalfe, 2007): positive expressions have a catalytic effect, increasing ratings of familiarity of a face, whereas negative expressions reduce familiarity judgments, producing an interference effect. Using a familiarization paradigm and a delayed two-alternative forced-choice matching-to-sample task respectively, 3-month-old infants (Experiments 1, 2, 3) and 4- and 5-year-old children (Experiments 4, 5) were tested. Results of Experiments 1 and 2 suggested an adult-like pattern at 3 months of age. Infants familiarized with a smiling face recognized the new identity in the test phase, but when they were shown a woman's face conveying a negative expression, either anger or fear, they were not able to discriminate between the new and familiar face stimulus during the test. Moreover, evidence from Experiment 3 demonstrated that a single feature of a happy face (i.e. a smiling mouth or "happy eyes") is sufficient to drive the observed facilitating effect on identity recognition. Conversely, outcomes obtained in the experiments with preschool-aged children suggested that both positive and negative emotions have a distracting effect on children's identity recognition. A decrement in children's performance was observed when faces displayed an emotional expression (i.e. happiness, anger or fear) rather than a neutral expression (Experiment 4).
This detrimental effect of a happy expression on face identity recognition emerged independently of the processing stage (i.e., encoding, recognition, or both encoding and recognition) at which the emotional information was provided (Experiment 5). Overall, these findings suggest that, both in infancy and in childhood, facial emotional processing interacts with identity recognition. Moreover, the observed outcomes seem to describe a U-shaped developmental trend in the relation between identity recognition and facial emotional expression processing. The results are discussed with reference to Karmiloff-Smith's Representational Redescription Model (1992).
Style APA, Harvard, Vancouver, ISO itp.
38

Neto, Wolme Cardoso Alves. "Efeitos do escitalopram sobre a identificação de expressões faciais". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/17/17148/tde-25032009-210215/.

Pełny tekst źródła
Streszczenie:
ALVES NETO, W.C. Efeitos do escitalopram sobre a identificação de expressões faciais. Ribeirão Preto, SP: Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo; 2008. Os inibidores seletivos da recaptura de serotonina (ISRS) têm sido utilizados com sucesso para o tratamento de diversas patologias psiquiátricas. Sua eficácia clínica é atribuída a uma potencialização da neurotransmissão serotoninérgica, mas pouco ainda é conhecido sobre os mecanismos neuropsicológicos envolvidos nesse processo. Várias evidências sugerem que a serotonina estaria envolvida, entre outras funções, na regulação do comportamento social, nos processos de aprendizagem e memória e no processamento de emoções. O reconhecimento de expressões faciais de emoções básicas representa um valioso paradigma para o estudo do processamento de emoções, pois são estímulos condensados, uniformes e de grande relevância para o funcionamento social. O objetivo do estudo foi avaliar os efeitos da administração aguda e por via oral do escitalopram, um ISRS, no reconhecimento de expressões faciais de emoções básicas. Uma dose oral de 10 mg de escitalopram foi administrada a doze voluntários saudáveis do sexo masculino, em modelo duplo-cego, controlado por placebo, em delineamento cruzado, ordem randômica, 3 horas antes de realizarem a tarefa de reconhecimento de expressões faciais, com seis emoções básicas raiva, medo, tristeza, asco, alegria e surpresa mais a expressão neutra. As faces foram digitalmente modificadas de forma a criar um gradiente de intensidade entre 10 e 100% de cada emoção, com incrementos sucessivos de 10%. Foram registrados os estados subjetivos de humor e ansiedade ao longo da tarefa e o desempenho foi avaliado pela medida de acurácia (número de acertos sobre o total de estímulos apresentados). De forma geral, o escitalopram interferiu no reconhecimento de todas as expressões faciais, à exceção de medo. 
Especificamente, facilitou a identificação das faces de tristeza e prejudicou o reconhecimento de alegria. Quando considerado o gênero das faces, esse efeito foi observado para as faces masculinas, enquanto que para as faces femininas o escitalopram não interferiu com o reconhecimento de tristeza e aumentou o de alegria. Além disso, aumentou o reconhecimento das faces de raiva e asco quando administrado na segunda sessão e prejudicou a identificação das faces de surpresa nas intensidades intermediárias de gradação. Também apresentou um efeito positivo global sobre o desempenho na tarefa quando administrado na segunda sessão. Os resultados sugerem uma modulação serotoninérgica sobre o reconhecimento de expressões faciais emocionais e sobre a evocação de material previamente aprendido.
ALVES NETO, W.C. Effects of escitalopram on the processing of emotional faces. Ribeirão Preto, SP: Faculty of Medicine of Ribeirão Preto, University of São Paulo; 2008. Selective serotonin reuptake inhibitors (SSRIs) have been used successfully for the treatment of various psychiatric disorders. Their clinical efficacy is attributed to an enhancement of serotonergic neurotransmission, but little is known about the neuropsychological mechanisms underlying this process. Several lines of evidence suggest that serotonin is involved in the regulation of social behavior, learning and memory processes, and emotional processing. The recognition of basic emotions in facial expressions is a useful paradigm for studying emotional processing, since faces are condensed, uniform stimuli of great relevance for social functioning. The aim of the study was to verify the effects of the SSRI escitalopram on the recognition of facial emotional expressions. Twelve healthy males completed two experimental sessions each in a randomized, balanced-order, double-blind, crossover design. An oral dose of 10 mg of escitalopram was administered 3 hours before they performed an emotion recognition task with six basic emotions (anger, fear, sadness, disgust, happiness and surprise) plus a neutral expression. The faces were digitally morphed between 10% and 100% of each emotional prototype, creating a gradient in 10% steps. Subjective mood and anxiety states were recorded throughout the task, and performance was assessed by an accuracy measure (number of correct answers divided by the total number of stimuli presented). In general, escitalopram interfered with the recognition of all the emotions tested except fear. Specifically, it facilitated the recognition of sadness and impaired the identification of happiness. When the gender of the faces was analyzed, this effect was seen for male but not female faces, for which escitalopram did not interfere with the recognition of sadness and improved the recognition of happiness.
In addition, it improved the recognition of angry and disgusted faces when administered in the second session and impaired the identification of surprised faces at intermediate intensity levels. It also showed a global positive effect on task performance when administered in the second session. The results indicate a serotonergic modulation of the recognition of emotional faces and of the recall of previously learned material.
Style APA, Harvard, Vancouver, ISO itp.
39

Ali, Afiya. "Recognition of facial affect in individuals scoring high and low in psychopathic personality characteristics". The University of Waikato, 2007. http://adt.waikato.ac.nz/public/adt-uow20070129.190938/index.html.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
40

Marshall, Amy D. "Violent husbands' recognition of emotional expressions among the faces of strangers and their wives". [Bloomington, Ind.] : Indiana University, 2004. http://wwwlib.umi.com/dissertations/fullcit/3162247.

Pełny tekst źródła
Streszczenie:
Thesis (Ph.D.)--Indiana University, Dept. of Psychology, 2004.
Title from PDF t.p. (viewed Dec. 1, 2008). Source: Dissertation Abstracts International, Volume: 66-01, Section: B, page: 0564. Chair: Amy Holtzworth-Munroe.
Style APA, Harvard, Vancouver, ISO itp.
41

Gracioso, Ana Carolina Nicolosi da Rocha. "Avaliação da influência de emoções na tomada de decisão de sistemas computacionais". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-28062016-085426/.

Pełny tekst źródła
Streszczenie:
Este trabalho avalia a influência das emoções humanas expressas pela mímica da face na tomada de decisão de sistemas computacionais, com o objetivo de melhorar a experiência do usuário. Para isso, foram desenvolvidos três módulos: o primeiro trata-se de um sistema de computação assistiva - uma prancha de comunicação alternativa e ampliada em versão digital. O segundo módulo, aqui denominado Módulo Afetivo, trata-se de um sistema de computação afetiva que, por meio de Visão Computacional, capta a mímica da face do usuário e classifica seu estado emocional. Este segundo módulo foi implementado em duas etapas, as duas inspiradas no Sistema de Codificação de Ações Faciais (FACS), que identifica expressões faciais com base no sistema cognitivo humano. Na primeira etapa, o Módulo Afetivo realiza a inferência dos estados emocionais básicos: felicidade, surpresa, raiva, medo, tristeza, aversão e, ainda, o estado neutro. Segundo a maioria dos pesquisadores da área, as emoções básicas são inatas e universais, o que torna o módulo afetivo generalizável a qualquer população. Os testes realizados com o modelo proposto apresentaram resultados 10,9% acima dos resultados que usam metodologias semelhantes. Também foram realizadas análises de emoções espontâneas, e os resultados computacionais aproximam-se da taxa de acerto dos seres humanos. Na segunda etapa do desenvolvimento do Módulo Afetivo, o objetivo foi identificar expressões faciais que refletem a insatisfação ou a dificuldade de uma pessoa durante o uso de sistemas computacionais. Assim, o primeiro modelo do Módulo Afetivo foi ajustado para este fim. Por fim, foi desenvolvido um Módulo de Tomada de Decisão que recebe informações do Módulo Afetivo e faz intervenções no Sistema Computacional. 
Parâmetros como tamanho do ícone, arraste convertido em clique e velocidade de varredura são alterados em tempo real pelo Módulo de Tomada de Decisão no sistema computacional assistivo, de acordo com as informações geradas pelo Módulo Afetivo. Como o Módulo Afetivo não possui uma etapa de treinamento para inferência do estado emocional, foi proposto um algoritmo de face neutra para resolver o problema da inicialização com faces contendo emoções. Também foi proposto, neste trabalho, a divisão dos sinais faciais rápidos entre sinais de linha base (tique e outros ruídos na movimentação da face que não se tratam de sinais emocionais) e sinais emocionais. Os resultados dos Estudos de Caso realizados com os alunos da APAE de Presidente Prudente demonstraram que é possível melhorar a experiência do usuário, configurando um sistema computacional com informações emocionais expressas pela mímica da face.
The influence of human emotions expressed through facial movements on the decision-making of computer systems is evaluated, with the goal of improving the user's experience. Three modules were developed: the first is an assistive computing system, a digital alternative and augmentative communication board. The second module, called the Affective Module, is an affective computing system which, using computer vision, captures the user's facial expressions and classifies their emotional state. This second module was implemented in two stages, both inspired by the Facial Action Coding System (FACS), which identifies facial expressions based on the human cognitive system. In the first stage, the Affective Module infers the basic emotional states, namely happiness, surprise, anger, fear, sadness, disgust, and the neutral state. According to most researchers in the field, basic emotions are innate and universal, which makes the Affective Module generalizable to any population. Tests performed with the proposed model produced results 10.9% above those of similar methodologies. Spontaneous emotions were also analyzed, and the computational results approached human accuracy rates. In the second stage of the development of the Affective Module, the goal was to identify facial expressions that reflect a person's dissatisfaction or difficulty while using computer systems, and the first model of the Affective Module was adjusted to this end. Finally, a Decision-Making Module was developed which receives information from the Affective Module and intervenes in the computer system. Parameters such as icon size, drag converted into a click, and scanning speed are changed in real time by the Decision-Making Module in the assistive computer system, according to the information generated by the Affective Module. Since the Affective Module has no training stage for inferring the emotional state, a neutral-face algorithm was proposed to solve the problem of initialization with faces already displaying emotions.
This work also proposes dividing rapid facial signals into baseline signals (tics and other noise in facial movement that are not emotional signals) and emotional signals. The results of case studies carried out with students of APAE in Presidente Prudente, SP, Brazil showed that it is possible to improve the user's experience by configuring a computer system with emotional information expressed through facial movements.
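The decision-making loop described in this abstract (detected emotional state in, interface parameters out) can be illustrated with a short sketch. The parameter names and adjustment values below are hypothetical illustrations, not taken from the dissertation:

```python
# Hypothetical decision-module sketch: adjust assistive-board parameters
# in response to the emotional state reported by the Affective Module.
def adjust_interface(state, params):
    params = dict(params)                  # do not mutate the caller's config
    if state in ("anger", "sadness"):      # signs of frustration
        params["icon_size"] += 8           # larger targets
        params["scan_speed"] *= 0.8        # slower scanning
        params["drag_as_click"] = True     # tolerate imprecise input
    elif state == "happiness":
        params["scan_speed"] *= 1.1        # user is coping well
    return params

base = {"icon_size": 48, "scan_speed": 1.0, "drag_as_click": False}
print(adjust_interface("anger", base))
```

In the real system such adjustments would be applied continuously to the communication board as the Affective Module emits new classifications.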
Style APA, Harvard, Vancouver, ISO itp.
42

Paleari, Marco. "Informatique Affective : Affichage, Reconnaissance, et Synthèse par Ordinateur des Émotions". Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005615.

Pełny tekst źródła
Streszczenie:
Affective Computing concerns computation that relates to, arises from, or deliberately influences emotions, and finds its natural application domain in high-level human-computer interaction. Affective computing can be divided into three main topics, namely display, recognition, and synthesis; building an intelligent machine able to interact naturally with its user necessarily passes through these three phases. In this thesis we propose an architecture based mainly on Lisetti's "Multimodal Affective User Interface" model and on Scherer's psychological "Component Process Theory" of emotions. We therefore investigated techniques for the automatic, real-time extraction of emotions from facial expressions and vocal prosody. We also addressed the problems inherent in generating expressions on different platforms, whether virtual agents or robots. Finally, we proposed and developed an architecture for intelligent agents capable of simulating the human process of emotion appraisal as described by Scherer.
Style APA, Harvard, Vancouver, ISO itp.
43

Orvoen, Hadrien. "Expressions faciales émotionnelles et Prise de décisions coopératives". Thesis, Paris, EHESS, 2017. http://www.theses.fr/2017EHES0032/document.

Pełny tekst źródła
Streszczenie:
Les comportements sociaux coopératifs sont longtemps restés un obstacle aux modèles de choix rationnel, obstacle qu'incarnent des dilemmes sociaux où un individu suivant son intérêt personnel est incité à exploiter la coopération d'autrui à son seul avantage. Je détaillerai tout d'abord comment la coopération peut apparaître un choix sensé lorsque elle est envisagée dans un contexte naturel et réel. Un regard à travers l'anthropologie, la psychologie et la neurobiologie conduit à appréhender la coopération davantage comme une adaptation et un apprentissage que comme un défaut de rationalité. Les émotions jouent un rôle essentiel dans ces processus, et je présenterai en quoi les exprimer aide les êtres humains à se synchroniser et à coopérer. Le sourire est souvent invoqué comme exemple d'un signal universel de coopération et d'approbation, une propriété intimement liée à son expression répétée lors de tâches collaboratives. Malgré tout, on en sait encore peu sur la manière précise dont le sourire et les autres expressions interviennent dans la prise de décision sociale, et en particulier sur le traitement des situations d'incongruence où un sourire accompagnerait une défection. Ce point est le cœur de l'étude expérimentale que je rapporte dans ce manuscrit. J'ai réalisé deux expériences confrontant les participants à un dilemme social dans lequel ils pouvaient investir une somme d'argent auprès de différents joueurs informatisés susceptibles de se l'accaparer, ou, au contraire, de la rétribuer avec intérêts. Les joueurs virtuels étaient personnalisés par un visage dont l'expression pouvait changer après le choix du participant: certains affichaient ainsi des émotions incongruentes avec leur ``décision'' subséquente de rétribuer ou non l'investissement du sujet. 
Malgré les différences méthodologiques, ces deux expériences ont montré que les expressions incongruentes altéraient la capacité des participants à jauger la propension des joueurs virtuels à rétribuer leurs investissements après une ou plusieurs interactions. Cet effet s'est manifesté tant au travers de rapports explicites que dans les investissements effectués. Dans leurs détails, les résultats de ces expériences ouvrent de nombreuses perspectives expérimentales, et appellent à la construction d'un modèle unifié de la décision sociale face-à-face qui intégrerait les nombreuses connaissances apportées ces dernières années par l'étude des grandes fonctions cognitives, tant au niveau expérimental, théorique que neurobiologique
For a few decades, rational choice theories failed to properly account for cooperative behaviors. This was illustrated by social dilemmas, games in which a self-interested individual is tempted to exploit others' cooperative behavior, harming them for his own personal profit. I will first detail how cooperation may arise as a reasonable, if not rational, behavior, provided that we consider social interactions in a more realistic context than rational choice theories initially did. From anthropology to neurobiology, cooperation is understood as an efficient adaptation to the natural environment rather than a quirky, self-defeating behavior. Because pertinent information is often lacking or overwhelming, too complex or too ambiguous to deal with, it is essential to communicate, to share, and to trust others. Emotions, and their expression, are a cornerstone of humans' natural and effortless navigation of their social environment. Smiles, for instance, are universally known as a signal of satisfaction, approbation and cooperation. Like other emotional expressions, they are processed automatically and preferentially. They elicit trust and cooperative behaviors in observers, and are ubiquitous in successful collaborative interactions. Beyond that, however, little is known about how others' expressions are integrated into decision making. That was the focus of the experimental study related in this manuscript. More specifically, I investigated how decisions in a trust-based social dilemma are influenced by smiles displayed alongside either a cooperative or a defecting behavior ("congruently" and "incongruently", respectively). I carried out two experiments in which participants played an investment game with different computerized virtual partners playing the role of trustees. The virtual trustees, personalised with a facial avatar, could either take and keep the participant's investment or reciprocate it with interest.
Moreover, they also displayed facial reactions that were either congruent or incongruent with their computerized "decision" to reciprocate or not. Even though the two experiments presented some methodological differences, they were coherent in that both showed that participants' memory of a virtual trustee's behavior was impaired when the latter expressed incongruent emotions. This was observed both in participants' in-game investments and in their post-experimental explicit reports. Although many improvements to my experimental approach remain to be made, I think it already complements the existing literature with original results. Many interesting perspectives are left open, which call for a deeper investigation of face-to-face decision making. I think this constitutes a theoretical and practical necessity, for which researchers will be required to unify the wide knowledge of the major cognitive functions gathered over the last decades.
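One round of the investment (trust) game described above can be sketched as follows; the endowment and multiplier values are illustrative assumptions, not the study's actual parameters:

```python
# Toy round of an investment (trust) game: the participant invests part
# of an endowment; the trustee either keeps the multiplied amount or
# returns half of it, so reciprocation pays back the investment with interest.
def play_round(invest, reciprocate, endowment=10, multiplier=3):
    kept = endowment - invest          # what the participant holds back
    pot = invest * multiplier          # the multiplied transfer to the trustee
    returned = pot / 2 if reciprocate else 0
    return kept + returned             # participant's payoff this round

print(play_round(10, True))   # full trust, trustee reciprocates
print(play_round(10, False))  # full trust, trustee defects
```

The dilemma is visible in the payoffs: trusting a reciprocator beats keeping the endowment, while trusting a defector loses everything, which is why cues such as the trustee's facial reaction matter for the next round's decision.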
Style APA, Harvard, Vancouver, ISO itp.
44

Shen, Yu-Hong, i 沈育弘. "Image Recognition for Face Emotion Analysis". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/musq5n.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taiwan Ocean University
Department of Electrical Engineering
107
Along with technological development, the spread of smartphones, and advances in face recognition technology, a face can be used for many things, e.g., recognition of user identity. Recently, simple and smart biometric methods have been used to replace traditional password input. Detailed facial expressions can be applied in fields such as robotics, daily life, and medical treatment. For example, robot design gains more humanity in human-machine interaction: a robot can learn how to recognize a facial expression and know how to respond to it, so enhancing the anthropomorphic design is necessary. This study starts from the perspective of image processing. It uses Haar-like features as the detection method to locate the facial region for capture, uses the integral image to accumulate pixel sums and speed up the computation of feature points, and applies the AdaBoost learning algorithm to find the key features among a large number of candidates, making the classifier simpler and more efficient. Weak classifiers are then cascaded to form a strong classifier, which reduces classifier complexity for real-time operation: irrelevant image data are discarded through layer-by-layer refinement. Finally, the regions of interest are divided into blocks to determine the facial expression.
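The integral-image trick mentioned in the abstract can be shown in a few lines of NumPy: after one pass of cumulative sums, the sum of any rectangle, and hence any Haar-like feature, costs only a handful of lookups. This is an illustrative sketch, not the thesis code:

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of all pixels above and to the
    left of (y, x), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] from at most four
    lookups into the integral image (O(1) per rectangle)."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_two_rect(ii, top, left, h, w):
    """A Haar-like two-rectangle feature: difference between the
    sums of the left and right halves of a window."""
    mid = left + w // 2
    left_sum = rect_sum(ii, top, left, top + h - 1, mid - 1)
    right_sum = rect_sum(ii, top, mid, top + h - 1, left + w - 1)
    return left_sum - right_sum
```

AdaBoost then selects the few features of this kind that best separate face from non-face windows, and the selected weak classifiers are chained into the cascade the abstract describes.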
Style APA, Harvard, Vancouver, ISO itp.
45

Wang, Ching-Yu, i 王晴右. "The effect of emotion clue and familiarity on face recognition". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/19165800197554982350.

Pełny tekst źródła
Streszczenie:
Master's thesis
Chung Yuan Christian University
Institute of Psychology
91
Face recognition is a cognitive process by which an individual encodes, stores, and retrieves faces. Studies of neuropsychology, prosopagnosia, and the face inversion effect suggest that face recognition can indeed be distinguished from object and word recognition. Although previous models of face recognition (e.g., the Functional Model, the Interactive Activation Model, and the Pattern Adjust Model) tried to clarify the processes involved, two problems remained: the roles of emotional cues and of familiarity. Therefore, a two-stage face recognition model was proposed to account for the effects of emotional cues and familiarity on face recognition. The two-stage model treats emotion analysis and feature-whole analysis as two separate perceptual routes whose information is integrated into an internal facial representation; the internal facial representation should therefore include emotional information rather than being an expressionless face. On the other hand, as familiarity increases, an individual face node becomes clearer, or even unique, which helps the individual retrieve the face more automatically, easily, and stably. When an individual stores similar familiar faces, the distance between nodes may become close enough to induce a competition effect. Accordingly, four variation-pattern faces were used in Study 1 to verify that the internal facial representation is not an expressionless one. Study 2 used three different tasks to verify that emotion and feature-whole information are integrated into an internal facial representation. Study 3 used two display modes and two set sizes to verify that face nodes become unique and clear as familiarity increases; however, we found that familiar faces were still influenced by display mode and set size. Finally, Study 4 used high- and low-familiarity faces to examine whether similarity could shorten the distance between face nodes and cause a competition effect.
This study demonstrates that emotion cues and familiarity play important roles in face recognition. More studies are required to clarify the effect of familiarity, and some suggestions for future research are also presented.
Style APA, Harvard, Vancouver, ISO itp.
46

LAI, YU-DIAN, i 賴育鈿. "A Mirroring and Monitoring System Using Face and Emotion Recognition Techniques". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/vh4649.

Pełny tekst źródła
Streszczenie:
Master's thesis
Feng Chia University
Department of Information Engineering
107
Personnel management is usually required in classrooms, small companies, and laboratories, so an entrance monitoring system may be installed at the entrance and exit to facilitate management. We use face recognition technology in the entrance surveillance system to identify each person. In this system, a smart mirror is designed which can display users' information. Managers can know the identity and emotions of members, which helps members and managers at the same time. In this study, we propose a smart mirror and entrance monitoring system using deep learning and context-aware technology. We use the VGG-Face deep learning model; with transfer learning, the system can perform identification without collecting a large number of training photos. We propose a Top-K method to improve the accuracy of the identification system. The smart mirror can display the user's information, such as age, gender, and emotion, and identify the person. In addition, it can send message alerts to the administrator via a LINE bot. There are two types of message alerts: regular notifications and warning notifications. If the system recognizes a member, it sends a regular notification; conversely, if the system identifies a stranger, it sends a warning to notify the administrator. This smart mirror helps managers manage better. The system is suitable for small groups such as laboratories, homes, and companies.
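The abstract does not spell out the Top-K rule; a plausible reading is that the system ranks enrolled members by embedding similarity and considers the K best matches rather than only the single nearest one. A toy sketch with hypothetical embeddings (a real system would use VGG-Face feature vectors):

```python
import numpy as np

def top_k_identify(query, gallery, labels, k=3):
    """Rank gallery embeddings by cosine similarity to the query
    and return the labels of the k most similar entries."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                          # cosine similarity per member
    top = np.argsort(sims)[::-1][:k]      # indices, most similar first
    return [labels[i] for i in top]

# Hypothetical 4-d embeddings for three enrolled members.
gallery = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [0.7, 0.7, 0.0, 0.0]])
labels = ["alice", "bob", "carol"]
query = np.array([0.9, 0.1, 0.0, 0.0])
print(top_k_identify(query, gallery, labels, k=2))
```

A stranger check then falls out naturally: if none of the top-K similarities clears a threshold, the system treats the visitor as unknown and sends the warning notification.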
Style APA, Harvard, Vancouver, ISO itp.
47

Lausen, Adi. "Emotion recognition from expressions in voice and face – Behavioral and Endocrinological evidence –". Doctoral thesis, 2019. http://hdl.handle.net/21.11130/00-1735-0000-0003-C121-D.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

Ping-Sheng Hsu i 許斌聖. "An Implementation of Live Video Conferencing Agent Base on Face Emotion Recognition". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/38341097389806288334.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Cheng Kung University
Department of Engineering Science
101
Internet video conferencing systems have become very important message communication tools. People located in different places can easily be connected via an Internet video conferencing system. In this thesis, a live video conferencing agent system is proposed. The proposed system provides the basic functions of a video conferencing system: text messages, video images, and audio. It is also designed as an agent system: when the contact person is not online, the video conferencing system switches to act as an agent. The agent uses a facial emotion recognition system to determine the user's emotion from the images captured by a webcam and responds with the proper information extracted from the database according to the detected emotion. Five expressions are provided (happy, angry, sad, surprised, and neutral), and corresponding information for each emotion has been created in the database. The user's emotion is determined by comparing the features of the captured online user image with those of the user's neutral expression. The system was tested under real conditions, achieving a recognition rate of 93.5%, and the agent system responded with the proper information.
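The thesis compares the captured face's features against those of the user's neutral expression; the sketch below illustrates that idea with a nearest-template classifier over hypothetical three-dimensional feature vectors (the real system's features come from webcam images, and the template values here are invented for illustration):

```python
import numpy as np

# Hypothetical displacement templates: how facial feature values shift
# relative to the user's stored neutral expression for each emotion.
TEMPLATES = {
    "happy":     np.array([ 0.8,  0.6, -0.2]),
    "angry":     np.array([-0.7,  0.4,  0.5]),
    "sad":       np.array([-0.5, -0.8,  0.1]),
    "surprised": np.array([ 0.2,  0.9,  0.9]),
    "neutral":   np.array([ 0.0,  0.0,  0.0]),
}

def classify_emotion(features, neutral):
    """Subtract the stored neutral-expression features and pick the
    template closest to the resulting displacement vector."""
    diff = features - neutral
    return min(TEMPLATES, key=lambda e: np.linalg.norm(diff - TEMPLATES[e]))

neutral = np.array([0.1, 0.1, 0.1])
print(classify_emotion(np.array([0.9, 0.7, -0.1]), neutral))  # "happy"
```

Subtracting the per-user neutral baseline is what lets a single set of templates work across users with different resting faces, which matches the comparison-to-neutral approach the abstract describes.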
Style APA, Harvard, Vancouver, ISO itp.
49

Cassidy, S., D. Ropar, Peter Mitchell i P. Chapman. "Can adults with autism spectrum disorders infer what happened to someone from their emotional response". 2013. http://hdl.handle.net/10454/17897.

Pełny tekst źródła
Streszczenie:
Yes
Can adults with autism spectrum disorders (ASD) infer what happened to someone from their emotional response? Millikan has argued that in everyday life, others' emotions are most commonly used to work out the antecedents of behavior, an ability termed retrodictive mindreading. As those with ASD show difficulties interpreting others' emotions, we predicted that these individuals would have difficulty with retrodictive mindreading. Sixteen adults with high-functioning autism or Asperger's syndrome and 19 typically developing adults viewed 21 video clips of people reacting to one of three gifts (chocolate, monopoly money, or a homemade novelty) and then inferred what gift the recipient received and the emotion expressed by that person. Participants' eye movements were recorded while they viewed the videos. Results showed that participants with ASD were only less accurate when inferring who received a chocolate or homemade gift. This difficulty was not due to lack of understanding what emotions were appropriate in response to each gift, as both groups gave consistent gift and emotion inferences significantly above chance (genuine positive for chocolate and feigned positive for homemade). Those with ASD did not look significantly less to the eyes of faces in the videos, and looking to the eyes did not correlate with accuracy on the task. These results suggest that those with ASD are less accurate when retrodicting events involving recognition of genuine and feigned positive emotions, and challenge claims that lack of attention to the eyes causes emotion recognition difficulties in ASD.
University of Nottingham, School of Psychology
Style APA, Harvard, Vancouver, ISO itp.
50

Henriques, Bruno Filipe Maia. "Management of digital contents in multiple displays". Master's thesis, 2021. http://hdl.handle.net/10773/31296.

Pełny tekst źródła
Streszczenie:
With the generalized use of systems for digital content dissemination comes the opportunity to implement solutions capable of evaluating audience reaction. This dissertation describes the implementation of one such solution. To this end, a previously functional digital signage system was adapted: digital cameras were paired with the content display terminals in order to capture information from the area in front of them. Using computer vision technologies, the terminals detect, in real time, people who appear in the cameras' field of view, and this information is communicated to a server for data extraction. On the server, methods are used to perform face and emotion recognition and to extract data indicating the position of the head, which allows the calculation of an attention coefficient. The data are stored in a relational database and can be consulted through a web platform, where they are presented alongside the contents that were on display at the moment of their capture and extraction. This solution thus allows the evaluation of the impact of the digital contents presented by the system.
Com a utilização generalizada de sistemas de disseminação de conteúdos digitais, surge a oportunidade de implementar soluções capazes de avaliar a reacção do público. Esta dissertação reflete a implementação de uma dessas soluções. Para isso, o desenvolvimento passou pela adaptação de um sistema de sinalização digital previamente funcional. Neste sentido, aos terminais de exposição de conteúdos, foram emparelhadas câmaras digitais de modo a permitir a captação de informação da área à frente destes. Com recurso a tecnologias de visão de computador, os terminais fazem, em tempo real, deteção de pessoas que apareçam no campo de visão das câmaras, sendo esta informação comunicada a um servidor para extração de dados. No servidor, são utilizados métodos para realização de reconhecimento de faces e emoções, e também é feita extração de dados indicadores da posição da cabeça, o que permite o cálculo de um coeficiente de atenção. Os dados são guardados numa base de dados relacional e podem ser consultados através de uma plataforma web, onde são apresentados associados aos contéudos correspondentes ao momento de captação e extração destes. Esta solução, permite, assim, a avaliação do impacto dos conteúdos digitais apresentados pelo sistema.
Mestrado em Engenharia de Computadores e Telemática
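The attention coefficient derived from head position is not specified in the abstract; one simple formulation, shown here purely as an assumption, scores attention by how far the head is turned away from the display:

```python
import math

def attention_coefficient(yaw_deg, pitch_deg, max_angle=60.0):
    """Score in [0, 1]: 1.0 when the head faces the display straight on,
    falling linearly to 0 once the combined yaw/pitch deviation
    reaches max_angle degrees."""
    off = math.hypot(yaw_deg, pitch_deg)   # angular distance from "facing"
    return max(0.0, 1.0 - off / max_angle)

print(attention_coefficient(0, 0))    # looking straight at the screen
print(attention_coefficient(45, 30))  # head turned away: reduced attention
```

Averaging such per-frame scores over the time a person is visible would yield the kind of per-content attention metric the web platform could report.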
Style APA, Harvard, Vancouver, ISO itp.