Theses / dissertations on the topic "Reconnaissance des émotions vocales"
Listed below are the 50 best theses / dissertations for research on the topic "Reconnaissance des émotions vocales".
Gharsalli, Sonia. "Reconnaissance des émotions par traitement d’images". Thesis, Orléans, 2016. http://www.theses.fr/2016ORLE2075/document.
Emotion recognition is one of the most complex scientific domains. In the last few years, various emotion recognition systems have been developed. These innovative applications are used in different domains such as support for autistic children, video games and human-machine interaction. Different channels are used to express emotions. We focus on facial emotion recognition, specifically the six basic emotions: happiness, anger, fear, disgust, sadness and surprise. A comparative study between a geometric method and an appearance-based method is performed on the CK+ database as the posed-emotion database and on the FEEDTUM database as the spontaneous-emotion database. We consider different constraints in this study, such as varying image resolutions, the small number of labelled images available for learning, and new subjects. We then evaluate various fusion schemes on new subjects not included in the training set. A good recognition rate is obtained for posed emotions (more than 86%), but it remains low for spontaneous emotions. Based on a study of local features, we develop local-feature fusion methods, which increase the recognition rates for spontaneous emotions. A feature selection method is finally developed based on feature importance scores. Compared with two other methods, our approach increases the recognition rate.
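Illustrative sketch only (not code from the thesis): feature selection driven by importance scores, as mentioned above, can be approximated with a random-forest ranking; the arrays X and y below are random placeholders.

# Sketch: rank features by importance and keep only the top fraction before classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def select_by_importance(X, y, keep_ratio=0.3):
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    order = np.argsort(forest.feature_importances_)[::-1]        # most important first
    return order[: max(1, int(keep_ratio * X.shape[1]))]

# Placeholder data: 120 samples, 50 features, 6 emotion classes (assumption for illustration).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(120, 50)), rng.integers(0, 6, size=120)
kept = select_by_importance(X, y)
score = cross_val_score(SVC(), X[:, kept], y, cv=3).mean()
print(len(kept), round(score, 3))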
Deschamps-Berger, Théo. "Social Emotion Recognition with multimodal deep learning architecture in emergency call centers". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG036.
This thesis explores automatic speech-emotion recognition systems in a medical emergency context. It addresses some of the challenges encountered when studying emotions in social interactions. It is rooted in modern theories of emotions, particularly those of Lisa Feldman Barrett on the construction of emotions. Indeed, the manifestation of emotions in human interactions is complex: emotions are often nuanced, mixed, and highly linked to the context. This study is based on the CEMO corpus, which is composed of telephone conversations between callers and emergency medical dispatchers (EMD) from a French emergency call center. This corpus provides a rich dataset for exploring the capacity of deep learning systems, such as Transformers and pre-trained models, to recognize spontaneous emotions in spoken interactions. Applications could include providing emotional cues to improve call handling and decision-making by EMD, or summarizing calls. The work carried out in this thesis focused on different techniques related to speech emotion recognition, including transfer learning from pre-trained models, multimodal fusion strategies, dialogic context integration, and mixed emotion detection. An initial acoustic system based on temporal convolutions and recurrent networks was developed and validated on an emotional corpus widely used by the affective computing community, called IEMOCAP, and then on the CEMO corpus. Extensive research on multimodal systems, pre-trained in acoustics and linguistics and adapted to emotion recognition, is presented. In addition, the integration of dialog context in emotion recognition was explored, underlining the complex dynamics of emotions in social interactions. Finally, research was initiated towards developing multi-label, multimodal systems capable of handling the subtleties of mixed emotions, which often arise from the annotator's perception and the social context. Our research highlights some solutions and challenges in recognizing emotions in the wild. This thesis was funded by the CNRS AI Chair HUMAAINE (HUman-MAchine Affective Interaction & Ethics).
Vazquez Rodriguez, Juan Fernando. "Transformateurs multimodaux pour la reconnaissance des émotions". Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALM057.
Mental health and emotional well-being have a significant influence on physical health, and are especially important for healthy aging. Continued progress on sensors and microelectronics has provided a number of new technologies that can be deployed in homes and used to monitor health and well-being. These can be combined with recent advances in machine learning to provide services that enhance the physical and emotional well-being of individuals and promote healthy aging. In this context, an automatic emotion recognition system can provide a tool to help assure the emotional well-being of frail people. It is therefore desirable to develop a technology that can draw information about human emotions from multiple sensor modalities and can be trained without the need for large labeled training datasets. This thesis addresses the problem of emotion recognition using the different types of signals that a smart environment may provide, such as visual, audio, and physiological signals. To do this, we develop different models based on the Transformer architecture, which has useful characteristics such as its capacity to model long-range dependencies and its capability to discern the relevant parts of the input. We first propose a model to recognize emotions from individual physiological signals. We propose a self-supervised pre-training technique that uses unlabeled physiological signals, showing that this pre-training technique helps the model perform better. This approach is then extended to take advantage of the complementarity of information that may exist in different physiological signals. For this, we develop a model that combines different physiological signals and also uses self-supervised pre-training to improve its performance. We propose a pre-training method that does not require a dataset with the complete set of target signals, but can instead be trained on individual datasets from each target signal. To further take advantage of the different modalities that a smart environment may provide, we also propose a model that uses as inputs multimodal signals such as video, audio, and physiological signals. Since these signals are of a different nature, they cover different ways in which emotions are expressed; they should therefore provide complementary information concerning emotions, which makes it appealing to use them together. However, in real-world scenarios, there might be cases where a modality is missing. Our model is flexible enough to continue working when a modality is missing, albeit with a reduction in its performance. To address this problem, we propose a training strategy that reduces the drop in performance when a modality is missing. The methods developed in this thesis are evaluated using several datasets, obtaining results that demonstrate the effectiveness of our approach to pre-train Transformers to recognize emotions from physiological signals. The results also show the efficacy of our Transformer-based solution to aggregate multimodal information and to accommodate missing modalities. These results demonstrate the feasibility of the proposed approaches to recognizing emotions from multiple environmental sensors. This opens new avenues for deeper exploration of Transformer-based approaches to processing information from environmental sensors and allows the development of emotion recognition technologies robust to missing modalities. The results of this work can contribute to better care for the mental health of frail people.
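As an illustration only (not the thesis implementation), robustness to a missing modality can be sketched as modality dropout before a Transformer fusion step; the modality names, feature dimensions and the 4-class output below are assumptions.

# Sketch: project each modality to a common space, randomly drop modalities during training,
# fuse the remaining tokens with a Transformer encoder and pool them for classification.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, dims=None, d_model=128, n_classes=4):
        super().__init__()
        dims = dims or {"video": 512, "audio": 256, "physio": 64}   # assumed feature sizes
        self.proj = nn.ModuleDict({m: nn.Linear(d, d_model) for m, d in dims.items()})
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, feats, p_drop=0.3):
        names = list(feats)
        if self.training:                             # modality dropout during training only
            kept = [n for n in names if torch.rand(1).item() > p_drop]
            names = kept if kept else [names[0]]      # always keep at least one modality
        tokens = [self.proj[n](feats[n]).unsqueeze(1) for n in names]
        h = self.encoder(torch.cat(tokens, dim=1))    # (batch, n_present_modalities, d_model)
        return self.head(h.mean(dim=1))               # pool over whatever modalities are present

model = MultimodalFusion()
logits = model({"video": torch.randn(8, 512), "audio": torch.randn(8, 256), "physio": torch.randn(8, 64)})
print(logits.shape)   # torch.Size([8, 4])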
Bherer, François. "Expressions vocales spontanées et émotion : de l'extériorisation au jugement". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq33572.pdf.
Henry, Mylène. "La reconnaissance des émotions chez des enfants maltraités". Thèse, Université du Québec à Trois-Rivières, 2011. http://depot-e.uqtr.ca/2069/1/030183277.pdf.
Aouati, Amar. "Utilisation des technologies vocales dans une application multicanaux". Paris 11, 1985. http://www.theses.fr/1985PA112373.
Attabi, Yazid. "Reconnaissance automatique des émotions à partir du signal acoustique". Mémoire, École de technologie supérieure, 2008. http://espace.etsmtl.ca/168/1/ATTABI_Yazid.pdf.
Paleari, Marco. "Informatique Affective : Affichage, Reconnaissance, et Synthèse par Ordinateur des Émotions". Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005615.
Paleari, Marco. "Computation affective : affichage, reconnaissance et synthèse par ordinateur des émotions". Paris, Télécom ParisTech, 2009. https://pastel.hal.science/pastel-00005615.
Affective Computing refers to computing that relates to, arises from, or deliberately influences emotions, and has its natural application domain in highly abstracted human-computer interactions. Affective computing can be divided into three main parts, namely display, recognition, and synthesis. The design of intelligent machines able to create natural interactions with users necessarily implies the use of affective computing technologies. We propose a generic architecture based on the "Multimodal Affective User Interface" framework by Lisetti and the psychological "Component Process Theory" by Scherer, which puts the user at the center of the loop exploiting these three parts of affective computing. We propose a novel system performing automatic, real-time emotion recognition through the analysis of human facial expressions and vocal prosody. We also discuss the generation of believable facial expressions for different platforms and we detail our system based on Scherer's theory. Finally, we propose an intelligent architecture that we have developed, capable of simulating the process of appraisal of emotions as described by Scherer.
Vaudable, Christophe. "Analyse et reconnaissance des émotions lors de conversations de centres d'appels". Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00758650.
Tremblay, Marie-Pier. "Le rôle de la mémoire sémantique dans la reconnaissance des émotions". Doctoral thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/28013.
Semantic memory underlies several cognitive processes and recent research suggests that it is involved in emotion recognition. Nevertheless, the role of semantic memory in the recognition of emotional valence and basic emotions conveyed by different stimuli remains controversial. Therefore, this thesis aims at investigating the role of semantic memory in emotion recognition. To do so, emotion recognition is examined in people presenting with the semantic variant of primary progressive aphasia (svPPA), a neurodegenerative disorder characterized by a gradual and selective loss of semantic memory. In a first study, svPPA is used as a model of semantic memory impairment. Performances are compared between individuals with svPPA (n = 10) and healthy controls (n = 33) on three tasks assessing the recognition of 1) basic emotions conveyed by facial expressions, 2) prosody scripts, and 3) emotional valence conveyed by photographic scenes. Results reveal that individuals with svPPA show deficits in the recognition of basic emotions, except for happiness and surprise conveyed by facial expressions, and emotional valence. These results suggest that semantic memory has a central role in the recognition of emotional valence and basic emotions, but that its contribution varies according to stimulus and emotion category. In a second study, the formal association between the recognition of emotional valence and basic emotions, on the one hand, and semantic knowledge, on the other hand, is examined. Performances of the same participants are compared in two tasks assessing the recognition of emotional valence conveyed by written words and basic emotions conveyed by musical excerpts. Moreover, performance of individuals with svPPA is associated with the recognition of words and musical excerpts, as well as with the ability to associate words and musical excerpts with concepts. Findings indicate that the recognition of emotional valence conveyed by words and basic emotions conveyed by musical excerpts depends on the recognition of words and music, but not on the ability to associate words and musical excerpts with concepts. These results reveal that the activation of semantic representations related to words and musical excerpts is not required for emotion recognition. Altogether, results from this thesis suggest that semantic memory plays a central role in the recognition of emotional valence and basic emotions, but that the activation of semantic representations related to emotional stimuli is not required for emotion recognition. These conclusions contribute to refining existing models of emotion recognition, word and music processing, as well as models of semantic memory.
Etienne, Caroline. "Apprentissage profond appliqué à la reconnaissance des émotions dans la voix". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS517.
This thesis deals with the application of artificial intelligence to the automatic classification of audio sequences according to the emotional state of the customer during a commercial phone call. The goal is to improve on existing data preprocessing and machine learning models, and to suggest a model that is as efficient as possible on the reference IEMOCAP audio dataset. We draw from previous work on deep neural networks for automatic speech recognition, and extend it to the speech emotion recognition task. We are therefore interested in End-to-End neural architectures that perform the classification task, including an autonomous extraction of acoustic features from the audio signal. Traditionally, the audio signal is preprocessed using paralinguistic features, as part of an expert approach. We choose a naive approach for data preprocessing that does not rely on specialized paralinguistic knowledge, and compare it with the expert approach. In this approach, the raw audio signal is transformed into a time-frequency spectrogram by using a short-term Fourier transform. In order to apply a neural network to a prediction task, a number of aspects need to be considered. On the one hand, the best possible hyperparameters must be identified. On the other hand, biases present in the database should be minimized (non-discrimination), for example by adding data and taking into account the characteristics of the chosen dataset. We study these aspects in order to develop an End-to-End neural architecture that combines convolutional layers specialized in the modeling of visual information with recurrent layers specialized in the modeling of temporal information. We propose a deep supervised learning model, competitive with the current state of the art when trained on the IEMOCAP dataset, justifying its use for the rest of the experiments. This classification model consists of a four-layer convolutional neural network and a bidirectional long short-term memory (BLSTM) recurrent neural network. Our model is evaluated on two English audio databases proposed by the scientific community: IEMOCAP and MSP-IMPROV. A first contribution is to show that, with a deep neural network, we obtain high performances on IEMOCAP, and that the results are promising on MSP-IMPROV. Another contribution of this thesis is a comparative study of the output values of the layers of the convolutional module and the recurrent module according to the data preprocessing method used: spectrograms (naive approach) or paralinguistic indices (expert approach). We analyze the data according to their emotion class using the Euclidean distance, a deterministic proximity measure. We try to understand the characteristics of the emotional information extracted autonomously by the network. The idea is to contribute to research focused on the understanding of deep neural networks used in speech emotion recognition and to bring more transparency and explainability to these systems, whose decision-making mechanism is still poorly understood.
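The spectrogram-based architecture described above (four convolutional layers followed by a BLSTM) can be sketched roughly as follows; the channel counts, kernel sizes and number of classes are illustrative assumptions, not the values used in the thesis.

# Sketch: spectrogram -> 4 conv blocks (pooling only along frequency) -> BLSTM -> class scores.
import torch
import torch.nn as nn

class ConvBLSTM(nn.Module):
    def __init__(self, n_mels=128, n_classes=4):
        super().__init__()
        chans = [1, 16, 32, 64, 64]
        blocks = [nn.Sequential(nn.Conv2d(chans[i], chans[i + 1], 3, padding=1),
                                nn.BatchNorm2d(chans[i + 1]), nn.ReLU(),
                                nn.MaxPool2d((2, 1)))           # halve the frequency axis only
                  for i in range(4)]
        self.conv = nn.Sequential(*blocks)
        self.blstm = nn.LSTM(64 * (n_mels // 16), 128, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * 128, n_classes)

    def forward(self, spec):                        # spec: (batch, 1, n_mels, n_frames)
        h = self.conv(spec)                         # (batch, 64, n_mels // 16, n_frames)
        h = h.permute(0, 3, 1, 2).flatten(2)        # (batch, n_frames, 64 * n_mels // 16)
        h, _ = self.blstm(h)                        # (batch, n_frames, 256)
        return self.out(h[:, -1])                   # classify from the last time step

model = ConvBLSTM()
print(model(torch.randn(2, 1, 128, 200)).shape)     # torch.Size([2, 4])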
Golouboff, Nathalie. "La reconnaissance des émotions faciales : développement chez l'enfant sain et épileptique". Paris 5, 2007. http://www.theses.fr/2007PA05H059.
The aim of this research is (1) to develop a new facial emotion recognition test for children (the TREFE) to assess their ability to recognize 5 basic emotions (happiness, fear, anger, disgust, sadness) against neutrality (no emotion), (2) to describe the normal developmental trajectories of facial emotion recognition in 279 healthy subjects aged from 7 to 25 and (3) to investigate the impact of epilepsy on the development of this competence in 37 children and adolescents with partial epilepsy. In the normal population, results reveal that the ability to recognize emotions in facial expressions is functional from pre-adolescence (7-8 years) and improves until adulthood (16-25 years). In patients, results show the impact of epilepsy and its topography on the development of emotion recognition from childhood. As in adults, early-onset temporal lobe epilepsy is associated with impairments in fear recognition.
Mariéthoz, Johnny. "Algorithmes d'apprentissage discriminants en vérification du locuteur". Lyon 2, 2006. http://theses.univ-lyon2.fr/documents/lyon2/2006/mariethoz_j.
In this thesis, the problem of text-independent speaker verification is addressed from a statistical (machine) learning point of view. The theories developed in statistical learning make it possible to define this problem more precisely, to develop new unbiased performance measures, and to propose new statistical tests for objectively comparing the proposed models. A new interpretation of the state-of-the-art models based on Gaussian mixture models (GMM) shows that these models are in fact discriminant and equivalent to a mixture of linear experts. A general theoretical framework for score normalization is also proposed for probabilistic and non-probabilistic models. Thanks to this new framework, the assumptions made when using Z- and T-normalization (Z-norm and T-norm) are made explicit. Several discriminant models are proposed. A new kernel for support vector machines (SVM) that can handle sequences is presented. This kernel is in fact a generalization of an existing kernel whose drawback is that it is limited to a polynomial form. The proposed approach allows the data to be projected into an infinite-dimensional space, as is the case, for example, with a Gaussian kernel. A variant of this kernel that searches for the best acoustic vector (frame) in the sequence to be compared improves on the best currently known results. As this approach is particularly costly for long sequences, a clustering algorithm is used to reduce its complexity. Finally, this thesis also addresses problems specific to speaker verification, such as the fact that the numbers of positive and negative examples are highly unbalanced and that the distribution of intra- and inter-class distances is specific to this type of problem. Accordingly, the kernel is modified by adding Gaussian noise to each negative example. Even though this approach currently lacks theoretical justification, it produces very good empirical results and opens interesting perspectives for future research.
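For illustration only (not taken from the thesis): the Z- and T-normalizations mentioned above both standardize a raw verification score against an impostor score distribution; the score values below are placeholders.

# Sketch of Z-norm / T-norm score normalization for speaker verification.
import numpy as np

def z_norm(raw_score, impostor_scores_for_claimed_model):
    # Statistics of the claimed speaker model scored against impostor utterances.
    mu, sigma = np.mean(impostor_scores_for_claimed_model), np.std(impostor_scores_for_claimed_model)
    return (raw_score - mu) / sigma

def t_norm(raw_score, cohort_scores_for_test_utterance):
    # Statistics of the test utterance scored against a cohort of impostor models.
    mu, sigma = np.mean(cohort_scores_for_test_utterance), np.std(cohort_scores_for_test_utterance)
    return (raw_score - mu) / sigma

print(z_norm(1.8, np.random.normal(0.0, 1.0, 500)),
      t_norm(1.8, np.random.normal(0.2, 0.9, 300)))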
Mariéthoz, Johnny, and Paugam-Moisy, Hélène. "Discriminant models for text-independent speaker verification". Lyon : Université Lumière Lyon 2, 2006. http://theses.univ-lyon2.fr/documents/lyon2/2006/mariethoz_j.
Maassara, Reem. "La reconnaissance des expressions faciales des émotions: profil de développement et utilisation des catégories émotionnelles au cours de l’enfance". Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34415.
Merlin, Teva. "AMIRAL, une plateforme générique pour la reconnaissance automatique du locuteur - de l'authentification à l'indexation". Avignon, 2005. http://www.theses.fr/2004AVIG0136.
Abdat, Faiza. "Reconnaissance automatique des émotions par données multimodales : expressions faciales et des signaux physiologiques". Thesis, Metz, 2010. http://www.theses.fr/2010METZ035S/document.
This thesis presents a generic method for the automatic recognition of emotions from a bimodal system based on facial expressions and physiological signals. This data processing approach leads to better extraction of information and is more reliable than a single modality. The proposed algorithm for facial expression recognition is based on the distance variation of facial muscles from the neutral state and on classification by means of Support Vector Machines (SVM). Emotion recognition from physiological signals is based on the classification of statistical parameters by the same classifier. In order to obtain a more reliable recognition system, we have combined the facial expressions and physiological signals. The direct combination of such information is not trivial, given the differences in characteristics (such as frequency, amplitude, variation, and dimensionality). To remedy this, we have merged the information at different levels of implementation. For feature-level fusion, we have tested the mutual information approach for selecting the most relevant features and principal component analysis to reduce their dimensionality. For decision-level fusion we have implemented two methods, the first based on a voting process and the second on dynamic Bayesian networks. The optimal results were obtained with the fusion of features based on Principal Component Analysis. These methods have been tested on a database developed in our laboratory from healthy subjects, with emotions induced using IAPS pictures. A self-assessment step was applied to all subjects in order to improve the annotation of the images used for induction. The obtained results show good performance even in the presence of variability among individuals and variability of the emotional state over several days.
Abdat, Faiza. "Reconnaissance automatique des émotions par données multimodales : expressions faciales et des signaux physiologiques". Electronic Thesis or Diss., Metz, 2010. http://www.theses.fr/2010METZ035S.
This thesis presents a generic method for the automatic recognition of emotions from a bimodal system based on facial expressions and physiological signals. This data processing approach leads to better extraction of information and is more reliable than a single modality. The proposed algorithm for facial expression recognition is based on the distance variation of facial muscles from the neutral state and on classification by means of Support Vector Machines (SVM). Emotion recognition from physiological signals is based on the classification of statistical parameters by the same classifier. In order to obtain a more reliable recognition system, we have combined the facial expressions and physiological signals. The direct combination of such information is not trivial, given the differences in characteristics (such as frequency, amplitude, variation, and dimensionality). To remedy this, we have merged the information at different levels of implementation. For feature-level fusion, we have tested the mutual information approach for selecting the most relevant features and principal component analysis to reduce their dimensionality. For decision-level fusion we have implemented two methods, the first based on a voting process and the second on dynamic Bayesian networks. The optimal results were obtained with the fusion of features based on Principal Component Analysis. These methods have been tested on a database developed in our laboratory from healthy subjects, with emotions induced using IAPS pictures. A self-assessment step was applied to all subjects in order to improve the annotation of the images used for induction. The obtained results show good performance even in the presence of variability among individuals and variability of the emotional state over several days.
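A minimal sketch (with random placeholder arrays, not the laboratory database) of the feature-level fusion retained above: concatenate facial and physiological features, reduce them with PCA, and classify with an SVM.

# Sketch: feature-level fusion by concatenation, PCA reduction, SVM classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
face_feats = rng.normal(size=(90, 20))          # stand-in for facial-muscle distance features
physio_feats = rng.normal(size=(90, 40))        # stand-in for physiological statistical features
labels = rng.integers(0, 3, size=90)            # e.g. 3 induced emotional states (assumption)

fused = np.hstack([face_feats, physio_feats])   # feature-level fusion
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
print(cross_val_score(clf, fused, labels, cv=5).mean())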
Sánchez-Soto, Eduardo. "Réseaux bayésiens dynamiques pour la vérification du locuteur". Paris, ENST, 2005. http://www.theses.fr/2005ENST0032.
This thesis is concerned with the statistical modeling of the speech signal applied to Speaker Verification (SV) using Bayesian Networks (BNs). The main idea of this work is to use BNs as a mathematical tool to model pertinent speech features while keeping their relations. It combines theoretical and experimental work. The difference between system and human performance in SV lies in the quantity of information and the relationships between the sources of information used to make decisions. A single statistical framework that keeps the conditional dependence and independence relations between those variables is difficult to attain. Therefore, the use of BNs as a tool for modeling the available information and their independence and dependence relationships is proposed. The first part of this work reviews the main modules of an SV system, the possible sources of information, as well as the basic concepts of graphical models. The second part deals with modeling. A new approach to the problems associated with SV systems is proposed. The problems of inference and learning (parameters and structure) in BNs are presented. In order to obtain an adapted structure, the relations of conditional independence among the variables are learned directly from the data. These relations are then used to build an adapted BN. In particular, a new model adaptation technique for BNs has been proposed. This adaptation is based on a measure between conditional probability distributions for discrete variables and on regression matrices for continuous variables used to model the relationships. On a large database for the SV task, the results have confirmed the potential of the BN approach.
Clavel, Chloé. "Analyse et reconnaissance des manifestations acoustiques des émotions de type peur en situations anormales". Phd thesis, Télécom ParisTech, 2007. http://pastel.archives-ouvertes.fr/pastel-00002533.
Suarez Pardo, Myrian Amanda. "Identification et attribution des expressions faciales et vocales émotionnelles chez l'enfant typique et avec autisme". Toulouse 2, 2009. http://www.theses.fr/2009TOU20004.
Social cognition is defined as our ability to interpret others' behaviour in terms of mental states (thoughts, intentions, desires, and beliefs), to empathize with others' state of mind and to predict how others will think and act. This kind of capability is used, for example, to "read" and to understand the emotional expressions of other people. Within the framework of this research we are interested in children's abilities to express and to interpret the emotional manifestations of other people, as a highly mediating factor for their successful social adjustment. This question was explored from both a developmental and a comparative perspective. We studied the developmental trajectories of 90 typically developing children, divided into three age groups of 4, 6 and 8 years, and compared them with those of 12 high-functioning autistic children. These groups were assessed with a number of tasks: an affective judgment task from pictures and stories, a narration task using scenes of emotional content, and an interview about emotions (comprising production and evocation tasks). Results of the developmental study show that, as typical children get older, they increasingly provide adequate target responses, confusion between emotions decreases, and they produce more complex narratives and develop expressive capabilities. Furthermore, results of the comparative study show that the autistic population is also able to recognize emotional information from faces, but shows significantly worse performance on the other emotional tasks than typical children do. These results are discussed in relation to former research in the domains of emotion, pragmatics and theory of mind.
Leconte, Francis. "Reconnaissance et stabilité d'une mémoire épisodique influencée par les émotions artificielles pour un robot autonome". Mémoire, Université de Sherbrooke, 2014. http://hdl.handle.net/11143/5953.
Sánchez-Soto, Eduardo. "Réseaux bayésiens dynamiques pour la vérification du locuteur". Paris : École nationale supérieure des télécommunications, 2005. http://catalogue.bnf.fr/ark:/12148/cb40208312k.
Ringeval, Fabien. "Ancrages et modèles dynamiques de la prosodie : application à la reconnaissance des émotions actées et spontanées". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00825312.
Hammal, Zakia. "Segmentation des traits du visage, analyse et reconnaissance d'expressions faciales par le modèle de croyance transférable". Université Joseph Fourier (Grenoble), 2006. http://www.theses.fr/2006GRE10059.
The aim of this work is the analysis and the classification of facial expressions. Experiments in psychology show that humans are able to recognize emotions based on the visualization of the temporal evolution of some characteristic fiducial points. Thus we first propose an automatic system for the extraction of the permanent facial features (eyes, eyebrows and lips). In this work we are interested in the problem of the segmentation of the eyes and the eyebrows. The segmentation of lip contours is based on a previous work developed in the laboratory. The proposed algorithm for eye and eyebrow contour segmentation consists of three steps: firstly, the definition of parametric models to fit the contour of each feature as accurately as possible; then, a whole set of characteristic points is detected to initialize the selected models on the face; finally, the initial models are fitted by taking into account the luminance gradient information. The segmentation of the eye, eyebrow and lip contours leads to what we call skeletons of expressions. To measure the deformation of the characteristic features, five characteristic distances are defined on these skeletons. Based on the state of these distances, a whole set of logical rules is defined for each of the considered expressions: Smile, Surprise, Disgust, Anger, Fear, Sadness and Neutral. These rules are compatible with the MPEG-4 standard, which provides a description of the deformations undergone by each facial feature during the production of the six universal facial expressions. However, human behavior is not binary: a pure expression is rarely produced. To be able to model the doubt between several expressions and to model unknown expressions, the Transferable Belief Model is used as a fusion process for facial expression classification. The classification system takes into account the evolution of the facial feature deformations over time. Towards an audio-visual system for emotional expression classification, a preliminary study on vocal expressions is also proposed.
Prégent, Alexandra. "Informatique affective : l'utilisation des systèmes de reconnaissance des émotions est-elle en cohérence avec la justice sociale ?" Master's thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/70318.
Emotion recognition systems (ERS) offer the ability to identify the emotions of others, based on an analysis of their facial expressions and regardless of culture, ethnicity, context, gender or social class. By claiming universalism in the expression as well as in the recognition of emotions, we believe that ERS present significant risks of causing great harm to some individuals, in addition to targeting, in some contexts, specific social groups. Drawing on a wide range of multidisciplinary knowledge, including philosophy, psychology, computer science and anthropology, this research project aims to identify the current limitations of ERS and the main risks that their use brings, with the goal of providing a clear and robust analysis of the use of ERS and their contribution to greater social justice. Pointing to technical limitations, we refute, on the one hand, the idea that ERS are able to prove the causal link between specific emotions and specific facial expressions. We support our argument with evidence of the inability of ERS to distinguish facial expressions of emotions from facial expressions used as communication signals. On the other hand, due to the contextual and cultural limitations of current ERS, we refute the idea that ERS are able to recognise, with equal performance, the emotions of individuals regardless of their culture, ethnicity, gender and social class. Our ethical analysis shows that the risks are considerably more numerous and important than the benefits that could be derived from using ERS. However, we have singled out a specific type of ERS, whose use is limited to the field of care, and which shows a remarkable potential to actively participate in social justice, not only by complying with the minimum requirements, but also by meeting the criterion of beneficence. While ERS currently pose significant risks, it is possible to consider the potential for specific types to participate in social justice and provide emotional and psychological support and assistance to certain members of society.
Tayari, Meftah Imen. "Modélisation, détection et annotation des états émotionnels à l'aide d'un espace vectoriel multidimensionnel". Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00838803.
Kerkeni, Leila. "Analyse acoustique de la voix pour la détection des émotions du locuteur". Thesis, Le Mans, 2020. http://www.theses.fr/2020LEMA1003.
The aim of this thesis is to propose a speech emotion recognition (SER) system for application in the classroom. This system has been built using novel features based on the amplitude and frequency (AM-FM) modulation model of the speech signal. This model is based on the joint use of empirical mode decomposition (EMD) and the Teager-Kaiser energy operator (TKEO). In this system, the discrete (or categorical) emotion theory was chosen to represent the six basic emotions (sadness, anger, joy, disgust, fear and surprise) and the neutral emotion. Automatic recognition has been optimized by finding the best combination of features, selecting the most relevant ones and comparing different classification approaches. Two reference emotional speech databases, in German and Spanish, were used to train and evaluate this system. A new database in French, more appropriate for the educational context, was built, tested and validated.
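Illustrative sketch (not from the thesis): the discrete Teager-Kaiser energy operator used above is Ψ[x](n) = x(n)² − x(n−1)·x(n+1); in the AM-FM pipeline it would be applied to each intrinsic mode function produced by EMD, but here it is shown on a synthetic amplitude-modulated tone.

# Sketch of the discrete Teager-Kaiser energy operator (TKEO).
import numpy as np

def tkeo(x):
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]   # psi[n] = x[n]^2 - x[n-1]*x[n+1]
    psi[0], psi[-1] = psi[1], psi[-2]           # simple edge handling
    return psi

t = np.arange(16000) / 16000.0
signal = (1 + 0.5 * np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 440 * t)  # AM tone
energy = tkeo(signal)
print(energy.mean(), energy.max())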
Galindo Losada, Julian. "Adaptation des interfaces utilisateurs aux émotions". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM021/document.
User interface adaptation using emotions: Perso2U, an approach to personalize user interfaces with user emotions. User experience (UX) is nowadays recognized as an important quality factor to make systems or software successful in terms of user take-up and frequency of usage. UX depends on dimensions like emotion, aesthetics or visual appearance, identification, stimulation, meaning/value, or even fun, enjoyment, pleasure, or flow. Among these dimensions, the importance of usability and aesthetics is recognized, so both of them need to be considered while designing user interfaces (UI). This raises the question of how designers can check UX at runtime and improve it if necessary. To achieve good UI quality in any context of use (i.e. user, platform and environment), plasticity proposes to adapt the UI to the context while preserving user-centered properties. In a similar way, our goal is to preserve or improve UX at runtime by proposing UI adaptations. Adaptations can concern aesthetics or usability. They can be triggered by the detection of a specific emotion that can express a problem with the UI. So the research question addressed in this PhD is how to drive UI adaptation with a model of the user based on emotions and user characteristics (age and gender) to check or improve UX if necessary. Our approach aims to personalize user interfaces with user emotions at run-time. An architecture, Perso2U, has been designed to adapt the UI according to emotions and user characteristics (age and gender). Perso2U includes three main components: (1) an inferring engine, (2) an adaptation engine and (3) the interactive system. First, the inferring engine recognizes the user's situation and in particular his or her emotions (happiness, anger, disgust, sadness, surprise, fear, contempt, plus neutral), following the Ekman emotion model. Second, after emotion recognition, the most suitable UI structure is chosen and the set of UI parameters (audio, font size, widgets, UI layout, etc.) is computed based on the detected emotions. Third, this computation of a suitable UI structure and parameters allows the UI to execute run-time changes aiming to provide a better UI. Since emotion recognition is performed cyclically, this allows UI adaptation at run-time. To go further into the examination of the inferring engine, we ran two experiments on (1) the genericity of the inferring engine and (2) the influence of the UI on detected emotions regarding age and gender. Since this approach relies on emotion recognition tools, we ran an experiment to study the similarity of emotions detected from faces, to understand whether this detection is independent of the emotion recognition tool or not. The results confirmed that the emotions detected by the tools provide similar emotion values, with a high emotion detection similarity. As UX depends on user interaction quality factors like aesthetics and usability, and on individual characteristics such as age and gender, we ran a second experimental analysis. It tends to show that: (1) UI quality factors (aesthetics and/or usability) influence user emotions differently based on age and gender, and (2) the level (high and/or low) of UI quality factors seems to impact emotions differently based on age and gender. From these results, we define thresholds based on age and gender that allow the inferring engine to detect usability and/or aesthetics problems.
Peillon, Stéphane. "Indexation vocale à vocabulaire illimité à base de décodage phonétique : application à la détection de clés vocales dans un flux de paroles". Avignon, 2002. http://www.theses.fr/2002AVIG0128.
Multimedia data storage is currently confronted with a lack of effective document extraction and sorting tools. In the specific context of voice data, we suggest an indexing technique which enables speech documents to be retrieved by content only. Positioning relevant indexes on the medium enables the amount of information needed later for the key search phase to be greatly reduced. We compare two phonetic index-based indexing methods: one is based on the best possible sequence of phonemes, the other on lattices of phonetic hypotheses produced on an automatic a priori segmentation of the corpus. This second mode, called "phoneme synchronized lattice", offers better performance with low additional computation cost, and requires less training for the search engine parameters. In addition, the technique presented in this document enables the detection of voice keywords in both speech and text corpora.
Cherbonnier, Anthony. "La reconnaissance des émotions à partir d’émoticônes graphiques : des recherches expérimentales à l’étude des usages sur une webradio". Thesis, Rennes 2, 2021. http://www.theses.fr/2021REN20005.
Emoticons are often used in digital environments to convey emotions. Although a wide variety of emoticons exist, little is known about how they convey emotions compared to other modes of expression, and few studies have looked at their use in a school setting. In this thesis, four studies (N = 291) were carried out to design "new" emoticons to unambiguously represent the six basic emotions, and three studies (N = 957) sought to compare the quality of recognition of emotions from these "new" emoticons in relation to other modes of expression, particularly facial expressions. A final study examined the way in which these emoticons are used on a webradio by middle school students (N = 204). The results showed that the "new" emoticons convey emotions more effectively and more intensely than facial expressions and than emoticons from Facebook and iOS. This improved recognition is mainly due to the negative emotions of disgust and sadness. Including these "new" emoticons on the Wikiradio© Saooti made it possible to study their uses in an academic setting. The results showed that, regardless of the gender of the middle school students, the use of the emoticon conveying happiness was the preferred way to express emotions toward programmes produced by peers. These results suggest there is a need to design specific emoticons to convey emotions unambiguously in digital environments and to study their effects on behaviour.
Yang, Wenlu. "Personalized physiological-based emotion recognition and implementation on hardware". Electronic Thesis or Diss., Sorbonne université, 2018. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2018SORUS064.pdf.
This thesis investigates physiological-based emotion recognition in a digital game context and the feasibility of implementing the model on an embedded system. The following challenges are addressed: the relationship between emotional states and physiological responses in the game context, the individual variability of psychophysiological responses, and the issues of implementation on an embedded system. The major contributions of this thesis are as follows. First, we construct a multi-modal Database for Affective Gaming (DAG). This database contains multiple measurements concerning objective modalities: physiological signals (ECG, EDA, EMG, respiration), screen recording, and a recording of the player's face, as well as subjective assessments at both the game-event and match level. We present statistics of the database and run a series of analyses on issues such as emotional moment detection and emotion classification, and on factors influencing the overall game experience, using various machine learning methods. Second, we investigate the individual variability in the collected data by creating a user-specific model and analyzing the optimal feature set for each individual. We propose a personalized group-based model that creates groups of similar users by applying clustering techniques to physiological traits deduced from the optimal feature sets. We show that the proposed personalized group-based model performs better than both the general model and the user-specific model. Third, we implement the proposed method on an ARM A9 system and show that it can meet the computation-time requirement.
Argaud, Soizic. "Reconnaissance et mimétisme des émotions exprimées sur le visage : vers une compréhension des mécanismes à travers le modèle parkinsonien". Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1B023/document.
Parkinson's disease is a neurodegenerative condition primarily resulting from a dysfunction of the basal ganglia following a progressive loss of midbrain dopamine neurons. Alongside the well-known motor symptoms, PD patients also suffer from emotional disorders, including difficulties in recognizing and producing facial emotions. This raises the question of whether the emotion recognition impairments in Parkinson's disease could be in part related to motor symptoms. Indeed, according to embodied simulation theory, understanding other people's emotions would be fostered by facial mimicry. Automatic and non-conscious, facial mimicry is characterized by congruent valence-related facial responses to the emotion expressed by others. In this context, disturbed motor processing could lead to impairments in emotion recognition. Yet, one of the most distinctive clinical features of Parkinson's disease is facial amimia, a reduction in facial expressiveness. Thus, we studied the ability to mimic facial expressions in Parkinson's disease, its effective influence on emotion recognition, as well as the effect of dopamine replacement therapy on both emotion recognition and facial mimicry. For these purposes, we investigated electromyographic responses (corrugator supercilii, zygomaticus major and orbicularis oculi) to facial emotion among patients suffering from Parkinson's disease and healthy participants in a facial emotion recognition paradigm (joy, anger, neutral). Our results showed that facial emotion processing in Parkinson's disease can swing from normal to pathological, noisy functioning because of a weaker signal-to-noise ratio. Besides, facial mimicry could have a beneficial effect on the recognition of emotion. Nevertheless, the negative impact of Parkinson's disease on facial mimicry and its influence on emotion recognition would depend on the muscles involved in the production of the emotional expression to decode. Indeed, corrugator relaxation would be a stronger predictor of the recognition of joy expressions than zygomatic or orbicularis contractions. On the other hand, we cannot conclude here that corrugator reactions foster the recognition of anger. Furthermore, we proposed this experiment to a group of patients under dopamine replacement therapy but also during a temporary withdrawal from treatment. The preliminary results are in favour of a beneficial effect of dopaminergic medication on both emotion recognition and facial mimicry. The potential positive "peripheral" impact of dopamine replacement therapy on emotion recognition through the restoration of facial mimicry still has to be tested. We discuss these findings in the light of recent considerations about the role of basal ganglia-based circuits and embodied simulation theory, ending with the clinical significance of the results.
Cohendet, Romain. "Prédiction computationnelle de la mémorabilité des images : vers une intégration des informations extrinsèques et émotionnelles". Thesis, Nantes, 2016. http://www.theses.fr/2016NANT4033/document.
The study of image memorability in computer science is a recent topic. First attempts were based on learning algorithms used to infer the extent to which a picture is memorable from a set of low-level visual features. In this dissertation, we first investigate the theoretical foundations of image memorability; we especially focus on the emotions the images convey, which are closely related to their memorability. In this light, we propose to widen the scope of image memorability prediction to incorporate not only intrinsic but also extrinsic image information, related to the context of presentation and to the observers. Accordingly, we build a new database for the study of image memorability; this database will be useful to test the existing models, trained on the single database available so far. We then introduce deep learning for image memorability prediction: our model obtains the best performance to date. To improve its prediction accuracy, we try to model contextual and individual influences on image memorability. In the final part, we test the performance of computational models of visual attention, which attract growing interest for memorability prediction, on images which vary according to their degree of memorability and the emotion they convey. Finally, we present the "emotional" interactive movie, which enables us to study the links between emotion and visual attention for videos.
Le Tallec, Marc. "Compréhension de parole et détection des émotions pour robot compagnon". Thesis, Tours, 2012. http://www.theses.fr/2012TOUR4044.
Ajili, Insaf. "Reconnaissance des gestes expressifs inspirée du modèle LMA pour une interaction naturelle homme-robot". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLE037/document.
In this thesis, we deal with the problem of gesture recognition in a human-robot interaction context, and new contributions are made on this subject. Our system consists in recognizing human gestures based on a motion analysis method that describes movement in a precise way. As part of this study, a higher-level module is integrated to recognize the emotions of the person through the movement of his or her body. Three approaches are carried out. The first deals with the recognition of dynamic gestures by applying the hidden Markov model (HMM) as a classification method. A local motion descriptor is implemented based on a motion analysis method, called LMA (Laban Movement Analysis), which describes the movement of the person in its different aspects. Our system is invariant to the initial positions and orientations of people. A sampling algorithm has been developed in order to reduce the size of our descriptor and also adapt the data to hidden Markov models. A contribution is made to HMMs to analyze the movement in two directions (its natural and opposite directions) and thus improve the classification of similar gestures. Several experiments are done using public action databases, as well as our database composed of control gestures. In the second approach, an expressive gesture recognition system is set up to recognize the emotions of people through their gestures. A second contribution consists of the choice of a global motion descriptor, based on the local characteristics proposed in the first approach, to describe the entire gesture. The LMA Effort component is quantified to describe the expressiveness of the gesture with its four factors (space, time, weight and flow). The classification of expressive gestures is carried out with four well-known machine learning methods (random decision forests, multilayer perceptron, and support vector machines: one-against-one and one-against-all), and a comparative study is made between these four methods in order to choose the best one (see the sketch after this abstract). The approach is validated with public databases and our database of expressive gestures. The third approach is a statistical study based on human perception to evaluate the recognition system as well as the proposed motion descriptor. This allows us to estimate the ability of our system to classify and analyze emotions as a human would. In this part, two tasks are carried out with the two classifiers (the RDF learning method, which gave the best results in the second approach, and the human classifier): the classification of emotions and the study of the importance of our motion features in discriminating each emotion.
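For illustration only (random placeholder features, not the thesis data), a comparison of the four classifier families named above could be set up as follows.

# Sketch: compare random forest, MLP, one-vs-one SVM and one-vs-all SVM by cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 16))          # stand-in for quantified Effort-based features
y = rng.integers(0, 4, size=200)        # e.g. 4 expressed emotions (assumption)

classifiers = {
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "SVM one-vs-one": OneVsOneClassifier(SVC(kernel="rbf")),
    "SVM one-vs-all": OneVsRestClassifier(SVC(kernel="rbf")),
}
for name, clf in classifiers.items():
    print(name, cross_val_score(clf, X, y, cv=3).mean())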
Allaert, Benjamin. "Analyse des expressions faciales dans un flux vidéo". Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I021/document.
Facial expression recognition has attracted great interest over the past decade in wide application areas such as human behavior analysis, e-health and marketing. In this thesis we explore a new approach to step forward towards in-the-wild expression recognition. Special attention has been paid to encoding respectively small and large facial expression amplitudes, and to analyzing facial expressions in the presence of varying head pose. The first challenge addressed concerns varying facial expression amplitudes. We propose an innovative motion descriptor called LMP. This descriptor takes into account the mechanical deformation properties of facial skin. When extracting motion information from the face, the unified approach deals with inconsistencies and noise caused by face characteristics. The main originality of our approach is a unified treatment of both micro- and macro-expression recognition within the same facial recognition framework. The second challenge addressed concerns important head pose variations. In facial expression analysis, the face registration step must ensure that minimal deformation appears. Registration techniques must be used with care in the presence of unconstrained head pose, as facial texture transformations apply. Hence, it is valuable to estimate the impact of alignment-induced noise on the global recognition performance. For this, we propose a new database, called SNaP-2DFe, allowing the study of the impact of head pose and intra-facial occlusions on expression recognition approaches. We show that the use of face registration approaches does not seem adequate for preserving the features encoding facial expression deformations.
Abbou, Samir Hakim. "Une application de la transformée en ondelettes à la reconnaissance des commandes vocales en milieu bruité et sa mise en oeuvre par processeur dédié au traitement du signal /". Montréal : École de technologie supérieure, 2006. http://wwwlib.umi.com/cr/etsmtl/fullcit?pMR11528.
Texto completo da fonte"Mémoire présenté à l'École de technologie supérieure comme exigence partielle à l'obtention de la maîtrise en génie électrique." Bibliogr.: f. [95]-98. Également disponible en version électronique.
Abbou, Samir Hakim. "Une application de la transformée en ondelettes à la reconnaissance des commandes vocales en milieu bruité et sa mise en oeuvre par processeur dédié au traitement du signal". Mémoire, École de technologie supérieure, 2006. http://espace.etsmtl.ca/467/1/ABBOU_Samir_Hakim.pdf.
Yang, Wenlu. "Personalized physiological-based emotion recognition and implementation on hardware". Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS064.
This thesis investigates physiological-based emotion recognition in a digital game context and the feasibility of implementing the model on an embedded system. The following challenges are addressed: the relationship between emotional states and physiological responses in the game context, the individual variability of psychophysiological responses, and the issues of implementation on an embedded system. The major contributions of this thesis are as follows. First, we construct a multi-modal Database for Affective Gaming (DAG). This database contains multiple measurements concerning objective modalities: physiological signals (ECG, EDA, EMG, respiration), screen recording, and a recording of the player's face, as well as subjective assessments at both the game-event and match level. We present statistics of the database and run a series of analyses on issues such as emotional moment detection and emotion classification, and on factors influencing the overall game experience, using various machine learning methods. Second, we investigate the individual variability in the collected data by creating a user-specific model and analyzing the optimal feature set for each individual. We propose a personalized group-based model that creates groups of similar users by applying clustering techniques to physiological traits deduced from the optimal feature sets. We show that the proposed personalized group-based model performs better than both the general model and the user-specific model. Third, we implement the proposed method on an ARM A9 system and show that it can meet the computation-time requirement.
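Illustrative sketch (all data synthetic, not the DAG database) of the group-based personalization described above: cluster users by physiological traits, then train one classifier per user group.

# Sketch: cluster users into groups, train a per-group model, assign new users to a group.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_users = 30
traits = rng.normal(size=(n_users, 8))                 # per-user physiological trait vector
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(traits)
groups = kmeans.labels_

models = {}
for g in np.unique(groups):
    members = np.where(groups == g)[0]
    X = rng.normal(size=(len(members) * 50, 12))       # pooled samples from the group's users
    y = rng.integers(0, 2, size=len(X))                # e.g. high/low arousal labels (assumption)
    models[g] = SVC().fit(X, y)

new_user_trait = rng.normal(size=(1, 8))
g = kmeans.predict(new_user_trait)[0]                  # the new user inherits group g's model
print("new user assigned to group", g)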
Rapp, Vincent. "Analyse du visage pour l'interprétation de l'état émotionnel". Paris 6, 2013. http://www.theses.fr/2013PA066345.
This thesis addresses the problem of face analysis for human affect prediction from images or sequences. Two main topics are investigated: facial landmark localization and face analysis for affect prediction. We first propose MuKAM (Multi-Kernel Appearance Model), an automatic detector of facial salient points (e.g., eye and mouth corners, nose and chin tips). The first part of this system is an independent facial feature detector based on two stages. The first stage quickly locates, for each sought point, a set of candidate locations. To represent the face, we use multi-scale features combined using multiple kernel learning for Support Vector Machines. The second stage employs higher-level features and a non-linear kernel to estimate the candidate likelihoods. Moreover, we improve system robustness by introducing constraints between points. To introduce these constraints, we propose an alignment step relying on deformable model fitting: according to the probabilities obtained at the end of the second stage, we want to find the set of parameters that best fits the model on the face. Extensive experiments have been carried out on different databases, assessing the accuracy and the robustness of the proposed approach. The second part of the thesis is dedicated to face analysis for human affect prediction. To this end, we propose two systems. The first one aims at detecting facial micro-movements, named Action Units (AU), occurring during a facial expression. To represent the face, we use heterogeneous features, characterizing its appearance and its shape, combined using multiple kernel learning. The second system aims at predicting human affect during an interaction through a subjective and continuous representation of emotion (in terms of valence, arousal, expectancy and power). Dynamical descriptors are extracted from different cues (shape, global and local appearance, audio) and are associated with kernel regressions to obtain several independent predictions. These predictions are then fused to obtain a final prediction per dimension. Both systems have been evaluated during international challenges (FERA'11 and AVEC'12), held in conjunction with major conferences of the field. The first places obtained in these challenges show the progress achieved in human affect prediction.
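A toy sketch of combining two kernels for an SVM, in the spirit of the multiple kernel learning mentioned above; here the kernel weights are fixed by hand rather than learned, and the data is a random placeholder.

# Sketch: weighted sum of an RBF and a polynomial kernel, used as a precomputed SVM kernel.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

rng = np.random.default_rng(4)
X_train, y_train = rng.normal(size=(100, 24)), rng.integers(0, 2, size=100)
X_test = rng.normal(size=(10, 24))

w_rbf, w_poly = 0.7, 0.3                                 # assumed fixed kernel weights
K_train = w_rbf * rbf_kernel(X_train) + w_poly * polynomial_kernel(X_train, degree=2)
K_test = w_rbf * rbf_kernel(X_test, X_train) + w_poly * polynomial_kernel(X_test, X_train, degree=2)

clf = SVC(kernel="precomputed").fit(K_train, y_train)
print(clf.predict(K_test))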
Hamdi, Hamza. "Plate-forme multimodale pour la reconnaissance d'émotions via l'analyse de signaux physiologiques : Application à la simulation d'entretiens d'embauche". Phd thesis, Université d'Angers, 2012. http://tel.archives-ouvertes.fr/tel-00997249.
Texto completo da fonte
Ramezanpanah, Zahra. "Bi-lateral interaction between a humanoid robot and a human in mixed reality". Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG039.
Texto completo da fonteThis thesis can be divided into two parts: action recognition and emotion recognition. Each part is addressed with two methods: a classic machine learning approach and a deep network. In the action recognition part, we first define a local descriptor based on LMA (Laban Movement Analysis) to describe movements. LMA describes a motion using its four components: Body, Space, Shape and Effort. Since the only goal in this part is gesture recognition, only the first three factors are used. The DTW algorithm is implemented to find the similarities between the curves obtained from the descriptor vectors produced by the LMA method. Finally, an SVM is used to train and classify the data. In the second part of this section, we construct a new descriptor based on the geometric coordinates of different parts of the body to represent a movement. To do this, in addition to the distances between the hip centre and the other joints of the body and the changes of the quaternion angles over time, we define the triangles formed by the different parts of the body and compute their areas. We also compute the area of the single conforming 3-D boundary around all the joints of the body. Finally, we add the velocities of the different joints to the proposed descriptor. We use an LSTM to evaluate this descriptor. In the second section of this thesis, we first present a higher-level module to identify the inner feelings of human beings by observing their body movements. In order to define a robust descriptor, two methods are carried out: the first is LMA, which, with the addition of the "Effort" factor, becomes a robust descriptor that characterizes both a movement and the state in which it was performed; the second is based on a set of spatio-temporal features. In the continuation of this section, an expressive-motion recognition pipeline is proposed in order to recognize people's emotions through their gestures using machine learning methods, and a comparative study is made between these two methods in order to choose the better one. The second part of this section consists of a statistical study based on human perception in order to evaluate the recognition system as well as the proposed motion descriptor
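The descriptor-matching step in this abstract relies on dynamic time warping (DTW) to compare motion curves before classification. Below is a minimal, self-contained DTW sketch; the squared-difference cost and the idea of feeding pairwise DTW distances to an SVM (e.g., as a precomputed kernel) are illustrative assumptions, not the thesis's exact pipeline.

```python
# Minimal dynamic time warping (DTW) between two 1-D descriptor curves.
# In a gesture pipeline, pairwise DTW distances between sequences could be
# turned into a similarity matrix and passed to an SVM with a precomputed kernel.
import numpy as np

def dtw_distance(a, b):
    """a, b: 1-D arrays of descriptor values sampled over time."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            # Best of insertion, deletion, and match moves.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return np.sqrt(cost[n, m])

# Example: two similar curves, one slightly time-shifted.
t = np.linspace(0.0, 1.0, 50)
print(dtw_distance(np.sin(2 * np.pi * t), np.sin(2 * np.pi * (t - 0.05))))
```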
Ouellet, Claudie. "Les émotions suscitées par les préférences politiques peuvent-elles être révélées par une tâche de bissection temporelle?" Doctoral thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/37055.
Texto completo da fonteKhan, Rizwan Ahmed. "Détection des émotions à partir de vidéos dans un environnement non contrôlé". Thesis, Lyon 1, 2013. http://www.theses.fr/2013LYO10227/document.
Texto completo da fonteCommunication in any form, i.e. verbal or non-verbal, is vital to complete various daily routine tasks and plays a significant role in life. Facial expression is the most effective form of non-verbal communication and provides a clue about emotional state, mindset and intention. Generally, an automatic facial expression recognition framework consists of three steps: face tracking, feature extraction and expression classification. In order to build a robust facial expression recognition framework that is capable of producing reliable results, it is necessary to extract features (from the appropriate facial regions) that have strong discriminative abilities. Recently, different methods for automatic facial expression recognition have been proposed, but invariably they are all computationally expensive and spend computational time on the whole face image, or divide the facial image according to some mathematical or geometrical heuristic for feature extraction. None of them take inspiration from the human visual system in completing the same task. In this research thesis we took inspiration from the human visual system in order to determine from which facial regions to extract features. We argue that the task of expression analysis and recognition could be done more effectively if only some regions (i.e. salient regions) are selected for further processing, as happens in the human visual system. In this research thesis we have proposed different frameworks for automatic recognition of expressions, all drawing inspiration from human vision. Each subsequently proposed framework addresses the shortcomings of the previous one. Our proposed frameworks generally achieve results that exceed state-of-the-art methods for expression recognition. Secondly, they are computationally efficient and simple, as they process only the perceptually salient region(s) of the face for feature extraction. By processing only the perceptually salient region(s) of the face, both the feature-vector dimensionality and the computational time for feature extraction are reduced, making the frameworks suitable for real-time applications
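The salient-region idea in this abstract can be illustrated with a short sketch: crop only the eye and mouth regions around given landmarks, extract a compact descriptor from those crops, and classify. The landmark layout, crop sizes, HOG features and SVM below are assumptions made for illustration and do not reproduce the frameworks actually proposed in the thesis.

```python
# Illustrative salient-region pipeline: extract features only from the eye
# and mouth regions (given facial landmark centers), instead of the whole face.
# Landmark layout, crop sizes, HOG parameters and the SVM choice are assumptions.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def crop(gray, center, half=24):
    # Square patch around a landmark center (x, y), resized to a fixed size.
    y, x = int(center[1]), int(center[0])
    patch = gray[max(y - half, 0):y + half, max(x - half, 0):x + half]
    return resize(patch, (48, 48), anti_aliasing=True)

def salient_descriptor(gray, landmarks):
    """landmarks: dict with 'left_eye', 'right_eye', 'mouth' -> (x, y) centers."""
    patches = [crop(gray, landmarks[k]) for k in ("left_eye", "right_eye", "mouth")]
    # HOG on each salient patch, concatenated into one compact descriptor.
    return np.concatenate([hog(p, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                           for p in patches])

# Training would then be a standard classifier on these short descriptors:
# clf = SVC(kernel="rbf").fit(np.vstack(train_descriptors), train_labels)
```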
Péron, Julie. "Rôle du noyau sous-thalamique et de ses connexions cortico-sous-corticales dans la reconnaissance des émotions communiquées par le visage et par la voix". Rennes 1, 2008. http://www.theses.fr/2008REN1B118.
Texto completo da fonteGirard, Éric. "Le jugement des expressions faciales dynamiques : l'importance de l'intensité maximale et finale, et de la moyenne globale". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0021/NQ43074.pdf.
Texto completo da fonteYang, Yu-Fang. "Contribution des caractéristiques diagnostiques dans la reconnaissance des expressions faciales émotionnelles : une approche neurocognitive alliant oculométrie et électroencéphalographie". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS099/document.
Texto completo da fonteProficient recognition of facial expressions is crucial for social interaction. Behavioural measures, event-related potentials (ERPs), and eye-tracking techniques can be used to investigate the underlying brain mechanisms supporting this seemingly effortless processing of facial expression. Facial expression recognition involves not only the extraction of expressive information from diagnostic facial features, known as part-based processing, but also the integration of featural information, known as configural processing. Despite the critical role of diagnostic features in emotion recognition and extensive research in this area, it is still not known how the brain decodes configural information during emotion recognition. The complexity of facial information integration becomes evident when comparing performance between healthy subjects and individuals with schizophrenia, because those patients tend to process featural information on emotional faces. These different ways of examining faces may impact the social-cognitive ability to recognize emotions. Therefore, this thesis investigates the role of diagnostic features and face configuration in the recognition of facial expression. In addition to behavior, we examined both the spatiotemporal dynamics of fixations using eye-tracking, and early neurocognitive sensitivity to faces as indexed by the P100 and N170 ERP components. To address these questions, we built a new set of sketch face stimuli by transforming photographed faces from the Radboud Faces Database, removing facial texture and retaining only the diagnostic features (e.g., eyes, nose, mouth), with a neutral expression and four facial expressions: anger, sadness, fear, and happiness. Sketch faces supposedly impair configural processing in comparison with photographed faces, resulting in increased sensitivity to diagnostic features through part-based processing. A direct comparison of neurocognitive measures between sketch and photographed faces expressing basic emotions had never been tested. In this thesis, we examined (i) eye fixations as a function of stimulus type, and (ii) the neuroelectric response to experimental manipulations such as face inversion and deconfiguration. The use of these methods aimed to reveal which type of face processing drives emotion recognition and to establish neurocognitive markers of the processing of emotional sketch and photographed faces. Overall, the behavioral results showed that sketch faces convey expressive information (the content of diagnostic features) sufficient for emotion recognition, as photographed faces do. There was a clear emotion recognition advantage for happy expressions as compared to other emotions. In contrast, recognizing sad and angry faces was more difficult. Concomitantly, eye-tracking results showed that participants employed more part-based processing on sketch and photographed faces during the second fixation. Extracting information from the eyes is needed when the expression conveys more complex emotional information and when stimuli are impoverished (e.g., sketch faces). Using electroencephalography (EEG), the P100 and N170 components were used to study the effects of stimulus type (sketch, photographed), orientation (inverted, upright), and deconfiguration, and their possible interactions. Results also suggest that sketch faces evoked more part-based processing.
The cues conveyed by diagnostic features might have been subjected to early processing, likely driven by low-level information during the P100 time window, followed by a later decoding of facial structure and its emotional content in the N170 time window. In sum, this thesis helped elucidate elements of the debate about configural and part-based face processing for emotion recognition, and extends our current understanding of the role of diagnostic features and configural information during the neurocognitive processing of facial expressions of emotion
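The P100 and N170 components referred to above are typically quantified by averaging epochs per condition and taking the mean amplitude in a fixed time window. The sketch below illustrates that generic computation only; the window bounds, sampling rate and single-electrode handling are assumptions, not the analysis parameters of this thesis.

```python
# Illustrative ERP component quantification: average epochs per condition,
# then take the mean amplitude in a component-specific time window.
# Window bounds and sampling rate are assumptions, not the thesis's values.
import numpy as np

def mean_amplitude(epochs, times, window):
    """epochs: (n_trials, n_samples) single-electrode data in microvolts.
    times: (n_samples,) time axis in seconds; window: (start, end) in seconds."""
    erp = epochs.mean(axis=0)                       # average across trials
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

# Example with synthetic data: 200 trials, -0.1 to 0.5 s sampled at 500 Hz.
rng = np.random.default_rng(0)
times = np.arange(-0.1, 0.5, 1 / 500)
epochs = rng.normal(0.0, 5.0, (200, times.size))
p100 = mean_amplitude(epochs, times, (0.08, 0.12))  # assumed P100 window
n170 = mean_amplitude(epochs, times, (0.15, 0.19))  # assumed N170 window
print(p100, n170)
```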
Milcent, Anne-Sophie. "Les agents virtuels expressifs et leur impact sur l'induction de l'empathie chez l'utilisateur : application au domaine de la formation médicale". Thesis, Paris, HESAM, 2020. http://www.theses.fr/2020HESAE014.
Texto completo da fontePedagogical agents, non-player characters, virtual advisors or assistants: virtual agents are more and more present in our daily life. Some attract our attention, seem to show interest in us, and appear able to communicate and express their emotional states. These agents have been the subject of numerous investigations in various fields of research such as computer science, psychology or cognitive science. The work of this PhD thesis focuses on expressive virtual agents and their impact on the induction of empathy in the user. The evolution of computer graphics techniques now makes it possible to create virtual agents that are visually and behaviorally realistic. The expressiveness of agents is an important issue for human-computer interaction. However, it is still rare for virtual agents to be equipped with facial expressions, thus limiting their ability to induce empathy in the user. Our work follows up on the perspectives opened by researchers in the field concerning the transcription of emotions onto a virtual agent, and contributes to extending the knowledge about interactions with agents, in particular the impact of their expressiveness on the establishment of an empathetic situation. To carry out this work, we conducted two experiments. The first one deals with the recognition of basic emotions on a virtual agent designed using advanced modeling techniques. This study also allowed us to evaluate the relevance of human expressiveness factors on the agent, notably the presence of expression wrinkles and the variation of pupil size according to the emotional state, in facilitating the perception of the agent's emotions. Our second experiment focuses on the impact of the virtual agent's facial expressiveness on the user's empathy. Depending on the context, the results show that the user's perspective taking, the cognitive component of empathy, is greater when the realistic virtual agent presents emotional facial expressions than when the agent has no facial expressions. Finally, we studied the impact of the agents' expressiveness on the user's engagement and social presence. This study opens perspectives on a potential correlation between the notions of empathy, social presence and engagement