Academic literature on the topic 'Reconnaissance des émotions vocales'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Reconnaissance des émotions vocales.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Reconnaissance des émotions vocales"
Debaecker, Jean. "Reconnaissance des émotions dans la musique." Les cahiers du numérique 7, no. 2 (June 30, 2011): 135–55. http://dx.doi.org/10.3166/lcn.7.2.135-155.
Da Fonseca, D., and C. Deruelle. "Reconnaissance des émotions et syndrome d’Asperger." Neuropsychiatrie de l'Enfance et de l'Adolescence 58, no. 6-7 (September 2010): 405–9. http://dx.doi.org/10.1016/j.neurenf.2010.05.005.
Krolak-Salmon, P. "La reconnaissance des émotions dans les maladies neurodégénératives." La Revue de Médecine Interne 32, no. 12 (December 2011): 721–23. http://dx.doi.org/10.1016/j.revmed.2011.08.005.
Gaudelus, B., J. Virgile, E. Peyroux, A. Leleu, J. Y. Baudouin, and N. Franck. "Mesure du déficit de reconnaissance des émotions faciales dans la schizophrénie. Étude préliminaire du test de reconnaissance des émotions faciales (TREF)." L'Encéphale 41, no. 3 (June 2015): 251–59. http://dx.doi.org/10.1016/j.encep.2014.08.013.
Camoreyt, Aurore, Marie-Camille Berthel-Tàtray, Maylis Burle, Mariano Musacchio, Nathalie Ehrlé, and François Sellal. "Troubles de la reconnaissance des émotions après lésion cérébelleuse focale." Revue Neurologique 173 (March 2017): S184–S185. http://dx.doi.org/10.1016/j.neurol.2017.01.359.
Granato, P., O. Godefroy, J. P. Van Gansberghe, and R. Bruyer. "La reconnaissance visuelle des émotions faciales dans la schizophrénie chronique." Annales Médico-psychologiques, revue psychiatrique 167, no. 10 (December 2009): 753–58. http://dx.doi.org/10.1016/j.amp.2009.03.012.
Bediou, B., I. Riff, M. Milliéry, B. Mercier, A. Vighetto, M. Bonnefoy, and P. Krolak-Salmon. "Altération de la reconnaissance des émotions dans la maladie d'Alzheimer légère." La Revue de Médecine Interne 27 (December 2006): S374. http://dx.doi.org/10.1016/j.revmed.2006.10.212.
De Moura, Marie, Bruno Lenne, Jacques Honoré, Arnaud Kwiatkowski, Patrick Hautecoeur, and Henrique Sequeira. "Reconnaissance des émotions dans la sclérose en plaques. Une approche neurocomportementale." Revue Neurologique 172 (April 2016): A93–A94. http://dx.doi.org/10.1016/j.neurol.2016.01.226.
Pochon, R., P. Brun, and D. Mellier. "Développement de la reconnaissance des émotions chez l'enfant avec trisomie 21." Psychologie Française 51, no. 4 (December 2006): 381–90. http://dx.doi.org/10.1016/j.psfr.2006.05.003.
Menant, O., A. Destrez, V. Deiss, A. Boissy, P. Delagrange, L. Calandreau, and Elodie Chaillou. "Régulation des émotions chez l’animal d’élevage : focus sur les acteurs neurobiologiques." INRA Productions Animales 29, no. 4 (December 13, 2019): 241–54. http://dx.doi.org/10.20870/productions-animales.2016.29.4.2966.
Full textDissertations / Theses on the topic "Reconnaissance des émotions vocales"
Gharsalli, Sonia. "Reconnaissance des émotions par traitement d’images." Thesis, Orléans, 2016. http://www.theses.fr/2016ORLE2075/document.
Emotion recognition is one of the most complex scientific domains. In the last few years, various emotion recognition systems have been developed. These innovative applications are used in different domains such as support for autistic children, video games, and human-machine interaction. Different channels are used to express emotions. We focus on facial emotion recognition, especially the six basic emotions, namely happiness, anger, fear, disgust, sadness, and surprise. A comparative study between a geometric method and an appearance method is performed on the CK+ database for posed emotions and the FEEDTUM database for spontaneous emotions. We consider different constraints in this study, such as varying image resolutions, the low number of labelled images in the learning step, and new subjects. We then evaluate various fusion schemes on new subjects not included in the training set. A good recognition rate is obtained for posed emotions (more than 86%), but it remains low for spontaneous emotions. Based on a study of local features, we develop local-feature fusion methods, which increase the recognition rates for spontaneous emotions. Finally, a feature selection method is developed based on feature importance scores; compared with two other methods, our approach increases the recognition rate.
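For illustration only, the kind of importance-based feature selection pipeline mentioned in this abstract can be sketched as follows; the synthetic feature matrix, the random-forest importance scores, and the SVM classifier are assumptions for the example, not the author's actual implementation.

```python
# Hypothetical sketch of importance-based feature selection for facial
# emotion recognition; data and model choices are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 512))   # stand-in for fused geometric + appearance features
y = rng.integers(0, 6, size=600)  # six basic emotions as class labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Rank features by importance scores, keep the most informative half,
# then train a classifier on the reduced representation.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=200, random_state=0), threshold="median"
)
model = make_pipeline(selector, SVC(kernel="rbf"))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```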
Deschamps-Berger, Théo. "Social Emotion Recognition with multimodal deep learning architecture in emergency call centers." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG036.
This thesis explores automatic speech-emotion recognition systems in a medical emergency context. It addresses some of the challenges encountered when studying emotions in social interactions and is rooted in modern theories of emotions, particularly Lisa Feldman Barrett's work on the construction of emotions. Indeed, the manifestation of emotions in human interactions is complex, often characterized by nuanced, mixed emotions, and highly linked to the context. This study is based on the CEMO corpus, which is composed of telephone conversations between callers and emergency medical dispatchers (EMD) from a French emergency call center. This corpus provides a rich dataset for exploring the capacity of deep learning systems, such as Transformers and pre-trained models, to recognize spontaneous emotions in spoken interactions. Applications could include providing emotional cues that improve call handling and decision-making by EMD, or summarizing calls. The work carried out in this thesis focused on different techniques related to speech emotion recognition, including transfer learning from pre-trained models, multimodal fusion strategies, dialogue context integration, and mixed emotion detection. An initial acoustic system based on temporal convolutions and recurrent networks was developed and validated on IEMOCAP, an emotional corpus widely used by the affective computing community, and then on the CEMO corpus. Extensive research on multimodal systems, pre-trained on acoustic and linguistic data and adapted to emotion recognition, is presented. In addition, the integration of dialogue context in emotion recognition was explored, underlining the complex dynamics of emotions in social interactions. Finally, research was initiated towards developing multi-label, multimodal systems capable of handling the subtleties of mixed emotions, which often arise from the annotator's perception and the social context. Our research highlights some solutions and challenges in recognizing emotions in the wild. This thesis was funded by the CNRS AI Chair HUMAAINE: HUman-MAchine Affective Interaction & Ethics.
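As a hedged illustration of the "temporal convolutions and recurrent networks" acoustic system mentioned above, here is a minimal PyTorch sketch; the mel-spectrogram input, layer sizes, and four-class output are assumptions for the example, not the configuration used in the thesis.

```python
# Illustrative PyTorch sketch of an acoustic emotion classifier combining
# temporal convolutions with a recurrent layer; hyperparameters are assumed.
import torch
import torch.nn as nn

class AcousticEmotionNet(nn.Module):
    def __init__(self, n_mels: int = 40, n_classes: int = 4):
        super().__init__()
        # Temporal convolutions over the mel-spectrogram frames.
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Recurrent layer to aggregate temporal dynamics.
        self.rnn = nn.GRU(128, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mels, time)
        h = self.conv(mel)                   # (batch, 128, time)
        h, _ = self.rnn(h.transpose(1, 2))   # (batch, time, 128)
        return self.head(h.mean(dim=1))      # average pooling over time

# Example with a random batch of 40-band mel-spectrograms, 300 frames each.
logits = AcousticEmotionNet()(torch.randn(8, 40, 300))
print(logits.shape)  # torch.Size([8, 4])
```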
Vazquez, Rodriguez Juan Fernando. "Transformateurs multimodaux pour la reconnaissance des émotions." Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALM057.
Mental health and emotional well-being have a significant influence on physical health and are especially important for healthy aging. Continued progress on sensors and microelectronics has provided a number of new technologies that can be deployed in homes and used to monitor health and well-being. These can be combined with recent advances in machine learning to provide services that enhance the physical and emotional well-being of individuals and promote healthy aging. In this context, an automatic emotion recognition system can provide a tool to help assure the emotional well-being of frail people. It is therefore desirable to develop a technology that can draw information about human emotions from multiple sensor modalities and can be trained without the need for large labeled training datasets. This thesis addresses the problem of emotion recognition using the different types of signals that a smart environment may provide, such as visual, audio, and physiological signals. To do this, we develop different models based on the Transformer architecture, which has useful characteristics such as its capacity to model long-range dependencies and to discern the relevant parts of the input. We first propose a model to recognize emotions from individual physiological signals, together with a self-supervised pre-training technique that uses unlabeled physiological signals, and we show that this pre-training helps the model perform better. This approach is then extended to take advantage of the complementarity of information that may exist across different physiological signals. For this, we develop a model that combines different physiological signals and also uses self-supervised pre-training to improve its performance; the proposed pre-training method does not require a dataset with the complete set of target signals but can instead be trained on individual datasets from each target signal. To further take advantage of the different modalities that a smart environment may provide, we also propose a model that takes as input multimodal signals such as video, audio, and physiological signals. Since these signals are of a different nature, they cover different ways in which emotions are expressed and should therefore provide complementary information about emotions, which makes it appealing to use them together. However, in real-world scenarios there might be cases where a modality is missing. Our model is flexible enough to continue working when a modality is missing, albeit with a reduction in its performance, and we propose a training strategy that reduces this drop in performance. The methods developed in this thesis are evaluated on several datasets, with results that demonstrate the effectiveness of our approach to pre-training Transformers to recognize emotions from physiological signals. The results also show the efficacy of our Transformer-based solution for aggregating multimodal information and accommodating missing modalities. These results demonstrate the feasibility of the proposed approaches to recognizing emotions from multiple environmental sensors. This opens new avenues for deeper exploration of Transformer-based approaches to processing information from environmental sensors and allows the development of emotion recognition technologies that are robust to missing modalities. The results of this work can contribute to better care for the mental health of frail people.
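A minimal sketch of the self-supervised pre-training idea described in this abstract follows: mask parts of an unlabeled physiological signal and train a Transformer encoder to reconstruct them. The window length, masking ratio, and model sizes are illustrative assumptions rather than the thesis' actual setup.

```python
# Sketch of masked-reconstruction pre-training of a Transformer encoder on an
# unlabeled one-channel physiological signal; all sizes are assumptions.
import torch
import torch.nn as nn

class SignalEncoder(nn.Module):
    def __init__(self, d_model: int = 64, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Linear(1, d_model)           # sample value -> embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.reconstruct = nn.Linear(d_model, 1)     # pre-training head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1), e.g. a window of EDA or ECG samples
        return self.encoder(self.embed(x))

def pretrain_step(model: SignalEncoder, signal: torch.Tensor, mask_ratio: float = 0.15):
    # Randomly mask time steps and train the encoder to reconstruct them.
    mask = torch.rand(signal.shape[:2], device=signal.device) < mask_ratio
    corrupted = signal.masked_fill(mask.unsqueeze(-1), 0.0)
    pred = model.reconstruct(model(corrupted))
    return nn.functional.mse_loss(pred[mask], signal[mask])

model = SignalEncoder()
loss = pretrain_step(model, torch.randn(16, 256, 1))
loss.backward()  # after pre-training, swap the head for an emotion classifier
```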
Bherer, François. "Expressions vocales spontanées et émotion : de l'extériorisation au jugement." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq33572.pdf.
Henry, Mylène. "La reconnaissance des émotions chez des enfants maltraités." Thèse, Université du Québec à Trois-Rivières, 2011. http://depot-e.uqtr.ca/2069/1/030183277.pdf.
Aouati, Amar. "Utilisation des technologies vocales dans une application multicanaux." Paris 11, 1985. http://www.theses.fr/1985PA112373.
Attabi, Yazid. "Reconnaissance automatique des émotions à partir du signal acoustique." Mémoire, École de technologie supérieure, 2008. http://espace.etsmtl.ca/168/1/ATTABI_Yazid.pdf.
Paleari, Marco. "Informatique Affective : Affichage, Reconnaissance, et Synthèse par Ordinateur des Émotions." PhD thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005615.
Paleari, Marco. "Computation affective : affichage, reconnaissance et synthèse par ordinateur des émotions." Paris, Télécom ParisTech, 2009. https://pastel.hal.science/pastel-00005615.
Affective computing refers to computing that relates to, arises from, or deliberately influences emotions, and it has its natural application domain in highly abstracted human-computer interactions. Affective computing can be divided into three main parts, namely display, recognition, and synthesis. The design of intelligent machines able to create natural interactions with users necessarily implies the use of affective computing technologies. We propose a generic architecture based on Lisetti's "Multimodal Affective User Interface" framework and Scherer's psychological "Component Process Theory", which puts the user at the center of a loop exploiting these three parts of affective computing. We propose a novel system performing automatic, real-time emotion recognition through the analysis of human facial expressions and vocal prosody. We also discuss the generation of believable facial expressions for different platforms and detail our system based on Scherer's theory. Finally, we propose an intelligent architecture that we have developed, capable of simulating the process of appraisal of emotions as described by Scherer.
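As a toy illustration of one simple way to combine facial-expression and vocal-prosody outputs at the decision level, in the spirit of the bimodal recognition described above, here is a short sketch; the class probabilities and fusion weight are made up for the example and are not taken from the thesis.

```python
# Toy decision-level fusion of facial-expression and vocal-prosody classifier
# outputs; scores and weights below are purely illustrative.
import numpy as np

EMOTIONS = ["happiness", "anger", "fear", "disgust", "sadness", "surprise"]

def fuse(face_probs: np.ndarray, prosody_probs: np.ndarray, w_face: float = 0.6) -> str:
    """Weighted average of the two modalities' posterior probabilities."""
    fused = w_face * face_probs + (1.0 - w_face) * prosody_probs
    return EMOTIONS[int(np.argmax(fused))]

face = np.array([0.10, 0.55, 0.10, 0.05, 0.10, 0.10])     # facial-expression scores
prosody = np.array([0.05, 0.40, 0.25, 0.05, 0.15, 0.10])  # vocal-prosody scores
print(fuse(face, prosody))  # -> "anger"
```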
Vaudable, Christophe. "Analyse et reconnaissance des émotions lors de conversations de centres d'appels." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00758650.
Books on the topic "Reconnaissance des émotions vocales"
Picard, Chantal Suzanne. La reconnaissance des émotions chez les enfants d'âge scolaire. Sudbury, Ont: Département de psychologie, Université Laurentienne, 1996.
Rabouin-Coursol, Sylvie. La reconnaissance et l'intensité des émotions étudiées et analysées à la lumière d'une théorie de l'individualité liée à modulation de l'intensité d'un stimulus (MIS). Sudbury, Ont: Département de psychologie, Université Laurentienne, 1992.
Le juste et l'injuste: Émotions, reconnaissance et actions collectives. Paris: L'Harmattan, 2009.
Book chapters on the topic "Reconnaissance des émotions vocales"
BEAUCOUSIN, Virginie. "Apprendre à reconnaître les autres : effet des émotions vocales." In Processus émotionnels en situation d’apprentissage, 189–216. ISTE Group, 2022. http://dx.doi.org/10.51926/iste.9042.ch7.
Gosselin, Pierre. "La reconnaissance de l'expression faciale des émotions." In Cognition et émotions, 97–114. Imprensa da Universidade de Coimbra, 2004. http://dx.doi.org/10.14195/978-989-26-0805-1_5.
Gaudelus, Baptiste. "Gaïa : entraînement de la reconnaissance des émotions faciales." In Traité de Réhabilitation Psychosociale, 629–43. Elsevier, 2018. http://dx.doi.org/10.1016/b978-2-294-75915-4.00065-7.
Collignon, Amélie, Marine Thomasson, Arnaud Saj, Didier Grandjean, Frédéric Assal, and Julie Péron. "Cas 10. Reconnaissance de la prosodie émotionnelle suite à un accident vasculaire du cervelet." In 13 cas cliniques en neuropsychologie des émotions, 269–90. Dunod, 2018. http://dx.doi.org/10.3917/dunod.peron.2018.01.0269.
Duclos, Harmony, Béatrice Desgranges, and Mickael Laisney. "Cas 4. Reconnaissance des émotions et théorie de l’esprit affective chez un patient présentant une variante droite de démence sémantique." In 13 cas cliniques en neuropsychologie des émotions, 93–112. Dunod, 2018. http://dx.doi.org/10.3917/dunod.peron.2018.01.0093.