Scientific literature on the topic "Multimodal perception of emotion"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Multimodal perception of emotion".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Multimodal perception of emotion"

1

de Boer, Minke J., Deniz Başkent, and Frans W. Cornelissen. "Eyes on Emotion: Dynamic Gaze Allocation During Emotion Perception From Speech-Like Stimuli." Multisensory Research 34, no. 1 (July 7, 2020): 17–47. http://dx.doi.org/10.1163/22134808-bja10029.

Full text
Abstract:
Abstract The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect that human observers utilize specific perceptual strategies to process emotions and to handle the multimodal and dynamic nature of emotions. However, our present knowledge on these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation, and instead used static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, visual-only video, or audiovisual video. In terms of adaptations of perceptual strategies, eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio is added. Notably, in terms of task performance, audio-only performance was mostly significantly worse than video-only and audiovisual performances, but performance in the latter two conditions was often not different. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and these changes can be comprehensively quantified with eye tracking.
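
As an illustration of the kind of gaze-allocation analysis summarized above, the following minimal Python sketch computes the share of fixation time per area of interest (AOI) and condition. The data frame and its column names are invented for the example and are not taken from the study.

# Illustrative sketch (not the authors' analysis code): summarise fixation time
# per area of interest (AOI) and condition from a generic eye-tracking export.
# The column names "condition", "aoi" and "duration_ms" are assumptions.
import pandas as pd

fixations = pd.DataFrame({
    "condition": ["audio-visual", "audio-visual", "visual-only", "visual-only"],
    "aoi":       ["eyes", "mouth", "eyes", "mouth"],
    "duration_ms": [420, 180, 300, 310],
})

# Proportion of total fixation time spent on each AOI within each condition
totals = fixations.groupby("condition")["duration_ms"].transform("sum")
fixations["proportion"] = fixations["duration_ms"] / totals
print(fixations.pivot(index="condition", columns="aoi", values="proportion"))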

2

Vallverdú, Jordi, Gabriele Trovato, and Lorenzo Jamone. "Allocentric Emotional Affordances in HRI: The Multimodal Binding." Multimodal Technologies and Interaction 2, no. 4 (November 6, 2018): 78. http://dx.doi.org/10.3390/mti2040078.

Full text
Abstract:
The concept of affordance perception is one of the distinctive traits of human cognition, and its application to robots can dramatically improve the quality of human-robot interaction (HRI). In this paper we explore and discuss the idea of “emotional affordances” by proposing a viable model for implementation into HRI, which considers allocentric and multimodal perception. We consider “2-ways” affordances: a perceived object triggering an emotion, and a perceived human emotion expression triggering an action. In order to make the implementation generic, the proposed model includes a library that can be customised depending on the specific robot and application scenario. We present the AAA (Affordance-Appraisal-Arousal) model, which incorporates Plutchik’s Wheel of Emotions, and we outline some numerical examples of how it can be used in different scenarios.

3

Shackman, Jessica E., and Seth D. Pollak. "Experiential Influences on Multimodal Perception of Emotion." Child Development 76, no. 5 (September 2005): 1116–26. http://dx.doi.org/10.1111/j.1467-8624.2005.00901.x.

Full text

4

Barabanschikov, V. A., and E. V. Suvorova. "Gender Differences in the Recognition of Emotional States." Психологическая наука и образование 26, no. 6 (2021): 107–16. http://dx.doi.org/10.17759/pse.2021260608.

Full text
Abstract:
As a rule, gender differences in the perception of human emotional states are studied on the basis of static pictures of faces, gestures or poses. The dynamics and multiplicity of emotion expression remain in the "blind zone". This work is aimed at finding relationships in the perception of the procedural characteristics of emotion expression. The influence of gender and age on the identification of human emotional states is experimentally investigated in ecologically and socially valid situations. The experiments were based on the Russian-language version of the Geneva Emotion Recognition Test (GERT). 83 audio-video clips of fourteen emotional states expressed by ten specially trained professional actors (five men and five women, average age 37 years) were randomly demonstrated to Russian participants (48 women and 48 men, Europeans, ages ranging from 20 to 62 years, with a mean age of 34, SD = 9.4). It is shown that women recognize multimodal dynamic emotions more accurately, especially those expressed by women. Gender and age differences in identification accuracy are statistically significant for five emotions: joy, amusement, irritation, anger, and surprise. On women's faces, joy, surprise, irritation and anger are more accurately recognized by women over 35 years of age (p < 0.05). On male faces, surprise is less accurately recognized by men under 35 (p < 0.05), and amusement, irritation and anger by men over 35 (p < 0.05). The gender factor in the perception of multimodal dynamic expressions of emotional states acts as a system of determinants that changes its characteristics depending on the specific communicative situation.

5

Yamauchi, Takashi, Jinsil Seo, and Annie Sungkajun. "Interactive Plants: Multisensory Visual-Tactile Interaction Enhances Emotional Experience." Mathematics 6, no. 11 (October 29, 2018): 225. http://dx.doi.org/10.3390/math6110225.

Full text
Abstract:
Using a multisensory interface system, we examined how people’s emotional experiences change as their tactile sense (touching a plant) was augmented with visual sense (“seeing” their touch). Our system (the Interactive Plant system) senses the electrical capacitance of the human body and visualizes users’ tactile information on a flat screen (when the touch is gentle, the program draws small and thin roots around the pot; when the touch is more harsh or abrupt, big and thick roots are displayed). We contrasted this multimodal combination (touch + vision) with a unimodal interface (touch only or watch only) and measured the impact of the multimodal interaction on participants’ emotion. We found significant emotional gains in the multimodal interaction. Participants’ self-reported positive affect, joviality, attentiveness and self-assurance increased dramatically in multimodal interaction relative to unimodal interaction; participants’ electrodermal activity (EDA) increased in the multimodal condition, suggesting that our plant-based multisensory visual-tactile interaction raised arousal. We suggest that plant-based tactile interfaces are advantageous for emotion generation because haptic perception is by nature embodied and emotional.

6

Vani Vivekanand, Chettiyar. "Performance Analysis of Emotion Classification Using Multimodal Fusion Technique." Journal of Computational Science and Intelligent Technologies 2, no. 1 (April 16, 2021): 14–20. http://dx.doi.org/10.53409/mnaa/jcsit/2103.

Full text
Abstract:
As the central processing unit of the human body, the human brain is in charge of several activities, including cognition, perception, emotion, attention, action, and memory. Emotions have a significant impact on human well-being in their life. Methodologies for accessing emotions of human could be essential for good user-machine interactions. Comprehending BCI (Brain-Computer Interface) strategies for identifying emotions can also help people connect with the world more naturally. Many approaches for identifying human emotions have been developed using signals of EEG for classifying happy, neutral, sad, and angry emotions, discovered to be effective. The emotions are elicited by various methods, including displaying participants visuals of happy and sad facial expressions, listening to emotionally linked music, visuals, and, sometimes, both of these. In this research, a multi-model fusion approach for emotion classification utilizing BCI and EEG data with various classifiers was proposed. The 10-20 electrode setup was used to gather the EEG data. The emotions were classified using the sentimental analysis technique based on user ratings. Simultaneously, Natural Language Processing (NLP) is implemented for increasing accuracy. This analysis classified the assessment parameters as happy, neutral, sad, and angry emotions. Based on these emotions, the proposed model’s performance was assessed in terms of accuracy and overall accuracy. The proposed model has a 93.33 percent overall accuracy and increased performance in all emotions identified.
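
A minimal, hypothetical sketch of decision-level fusion in the spirit of this abstract follows: one classifier is trained on EEG features, another on rating/text-derived features, and their class probabilities are averaged. The features and labels are synthetic placeholders, not the paper's data or pipeline.

# Minimal decision-level fusion sketch with synthetic data (not the paper's method).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200
eeg_features = rng.normal(size=(n, 32))      # e.g. band power per EEG channel
text_features = rng.normal(size=(n, 10))     # e.g. sentiment scores of user ratings
labels = rng.integers(0, 4, size=n)          # 0=happy, 1=neutral, 2=sad, 3=angry

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

eeg_clf = RandomForestClassifier(random_state=0).fit(eeg_features[idx_train], labels[idx_train])
txt_clf = LogisticRegression(max_iter=1000).fit(text_features[idx_train], labels[idx_train])

# Decision-level fusion: average the posterior probabilities of both models
fused = (eeg_clf.predict_proba(eeg_features[idx_test]) +
         txt_clf.predict_proba(text_features[idx_test])) / 2
print("fused accuracy:", accuracy_score(labels[idx_test], fused.argmax(axis=1)))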

7

Lavan, Nadine, and Carolyn McGettigan. "Increased Discriminability of Authenticity from Multimodal Laughter is Driven by Auditory Information." Quarterly Journal of Experimental Psychology 70, no. 10 (October 2017): 2159–68. http://dx.doi.org/10.1080/17470218.2016.1226370.

Full text
Abstract:
We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.

8

Portnova, Galina, Aleksandra Maslennikova, Natalya Zakharova, and Olga Martynova. "The Deficit of Multimodal Perception of Congruent and Non-Congruent Fearful Expressions in Patients with Schizophrenia: The ERP Study." Brain Sciences 11, no. 1 (January 13, 2021): 96. http://dx.doi.org/10.3390/brainsci11010096.

Full text
Abstract:
Emotional dysfunction, including flat affect and emotional perception deficits, is a specific symptom of schizophrenia disorder. We used a modified multimodal odd-ball paradigm with fearful facial expressions accompanied by congruent and non-congruent emotional vocalizations (sounds of women screaming and laughing) to investigate the impairment of emotional perception and reactions to other people’s emotions in schizophrenia. We compared subjective ratings of emotional state and event-related potentials (ERPs) in response to congruent and non-congruent stimuli in patients with schizophrenia and healthy controls. The results showed the altered multimodal perception of fearful stimuli in patients with schizophrenia. The amplitude of N50 was significantly higher for non-congruent stimuli than congruent ones in the control group and did not differ in patients. The P100 and N200 amplitudes were higher in response to non-congruent stimuli in patients than in controls, implying impaired sensory gating in schizophrenia. The observed decrease of P3a and P3b amplitudes in patients could be associated with less attention, less emotional arousal, or incorrect interpretation of emotional valence, as patients differed from healthy controls in the emotion scores of non-congruent stimuli. The difficulties in identifying the incoherence of facial and auditory components of emotional expression could be significant in understanding the psychopathology of schizophrenia.
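
For readers unfamiliar with how component amplitudes such as N50 or P100 are typically compared, here is a schematic Python example on synthetic epochs; the latency window, sampling rate, and trial counts are illustrative assumptions, not the study's parameters.

# Schematic example (synthetic data, not the study's EEG): compare the mean
# amplitude of an ERP component between congruent and non-congruent trials.
import numpy as np

sfreq = 500                                   # sampling rate in Hz
times = np.arange(-0.1, 0.6, 1 / sfreq)       # epoch from -100 ms to 600 ms
rng = np.random.default_rng(1)
congruent = rng.normal(size=(40, times.size))       # 40 trials x samples
non_congruent = rng.normal(size=(40, times.size))

def mean_amplitude(epochs, t_min, t_max):
    """Average over trials, then over the chosen latency window (in seconds)."""
    erp = epochs.mean(axis=0)
    window = (times >= t_min) & (times <= t_max)
    return erp[window].mean()

# e.g. an N50-like window at 40-60 ms after stimulus onset
print("congruent:", mean_amplitude(congruent, 0.04, 0.06))
print("non-congruent:", mean_amplitude(non_congruent, 0.04, 0.06))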

9

Mittal, Trisha, Aniket Bera, and Dinesh Manocha. "Multimodal and Context-Aware Emotion Perception Model With Multiplicative Fusion." IEEE MultiMedia 28, no. 2 (April 1, 2021): 67–75. http://dx.doi.org/10.1109/mmul.2021.3068387.

Full text

10

Montembeault, Maxime, Estefania Brando, Kim Charest, Alexandra Tremblay, Élaine Roger, Pierre Duquette, and Isabelle Rouleau. "Multimodal emotion perception in young and elderly patients with multiple sclerosis." Multiple Sclerosis and Related Disorders 58 (February 2022): 103478. http://dx.doi.org/10.1016/j.msard.2021.103478.

Full text

Theses on the topic "Multimodal perception of emotion"

1

Cox, A. G. "Multimodal emotion perception from facial and vocal signals." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598105.

Full text
Abstract:
The perception of emotion in other people is a fundamental part of social communication. Emotional expressions are often multimodal in nature and like human speech both auditory and visual components are used for comprehension. Up to this date however, the majority of emotion research has focused on the perception of emotion from facial or vocal expressions in isolation. This thesis investigated the behavioural and neural consequences of perceiving emotion from facial and vocal emotional signals simultaneously. Initial experiments demonstrated that a congruent, but unattended, vocal expression produced faster emotion-categorisation decisions to facial expressions, relative to incongruent or neutral voices. Similarly, simultaneously presented facial expressions had the same effect on the categorisation of vocal expressions. Subsequent experiments showed that other pairings of emotional stimuli (vocal expressions and emotion pictures; facial expressions and emotion pictures) did not have bi-directional effects on each other, but rather asymmetric effects that were consistent with interactions between these stimuli at post-perceptual stages of processing. Facial and vocal signals are naturalistic pairings, and evidence that these signals are integrated at a ‘perceptual’ level was provided by a final experiment using functional magnetic resonance imaging. Congruent facial-vocal pairings produced enhanced activity in the superior temporal sulcus; a region implicated in cross-modal integration of sensory inputs. The data from this thesis suggest that facial and vocal signals of emotion are automatically integrated at a perceptual processing stage to create a single unified percept to facilitate social communication.
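
The congruency effect described here is usually quantified as the reaction-time difference between incongruent and congruent pairings; the short sketch below illustrates that computation on synthetic data (the means and sample size are invented, not taken from the thesis).

# Hedged sketch of a congruency analysis on synthetic reaction times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rt_congruent = rng.normal(loc=620, scale=40, size=30)     # ms, one value per participant
rt_incongruent = rng.normal(loc=660, scale=40, size=30)

effect = rt_incongruent - rt_congruent                    # congruency effect per participant
t, p = stats.ttest_rel(rt_incongruent, rt_congruent)
print(f"mean congruency effect = {effect.mean():.1f} ms, t = {t:.2f}, p = {p:.3f}")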

2

Realdon, Olivia. "Differenze culturali nella percezione multimodale delle emozioni." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/37944.

Full text
Abstract:
The research question in the present study concerns how culture shapes the way in which simultaneous facial and vocalization cues are combined in emotion perception. The matter is not whether culture influences such a process: cultures supply systems of meaning that make salient different core emotional themes, different sets of emotions, their ostensible expression, and action tendencies. Therefore, research doesn’t regard whether, but how and at what level of analysis culture shapes these processes (Matsumoto, 2001). Cultural variability was tested within the methodological framework of cultural priming studies (Matsumoto & Yoo, 2006). In such a methodological option culture is not viewed as consensual, enduring, and context-general, but as fragmented, fluctuating, and context-specific (situated cognition model; Oyserman & Sorensen, 2009). Bicultural individuals that, through enduring exposure to at least two cultures, possess systems of meaning and practices of both cultures, can therefore switch between such cultural orientations alternating them depending on the cultural cues (cultural primers) available in the immediate context (cultural frame switching; Hong et al. 2000). The present research investigated cultural differences in the way visual and auditory cues of fear and disgust are combined in emotion perception by Italian-Japanese biculturals primed with Japanese and Italian cultural cues. Bicultural participants were randomly assigned to Italian or Japanese priming conditions and were shown dynamic faces and vocalizations expressing either congruent (i.e., fear-fear) or incongruent (i.e. fear-disgust) emotion and were asked to identify the emotion expressed ignoring the one or the other modality (cross-modal bias paradigm; Bertelson & de Gelder, 2004). The effect of to-be-ignored vocalization cues was larger for participants in the Japanese priming condition, while the effect of to-be-ignored dynamic face cues was larger for participants in the Italian priming condition. This pattern of results was investigated also within current perspectives on embodied cognition, that, regarding emotion perception, assume that perceivers subtly mimic a target’s facial expression, so that contractions in the perceiver’s face generate an afferent muscular feedback from the face to the brain, leading the perceiver to use this feedback to reproduce and thus understand the perceived expressions (Barsalou, 2009; Niedenthal, 2007). In other words, mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. A mimicry-interfering (with the facial expressions of fear and disgust; Oberman, Winkielman & Ramachandran, 2007) manipulation with bicultural participants performing the same task described above generated no cultural differences in the effect of to-be-ignored vocalizations, showing that the interference effect of vocalizations on faces turns out to be larger for participants in the Italian priming condition. Altogether, these results can be interpreted within the cultural syndromes highlighting the independent vs. interdependent and socially embedded nature of self, providing meaning systems that encourage and make available a different weighting of nonverbal cues in emotion perception depending on their relying, respectively, on more (or less) face exposure (meant as individual exposure) in modulating social relationships and less (or more) vocal exposure (more subtle and time-dependent than the face) in order to enhance individual standing and autonomy (vs. establish and maintain social harmony and interpersonal respect). Current perspectives sketching how human cognitive functioning works through a situated (Mesquita, Barrett, & Smith, 2010) and embodied (simulative) mind (Barsalou, 2009), and their implications in emotion perception are briefly described as the theoretical framework guiding the research question addressed in the empirical contribution.

3

ur Réhman, Shafiq. "Expressing emotions through vibration for perception and control." Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32990.

Full text
Abstract:
This thesis addresses a challenging problem: “how to let the visually impaired ‘see’ others emotions”. We, human beings, are heavily dependent on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved etc. People use emotional information from facial expressions to switch between conversation topics and to determine attitudes of individuals. Missing emotional information from facial expressions and head gestures makes the visually impaired extremely difficult to interact with others in social events. To enhance the visually impaired’s social interactive ability, in this thesis we have been working on the scientific topic of ‘expressing human emotions through vibrotactile patterns’. It is quite challenging to deliver human emotions through touch since our touch channel is very limited. We first investigated how to render emotions through a vibrator. We developed a real time “lipless” tracking system to extract dynamic emotions from the mouth and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later on, we extended the system to render more general dynamic media signals: for example, render live football games through vibration in the mobile for improving mobile user communication and entertainment experience. To display more natural emotions (i.e. emotion type plus emotion intensity), we developed the technology to enable the visually impaired to directly interpret human emotions. This was achieved by use of machine vision techniques and vibrotactile display. The display is comprised of a ‘vibration actuators matrix’ mounted on the back of a chair and the actuators are sequentially activated to provide dynamic emotional information. The research focus has been on finding a global, analytical, and semantic representation for facial expressions to replace state of the art facial action coding systems (FACS) approach. We proposed to use the manifold of facial expressions to characterize dynamic emotions. The basic emotional expressions with increasing intensity become curves on the manifold extended from the center. The blends of emotions lie between those curves, which could be defined analytically by the positions of the main curves. The manifold is the “Braille Code” of emotions. The developed methodology and technology has been extended for building assistive wheelchair systems to aid a specific group of disabled people, cerebral palsy or stroke patients (i.e. lacking fine motor control skills), who don’t have ability to access and control the wheelchair with conventional means, such as joystick or chin stick. The solution is to extract the manifold of the head or the tongue gestures for controlling the wheelchair. The manifold is rendered by a 2D vibration array to provide user of the wheelchair with action information from gestures and system status information, which is very important in enhancing usability of such an assistive system. Current research work not only provides a foundation stone for vibrotactile rendering system based on object localization but also a concrete step to a new dimension of human-machine interaction.
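
One simple way to picture the mapping from an emotion type and intensity to a vibrotactile pattern is sketched below. The 4x4 actuator grid, the base patterns, and the amplitude scaling are illustrative assumptions, not the display built in the thesis.

# Illustrative sketch only: map an (emotion, intensity) pair onto a small
# vibration-actuator matrix. Layout and patterns are hypothetical.
import numpy as np

ROWS, COLS = 4, 4                      # hypothetical 4x4 actuator grid

# Hypothetical base patterns: which actuators characterise each emotion
BASE_PATTERNS = {
    "happiness": np.eye(ROWS, COLS),                   # diagonal sweep
    "anger":     np.ones((ROWS, COLS)),                # whole back buzzes
    "sadness":   np.pad(np.ones((1, COLS)), ((ROWS - 1, 0), (0, 0))),  # bottom row only
}

def vibration_pattern(emotion: str, intensity: float) -> np.ndarray:
    """Scale the emotion's base pattern by intensity (0..1) to drive amplitudes."""
    pattern = BASE_PATTERNS[emotion]
    return np.clip(intensity, 0.0, 1.0) * pattern

print(vibration_pattern("sadness", 0.7))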

4

Fernández Carbonell, Marcos. "Automated Multimodal Emotion Recognition." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-282534.

Full text
Abstract:
Being able to read and interpret affective states plays a significant role in human society. However, this is difficult in some situations, especially when information is limited to either vocal or visual cues. Many researchers have investigated the so-called basic emotions in a supervised way. This thesis holds the results of a multimodal supervised and unsupervised study of a more realistic number of emotions. To that end, audio and video features are extracted from the GEMEP dataset employing openSMILE and OpenFace, respectively. The supervised approach includes the comparison of multiple solutions and proves that multimodal pipelines can outperform unimodal ones, even with a higher number of affective states. The unsupervised approach embraces a traditional and an exploratory method to find meaningful patterns in the multimodal dataset. It also contains an innovative procedure to better understand the output of clustering techniques.
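
A rough sketch of the supervised unimodal-versus-multimodal comparison follows, assuming audio and video descriptors have already been exported by tools such as openSMILE and OpenFace; the arrays below are random stand-ins rather than GEMEP features, and the feature sizes are assumptions.

# Sketch: compare unimodal and feature-level multimodal classification.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_clips = 120
audio = rng.normal(size=(n_clips, 88))      # e.g. eGeMAPS-sized audio descriptors
video = rng.normal(size=(n_clips, 49))      # e.g. action-unit / gaze descriptors
labels = np.repeat(np.arange(12), 10)       # 12 affective states, 10 clips each

clf = make_pipeline(StandardScaler(), SVC())
for name, features in [("audio", audio), ("video", video),
                       ("audio+video", np.hstack([audio, video]))]:
    score = cross_val_score(clf, features, labels, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.2f}")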

5

Nguyen, Tien Dung. "Multimodal emotion recognition using deep learning techniques." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/180753/1/Tien%20Dung_Nguyen_Thesis.pdf.

Full text
Abstract:
This thesis investigates the use of deep learning techniques to address the problem of machine understanding of human affective behaviour and improve the accuracy of both unimodal and multimodal human emotion recognition. The objective was to explore how best to configure deep learning networks to capture individually and jointly, the key features contributing to human emotions from three modalities (speech, face, and bodily movements) to accurately classify the expressed human emotion. The outcome of the research should be useful for several applications including the design of social robots.

6

Gay, R. "Morality: Emotion, perception and belief." Thesis, University of Oxford, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371649.

Full text

7

Lawrie, Louisa. "Adult ageing and emotion perception." Thesis, University of Aberdeen, 2018. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=239235.

Full text
Abstract:
Older adults are worse than young adults at perceiving emotions in others. However, it is unclear why these age-related differences in emotion perception exist. The studies presented in this thesis investigated the cognitive, emotional and motivational factors influencing age differences in emotion perception. Study 1 revealed no age differences in mood congruence effects: sad faces were rated as more sad when participants experienced negative mood. In contrast, Study 2 demonstrated that sad mood impaired recognition accuracy for sad faces. Together, findings suggested that different methods of assessing emotion perception engage the use of discrete processing strategies. These mood influences on emotion perception are similar in young and older adults. Studies 3 and 4 investigated age differences in emotion perception tasks which are more realistic and contextualised than still photographs of facial expressions. Older adults were worse than young at recognising emotions from silent dynamic displays; however, older adults outperformed young in a film task that displayed emotional information in multiple modalities (Study 3). Study 4 suggested that the provision of vocal information was particularly beneficial to older adults. Furthermore, vocabulary mediated the relationship between age and performance on the contextual film task. However, age-related deficits in decoding basic emotions were established in a separate multi-modal video-based task. In addition, age differences in the perception of neutral expressions were also examined. Neutral expressions were interpreted as displaying positive emotions by older adults. Using a dual-task paradigm, Study 5 suggested that working memory processes are involved in decoding emotions. However, age-related declines in working memory were not driving age effects in emotion perception. Neuropsychological, motivational and cognitive explanations for these results are evaluated. Implications of these findings for older adults' social functioning are discussed.

8

Abrilian, Sarkis. "Représentation de comportements emotionnels multimodaux spontanés : perception, annotation et synthèse." PhD thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00620827.

Full text
Abstract:
The objective of this thesis is to represent spontaneous emotions and the associated multimodal signs in order to contribute to the design of future interactive affective systems. Current prototypes are generally limited to the detection and generation of a few simple emotions and rely on audio or video data acted out by performers and collected in the laboratory. In order to model the complex relationships between spontaneous emotions and their expression in different modalities, an exploratory approach is needed. The exploratory approach chosen in this thesis for studying these spontaneous emotions consists in collecting and annotating a video corpus of television interviews. This type of corpus contains emotions that are more complex than the six basic emotions (anger, fear, joy, sadness, surprise, disgust). Indeed, spontaneous emotional behaviour shows superpositions, masking, and conflicts between positive and negative emotions. We report several experiments that allowed the definition of several levels of emotion representation and of multimodal behavioural parameters providing relevant information for the perception of these complex spontaneous emotions. Looking ahead, the tools developed during this thesis (annotation schemas, measurement programs, annotation protocols) can later be used to design models usable by interactive affective systems capable of detecting/synthesizing multimodal expressions of spontaneous emotions.

9

Lim, Angelica. "MEI: Multimodal Emotional Intelligence." 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188869.

Full text

10

Kosti, Ronak. "Visual scene context in emotion perception." Doctoral thesis, Universitat Oberta de Catalunya, 2019. http://hdl.handle.net/10803/667808.

Full text
Abstract:
Psychological studies show that the context of a setting, in addition to facial expression and body language, lends important information that conditions our perception of people's emotions. However, context's processing in the case of automatic emotion recognition has not been explored in depth, partly due to the lack of sufficient data. In this thesis we present EMOTIC, a dataset of images of people in various natural scenarios annotated with their apparent emotion. The EMOTIC database combines two different types of emotion representation: (1) a set of 26 emotion categories, and (2) the continuous dimensions of valence, arousal and dominance. We also present a detailed statistical and algorithmic analysis of the dataset along with the annotators' agreement analysis. CNN models are trained using EMOTIC, combining a person's features with those of the setting (context). Our results not only show how the context of a setting contributes important information for automatically recognizing emotional states but also promote further research in this direction.
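
The two-branch idea (person features plus scene context, with categorical and continuous outputs) can be sketched in a few lines of PyTorch. This is a simplified stand-in under assumed layer sizes, not the architecture trained in the thesis.

# Hedged PyTorch sketch of a person-plus-context emotion model.
import torch
import torch.nn as nn

class TwoBranchEmotionNet(nn.Module):
    def __init__(self, n_categories=26, n_dimensions=3):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.person_branch = branch()    # crop of the person
        self.context_branch = branch()   # whole image (scene context)
        self.category_head = nn.Linear(64, n_categories)   # 26 emotion categories
        self.dimension_head = nn.Linear(64, n_dimensions)  # valence, arousal, dominance

    def forward(self, person_crop, whole_image):
        fused = torch.cat([self.person_branch(person_crop),
                           self.context_branch(whole_image)], dim=1)
        return self.category_head(fused), self.dimension_head(fused)

model = TwoBranchEmotionNet()
cats, dims = model(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128))
print(cats.shape, dims.shape)   # torch.Size([2, 26]) torch.Size([2, 3])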

Books on the topic "Multimodal perception of emotion"

1

André, Elisabeth, Laila Dybkjær, Wolfgang Minker, Heiko Neumann, Roberto Pieraccini, and Michael Weber, eds. Perception in Multimodal Dialogue Systems. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-69369-7.

Full text

2

Stiefelhagen, Rainer, Rachel Bowers, and Jonathan Fiscus, eds. Multimodal Technologies for Perception of Humans. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-68585-2.

Full text

3

Stiefelhagen, Rainer, and John Garofolo, eds. Multimodal Technologies for Perception of Humans. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-69568-4.

Full text

4

Zimmer, H. D., ed. Human memory: A multimodal approach. Seattle: Hogrefe & Huber Publishers, 1994.

Find full text

5

McDonald, P. Vernon, Jacob Bloomberg, and Lyndon B. Johnson Space Center, eds. Multimodal perception and multicriterion control of nested systems. Houston, Tex.: National Aeronautics and Space Administration, Lyndon B. Johnson Space Center, 1999.

Find full text

6

Riccio, Gary E. Multimodal perception and multicriterion control of nested systems. Houston, Tex.: National Aeronautics and Space Administration, Lyndon B. Johnson Space Center, 1999.

Find full text

7

Riccio, Gary E. Multimodal perception and multicriterion control of nested systems. Washington, D.C.: National Aeronautics and Space Administration, 1998.

Find full text

8

Badenhop, Dennis. Praktische Anschauung: Sinneswahrnehmung, Emotion und moralisches Begründen. Freiburg: Verlag Karl Alber, 2015.

Find full text

9

Seymour, Julie, Abigail Hackett, and Lisa Procter. Children's spatialities: Embodiment, emotion and agency. Houndmills, Basingstoke, Hampshire: Palgrave Macmillan, 2015.

Find full text

10

Ellis, Ralph D., and Natika Newton, eds. Consciousness & emotion: Agency, conscious choice, and selective perception. Amsterdam: John Benjamins Pub., 2004.

Find full text

Book chapters on the topic "Multimodal perception of emotion"

1

Esposito, Anna, Domenico Carbone, and Maria Teresa Riviello. "Visual Context Effects on the Perception of Musical Emotional Expressions." In Biometric ID Management and Multimodal Communication, 73–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04391-8_10.

Full text

2

Li, Aijun. "Perception of Multimodal Emotional Expressions By Japanese and Chinese." In Encoding and Decoding of Emotional Speech, 33–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-47691-8_2.

Full text

3

Hunter, Patrick G., and E. Glenn Schellenberg. "Music and Emotion." In Music Perception, 129–64. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/978-1-4419-6114-3_5.

Full text

4

Bratton, John, Peter Sawchuk, Carolyn Forshaw, Militza Callinan, and Martin Corbett. "Perception and emotion." In Work and Organizational Behaviour, 128–58. London: Macmillan Education UK, 2010. http://dx.doi.org/10.1007/978-0-230-36602-2_5.

Full text

5

Kotsia, Irene, Stefanos Zafeiriou, George Goudelis, Ioannis Patras, and Kostas Karpouzis. "Multimodal Sensing in Affective Gaming." In Emotion in Games, 59–84. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41316-7_4.

Full text

6

Vasilescu, Ioana. "Emotion Perception and Recognition." In Emotion-Oriented Systems, 191–213. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118601938.ch7.

Full text

7

Wagner, Johannes, Florian Lingenfelser, and Elisabeth André. "Building a Robust System for Multimodal Emotion Recognition." In Emotion Recognition, 379–410. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2015. http://dx.doi.org/10.1002/9781118910566.ch15.

Full text

8

Kenemans, Leon, and Nick Ramsey. "Perception, Attention and Emotion." In Psychology in the Brain, 178–98. London: Macmillan Education UK, 2013. http://dx.doi.org/10.1007/978-1-137-29614-6_8.

Full text

9

Kret, Mariska E., Charlotte B. A. Sinke, and Beatrice de Gelder. "Emotion Perception and Health." In Emotion Regulation and Well-Being, 261–80. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/978-1-4419-6953-8_16.

Full text

10

Chen, Fang, Jianlong Zhou, Yang Wang, Kun Yu, Syed Z. Arshad, Ahmad Khawaji, and Dan Conway. "Emotion and Cognitive Load." In Robust Multimodal Cognitive Load Measurement, 173–83. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-31700-7_11.

Full text

Conference papers on the topic "Multimodal perception of emotion"

1

Zhang, Biqiao, Georg Essl, and Emily Mower Provost. "Predicting the distribution of emotion perception: capturing inter-rater variability." In ICMI '17: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3136755.3136792.

Full text

2

Zhang, Yue, Wanying Ding, Ran Xu, and Xiaohua Hu. "Visual Emotion Representation Learning via Emotion-Aware Pre-training." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/234.

Full text
Abstract:
Despite recent progress in deep learning, visual emotion recognition remains a challenging problem due to ambiguity of emotion perception, diverse concepts related to visual emotion and lack of large-scale annotated dataset. In this paper, we present a large-scale multimodal pre-training method to learn visual emotion representation by aligning emotion, object, attribute triplet with a contrastive loss. We conduct our pre-training on a large web dataset with noisy tags and fine-tune on visual emotion classification datasets. Our method achieves state-of-the-art performance for visual emotion classification.
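
The contrastive alignment of image embeddings with embeddings of their emotion, object, attribute description can be sketched as a symmetric InfoNCE-style loss. The encoders are left abstract and the tensors below are random placeholders, so this is only a schematic of the general technique, not the paper's implementation.

# Symmetric contrastive (InfoNCE-style) alignment loss sketch.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(image_emb, text_emb, temperature=0.07):
    """Matched image/text pairs sit on the diagonal of the similarity matrix."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(image_emb.size(0))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

image_emb = torch.randn(8, 256)   # from an image encoder
text_emb = torch.randn(8, 256)    # from encoding "emotion, object, attribute" triplets
print(contrastive_alignment_loss(image_emb, text_emb))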

3

Ghaleb, Esam, Mirela Popa, and Stylianos Asteriadis. "Multimodal and Temporal Perception of Audio-visual Cues for Emotion Recognition." In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 2019. http://dx.doi.org/10.1109/acii.2019.8925444.

Full text

4

Chen, Yishan, Zhiyang Jia, Kaoru Hirota, and Yaping Dai. "A Multimodal Emotion Perception Model based on Context-Aware Decision-Level Fusion." In 2022 41st Chinese Control Conference (CCC). IEEE, 2022. http://dx.doi.org/10.23919/ccc55666.2022.9902799.

Full text

5

Nivedhan, Abilash, Line Ahm Mielby, and Qian Janice Wang. "The Influence of Emotion-Oriented Extrinsic Visual and Auditory Cues on Coffee Perception: A Virtual Reality Experiment." In ICMI '20: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3395035.3425646.

Full text

6

Su, Qi, Fei Chen, Hanfei Li, Nan Yan, and Lan Wang. "Multimodal Emotion Perception in Children with Autism Spectrum Disorder by Eye Tracking Study." In 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES). IEEE, 2018. http://dx.doi.org/10.1109/iecbes.2018.8626642.

Full text

7

Dussard, Claire, Anahita Basirat, Nacim Betrouni, Caroline Moreau, David Devos, François Cabestaing, and José Rouillard. "Preliminary Study of the Perception of Emotions Expressed by Virtual Agents in the Context of Parkinson's Disease." In ICMI '20: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3395035.3425219.

Full text

8

Huang, Kunpeng. "Multimodal detecting and analytic system for visual perception and emotional response." In 2017 IEEE Integrated STEM Education Conference (ISEC). IEEE, 2017. http://dx.doi.org/10.1109/isecon.2017.7910224.

Full text

9

Ranasinghe, Nimesha, Meetha Nesam James, Michael Gecawicz, Jonathan Bland, and David Smith. "Influence of Electric Taste, Smell, Color, and Thermal Sensory Modalities on the Liking and Mediated Emotions of Virtual Flavor Perception." In ICMI '20: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3382507.3418862.

Full text

10

Hong, Alexander, Yuma Tsuboi, Goldie Nejat, and Beno Benhabib. "Multimodal Affect Recognition for Assistive Human-Robot Interactions." In 2017 Design of Medical Devices Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/dmd2017-3332.

Full text
Abstract:
Socially assistive robots can provide cognitive assistance with activities of daily living, and promote social interactions to those suffering from cognitive impairments and/or social disorders. They can be used as aids for a number of different populations including those living with dementia or autism spectrum disorder, and for stroke patients during post-stroke rehabilitation [1]. Our research focuses on developing socially assistive intelligent robots capable of partaking in natural human-robot interactions (HRI). In particular, we have been working on the emotional aspects of the interactions to provide engaging settings, which in turn lead to better acceptance by the intended users. Herein, we present a novel multimodal affect recognition system for the robot Luke, Fig. 1(a), to engage in emotional assistive interactions. Current multimodal affect recognition systems mainly focus on inputs from facial expressions and vocal intonation [2], [3]. Body language has also been used to determine human affect during social interactions, but has yet to be explored in the development of multimodal recognition systems. Body language has been strongly correlated to vocal intonation [4]. The combined modalities provide emotional information due to the temporal development underlying the neural interaction in audiovisual perception [5]. In this paper, we present a novel multimodal recognition system that uniquely combines inputs from both body language and vocal intonation in order to autonomously determine user affect during assistive HRI.
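
Decision-level fusion of the two modality classifiers' outputs can be as simple as multiplying their class posteriors and renormalizing; the numbers below are assumed for illustration and do not come from the system described.

# Toy sketch: multiplicative decision-level fusion of two class posteriors.
import numpy as np

labels = ["happy", "sad", "angry", "neutral"]
p_body = np.array([0.50, 0.10, 0.30, 0.10])    # posterior from a body-language model
p_voice = np.array([0.35, 0.05, 0.45, 0.15])   # posterior from a vocal-intonation model

fused = p_body * p_voice            # multiplicative decision-level fusion
fused /= fused.sum()                # renormalise to a probability distribution
print(labels[int(np.argmax(fused))], fused.round(3))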

Reports of organizations on the topic "Multimodal perception of emotion"

1

Maia, Maercio, Abrahão Baptista, Patricia Vanzella, Pedro Montoya, and Henrique Lima. Neural correlates of the perception of emotions elicited by dance movements. A scope review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, February 2023. http://dx.doi.org/10.37766/inplasy2023.2.0086.

Full text
Abstract:
Review question / Objective: The main question of the study is "how do dance neuroscience studies define and assess emotions?" The main objective is to establish, through the available literature, a scientific overview of studies in dance neuroscience that address the perception of emotions in the context of neuroaesthetics. Specifically, the review is expected to verify whether there is methodological homogeneity in studies involving the evaluation of emotions within the context of dance neuroscience; whether the definition of emotion is shared across these studies; and, furthermore, whether, in multimodal studies in which dance and music are concomitantly present, there is any form of distinction between the contribution of each language to the perception of emotions evoked by the stimulus.

2

Rúas-Araújo, J., M. I. Punín Larrea, H. Gómez Alvarado, P. Cuesta-Morales, and S. Ratté. Neuroscience applied to perception analysis: Heart and emotion when listening to Ecuador’s national anthem. Revista Latina de Comunicación Social, June 2015. http://dx.doi.org/10.4185/rlcs-2015-1052en.

Full text

3

Balali, Vahid, Arash Tavakoli, and Arsalan Heydarian. A Multimodal Approach for Monitoring Driving Behavior and Emotions. Mineta Transportation Institute, July 2020. http://dx.doi.org/10.31979/mti.2020.1928.

Full text
Abstract:
Studies have indicated that emotions can be significantly influenced by environmental factors; these factors can also significantly influence drivers’ emotional state and, accordingly, their driving behavior. Furthermore, as the demand for autonomous vehicles is expected to significantly increase within the next decade, a proper understanding of drivers’/passengers’ emotions, behavior, and preferences will be needed in order to create an acceptable level of trust with humans. This paper proposes a novel semi-automated approach for understanding the effect of environmental factors on drivers’ emotions and behavioral changes through a naturalistic driving study. This setup includes a frontal road and facial camera, a smart watch for tracking physiological measurements, and a Controller Area Network (CAN) serial data logger. The results suggest that the driver’s affect is highly influenced by the type of road and the weather conditions, which have the potential to change driving behaviors. For instance, when emotional metrics are defined as valence and engagement, the results reveal significant differences in human emotion across weather conditions and road types. Participants’ engagement was higher in rainy and clear weather compared to cloudy weather. Moreover, engagement was higher on city streets and highways compared to one-lane roads and two-lane highways.
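
The comparison reported here (differences in an affect metric across weather conditions) is commonly run as a one-way ANOVA; the sketch below uses synthetic valence scores rather than the study's data, and the group means and sizes are invented.

# Hedged illustration: one-way ANOVA on a driver-affect metric across conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
valence_clear = rng.normal(loc=0.30, scale=0.10, size=25)
valence_rainy = rng.normal(loc=0.28, scale=0.10, size=25)
valence_cloudy = rng.normal(loc=0.20, scale=0.10, size=25)

f_stat, p_value = stats.f_oneway(valence_clear, valence_rainy, valence_cloudy)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")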