To see the other types of publications on this topic, follow the link: Multimodal perception of emotion.

Journal articles on the topic "Multimodal perception of emotion"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source type:

Consult the top 50 journal articles for your research on the topic "Multimodal perception of emotion".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

de Boer, Minke J., Deniz Başkent, and Frans W. Cornelissen. "Eyes on Emotion: Dynamic Gaze Allocation During Emotion Perception From Speech-Like Stimuli." Multisensory Research 34, no. 1 (July 7, 2020): 17–47. http://dx.doi.org/10.1163/22134808-bja10029.

Full text
Abstract:
The majority of emotional expressions used in daily communication are multimodal and dynamic in nature. Consequently, one would expect that human observers utilize specific perceptual strategies to process emotions and to handle the multimodal and dynamic nature of emotions. However, our present knowledge on these strategies is scarce, primarily because most studies on emotion perception have not fully covered this variation, and instead used static and/or unimodal stimuli with few emotion categories. To resolve this knowledge gap, the present study examined how dynamic emotional auditory and visual information is integrated into a unified percept. Since there is a broad spectrum of possible forms of integration, both eye movements and accuracy of emotion identification were evaluated while observers performed an emotion identification task in one of three conditions: audio-only, visual-only video, or audiovisual video. In terms of adaptations of perceptual strategies, eye movement results showed a shift in fixations toward the eyes and away from the nose and mouth when audio is added. Notably, in terms of task performance, audio-only performance was mostly significantly worse than video-only and audiovisual performances, but performance in the latter two conditions was often not different. These results suggest that individuals flexibly and momentarily adapt their perceptual strategies to changes in the available information for emotion recognition, and these changes can be comprehensively quantified with eye tracking.
2

Vallverdú, Jordi, Gabriele Trovato, and Lorenzo Jamone. "Allocentric Emotional Affordances in HRI: The Multimodal Binding." Multimodal Technologies and Interaction 2, no. 4 (November 6, 2018): 78. http://dx.doi.org/10.3390/mti2040078.

Full text
Abstract:
The concept of affordance perception is one of the distinctive traits of human cognition, and its application to robots can dramatically improve the quality of human-robot interaction (HRI). In this paper we explore and discuss the idea of "emotional affordances" by proposing a viable model for implementation into HRI, which considers allocentric and multimodal perception. We consider "2-ways" affordances: a perceived object triggering an emotion, and a perceived human emotion expression triggering an action. In order to make the implementation generic, the proposed model includes a library that can be customised depending on the specific robot and application scenario. We present the AAA (Affordance-Appraisal-Arousal) model, which incorporates Plutchik's Wheel of Emotions, and we outline some numerical examples of how it can be used in different scenarios.
3

Shackman, Jessica E., and Seth D. Pollak. "Experiential Influences on Multimodal Perception of Emotion." Child Development 76, no. 5 (September 2005): 1116–26. http://dx.doi.org/10.1111/j.1467-8624.2005.00901.x.

Full text
4

Barabanschikov, V. A., and E. V. Suvorova. "Gender Differences in the Recognition of Emotional States." Психологическая наука и образование 26, no. 6 (2021): 107–16. http://dx.doi.org/10.17759/pse.2021260608.

Full text
Abstract:
As a rule, gender differences in the perception of human emotional states are studied on the basis of static pictures of faces, gestures or poses. The dynamics and multiplicity of the emotion expression remain in the "blind zone". This work is aimed at finding relationships in the perception of the procedural characteristics of the emotion expression. The influence of gender and age on the identification of human emotional states is experimentally investigated in ecologically and socially valid situations. The experiments were based on the Russian-language version of the Geneva Emotion Recognition Test (GERT). Eighty-three audio-video clips of fourteen emotional states expressed by ten specially trained professional actors (five men and five women, average age 37 years) were randomly demonstrated to Russian participants (48 women and 48 men, Europeans, ages ranging from 20 to 62 years, mean age 34, SD = 9.4). It is shown that women recognize multimodal dynamic emotions more accurately, especially those expressed by women. Gender and age differences in identification accuracy are statistically significant for five emotions: joy, amusement, irritation, anger, and surprise. On women's faces, joy, surprise, irritation and anger are more accurately recognized by women over 35 years of age (p < 0.05). On male faces, surprise is less accurately recognized by men under 35 (p < 0.05); amusement, irritation and anger, by men over 35 (p < 0.05). The gender factor in the perception of multimodal dynamic expressions of emotional states acts as a system of determinants that changes its characteristics depending on the specific communicative situation.
5

Yamauchi, Takashi, Jinsil Seo, and Annie Sungkajun. "Interactive Plants: Multisensory Visual-Tactile Interaction Enhances Emotional Experience." Mathematics 6, no. 11 (October 29, 2018): 225. http://dx.doi.org/10.3390/math6110225.

Full text
Abstract:
Using a multisensory interface system, we examined how people's emotional experiences changed as their tactile sense (touching a plant) was augmented with visual sense ("seeing" their touch). Our system (the Interactive Plant system) senses the electrical capacitance of the human body and visualizes users' tactile information on a flat screen (when the touch is gentle, the program draws small and thin roots around the pot; when the touch is more harsh or abrupt, big and thick roots are displayed). We contrasted this multimodal combination (touch + vision) with a unimodal interface (touch only or watch only) and measured the impact of the multimodal interaction on participants' emotions. We found significant emotional gains in the multimodal interaction. Participants' self-reported positive affect, joviality, attentiveness and self-assurance increased dramatically in the multimodal interaction relative to the unimodal interaction; participants' electrodermal activity (EDA) increased in the multimodal condition, suggesting that our plant-based multisensory visual-tactile interaction raised arousal. We suggest that plant-based tactile interfaces are advantageous for emotion generation because haptic perception is by nature embodied and emotional.
6

Vani Vivekanand, Chettiyar. "Performance Analysis of Emotion Classification Using Multimodal Fusion Technique." Journal of Computational Science and Intelligent Technologies 2, no. 1 (April 16, 2021): 14–20. http://dx.doi.org/10.53409/mnaa/jcsit/2103.

Full text
Abstract:
As the central processing unit of the human body, the human brain is in charge of several activities, including cognition, perception, emotion, attention, action, and memory. Emotions have a significant impact on human well-being. Methodologies for assessing human emotions could be essential for good user-machine interactions. Understanding BCI (Brain-Computer Interface) strategies for identifying emotions can also help people connect with the world more naturally. Many approaches for identifying human emotions from EEG signals, classifying happy, neutral, sad, and angry emotions, have been developed and found to be effective. The emotions are elicited by various methods, including showing participants images of happy and sad facial expressions, playing emotionally linked music, or, sometimes, both of these. In this research, a multimodal fusion approach for emotion classification utilizing BCI and EEG data with various classifiers is proposed. The 10-20 electrode setup was used to gather the EEG data. The emotions were classified using a sentiment analysis technique based on user ratings, and Natural Language Processing (NLP) was implemented in parallel to increase accuracy. This analysis classified the assessment parameters as happy, neutral, sad, and angry emotions. Based on these emotions, the proposed model's performance was assessed in terms of per-emotion accuracy and overall accuracy. The proposed model achieved a 93.33 percent overall accuracy and improved performance for all identified emotions.
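To make the pipeline concrete, here is a minimal, hedged sketch of four-class emotion classification from EEG band-power features in scikit-learn. It is not the paper's implementation: the feature layout (32 channels x 5 bands) and the random placeholder data are assumptions for illustration only.

```python
# Illustrative sketch, not the paper's code: classify placeholder EEG
# band-power features into happy/neutral/sad/angry with an SVM.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32 * 5))  # 200 trials x (32 channels x 5 bands), assumed layout
y = rng.integers(0, 4, size=200)    # 0=happy, 1=neutral, 2=sad, 3=angry

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(f"Mean 5-fold CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```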
7

Lavan, Nadine, and Carolyn McGettigan. "Increased Discriminability of Authenticity from Multimodal Laughter is Driven by Auditory Information." Quarterly Journal of Experimental Psychology 70, no. 10 (October 2017): 2159–68. http://dx.doi.org/10.1080/17470218.2016.1226370.

Full text
Abstract:
We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.
8

Portnova, Galina, Aleksandra Maslennikova, Natalya Zakharova, and Olga Martynova. "The Deficit of Multimodal Perception of Congruent and Non-Congruent Fearful Expressions in Patients with Schizophrenia: The ERP Study." Brain Sciences 11, no. 1 (January 13, 2021): 96. http://dx.doi.org/10.3390/brainsci11010096.

Full text
Abstract:
Emotional dysfunction, including flat affect and emotional perception deficits, is a specific symptom of schizophrenia disorder. We used a modified multimodal odd-ball paradigm with fearful facial expressions accompanied by congruent and non-congruent emotional vocalizations (sounds of women screaming and laughing) to investigate the impairment of emotional perception and reactions to other people's emotions in schizophrenia. We compared subjective ratings of emotional state and event-related potentials (ERPs) in response to congruent and non-congruent stimuli in patients with schizophrenia and healthy controls. The results showed altered multimodal perception of fearful stimuli in patients with schizophrenia. The amplitude of N50 was significantly higher for non-congruent stimuli than congruent ones in the control group and did not differ in patients. The P100 and N200 amplitudes were higher in response to non-congruent stimuli in patients than in controls, implying impaired sensory gating in schizophrenia. The observed decrease of P3a and P3b amplitudes in patients could be associated with less attention, less emotional arousal, or incorrect interpretation of emotional valence, as patients differed from healthy controls in the emotion scores of non-congruent stimuli. The difficulties in identifying the incoherence of the facial and auditory components of emotional expression could be significant in understanding the psychopathology of schizophrenia.
9

Mittal, Trisha, Aniket Bera, and Dinesh Manocha. "Multimodal and Context-Aware Emotion Perception Model With Multiplicative Fusion." IEEE MultiMedia 28, no. 2 (April 1, 2021): 67–75. http://dx.doi.org/10.1109/mmul.2021.3068387.

Full text
10

Montembeault, Maxime, Estefania Brando, Kim Charest, Alexandra Tremblay, Élaine Roger, Pierre Duquette, and Isabelle Rouleau. "Multimodal emotion perception in young and elderly patients with multiple sclerosis." Multiple Sclerosis and Related Disorders 58 (February 2022): 103478. http://dx.doi.org/10.1016/j.msard.2021.103478.

Full text
11

Bai, Jie. "Optimized Piano Music Education Model Based on Multimodal Information Fusion for Emotion Recognition in Multimedia Video Networks." Mobile Information Systems 2022 (August 24, 2022): 1–12. http://dx.doi.org/10.1155/2022/1882739.

Full text
Abstract:
Emotion is important information that people transmit in the process of communication, and changes in emotional state affect people's perception and decision-making, which introduces the emotional dimension into human-computer interaction. The modes of emotional expression include facial expressions, speech, posture, physiological signals, text, and so on. Emotion recognition is essentially a multimodal fusion problem. This paper investigates the different teaching modes of the teachers and students of our school, designs the load capacity through the K-means algorithm, and builds a multimedia network sharing classroom that creates piano music situations to stimulate students' interest in learning, using audiovisual and other tools to mobilize students' emotions and multimedia guidance to extend students' knowledge of piano music, comprehensively improving students' aesthetic ability and capacity for autonomous learning. Comparing the changes in students after 3 months of teaching, the study found that multimedia sharing classrooms can be up to 50% ahead of traditional teaching methods in enhancing students' interest, and teachers' acceptance of multimedia network sharing classrooms is also high.
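The only concrete algorithmic detail in this abstract is the use of K-means for load design. For readers unfamiliar with it, a generic scikit-learn call is sketched below; the feature matrix is a hypothetical placeholder, not the paper's data.

```python
# Generic K-means usage of the kind the abstract mentions (inputs hypothetical).
import numpy as np
from sklearn.cluster import KMeans

load_features = np.random.default_rng(0).random((100, 3))  # e.g., per-session load statistics
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(load_features)
print(km.labels_[:10])       # cluster assignment of the first ten sessions
print(km.cluster_centers_)   # one prototype load profile per cluster
```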
12

Hancock, Megan R., and Tessa Bent. "Multimodal emotion perception: Influences of autism spectrum disorder and autism-like traits." Journal of the Acoustical Society of America 148, no. 4 (October 2020): 2765. http://dx.doi.org/10.1121/1.5147698.

Full text
13

Bänziger, Tanja, Marcello Mortillaro, and Klaus R. Scherer. "Introducing the Geneva Multimodal expression corpus for experimental research on emotion perception." Emotion 12, no. 5 (2012): 1161–79. http://dx.doi.org/10.1037/a0025827.

Full text
14

Horii, Takato, Yukie Nagai, and Minoru Asada. "Modeling Development of Multimodal Emotion Perception Guided by Tactile Dominance and Perceptual Improvement." IEEE Transactions on Cognitive and Developmental Systems 10, no. 3 (September 2018): 762–75. http://dx.doi.org/10.1109/tcds.2018.2809434.

Full text
15

Barabanschikov, V. A., and O. A. Korolkova. "Perception of “Live” Facial Expressions." Experimental Psychology (Russia) 13, no. 3 (2020): 55–73. http://dx.doi.org/10.17759/exppsy.2020130305.

Full text
Abstract:
The article provides a review of experimental studies of interpersonal perception on the material of static and dynamic facial expressions as a unique source of information about the person’s inner world. The focus is on the patterns of perception of a moving face, included in the processes of communication and joint activities (an alternative to the most commonly studied perception of static images of a person outside of a behavioral context). The review includes four interrelated topics: face statics and dynamics in the recognition of emotional expressions; specificity of perception of moving face expressions; multimodal integration of emotional cues; generation and perception of facial expressions in communication processes. The analysis identifies the most promising areas of research of face in motion. We show that the static and dynamic modes of facial perception complement each other, and describe the role of qualitative features of the facial expression dynamics in assessing the emotional state of a person. Facial expression is considered as part of a holistic multimodal manifestation of emotions. The importance of facial movements as an instrument of social interaction is emphasized.
16

de Gelder, Beatrice, and Jean Vroomen. "Rejoinder - Bimodal emotion perception: integration across separate modalities, cross-modal perceptual grouping or perception of multimodal events?" Cognition & Emotion 14, no. 3 (May 2000): 321–24. http://dx.doi.org/10.1080/026999300378842.

Full text
17

Luna-Jiménez, Cristina, Ricardo Kleinlein, David Griol, Zoraida Callejas, Juan M. Montero, and Fernando Fernández-Martínez. "A Proposal for Multimodal Emotion Recognition Using Aural Transformers and Action Units on RAVDESS Dataset." Applied Sciences 12, no. 1 (December 30, 2021): 327. http://dx.doi.org/10.3390/app12010327.

Full text
Abstract:
Emotion recognition is attracting the attention of the research community due to its multiple applications in different fields, such as medicine or autonomous driving. In this paper, we proposed an automatic emotion recognizer system that consisted of a speech emotion recognizer (SER) and a facial emotion recognizer (FER). For the SER, we evaluated a pre-trained xlsr-Wav2Vec2.0 transformer using two transfer-learning techniques: embedding extraction and fine-tuning. The best accuracy results were achieved when we fine-tuned the whole model by appending a multilayer perceptron on top of it, confirming that training was more robust when it did not start from scratch and the network's prior knowledge was similar to the target task. Regarding the facial emotion recognizer, we extracted the Action Units of the videos and compared the performance of static models against sequential models. Results showed that sequential models beat static models by a narrow margin. Error analysis indicated that the visual systems could improve with a detector of high-emotional-load frames, which opens a new line of research on ways to learn from videos. Finally, combining these two modalities with a late fusion strategy, we achieved 86.70% accuracy on the RAVDESS dataset in a subject-wise 5-CV evaluation, classifying eight emotions. The results demonstrate that these modalities carry relevant information to detect users' emotional state and that their combination improves the final system performance.
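In its simplest form, the late fusion strategy named here amounts to combining the per-class probability vectors of the speech and face models. The sketch below shows a weighted-average variant with made-up probabilities; the authors' actual fusion weights and model outputs are not reproduced. The emotion list is the standard eight RAVDESS classes.

```python
# Weighted-average late fusion of two probability vectors (values made up).
import numpy as np

EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fearful", "disgust", "surprised"]  # RAVDESS's 8 classes

def late_fusion(p_speech, p_face, w_speech=0.5):
    """Fuse two per-class probability vectors; return the winning class index."""
    fused = w_speech * np.asarray(p_speech) + (1.0 - w_speech) * np.asarray(p_face)
    return int(np.argmax(fused))

p_speech = [0.05, 0.05, 0.60, 0.10, 0.05, 0.05, 0.05, 0.05]  # assumed SER output
p_face   = [0.10, 0.05, 0.40, 0.20, 0.05, 0.10, 0.05, 0.05]  # assumed FER output
print(EMOTIONS[late_fusion(p_speech, p_face)])  # -> happy
```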
18

Miao, Haotian, Yifei Zhang, Daling Wang, and Shi Feng. "Multi-Output Learning Based on Multimodal GCN and Co-Attention for Image Aesthetics and Emotion Analysis." Mathematics 9, no. 12 (June 20, 2021): 1437. http://dx.doi.org/10.3390/math9121437.

Full text
Abstract:
With the development of social networks and intelligent terminals, it is becoming more convenient to share and acquire images. The massive growth in the number of social images creates higher demands for automatic image processing, especially from the aesthetic and emotional perspectives. Both aesthetics assessment and emotion recognition require the computer to simulate high-level visual perception and understanding, which belongs to the field of image processing and pattern recognition. However, existing methods often ignore the prior knowledge of images and the intrinsic relationships between the aesthetic and emotional perspectives. Recently, machine learning and deep learning have become powerful methods for researchers to solve mathematical problems in computing, such as image processing and pattern recognition. Both images and abstract concepts can be converted into numerical matrices, and the mapping relations between them can then be established mathematically on computers. In this work, we propose an end-to-end multi-output deep learning model based on a multimodal Graph Convolutional Network (GCN) and co-attention for aesthetic and emotion conjoint analysis. In our model, a stacked multimodal GCN network is proposed to encode the features under the guidance of the correlation matrix, and a co-attention module is designed to help the aesthetics and emotion feature representations learn from each other interactively. Experimental results indicate that our proposed model achieves competitive performance on the IAE dataset. Promising results on the AVA and ArtPhoto datasets also demonstrate the generalization ability of our model.
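For readers unfamiliar with the GCN building block this model stacks, a toy single layer implementing the standard propagation rule H' = ReLU(D^-1/2 (A+I) D^-1/2 H W) is sketched below in NumPy. The graph, features, and weights are placeholders; the paper's stacked multimodal GCN and co-attention module are not reproduced.

```python
# Toy single GCN layer (standard normalized-adjacency propagation rule).
import numpy as np

def gcn_layer(A, H, W):
    """H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                           # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))   # degree normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node chain graph
H = np.random.default_rng(0).normal(size=(3, 4))              # node features
W = np.random.default_rng(1).normal(size=(4, 2))              # layer weights
print(gcn_layer(A, H, W).shape)  # (3, 2)
```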
19

Berkane, Mohamed, Kenza Belhouchette, and Hacene Belhadef. "Emotion Recognition Approach Using Multilayer Perceptron Network and Motion Estimation." International Journal of Synthetic Emotions 10, no. 1 (January 2019): 38–53. http://dx.doi.org/10.4018/ijse.2019010102.

Full text
Abstract:
Man-machine interaction is an interdisciplinary field of research that provides natural and multimodal ways of interaction between humans and computers. For this purpose, the computer must understand the emotional state of the person with whom it interacts. This article proposes a novel method for detecting and classifying the basic emotions introduced in previous works: sadness, joy, anger, fear, disgust, surprise, and interest. As with all emotion recognition systems, the approach follows the basic steps of facial detection and facial feature extraction. In these steps, the contribution lies in using strategic face points and interpreting motions as action units, as extracted by the FACS system. The second contribution is at the classification step, where two classifiers were used, Kohonen self-organizing maps (KSOM) and a multilayer perceptron (MLP), in order to obtain the best results. The obtained results show that the recognition rate of basic emotions improved, and the running time was minimized by reducing resource use.
20

Babaoğlu, Gizem, Başak Yazgan, Pınar Erturk, Etienne Gaudrain, Laura Rachman, Leanne Nagels, Stefan Launer, et al. "Vocal emotion recognition by native Turkish children with normal hearing and with hearing aids." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A278. http://dx.doi.org/10.1121/10.0011335.

Full text
Abstract:
Development of vocal emotion recognition in children with normal hearing takes many years before reaching adult-like levels. In children with hearing loss, decreased audibility and potential loss of sensitivity to relevant acoustic cues may additionally affect vocal emotion perception. Hearing aids (HAs) are traditionally optimized for speech understanding, and it is not clear how children with HAs are performing in perceiving vocal emotions. In this study, we investigated vocal emotion recognition in native Turkish normal hearing children (NHC, age range: 5–18 years), normal hearing adults (NHA, age range: 18–45 years), and children with HAs (HAC, age range: 5–18 years), using pseudo-speech sentences expressed in one of the three emotions, happy, sad, or angry (Geneva Multimodal Emotion Portrayal (GEMEP) Corpus by Banziger and Scherer, 2010; EmoHI Test by Nagels et al., 2021). Visual inspection of the preliminary data suggests that performance increases with increasing age for NHC and that in general, HAC have lower recognition scores compared to NHC. Further analyses will be presented, along with acoustical analysis of the stimuli and an exploration of effects of HA settings. In addition, for cross-language comparison, these data will be compared to previously collected data with the same paradigm in children from the UK and the Netherlands.
21

Prajapati, Vrinda, Rajlakshmi Guha, and Aurobinda Routray. "Multimodal prediction of trait emotional intelligence–Through affective changes measured using non-contact based physiological measures." PLOS ONE 16, no. 7 (July 9, 2021): e0254335. http://dx.doi.org/10.1371/journal.pone.0254335.

Full text
Abstract:
Inability to efficiently deal with emotionally laden situations often leads to poor interpersonal interactions. This adversely affects the individual's psychological functioning. A higher trait emotional intelligence (EI) is not only associated with psychological wellbeing, educational attainment, and job-related success, but also with willingness to seek professional and non-professional help for personal-emotional problems, depression and suicidal ideation. Thus, it is important to identify low EI individuals, who are more prone to mental health problems than their high EI counterparts, and give them the appropriate EI training, which will aid in preventing the onset of various mood related disorders. Since people may be unaware of their level of EI/emotional skills or may tend to fake responses in self-report questionnaires in high stake situations, a system that assesses EI using physiological measures can prove effective. We present a multimodal method for detecting the level of trait emotional intelligence using non-contact based autonomic sensors. To our knowledge, this is the first work to predict emotional intelligence level from physiological/autonomic (cardiac and respiratory) response patterns to emotions. Trait EI of 50 users was measured using the Schutte Self Report Emotional Intelligence Test (SSEIT), along with their cardiovascular and respiratory data, which was recorded using an FMCW radar sensor both at baseline and while viewing affective movie clips. We first examine relationships between users' trait EI scores and autonomic response and reactivity to the clips. Our analysis suggests a significant relationship between EI and autonomic response and reactivity. We finally attempt binary EI level detection using a linear SVM. We also attempt to classify each sub-factor of EI, namely perception of emotion, managing own emotions, managing others' emotions, and utilization of emotions. The proposed method achieves an EI classification accuracy of 84%, while accuracies ranging from 58% to 76% are achieved for recognition of the sub-factors. This is the first step towards identifying the EI of an individual purely through physiological responses. Limitations and future directions are discussed.
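The reported classification step (binary EI detection with a linear SVM on autonomic features) follows a standard pattern, sketched below with synthetic stand-in data; the feature set and the low/high labeling are assumptions for illustration, not the study's measurements.

```python
# Hedged sketch: binary trait-EI detection with a linear SVM (synthetic data).
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 12))    # 50 participants x 12 cardiac/respiratory features (assumed)
y = rng.integers(0, 2, size=50)  # 0 = low EI, 1 = high EI (e.g., an SSEIT split, assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), LinearSVC()).fit(X_tr, y_tr)
print(f"Held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```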
22

Piwek, Lukasz, Karin Petrini, and Frank E. Pollick. "Auditory signal dominates visual in the perception of emotional social interactions." Seeing and Perceiving 25 (2012): 112. http://dx.doi.org/10.1163/187847612x647450.

Full text
Abstract:
Multimodal perception of emotions has been typically examined using displays of a solitary character (e.g., the face–voice and/or body–sound of one actor). We extend investigation to more complex, dyadic point-light displays combined with speech. A motion and voice capture system was used to record twenty actors interacting in couples with happy, angry and neutral emotional expressions. The obtained stimuli were validated in a pilot study and used in the present study to investigate multimodal perception of emotional social interactions. Participants were required to categorize happy and angry expressions displayed visually, auditorily, or using emotionally congruent and incongruent bimodal displays. In a series of cross-validation experiments we found that sound dominated the visual signal in the perception of emotional social interaction. Although participants' judgments were faster in the bimodal condition, the accuracy of judgments was similar for both bimodal and auditory-only conditions. When participants watched emotionally mismatched bimodal displays, they predominantly oriented their judgments towards the auditory rather than the visual signal. This auditory dominance persisted even when the reliability of the auditory signal was decreased with noise, although visual information had some effect on judgments of emotions when it was combined with a noisy auditory signal. Our results suggest that when judging emotions from observed social interaction, we rely primarily on vocal cues from the conversation rather than visual cues from body movement.
23

Deaca, Mircea Valeriu. "Circular Causality of Emotions in Moving Pictures." Acta Universitatis Sapientiae, Film and Media Studies 20, no. 1 (November 1, 2021): 86–110. http://dx.doi.org/10.2478/ausfm-2021-0016.

Full text
Abstract:
In the framework of predictive coding, as explained by Giovanni Pezzulo in his article Why do you fear the bogeyman? An embodied predictive coding model of perceptual inference (2014), humans construct instances of emotions by a double arrow of explanation of stimuli. Top-down cognitive models explain in a predictive fashion the emotional value of stimuli. At the same time, feelings and emotions depend on the perception of internal changes in the body. When confronted with uncertain auditory and visual information, a multimodal internal state assigns more weight to interoceptive information (rather than auditory and visual information), such as visceral and autonomic states like hunger or thirst (motivational conditions). In short, an emotional mood can constrain the construction of a particular instance of emotion. This observation suggests that the dynamics of generative processes of Bayesian inference contain a mechanism of bidirectional linkage between perceptual and cognitive inference and feelings and emotions. In other words, "subjective feeling states and emotions influence perceptual and cognitive inference, which in turn produce new subjective feeling states and emotions" as a self-fulfilling prophecy (Pezzulo 2014, 908). This article focuses on the short introductory scene from Steven Spielberg's Jaws (1975), claiming that the construction/emergence of the fear and sadness emotions arises out of the circular causal coupling instantiated between cinematic bottom-up mood cues and top-down cognitive explanations.
24

Barabanschikov, V. A., and E. V. Suvorova. "Part-Whole Perception of Audiovideoimages of Multimodal Emotional States of a Person." Experimental Psychology (Russia) 15, no. 4 (2022): 4–21. http://dx.doi.org/10.17759/exppsy.2022150401.

Full text
Abstract:
The patterns of perception of a part and a whole of multimodal emotional dynamic states of people unfamiliar to observers are studied. Audio-video clips of fourteen key emotional states expressed by specially trained actors were randomly presented to two groups of observers. In one group (N=96, average age 34, SD 9.4), each audio-video image was shown in full; in the other (N=78, average age 25, SD 9.6), it was divided into two parts of equal duration, from the beginning to the conditional middle (a short phonetic pause) and from the middle to the end of the exposure. The stimulus material contained facial expressions, gestures, head and eye movements, and changes in the position of the body of the sitters, who voiced pseudolinguistic statements accompanied by affective intonations. The accuracy of identification and the structure of categorical fields were evaluated depending on the modality and form (whole/part) of the exposure of affective states. After the exposure of each audio-video image, observers were required to choose, from the presented list of emotions, the one that best corresponds to what they saw. According to the data obtained, the accuracy of identifying the emotions of the initial and final fragments of audio-video images practically coincides, but is significantly lower than with full exposure. Functional differences in the perception of fragmented audio-video images of the same emotional states are revealed. The modes of transitions from the initial stage to the final one and the conditions affecting the relative speed of the perceptual process are shown. The uneven formation of the information basis of multimodal expressions and the heterochronous perceptogenesis of emotional states of actors are demonstrated.
25

Tiihonen, Marianne, Thomas Jacobsen, Niels Trusbak Haumann, Suvi Saarikallio, and Elvira Brattico. "I know what I like when I see it: Likability is distinct from pleasantness since early stages of multimodal emotion evaluation." PLOS ONE 17, no. 9 (September 13, 2022): e0274556. http://dx.doi.org/10.1371/journal.pone.0274556.

Full text
Abstract:
Liking and pleasantness are common concepts in psychological emotion theories and in everyday language related to emotions. Despite obvious similarities between the terms, several empirical and theoretical notions support the idea that pleasantness and liking are cognitively different phenomena, becoming most evident in the context of emotion regulation and art enjoyment. In this study it was investigated whether liking and pleasantness produce behaviourally measurable differences, not only over the long timespan of emotion regulation, but already within the initial affective responses to visual and auditory stimuli. A cross-modal affective priming protocol was used to assess whether there is a behavioural difference in response time when providing an affective rating in a liking or pleasantness task. It was hypothesized that the pleasantness task would be faster, as it is known to rely on rapid feature detection. Furthermore, an affective priming effect was expected to take place across the sensory modalities and the presentative and non-presentative stimuli. A linear mixed effect analysis indicated a significant priming effect as well as an interaction effect between the auditory and visual sensory modalities and the affective rating tasks of liking and pleasantness: while liking was rated fastest across modalities, it was significantly faster in vision compared to audition. No significant modality-dependent differences between the pleasantness ratings were detected. The results demonstrate that liking and pleasantness rating scales refer to separate processes already within the short time scale of one to two seconds. Furthermore, the affective priming effect indicates that affective information transfer takes place across the modalities and the types of stimuli applied. Contrary to the hypothesis, liking ratings were faster across the modalities. This is interpreted as supporting emotion theoretical notions in which liking and disliking are crucial properties of emotion perception and homeostatic self-referential information, possibly overriding pleasantness-related feature analysis. Conclusively, the findings provide empirical evidence for a conceptual delineation of common affective processes.
26

Zhi, Junnan, Tingting Song, Kang Yu, Fengen Yuan, Huaqiang Wang, Guangyang Hu, and Hao Yang. "Multi-Attention Module for Dynamic Facial Emotion Recognition." Information 13, no. 5 (April 19, 2022): 207. http://dx.doi.org/10.3390/info13050207.

Full text
Abstract:
Video-based dynamic facial emotion recognition (FER) is a challenging task, as one must capture and distinguish tiny facial movements representing emotional changes while ignoring the facial differences of different subjects. Recent state-of-the-art studies have usually adopted more complex methods to solve this task, such as large-scale deep learning models or multimodal analysis with reference to multiple sub-models. According to the characteristics of the FER task and the shortcomings of existing methods, in this paper we propose a lightweight method and design three attention modules that can be flexibly inserted into the backbone network. The key information for the three dimensions of space, channel, and time is extracted by means of convolution layers, pooling layers, a multi-layer perceptron (MLP), and other approaches, and attention weights are generated. By sharing parameters at the same level, the three modules do not add too many network parameters while enhancing the focus on specific areas of the face, effective feature information of static images, and key frames. The experimental results on the CK+ and eNTERFACE’05 datasets show that this method can achieve higher accuracy.
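As an illustration of what a pluggable attention module of this general kind looks like (the paper's three space/channel/time modules are not reproduced here), below is a compact squeeze-and-excitation-style channel-attention block in PyTorch that can be inserted after any convolutional stage.

```python
# Compact channel-attention block: global pool -> small MLP -> sigmoid gate.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                         # x: (N, C, H, W)
        w = self.mlp(x.mean(dim=(2, 3)))          # global average pool -> (N, C)
        return x * w.unsqueeze(-1).unsqueeze(-1)  # rescale each channel

x = torch.randn(2, 32, 16, 16)
print(ChannelAttention(32)(x).shape)  # torch.Size([2, 32, 16, 16])
```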
27

Mishra, Sudhakar, Narayanan Srinivasan, and Uma Shanker Tiwary. "Cardiac–Brain Dynamics Depend on Context Familiarity and Their Interaction Predicts Experience of Emotional Arousal." Brain Sciences 12, no. 6 (May 29, 2022): 702. http://dx.doi.org/10.3390/brainsci12060702.

Full text
Abstract:
Our brain continuously interacts with the body as we engage with the world. Although we are mostly unaware of internal bodily processes, such as our heartbeats, they may be influenced by and in turn influence our perception and emotional feelings. Although there is a recent focus on understanding cardiac interoceptive activity and interaction with brain activity during emotion processing, the investigation of cardiac–brain interactions with more ecologically valid naturalistic emotional stimuli is still very limited. We also do not understand how an essential aspect of emotions, such as context familiarity, influences affective feelings and is linked to statistical interaction between cardiac and brain activity. Hence, to answer these questions, we designed an exploratory study by recording ECG and EEG signals for the emotional events while participants were watching emotional movie clips. Participants also rated their familiarity with the stimulus on the familiarity scale. Linear mixed effect modelling was performed in which the ECG power and familiarity were considered as predictors of EEG power. We focused on three brain regions, including prefrontal (PF), frontocentral (FC) and parietooccipital (PO). The analyses showed that the interaction between the power of cardiac activity in the mid-frequency range and the power in specific EEG bands is dependent on familiarity, such that the interaction is stronger with high familiarity. In addition, the results indicate that arousal is predicted by cardiac–brain interaction, which also depends on familiarity. The results support emotional theories that emphasize context dependency and interoception. Multimodal studies with more realistic stimuli would further enable us to understand and predict different aspects of emotional experience.
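The core analysis reported here (EEG power modelled from the interaction of cardiac power and familiarity, with participants as a grouping factor) can be expressed in a single statsmodels formula. The sketch below uses synthetic data and hypothetical column names, not the study's recordings.

```python
# Linear mixed-effects sketch: eeg_power ~ ecg_power * familiarity,
# with random intercepts per participant (all data synthetic).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "participant": rng.integers(0, 30, n).astype(str),
    "ecg_power": rng.normal(size=n),
    "familiarity": rng.integers(1, 6, n),  # 1-5 familiarity rating (assumed scale)
})
df["eeg_power"] = 0.3 * df.ecg_power * df.familiarity + rng.normal(scale=0.5, size=n)

model = smf.mixedlm("eeg_power ~ ecg_power * familiarity",
                    df, groups=df["participant"]).fit()
print(model.summary())  # the interaction term carries the effect of interest
```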
28

Zahn, Nina N., Greice Pinho Dal Molin, and Soraia Raupp Musse. "Investigating sentiments in Brazilian and German Blogs." Journal of the Brazilian Computer Society 28, no. 1 (December 30, 2022): 96–103. http://dx.doi.org/10.5753/jbcs.2022.2214.

Full text
Abstract:
Social interactions have changed in recent years. People post their thoughts, opinions and sentiments on social media platforms more often, through images and videos, providing a very rich source of data about the populations of different countries, communities, etc. Due to the increase in the amount of data on the internet, it becomes impossible to perform any analysis manually, requiring automation of the process. In this work, we use two blog corpora that contain images and texts. The Cross-Media German Blog (CGB) corpus consists of German blog posts, while the Cross-Media Brazilian Blog (CBB) corpus contains Brazilian blog posts. Both corpora have ground truth (GT) labels for the sentiments of images and texts, classified according to human perception. In previous work, machine learning and lexicon technologies were applied to both corpora to detect the sentiments (negative, neutral or positive) of images and texts and to compare the results with the ground truth (based on subjects' perception). In this work, we investigated a new hypothesis: detecting faces and their emotions to improve sentiment classification accuracy in both the CBB and CGB datasets. We used two methodologies to detect polarity in the faces and evaluated the results against the image GT and the multimodal GT (the complete blog using text and image). Our results indicate that facial emotion can be a relevant feature in the classification of blog sentiment.
29

Cominelli, Lorenzo, Nicola Carbonaro, Daniele Mazzei, Roberto Garofalo, Alessandro Tognetti, and Danilo De Rossi. "A Multimodal Perception Framework for Users Emotional State Assessment in Social Robotics." Future Internet 9, no. 3 (August 1, 2017): 42. http://dx.doi.org/10.3390/fi9030042.

Full text
30

Wang, Cheng-Hung, and Hao-Chiang Koong Lin. "Emotional Design Tutoring System Based on Multimodal Affective Computing Techniques." International Journal of Distance Education Technologies 16, no. 1 (January 2018): 103–17. http://dx.doi.org/10.4018/ijdet.2018010106.

Full text
Abstract:
In a traditional class, the role of the teacher is to teach and that of the students is to learn. However, constant and rapid technological advancements have transformed education in numerous ways. For instance, in addition to traditional, face-to-face teaching, e-learning is now possible. Nevertheless, face-to-face teaching is unavailable in distance education, preventing the teacher from understanding students' learning emotions and states; hence, a system can be adopted to collect information on students' learning emotions, thereby compiling data to analyze their learning progress. Accordingly, this study established an emotional design tutoring system (EDTS) and investigated whether this system influences user interaction satisfaction and elevates learning motivation. This study determined that the learners' perception of affective tutoring systems fostered positive attitudes toward learning and thereby promoted learning effects. The experimental results offer teachers and learners an efficient technique for boosting students' learning effects and learning satisfaction. In the future, affective computing is expected to be widely used in teaching. This can enable students to enjoy learning in a multilearning environment; thus, they can exhibit higher learning satisfaction and gain considerable learning effects.
31

Dove, Guy. "Language as a disruptive technology: abstract concepts, embodiment and the flexible mind." Philosophical Transactions of the Royal Society B: Biological Sciences 373, no. 1752 (June 18, 2018): 20170135. http://dx.doi.org/10.1098/rstb.2017.0135.

Full text
Abstract:
A growing body of evidence suggests that cognition is embodied and grounded. Abstract concepts, though, remain a significant theoretical challenge. A number of researchers have proposed that language makes an important contribution to our capacity to acquire and employ concepts, particularly abstract ones. In this essay, I critically examine this suggestion and ultimately defend a version of it. I argue that a successful account of how language augments cognition should emphasize its symbolic properties and incorporate a view of embodiment that recognizes the flexible, multimodal and task-related nature of action, emotion and perception systems. On this view, language is an ontogenetically disruptive cognitive technology that expands our conceptual reach. This article is part of the theme issue ‘Varieties of abstract concepts: development, use and representation in the brain’.
32

Iosifyan, Marina, Olga Korolkova, and Igor Vlasov. "Emotional and Semantic Associations Between Cinematographic Aesthetics and Haptic Perception." Multisensory Research 30, no. 7-8 (2017): 783–98. http://dx.doi.org/10.1163/22134808-00002597.

Full text
Abstract:
This study investigates systematic links between haptic perception and multimodal cinema perception. It differs from previous research on cross-modal associations in that it focuses on a complex intermodal stimulus close to what people experience in reality: cinema. Participants chose the materials most/least consistent with three-minute samples of films with elements of beauty and ugliness. We found that specific materials were associated with certain films at rates significantly different from chance. Silk was associated with films including elements of beauty, while sandpaper was associated with films including elements of ugliness. To investigate the nature of this phenomenon, we tested the mediating effect of emotional/semantic representations on cinema–haptic associations. We found that affective representations at least partly explain the cross-modal associations between films and materials.
33

Wu, Lin. "Multimodal Opera Performance Form Based on Human-Computer Interaction Technology." International Transactions on Electrical Energy Systems 2022 (October 8, 2022): 1–13. http://dx.doi.org/10.1155/2022/4003245.

Full text
Abstract:
"Audience Engagement (AE)" describes how a stage performance affects the audience's thoughts, provokes a bodily response, and spurs cognitive growth. With little audience involvement, theatre performing arts like opera typically have difficulty keeping audiences' attention. Brain-computer interface (BCI) technology could be used in opera performances to alter the audience's emotional experience. Nevertheless, for such BCI systems to function, they must accurately identify their participants' present emotional states. Although difficult to evaluate, audience participation is a vital sign of how well an opera performs. Practical methodological approaches for real-time perception and comprehension of audience emotions include psychological and physiological assessments. Hence, a multimodal emotional state detection technique (MESDT) for enhancing AE in opera performance using BCI has been proposed. Three essential steps make up the conceptual MESDT architecture. First, an electroencephalogram (EEG) and other biosignals from the audience are captured. Second, the acquired signals are processed, and the BCI tries to determine the user's present psychological response. Third, an adaptive performance stimulus (APS) is triggered to enhance AE in the opera performance, as determined by a rule base. To give the opera audience a high-quality viewing experience, an immersive theatre performance was simulated. Fifty individuals took part in the experimental assessment and performance studies. The findings demonstrated that the proposed technology was able to accurately identify declines in AE and that performing stimuli had a positive impact on enhancing AE during an opera performance. It was further shown that the suggested design improves the overall performance of AE by 5.8% compared to a typical BCI design (one that uses EEG characteristics solely) for the proposed MESDT framework.
34

Warrenburg, Lindsay A., Lindsey Reymore, and Daniel Shanahan. "The communication of melancholy, grief, and fear in dance with and without music." Human Technology 16, no. 3 (November 30, 2020): 283–309. http://dx.doi.org/10.17011/ht/urn.202011256766.

Full text
Abstract:
Professional dancers were video recorded dancing with the intention of expressing melancholy, grief, or fear. We used these recordings as stimuli in two studies designed to investigate the perception and sociality of melancholy, grief, and fear expressions during unimodal (dancing in silence) and multimodal (dancing to music) conditions. In Study 1, viewers rated their perceptions of social connection among the dancers in these videos. In Study 2, the same videos were coded for the amount of time that dancers spent in physical contact. Results revealed that dancers expressing grief and fear exhibited more social interactions than dancers expressing melancholy. Combined with the findings of Warrenburg (2020b, 2020c), results support the idea that—in an artistic context—grief and fear are expressed with overt emotional displays, whereas melancholy is expressed with covert emotional displays.
35

Martinez, Antigona, Russell Tobe, Melissa Breland, Alexis Lieval, Babak Ardekani, and Daniel Javitt. "39.2 MULTIMODAL INVESTIGATION OF CONVERGENT AND DIVERGENT PATTERNS OF ATYPICAL VISUAL PROCESSING UNDERLYING FACE EMOTION RECOGNITION AND MOTION PERCEPTION IN SCHIZOPHRENIA AND AUTISM." Schizophrenia Bulletin 45, Supplement_2 (April 2019): S151–S152. http://dx.doi.org/10.1093/schbul/sbz022.160.

Full text
36

Barabanschikov, V. A., M. M. Marinova, and A. D. Abramov. "Virtual Personality of a Moving Thatcherized Face." Психологическая наука и образование 26, no. 1 (2021): 5–18. http://dx.doi.org/10.17759/pse.2021000001.

Full text
Abstract:
The study explores the patterns of perception of a virtual model with a 'Thatcherized face'. Using Deepfake technology and the Adobe After Effects video editor, a stimulus model reproducing the face of a young actress with the eye and mouth areas inverted by 180° was created. It was shown that the phenomena of perception of the Thatcherized face, registered earlier under static conditions, are preserved when the dynamic model is exposed, and acquire new content. In particular, the inversion effect in the dynamic mode is stronger than in the static one. Under multimodal exposure, its magnitude decreases, while the adequacy of the direct exposure evaluations increases. The age of the virtual model as compared to the real person is overestimated. The adequacy of evaluations of the model's gender and behavior exceeds 85%. Estimates of attractiveness and emotional states of the virtual models directly depend on the type of situation they are in. The similarity of the main patterns found in the study of the perception of Thatcherized and chimerical faces is demonstrated.
37

Koltsova, Elena A., and Faina I. Kartashkova. "Digital Communication and Multimodal Features: Functioning of Emoji in Interpersonal Communication." RUDN Journal of Language Studies, Semiotics and Semantics 13, no. 3 (September 30, 2022): 769–83. http://dx.doi.org/10.22363/2313-2299-2022-13-3-769-783.

Full text
Abstract:
Technical advances and digital means of communication have led to the development of digital semiotics, which is characterised by its multimodality and abounds in paralinguistic elements such as emojis, emoticons, memes, etc. These extralinguistic elements serve as a compensatory mechanism in the new means of communication. The increasing interest of users in various iconic signs and symbols generates research interest in different fields of knowledge. The study aims to consider the cognitive, semiotic and psycholinguistic features of emojis in interpersonal communication by analysing their functions in text messages and in social network messages. An attempt to reveal their persuasive mechanism is made. The research is based on a large-scale dataset comprising private text messages as well as public posts on social networks which include verbal and nonverbal/iconic elements. The research data present a multilingual bank of English, Russian and French sources. The research methods include context analysis, linguistic and pragmatic analysis and content analysis. The findings show that emojis in private interpersonal communication perform a number of functions, namely nonverbal, emotive, pragmatic, punctuation, substitutional, decorative and rhetorical functions. These iconic symbols incorporated in interpersonal digital communication present a compensatory mechanism and a means of persuading the message addressee/recipient. The combination of verbal and iconic elements triggers a double focusing mechanism, and perception is shaped by all cognitive mechanisms, including rational, emotional and unconscious components.
38

Chow, Ho Ming, Raymond A. Mar, Yisheng Xu, Siyuan Liu, Suraji Wagage, and Allen R. Braun. "Embodied Comprehension of Stories: Interactions between Language Regions and Modality-specific Neural Systems." Journal of Cognitive Neuroscience 26, no. 2 (February 2014): 279–95. http://dx.doi.org/10.1162/jocn_a_00487.

Full text
Abstract:
The embodied view of language processing proposes that comprehension involves multimodal simulations, a process that retrieves a comprehender's perceptual, motor, and affective knowledge through reactivation of the neural systems responsible for perception, action, and emotion. Although evidence in support of this idea is growing, the contemporary neuroanatomical model of language suggests that comprehension largely emerges as a result of interactions between frontotemporal language areas in the left hemisphere. If modality-specific neural systems are involved in comprehension, they are not likely to operate in isolation but should interact with the brain regions critical to language processing. However, little is known about the ways in which language and modality-specific neural systems interact. To investigate this issue, we conducted a functional MRI study in which participants listened to stories that contained visually vivid, action-based, and emotionally charged content. Activity of neural systems associated with visual-spatial, motor, and affective processing were selectively modulated by the relevant story content. Importantly, when functional connectivity patterns associated with the left inferior frontal gyrus (LIFG), the left posterior middle temporal gyrus (pMTG), and the bilateral anterior temporal lobes (aTL) were compared, both LIFG and pMTG, but not the aTL, showed enhanced connectivity with the three modality-specific systems relevant to the story content. Taken together, our results suggest that language regions are engaged in perceptual, motor, and affective simulations of the described situation, which manifest through their interactions with modality-specific systems. On the basis of our results and past research, we propose that the LIFG and pMTG play unique roles in multimodal simulations during story comprehension.
39

Wöllner, Clemens. "Is empathy related to the perception of emotional expression in music? A multimodal time-series analysis." Psychology of Aesthetics, Creativity, and the Arts 6, no. 3 (August 2012): 214–23. http://dx.doi.org/10.1037/a0027392.

Full text
40

Cui, Cui. "Intelligent Analysis of Exercise Health Big Data Based on Deep Convolutional Neural Network." Computational Intelligence and Neuroscience 2022 (June 28, 2022): 1–11. http://dx.doi.org/10.1155/2022/5020150.

Full text
Abstract:
In this paper, a deep convolutional neural network algorithm is used to conduct in-depth research and analysis of sports health big data, and an intelligent analysis system is designed for the practical process. The convolutional neural network is one of the most popular deep learning methods today. It has the property of local perception, which allows a complete image to be divided into several small parts: the network learns the characteristic features of each local part and then merges the local information at a higher level to obtain the full representation. In this paper, we first apply a convolutional neural network to four-class classification of brainwave data and analyze the accuracy and recall of the model. The model is then further optimized to improve its accuracy and is compared with other models to confirm its effectiveness. A demonstration platform for emotional fatigue detection with multimodal data feature fusion was established to realize data acquisition, emotional fatigue detection, and emotion feedback functions. The emotional fatigue detection platform was tested to verify that the proposed model can be used for time-series data feature learning. Following the platform requirements analysis and detailed functional design, each functional module of the platform was developed and system testing was conducted. The big data platform constructed in this study can meet the basic needs of health monitoring for data analysis, which is conducive to orderly and effective interaction among multiple subjects, thus improving the information service level of health monitoring and promoting comprehensive health development.
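The "local perception" property described above is exactly what a convolution layer provides: a small kernel slides over the image, processing each local patch with shared weights, and later layers merge the local responses. A minimal PyTorch illustration with arbitrary shapes:

```python
# One 3x3 convolution: local patches in, local feature maps out.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
image = torch.randn(1, 1, 28, 28)  # one grayscale image (shape arbitrary)
features = conv(image)             # local features, merged by deeper layers
print(features.shape)              # torch.Size([1, 8, 28, 28])
```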
41

Yang, Youngeun. "Comprehending Dance through Empathy: A Spectator's Total Body-Mind Experience of Watching Wind of May (Moon, 2020)." Dance Research 40, no. 1 (May 2022): 61–84. http://dx.doi.org/10.3366/drs.2022.0358.

Abstract:
This article advances a hermeneutico-phenomenological enquiry into the lived experience of watching the recently created ballet, Wind of May (Moon, 2020), to examine the role of empathy in comprehending this dance performance. Drawing on Edith Stein’s work, it shows how dance spectatorship entails dynamic interactions between the core processes of empathy formation, namely, direct multimodal perception, emotional engagement, and cognitive thinking. It further employs Dee Reynolds’s concept of ‘affect’ to highlight how empathic experience can be physically embodied yet non-emotional. In doing so, it argues that empathy involves a total mind-body experience that can be valuable to dance interpretation.
42

Wible, Cynthia Gayle. "Schizophrenia as a Disorder of Social Communication." Schizophrenia Research and Treatment 2012 (2012): 1–12. http://dx.doi.org/10.1155/2012/920485.

Abstract:
Evidence is reviewed for the existence of a core system for moment-to-moment social communication that is based on the perception of dynamic gestures and other social perceptual processes in the temporal-parietal occipital junction (TPJ), including the posterior superior temporal sulcus (PSTS) and surrounding regions. Overactivation of these regions may produce the schizophrenic syndrome. The TPJ plays a key role in the perception and production of dynamic social, emotional, and attentional gestures for the self and others. These include dynamic gestures of the body, face, and eyes as well as audiovisual speech and prosody. Many negative symptoms are characterized by deficits in responding within these domains. Several properties of this system have been discovered through single neuron recording, brain stimulation, neuroimaging, and the study of neurological impairment. These properties map onto the schizophrenic syndrome. The representation of dynamic gestures is multimodal (auditory, visual, and tactile), matching the predominant hallucinatory categories in schizophrenia. Inherent in the perceptual signal of gesture representation is a computation of intention, agency, and anticipation or expectancy (for the self and others). The neurons are also tuned or biased to rapidly detect threat-related emotions. I review preliminary evidence that overactivation of this system can result in schizophrenia.
43

Kiefer, Markus, and Marcel Harpaintner. "Varieties of abstract concepts and their grounding in perception or action." Open Psychology 2, no. 1 (1 July 2020): 119–37. http://dx.doi.org/10.1515/psych-2020-0104.

Abstract:
For a very long time, theorizing in the cognitive sciences was dominated by the assumption that abstract concepts, which lack a perceivable referent, can only be handled by amodal or verbal linguistic representations. In recent years, however, refined grounded cognition theories, which emphasize the importance of emotional and introspective information for abstract concepts in addition to verbal associations and sensorimotor information, have received increasing support. Here, we review theoretical accounts of the structure and neural basis of conceptual memory and evaluate them in light of recent empirical evidence on the processing of concrete and abstract concepts. Based on this literature review, we argue that abstract concepts should not be treated as a homogeneous conceptual category whose meaning is established by one single type of representation. Instead, depending on their feature composition, there are different subgroups of abstract concepts, including those with strong relations to vision or action, which are represented in the visual and motor brain systems in a similar way to concrete concepts. The reviewed findings on concrete and abstract concepts are best accommodated by hybrid theories of conceptual representation that assume an interaction between modality-specific, multimodal, and amodal hub areas.
44

VOVK, Olena, Lyubov ZENYA, and Ilona BROVARSKA. "NEUROPEDAGOGY: A CONCEPT OF BRAIN COMPATIBLE TEACHING A FOREIGN LANGUAGE." Cherkasy University Bulletin: Pedagogical Sciences, no. 2 (2022): 64–73. http://dx.doi.org/10.31651/2524-2660-2022-2-64-73.

Abstract:
Introduction. This study focuses on the premises of teaching a foreign language (FL) within the framework of Neuropedagogy, an interdisciplinary field that integrates cognitive science, psychology, and pedagogy and draws on the achievements of the modern neurosciences. The purpose of this article is to set out the theoretical assumptions of Neuropedagogy and elucidate how they can be applied in the FL classroom to enhance learning outcomes. Results. New data in the neuropedagogical domain make it possible to identify students' cognitive profiles, which can significantly aid their FL acquisition. Specifically, Neuropedagogy addresses such issues as hemispheric lateralization and brain-compatible learning; attention, memory, and learning; emotion, stress, and motivation; multisensory perception and sensory preferences; learning, cognitive, and epistemic styles; students' personality types; their prominent multiple intelligences, etc. A special emphasis in the article is placed on brain waves. In particular, it is reported that the alpha-theta state is most conducive to FL learning; it can be achieved through exposure to Baroque music, which lulls learners into a relaxed psychological state while the mind remains alert. Furthermore, since every learner acquires, processes, and assimilates incoming information in their own way, different types of students are characterized in terms of their learning and epistemic styles. These are compatible with the VARK sensory model and are meant to assist instructors in developing effective teaching strategies. The article also proposes correlating Neuropedagogy with intensive learning, which aims to induce in students a psychologically relaxed yet mentally alert state conducive to foreign language acquisition. Conclusion. Neuropedagogy offers an updated view of FL acquisition grounded in neurobiological evidence. Its aim is to ensure quality education based on knowledge about the structure and functions of the human brain, the benefits of multisensory and multimodal perception, types of multiple intelligences, differences in brain hemisphere functions, styles of perceiving and processing input, advantageous conditions for memorization, responses to stress and pressure, etc.
45

Iram Amjad. "A Multimodal Analysis of Qawwali." Linguistics and Literature Review 3, no. 1 (31 March 2017): 13–25. http://dx.doi.org/10.32350/llr.v3i1.263.

Abstract:
This study explores qawwali in its emotional, aesthetic, and devotional aspects to capture its full character. The purpose is to trace the extent to which qawwali acts as a catalyst for ecstatic or trance-like states of spiritual experience. A multimodal framework was used to analyze common emerging patterns such as love of God and His last prophet Muhammad (P.B.U.H), the beatitudes of God, and the paradox of spiritual versus worldly love. The qawwali sung at three famous shrines of Lahore, Pakistan, was the main source of data. To trace its impact, devotees' perceptions were gathered in the form of 16 case studies. It was found that qawwali's emphatic rhythmical stress patterns, repeating God's name, stirred devotees' emotions toward spiritual self-repositioning. But words fall short of capturing these spiritual emotions; hence, the study could only describe the emotions through which devotees reportedly gain spiritual proximity to God.
46

Edlund, Sara M., Matilda Wurm, Fredrik Holländare, Steven J. Linton, Alan E. Fruzzetti, and Maria Tillfors. "Pain patients' experiences of validation and invalidation from physicians before and after multimodal pain rehabilitation: Associations with pain, negative affectivity, and treatment outcome." Scandinavian Journal of Pain 17, no. 1 (1 October 2017): 77–86. http://dx.doi.org/10.1016/j.sjpain.2017.07.007.

Abstract:
Background and aims: Validating and invalidating responses play an important role in communication with pain patients, for example regarding emotion regulation and adherence to treatment. However, it is unclear how patients' perceptions of validation and invalidation relate to patient characteristics and treatment outcome. The aim of this study was to investigate the occurrence of subgroups based on pain patients' perceptions of validation and invalidation from their physicians. The stability of these perceptions and differences between subgroups regarding pain, pain interference, negative affectivity, and treatment outcome were also explored.

Methods: A total of 108 pain patients answered questionnaires regarding perceived validation and invalidation, pain severity, pain interference, and negative affectivity before and after pain rehabilitation treatment. Two cluster analyses using perceived validation and invalidation were performed, one on pre-scores and one on post-scores (a sketch of this type of cluster analysis follows below). The stability of patient perceptions from pre- to post-treatment was investigated, and clusters were compared on pain severity, pain interference, and negative affectivity. Finally, the connection between perceived validation and invalidation and treatment outcome was explored.

Results: Three clusters emerged both before and after treatment: (1) low validation and heightened invalidation, (2) moderate validation and invalidation, and (3) high validation and low invalidation. Perceptions of validation and invalidation were generally stable over time, although there were individuals whose perceptions changed. When compared to the other two clusters, the low validation/heightened invalidation cluster displayed significantly higher levels of pain interference and negative affectivity post-treatment but not pre-treatment. The whole sample significantly improved on pain interference and depression, but treatment outcome was independent of cluster. Unexpectedly, differences between clusters on pain interference and negative affectivity were only found post-treatment. This appeared to be because the pre- and post-treatment heightened-invalidation clusters did not contain the same individuals. Therefore, additional analyses were conducted on the individuals who changed clusters. These showed that patients scoring high on negative affectivity ended up in the heightened invalidation cluster post-treatment.

Conclusions: Taken together, most patients felt understood when communicating with their rehabilitation physician. However, a smaller group of patients experienced the opposite: low levels of validation and heightened levels of invalidation. This group stood out as more problematic, reporting greater pain interference and negative affectivity than the other groups after treatment. Patient perceptions were typically stable over time, but some individuals changed cluster, and these movements seemed to be related to negative affectivity and pain interference. These results do not support a connection between perceived validation and invalidation from physicians (who met the patients pre- and post-treatment) and treatment outcome. Overall, our results suggest a connection between patients' negative affectivity and pain interference, on the one hand, and perceived validation and invalidation from physicians, on the other.

Implications: In clinical practice, it is important to pay attention to comorbid psychological problems and the level of pain interference, since these factors may negatively influence effective communication. A focus on decreasing invalidating responses and/or increasing validating responses might be particularly important for patients with high levels of psychological problems and pain interference.
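As referenced in the Methods above, a cluster analysis on two perceived-communication scores could look like the following minimal sketch. The choice of k-means, the three seed profiles, and all simulated scores are assumptions for illustration; the abstract does not specify which clustering algorithm was used:

```python
# Minimal, assumption-laden sketch of clustering patients on two scores:
# perceived validation and perceived invalidation. Simulated data only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Simulated pre-treatment scores; columns = (validation, invalidation).
scores = np.vstack([
    rng.normal([2.0, 4.0], 0.5, size=(36, 2)),  # low validation, high invalidation
    rng.normal([3.5, 2.5], 0.5, size=(36, 2)),  # moderate on both
    rng.normal([5.0, 1.0], 0.5, size=(36, 2)),  # high validation, low invalidation
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
for k in range(3):
    v, inv = kmeans.cluster_centers_[k]
    print(f"cluster {k}: validation={v:.2f}, invalidation={inv:.2f}")
```

Running the same procedure separately on pre- and post-treatment scores, as in the study, is what makes it possible to track which individuals change cluster membership.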
47

Egorova, Liudmila A. "Popular Science Discourse Development in the Cyberspace." Advances in Language and Literary Studies 9, no. 5 (31 October 2018): 79. http://dx.doi.org/10.7575/aiac.alls.v.9n.5p.79.

Abstract:
The popular science sphere of communication is acquiring new features of virtuality, globality, mosaic structure, and social orientation, which are essential to fulfilling its functions in modern society. Based on an examination of 92 podcasts, the study identifies typical characteristics of the podcast and the factors contributing to the spread of podcasting in popular scientific hypermedia communication. The survey showed that the increasing popularity of the podcast in the popular scientific sphere is explained by several factors. First, informing the user becomes more accessible, quicker, and easier. Second, the listener takes part in interpersonal communication not with a virtual author but with a real person, and gains the opportunity to draw conclusions from spoken language, which is more expressive and emotional and has a stronger impact on the addressee. Third, most podcasts are interviews and discussions, which facilitates the perception and processing of new information by structuring it through questions, paraphrasing, exemplification, clarifications, etc. Analysis of the journal Nature's podcast helped single out the structural features that allow a podcast to function in a hypermedia environment as an independent multimodal node. Conclusions are drawn about the emergence of a new virtual environment for intercultural interaction and cooperation.
48

Shkvorchenko, N. "SEMIOTIZATION OF POLITICAL TOXICITY IN THE MEDIA SPACES OF THE USA, GREAT BRITAIN AND UKRAINE: A MULTIMODAL ASPECT." MESSENGER of Kyiv National Linguistic University. Series Philology 25, no. 1 (26 August 2022): 142–51. http://dx.doi.org/10.32589/2311-0821.1.2022.263132.

Abstract:
The article attempts to build a multimodal model of toxic political communication and to determine common and distinctive features of the semiotization of political toxicity in the media environments of the United States, Great Britain, and Ukraine. Toxic political communication is interpreted as a type of interaction characterized by a high degree of aggressive (verbal and/or paraverbal) behavior by various participants in political discourse, which causes moral harm or discriminates against the opponent on the basis of race, nationality, or gender, with the result that such politicians are perceived and then defined as toxic. The constructed model takes into account multimodal mechanisms of the discursive expression of toxicity (verbal, paraverbal, extralingual), modes of expanding the toxic effect (direct, indirect, and mediated), and mechanisms of perception and image formation of politicians (toxic vs. positive) in the media environment of the respective countries. We determined that toxicity is manifested in derogatory statements by politicians, which contain insults, name-calling, ridicule, and emotional and inclusive utterances aimed at polarization and at causing psychological and/or image damage to participants in political debate (opponents). Toxic paraverbal co-speech means are divided into prosodic and gestural-mimic forms, including an aggressive, caustic, derogatory, paternalistic, or pompous tone of speech, gestures that violate the personal boundaries of the interlocutor, and exaggerated facial expressions. Extralingual forms of toxic communication include poster colors, electoral campaign symbols, clothing, rally sites, music, etc., which intensify the damaging effect of the actions and utterances of a politician who is defined as toxic in the media. We found that contrasting forms of the semiotization of political toxicity in the media environments of the United States, Great Britain, and Ukraine are determined by the information agendas relevant to each country, for example, racism and intolerance toward migrants (USA), Partygate (Great Britain), and zrada (betrayal) vs. peremoha (victory) (Ukraine). Common to the three linguistic cultures is the aggressive type of politician-speaker, whose utterances and behavior tend toward dramatization and aim to cause psychological damage to the opponent's personality through direct or indirect derogatory images accompanied by prosodic, gestural, and facial emphases.
49

Mitchell, John T., Thomas S. Weisner, Peter S. Jensen, Desiree W. Murray, Brooke S. G. Molina, L. Eugene Arnold, Lily Hechtman, et al. "How Substance Users With ADHD Perceive the Relationship Between Substance Use and Emotional Functioning." Journal of Attention Disorders 22, no. 9_suppl (1 February 2017): 49S–60S. http://dx.doi.org/10.1177/1087054716685842.

Abstract:
Objective: Although substance use (SU) is elevated in ADHD and both are associated with disrupted emotional functioning, little is known about how emotions and SU interact in ADHD. We used a mixed qualitative-quantitative approach to explore this relationship. Method: Narrative comments were coded for 67 persistent (50 ADHD, 17 local normative comparison group [LNCG]) and 25 desistent (20 ADHD, 5 LNCG) substance users from the Multimodal Treatment Study of Children with ADHD (MTA) adult follow-up (21.7–26.7 years old). Results: SU persisters perceived that SU positively affects emotional states and that the positive emotional effects outweigh the negative ones. No ADHD group effects emerged. Qualitative analysis identified perceptions that cannabis enhanced positive mood for both ADHD and LNCG SU persisters, and improved negative mood and ADHD for ADHD SU persisters. Conclusion: Perceptions about SU broadly and mood do not differentiate ADHD and non-ADHD SU persisters. However, perceptions that cannabis is therapeutic may inform ADHD-related risk for cannabis use.
50

Choe, Kyung-II. "An Emotion-Space Model of Multimodal Emotion Recognition." Advanced Science Letters 24, no. 1 (1 January 2018): 699–702. http://dx.doi.org/10.1166/asl.2018.11791.
