Journal articles on the topic 'Colour, recognition memory, emotion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Colour, recognition memory, emotion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Lewandowska, Anna, Krystyna Górna, Krystyna Jaracz, and Janusz Rybakowski. "Neuropsychological performance facilitates emotion recognition in bipolar disorder." Archives of Psychiatry and Psychotherapy 24, no. 4 (December 23, 2022): 68–77. http://dx.doi.org/10.12740/app/156208.

Full text
Abstract:
Aim of the study: In bipolar disorder (BD), evidence for both cognitive impairment and a deficit in emotion recognition has been found. Several investigations indicate that cognition and face processing can be interrelated. In this study, we assessed the correlations between cognitive functioning and emotion recognition (facial expression) in patients with BD during acute manic and depressive episodes as well as in remission, using a large battery of neurocognitive tests. Subject or material and methods: Twenty-four manic subjects, 21 with bipolar depression, and 22 euthymic subjects, matched for age, sex, and education, were included. Cognitive functions were assessed by the Wisconsin Card Sorting Test (WCST), Trail Making Test (TMT), Stroop Color-Word Interference Test (SCWT), California Verbal Learning Test (CVLT), Benton Visual Memory Test (BVRT), Rey-Osterrieth Complex Figure Test (ROFT), d2 test and Verbal Fluency Test (VFT). For emotion recognition, the Penn Emotion Recognition Test and Penn Emotion Discrimination Test were employed. Results: In mania, performance on selected measures of the WCST, TMT, SCWT, CVLT, ROFT, d2 test, and VFT showed 19 positive correlations with better recognition of happiness. In depression, performance on these tests showed 20 correlations with finer recognition of sadness. In remission, performance showed 18 correlations with greater identification of sadness (10 replicated those obtained in depression). Discussion: Better emotion recognition in manic patients concerns mostly happiness, while in depression and remission it concerns mainly sadness. Conclusions: Better neuropsychological performance can facilitate emotion recognition. We hypothesize that the identification of sadness could be considered a biological marker of mood disorders.
APA, Harvard, Vancouver, ISO, and other styles
2

Sutton, Tina M., and Jeanette Altarriba. "Emotion words in the mental lexicon." Emotion words in the monolingual and bilingual lexicon 3, no. 1 (April 7, 2008): 29–46. http://dx.doi.org/10.1075/ml.3.1.04sut.

Full text
Abstract:
The representation of emotion words in memory is a relatively new area of research within the cognitive domain. In the present paper, these words will be examined with the use of the Stroop paradigm. In the past, this paradigm has been used to investigate a wide variety of word types, including color words and color-related words. Only a few studies have examined emotion words. The current study investigates a particular set of emotion words that were either congruent or incongruent with the color they were presented in (e.g., ENVY in green ink or red ink), much like standard Stroop stimuli (RED in red ink or green ink). The results of Experiment 1 revealed that emotion stimuli can be studied in the same manner as color words and color-related words, such as fire. When the congruent and incongruent items were presented together, within the same block in Experiment 2, the color items and color-related emotion items still produced a Stroop interference effect, but the color-related emotionally neutral items did not. The results of Experiment 2 suggest that evaluative information (i.e., negative valence) is automatically accessed regardless of the task at hand. The current study speaks to the need to include negative valence as an important factor in models of word recognition.
APA, Harvard, Vancouver, ISO, and other styles
3

Walter, Martin, Liz Stuart, and Roman Borisyuk. "The Representation of Neural Data Using Visualization." Information Visualization 3, no. 4 (June 10, 2004): 245–56. http://dx.doi.org/10.1057/palgrave.ivs.9500071.

Full text
Abstract:
Currently, the focus of research within Information Visualization is steering towards genomic data visualization due to the level of activity that the Human Genome Project has generated. However, the Human Brain Project, renowned within Neuroinformatics, is equally challenging and exciting. Its main aim is to increase current understanding of brain function such as memory, learning, attention, emotions and consciousness. It is understood that this task will require the ‘integration of information from the level of the gene to the level of behaviour’. The work presented in this paper focuses on the visualization of neural data. More specifically, the data being analysed is multi-dimensional spike train data. Traditional methods, such as the ‘raster plot’ and the ‘cross-correlogram’, are still useful but they do not scale up for larger assemblies of neurons. In this paper, a new innovative method called the Tunnel is defined. Its design is based on the principles of Information Visualization: overview the data, zoom and filter the data, data details on demand. The features of this visualization environment are described. This includes data filtering, navigation and a ‘flat map’ overview facility. Additionally, a ‘coincidence overlay map’ is presented. This map washes the Tunnel with colour, which encodes the coincidence of spikes.
APA, Harvard, Vancouver, ISO, and other styles
4

Chew, Esyin, and Xin Ni Chua. "Robotic Chinese language tutor: personalising progress assessment and feedback or taking over your job?" On the Horizon 28, no. 3 (July 6, 2020): 113–24. http://dx.doi.org/10.1108/oth-04-2020-0015.

Full text
Abstract:
Purpose: The shortage of Chinese language teachers has been identified as a pressing issue globally. This paper aims to respond to this need by investigating and designing a learning innovation with the autonomous programmable robot NAO. Design/methodology/approach: By thoughtfully embedding the NAO robot into the teaching of basic Chinese, this research presents a qualitative inquiry case study of artificial intelligence design principles and learning engagement with rule-based reasoning and progress test design. Findings: This state-of-the-art robot uses its emotion recognition and automated body language (LED eyes with various colours) to demonstrate Chinese words, to increase learners’ understanding and enhance their memory of the words learned. The responses indicate that the novel learning experience is more fun and interesting; engagement along the axes of novelty, interactivity, motivation and interest is thus enhanced. Research limitations/implications: It is recognised that the number of research participants was small, but the qualitative findings demonstrate key issues and recommendations that may inspire future empirical research. Practical implications: Today, robotics is a rapidly growing field and has received significant attention in education. Humanoid robots are now increasingly used in fields such as education, hospitality, entertainment and health care. Educational robots are anticipated to serve as teaching assistants. Originality/value: The learning engagement paradigm has shifted from manual engagement to personal response systems or mixed reality on mobile platforms, and now to the humanoid robot; four recommended principles and future work for designing a humanoid robot as a language tutor are discussed. The educational robot model can be changed to a newer robot such as the CANBOT U05E.
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Yawen. "The Colour-Emotion Association." Journal of Education, Humanities and Social Sciences 5 (November 23, 2022): 272–77. http://dx.doi.org/10.54097/ehss.v5i.2912.

Full text
Abstract:
It has been suggested that there might be an association between colour and emotion. Most previous research in this field did not investigate the topic from different perspectives (e.g., interpersonal, subjective). Therefore, this paper reviews several recent studies on the colour-emotion association to demonstrate how their results can specify and deepen understanding of it. Studies on the subjective feeling of colour found that the effects of colour stimuli are determined not only by hue, but by a combination of effects from the three dimensions of colour: hue, lightness, and saturation. Other studies explored the relationship between colour and expressed emotion through facial colour, analysing the association in social interaction. They identified the effects of facial colour on emotion interpretation, the recognition of facial emotions, and emoticons (emoji). Additionally, they compared the effects of facial colour with those of background colour. Finally, some studies attempted to identify the mechanisms of colour-emotion associations. The mapping between the representational dimensions of colour and emotion revealed colour temperature as a mediator, with cultural and personal differences as secondary associations. Machine learning classifiers also quantified the influence of cultural differences on this relationship. It was suggested that different cultures can share common colour-emotion associations to some extent, alongside associations specific to each culture. Future studies could advance their research designs by controlling colour stimuli in the three dimensions, applying different methods to assess emotional responses, and constructing experimental settings closer to real life. This paper can provide some guidance for future research to examine colour-emotion associations more systematically. It can also offer suggestions for the design of emotion-related curricula at school.
APA, Harvard, Vancouver, ISO, and other styles
6

Lai, Helang, Keke Wu, and Lingli Li. "Multimodal emotion recognition with hierarchical memory networks." Intelligent Data Analysis 25, no. 4 (July 9, 2021): 1031–45. http://dx.doi.org/10.3233/ida-205183.

Full text
Abstract:
Emotion recognition in conversations is crucial as there is an urgent need to improve the overall experience of human-computer interactions. A promising improvement in this field is to develop a model that can effectively extract adequate context for a test utterance. We introduce a novel model, termed hierarchical memory networks (HMN), to address the issues of recognizing utterance-level emotions. HMN divides the contexts into different aspects and employs different step lengths to represent the weights of these aspects. To model the self-dependencies, HMN uses independent local memory networks to model these aspects. Further, to capture the interpersonal dependencies, HMN employs global memory networks to integrate the local outputs into global storages. Such storages can generate contextual summaries and help to find the emotionally dependent utterance that is most relevant to the test utterance. With an attention-based multi-hop scheme, these storages are then merged with the test utterance using an addition operation over the iterations. Experiments on the IEMOCAP dataset show that our model outperforms the compared methods in accuracy.
APA, Harvard, Vancouver, ISO, and other styles
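The global-memory step described in this abstract (soft attention over stored utterance summaries, merged with the test utterance by addition over several hops) can be illustrated with a minimal NumPy sketch. The function names and dimensions below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_hop_memory_read(query, memory, hops=3):
    """Soft-attention reads over a bank of stored utterance vectors.

    Each hop attends over the memory, forms a contextual summary, and
    merges it with the query by addition, mirroring the abstract's
    description of HMN's global storages (details assumed).
    """
    q = query.copy()
    for _ in range(hops):
        scores = memory @ q         # relevance of each stored utterance
        weights = softmax(scores)   # attention distribution over memory
        summary = weights @ memory  # contextual summary of the memory
        q = q + summary             # addition-based merge with the query
    return q

# Toy usage: five stored utterance vectors of dimension eight.
rng = np.random.default_rng(0)
memory = rng.normal(size=(5, 8))
query = rng.normal(size=8)
print(multi_hop_memory_read(query, memory).shape)  # (8,)
```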
7

Voyer, Daniel, Danielle Dempsey, and Jennifer A. Harding. "Response procedure, memory, and dichotic emotion recognition." Brain and Cognition 85 (March 2014): 180–90. http://dx.doi.org/10.1016/j.bandc.2013.12.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ghoshal, Abhishek, Aditya Aspat, and Elton Lemos. "OpenCV Image Processing for AI Pet Robot." International Journal of Applied Sciences and Smart Technologies 03, no. 01 (June 21, 2021): 65–82. http://dx.doi.org/10.24071/ijasst.v3i1.2765.

Full text
Abstract:
The Artificial Intelligence (AI) Pet Robot is a culmination of multiple fields of computer science. This paper showcases the capabilities of our robot. Most of the functionalities stem from image processing made available through OpenCV. The functions of the robot discussed in this paper are face tracking, emotion recognition and a colour-based follow routine. Face tracking allows the robot to keep the face of the user constantly in the frame to allow capturing of facial data. Using this data, emotion recognition achieved an accuracy of 66% on the FER-2013 dataset. The colour-based follow routine enables the robot to follow the user as they walk based on the presence of a specific colour.
APA, Harvard, Vancouver, ISO, and other styles
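A colour-based follow routine of the kind this abstract mentions is commonly built from OpenCV's HSV masking and image moments; a minimal sketch follows. The HSV range, camera index and steering thresholds are assumptions, since the paper's actual values are not given here.

```python
import cv2
import numpy as np

# Assumed HSV range for the tracked colour (a red marker here).
LOWER = np.array([0, 120, 70])
UPPER = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)  # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)  # keep only the target colour
    m = cv2.moments(mask)
    if m["m00"] > 0:  # a colour blob was found
        cx = int(m["m10"] / m["m00"])  # blob centroid, x coordinate
        width = frame.shape[1]
        # Steer so the blob stays centred in the frame.
        if cx < width // 3:
            command = "turn_left"
        elif cx > 2 * width // 3:
            command = "turn_right"
        else:
            command = "forward"
        print(command)  # a real robot would send this to its motors
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```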
9

Reppa, Irene, Kate E. Williams, W. James Greville, and Jo Saunders. "The relative contribution of shape and colour to object memory." Memory & Cognition 48, no. 8 (June 15, 2020): 1504–21. http://dx.doi.org/10.3758/s13421-020-01058-w.

Full text
Abstract:
The current studies examined the relative contribution of shape and colour in object representations in memory. A great deal of evidence points to the significance of shape in object recognition, with the role of colour being instrumental under certain circumstances. A key but yet unanswered question concerns the contribution of colour relative to shape in mediating retrieval of object representations from memory. Two experiments (N=80) used a new method to probe episodic memory for objects and revealed the relative contribution of colour and shape in recognition memory. Participants viewed pictures of objects from different categories, presented one at a time. During a practice phase, participants performed yes/no recognition with some of the studied objects and their distractors. Unpractised objects shared shape only (Rp–Shape), colour only (Rp–Colour), shape and colour (Rp–Both), or neither shape nor colour (Rp–Neither), with the practised objects. Interference effects in memory between practised and unpractised items were revealed in the forgetting of related unpractised items – retrieval-induced forgetting. Retrieval-induced forgetting was consistently significant for Rp–Shape and Rp–Colour objects. These findings provide converging evidence that colour is an automatically encoded object property, and present new evidence that both shape and colour act simultaneously and effectively to drive retrieval of objects from long-term memory.
APA, Harvard, Vancouver, ISO, and other styles
10

Dardagani, A., E. Dandi, S. Tsotsi, M. Nazou, A. Lagoudis, and V. P. Bozikas. "The relationship of emotion recognition with neuropsychological performance in patients with first episode psychosis." European Psychiatry 41, S1 (April 2017): S190. http://dx.doi.org/10.1016/j.eurpsy.2017.01.2118.

Full text
Abstract:
The relationship between neuropsychological dysfunction and emotion perception has been frequently noted in various studies. Attention, for example, has been found to play an important role in emotion processing and recognition. Not many studies, though, have examined this relationship in first psychotic episode patients. The aim of the present study was to explore the nature of the relation between performance in cognitive tests and a test that measures emotion perception. In a sample of 46 first psychotic episode patients (22 male), we administered a comprehensive battery of non-verbal neuropsychological tests and an emotion recognition test. The cognitive domains of attention, memory, working memory, visuospatial ability and executive function were examined, using specific tests of the Cambridge Neuropsychological Test Automated Battery (CANTAB). The emotion recognition assessment comprised a new test that includes 35 coloured pictures of individuals expressing six basic emotions (happiness, sadness, anger, disgust, surprise, fear) and a neutral emotion. We used partial correlation, controlling for the effect of age, and found a statistically significant relationship between emotion recognition and overall cognitive performance. More specifically, attention, visual memory and visuospatial ability positively correlated with emotion recognition. In regard to specific cognitive domains, attention positively correlated with anger and fear, whereas visual memory correlated with happiness and fear. In conclusion, it seems that the role of underlying visual processes in emotion perception has to be further examined and evaluated in this group of patients. Disclosure of interest: The authors have not supplied their declaration of competing interest.
APA, Harvard, Vancouver, ISO, and other styles
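The partial-correlation analysis this abstract reports (controlling for age) can be outlined by residualizing both variables on the covariate and correlating the residuals. A generic sketch with simulated stand-in data, not the study's data:

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covar):
    """Pearson correlation between x and y after regressing out covar."""
    # Residualize x and y on the covariate with simple linear fits.
    rx = x - np.polyval(np.polyfit(covar, x, 1), covar)
    ry = y - np.polyval(np.polyfit(covar, y, 1), covar)
    return stats.pearsonr(rx, ry)

# Simulated scores standing in for attention, emotion-recognition
# accuracy, and age (the covariate controlled for in the study).
rng = np.random.default_rng(1)
age = rng.uniform(18, 60, 46)
attention = 0.5 * age + rng.normal(size=46)
emotion = 0.3 * attention + rng.normal(size=46)
r, p = partial_corr(attention, emotion, age)
print(f"partial r = {r:.3f}, p = {p:.3f}")
```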
11

Deng, Ruchen, and Aitao Lu. "Sleep Modulates Emotional Effect on False Memory." Psychology in Russia: State of the Art 15, no. 1 (2022): 154–78. http://dx.doi.org/10.11621/pir.2022.0110.

Full text
Abstract:
Background. Although sleep and emotion are both important factors affecting false memory, there is a lack of empirical research on their interaction effect on false memory. Moreover, how the effect of emotion on false memory varies between presenting emotional content and eliciting an emotional state should be investigated further. Objective. To examine how sleep and varying emotional context influence false memories. We predicted that sleep and emotion would interactively affect false memory when participants are presented with negative words in a learning session (Experiment 1) or when their emotional state is induced before a learning session (Experiment 2). Design. We used the Deese-Roediger-McDermott (DRM) task. Emotional words were used to elicit emotion during learning in Experiment 1, and video clips were used to induce a particular mood state before learning in Experiment 2. Participants were divided into a “sleep group” and a “wake group” and completed an initial learning session either in the evening or in the morning, respectively. After the learning session, participants in the sleep group slept at night as usual and completed a recognition test in the morning, while participants in the wake group stayed awake during the daytime and completed their recognition test in the evening. All participants completed the recognition test after the same period of time. Results. In Experiment 1, the wake group falsely recognized more negative critical lure words than neutral ones, but no such difference existed in the sleep group, suggesting that sleep modulated the emotional effect on false memory. In Experiment 2, participants in either a positive or negative mood state showed more false recognition than those in a neutral state. There was no such difference in the wake group. We conclude that sleep and emotion interactively affect false memory.
APA, Harvard, Vancouver, ISO, and other styles
12

Srinivasan, Narayanan, and Rashmi Gupta. "Emotion-attention interactions in recognition memory for distractor faces." Emotion 10, no. 2 (2010): 207–15. http://dx.doi.org/10.1037/a0018487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Li, Liqin. "Emotion Analysis Method of Teaching Evaluation Texts Based on Deep Learning in Big Data Environment." Computational Intelligence and Neuroscience 2022 (May 9, 2022): 1–8. http://dx.doi.org/10.1155/2022/9909209.

Full text
Abstract:
Accurate emotion analysis of teaching evaluation texts can help teachers effectively improve the quality of education and teaching. In order to improve the precision and accuracy of emotion analysis, this paper proposes an emotion recognition and analysis method based on a deep learning model. First, the LTP tool is used to process the teaching evaluation text data set effectively, improving the completeness and reliability of the data. An emotion analysis model is then constructed on a bidirectional long short-term memory (BiLSTM) network to enhance the model's long-term memory ability, so that it learns emotional feature information more fully. On top of this model, an attention interaction mechanism module is introduced to attend to the important information in the attribute sequence, mine deeper emotional feature information, and further ensure the accuracy of emotion recognition for teaching evaluation texts. Experimental simulation results show that the accuracy and precision of emotion recognition with the proposed method are 0.9123 and 0.8214, respectively, which can meet the needs of accurate emotion analysis of complex teaching evaluation texts.
APA, Harvard, Vancouver, ISO, and other styles
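The backbone this abstract describes (a BiLSTM encoder over the evaluation text with a classification head) can be sketched in Keras as below. The vocabulary size, sequence length and layer widths are assumptions, and plain average pooling stands in for the paper's attention interaction module.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20000  # assumed vocabulary size (not given in the abstract)
MAX_LEN = 128       # assumed length of a tokenized evaluation text

model = tf.keras.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    # The BiLSTM reads each evaluation text in both directions.
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    # Average pooling stands in for the attention interaction module,
    # which in the paper weights the important attribute tokens.
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),  # positive vs negative
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```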
14

Cai, Linqin, Yaxin Hu, Jiangong Dong, and Sitong Zhou. "Audio-Textual Emotion Recognition Based on Improved Neural Networks." Mathematical Problems in Engineering 2019 (December 31, 2019): 1–9. http://dx.doi.org/10.1155/2019/2593036.

Full text
Abstract:
With the rapid development of social media, single-modal emotion recognition can hardly satisfy the demands of current emotion recognition systems. Aiming to optimize the performance of the emotion recognition system, a multimodal emotion recognition model using speech and text was proposed in this paper. Considering the complementarity between different modes, a CNN (convolutional neural network) and LSTM (long short-term memory) were combined in a form of binary channels to learn acoustic emotion features; meanwhile, an effective Bi-LSTM (bidirectional long short-term memory) network was used to capture the textual features. Furthermore, we applied a deep neural network to learn and classify the fusion features. The final emotional state was determined by the output of both the speech and text emotion analysis. Finally, multimodal fusion experiments were carried out to validate the proposed model on the IEMOCAP database. In comparison with the single modal, the overall recognition accuracy for text increased by 6.70%, and that of speech emotion recognition rose by 13.85%. Experimental results show that the recognition accuracy of our multimodal model is higher than that of the single modal and outperforms other published multimodal models on the test datasets.
APA, Harvard, Vancouver, ISO, and other styles
15

Lawrence, Louise, and Deborah Abdel Nabi. "The Compilation and Validation of a Collection of Emotional Expression Images Communicated by Synthetic and Human Faces." International Journal of Synthetic Emotions 4, no. 2 (July 2013): 34–62. http://dx.doi.org/10.4018/ijse.2013070104.

Full text
Abstract:
The BARTA (Bolton Affect Recognition Tri-Stimulus Approach) is a unique database comprising over 400 colour images of the universally recognised basic emotional expressions and is the first compilation to include three different classes of validated face stimuli: emoticon, computer-generated cartoon and photographs of human faces. The validated tri-stimulus collection (all images received ≥70% inter-rater (child and adult) consensus) has been developed to promote pioneering research into the differential effects of synthetic emotion representation on atypical emotion perception, processing and recognition in autism spectrum disorders (ASD) and, given the recent evidence for an ASD synthetic-face processing advantage (Rosset et al., 2008), provides a means of investigating the benefits associated with the recruitment of synthetic face images in ASD emotion recognition training contexts.
APA, Harvard, Vancouver, ISO, and other styles
16

Jiao, Wenxiang, Michael Lyu, and Irwin King. "Real-Time Emotion Recognition via Attention Gated Hierarchical Memory Network." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8002–9. http://dx.doi.org/10.1609/aaai.v34i05.6309.

Full text
Abstract:
Real-time emotion recognition (RTER) in conversations is significant for developing emotionally intelligent chatting machines. Without the future context in RTER, it becomes critical to build the memory bank carefully for capturing historical context and summarize the memories appropriately to retrieve relevant information. We propose an Attention Gated Hierarchical Memory Network (AGHMN) to address the problems of prior work: (1) Commonly used convolutional neural networks (CNNs) for utterance feature extraction are less compatible in the memory modules; (2) Unidirectional gated recurrent units (GRUs) only allow each historical utterance to have context before it, preventing information propagation in the opposite direction; (3) The Soft Attention for summarizing loses the positional and ordering information of memories, regardless of how the memory bank is built. Particularly, we propose a Hierarchical Memory Network (HMN) with a bidirectional GRU (BiGRU) as the utterance reader and a BiGRU fusion layer for the interaction between historical utterances. For memory summarizing, we propose an Attention GRU (AGRU) where we utilize the attention weights to update the internal state of GRU. We further promote the AGRU to a bidirectional variant (BiAGRU) to balance the contextual information from recent memories and that from distant memories. We conduct experiments on two emotion conversation datasets with extensive analysis, demonstrating the efficacy of our AGHMN models.
APA, Harvard, Vancouver, ISO, and other styles
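The core mechanism named in this abstract, an Attention GRU whose internal state update is driven by the attention weight, can be sketched as a recurrent cell in which the attention weight takes the place of the learned update gate. The cell below is a hedged reading of that idea, not the authors' code; its gating layout is an assumption.

```python
import torch
import torch.nn as nn

class AGRUCell(nn.Module):
    """GRU-style cell where an external attention weight replaces the
    update gate, so strongly attended memories move the state more."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.reset = nn.Linear(input_size + hidden_size, hidden_size)
        self.cand = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h, attn):
        # attn: attention weight in [0, 1] for this memory slot.
        r = torch.sigmoid(self.reset(torch.cat([x, h], dim=-1)))
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=-1)))
        return (1.0 - attn) * h + attn * h_tilde  # attention-gated update

# Toy run over five memory slots with given attention weights.
cell = AGRUCell(16, 32)
h = torch.zeros(1, 32)
weights = torch.tensor([0.1, 0.3, 0.8, 0.2, 0.6])
for x, a in zip(torch.randn(5, 1, 16), weights):
    h = cell(x, h, a)
print(h.shape)  # torch.Size([1, 32])
```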
17

Bozikas, V. P., S. Tsotsi, A. Dardagani, E. Dandi, E. I. Nazlidou, and G. Garyfallos. "No effect of cognitive performance on post-intervention improvement in emotion recognition." European Psychiatry 41, S1 (April 2017): S190. http://dx.doi.org/10.1016/j.eurpsy.2017.01.2119.

Full text
Abstract:
Deficits in emotion perception in patients with a first episode of psychosis have been reported by many researchers. Until now, training programs have focused mainly on patients with schizophrenia and not on first psychotic episode (FEP) patients. We used a new intervention for facial affect recognition in a group of 35 FEP patients (26 male). The emotion recognition intervention included coloured pictures of individuals expressing six basic emotions (happiness, sadness, anger, disgust, surprise, fear) and a neutral emotion. The patients were trained to detect changes in facial features, according to the emotion displayed. A comprehensive battery of neuropsychological tests was also administered, measuring attention, memory, working memory, visuospatial ability and executive function using specific tests of the Cambridge Neuropsychological Test Automated Battery (CANTAB). We explored whether cognitive performance can explain the difference noted between the original assessment of emotion recognition and the post-intervention assessment. According to our data, overall cognitive performance did not correlate with the post-intervention change in emotion recognition. Specific cognitive domains did not correlate with this change, either. Given these results, no significant correlation between neuropsychological performance and post-intervention improvement in emotion recognition was noted. This finding may suggest that interventions for emotion recognition target specific processes that underlie emotion perception, and that their effect can be independent of general cognitive function. Disclosure of interest: The authors have not supplied their declaration of competing interest.
APA, Harvard, Vancouver, ISO, and other styles
18

Windmann, Sabine, and Marta Kutas. "Electrophysiological Correlates of Emotion-Induced Recognition Bias." Journal of Cognitive Neuroscience 13, no. 5 (July 1, 2001): 577–92. http://dx.doi.org/10.1162/089892901750363172.

Full text
Abstract:
The question of how emotions influence recognition memory is of interest not only within basic cognitive neuroscience but from clinical and forensic perspectives as well. Emotional stimuli can induce a “recognition bias” such that individuals are more likely to respond “old” to a negative item than to an emotionally neutral item, whether the item is actually old or new. We investigated this bias using event-related brain potential (ERP) measures by comparing the processing of words given “old” responses with accurate recognition of old/new differences. For correctly recognized items, the ERP difference between old items (hits) and new items (correct rejections, CR) was largely unaffected by emotional valence. That is, regardless of emotional valence, the ERP associated with hits was characterized by a widespread positivity between 300 and 700 msec relative to that for CRs. By contrast, the analysis of ERPs to old and new items that were judged “old” (hits and false alarms [FAs], respectively) revealed a differential effect of valence by 300 msec: Neutral items showed a large old/new difference over prefrontal sites, whereas negative items did not. These results are the first clear demonstration of response bias effects on ERPs linked to recognition memory. They are consistent with the idea that frontal cortex areas may be responsible for relaxing the retrieval criterion for negative stimuli so as to ensure that emotional events are not as easily “missed” or forgotten as neutral events.
APA, Harvard, Vancouver, ISO, and other styles
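In signal-detection terms, the “old” bias described here is a more liberal response criterion for negative items, separable from sensitivity. The sketch below computes both measures from hit and false-alarm rates; the rates are illustrative, not the study's data.

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and response criterion (c)."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f
    criterion = -0.5 * (z_h + z_f)  # lower c = more liberal "old" bias
    return d_prime, criterion

# Illustrative rates: negative items attract more "old" responses to
# both old and new items than neutral items do.
print(sdt_measures(0.80, 0.20))  # neutral:  d' = 1.68, c = 0.00
print(sdt_measures(0.85, 0.30))  # negative: similar d', lower c
```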
19

Rocca, Paola, Filomena Castagna, Tullia Mongini, Cristiana Montemagni, Roberta Rasetti, Giuseppe Rocca, and Filippo Bogetto. "Exploring the role of face processing in facial emotion recognition in schizophrenia." Acta Neuropsychiatrica 21, no. 6 (December 2009): 292–300. http://dx.doi.org/10.1111/j.1601-5215.2009.00421.x.

Full text
Abstract:
Objective: Impairment in emotion perception represents a fundamental feature of schizophrenia with important consequences for social functioning. A fundamental unresolved issue is the relationship between emotion perception and face perception. The aim of the present study was to examine whether facial identity recognition (Identity Discrimination) predicts facial emotion recognition in the context of the other factors known to contribute to emotion perception, such as cognitive functions and symptoms. Methods: We enrolled 58 stable schizophrenic out-patients and 47 healthy subjects. Facial identity recognition and emotion perception were assessed with the Comprehensive Affect Testing System. Different multiple regression models with backward elimination were performed in order to discover the relation of each significant variable with emotion perception. Results: In a regression including the six significant variables (age, positive symptomatology, Identity Discrimination, attentive functions, verbal memory-learning, executive functions) versus emotion processing, only attentive functions (standardised β = 0.264, p = 0.038) and Identity Discrimination (standardised β = 0.279, p = 0.029) reached a significant level. Two partial regressions were performed including five variables, one excluding attentive functions and the other excluding Identity Discrimination. When we excluded attentive functions, the only significant variable was Identity Discrimination (standardised β = 0.278, p = 0.032). When we excluded Identity Discrimination, both verbal memory-learning (standardised β = 0.261, p = 0.042) and executive functions (standardised β = 0.253, p = 0.048) were significant. Conclusions: Our results emphasised the role of face perception and attentional abilities in affect perception in schizophrenia. We additionally found a role of verbal memory-learning and executive functions in emotion perception. The relationship between these variables and emotion processing could have implications for cognitive rehabilitation.
APA, Harvard, Vancouver, ISO, and other styles
20

Mendolia, Marilyn. "Facial Identity Memory Is Enhanced When Sender’s Expression Is Congruent to Perceiver’s Experienced Emotion." Psychological Reports 121, no. 5 (November 24, 2017): 892–908. http://dx.doi.org/10.1177/0033294117741655.

Full text
Abstract:
The role of the social context in facial identity recognition and expression recall was investigated by manipulating the sender’s emotional expression and the perceiver’s experienced emotion during encoding. A mixed-design with one manipulated between-subjects factor (perceiver’s experienced emotion) and two within-subjects factors (change in experienced emotion and sender’s emotional expression) was used. Senders’ positive and negative expressions were implicitly encoded while perceivers experienced their baseline emotion and then either a positive or a negative emotion. Facial identity recognition was then tested using senders’ neutral expressions. Memory for senders previously seen expressing positive or negative emotion was facilitated if the perceiver initially encoded the expression while experiencing a positive or a negative emotion, respectively. Furthermore, perceivers were confident of their decisions. This research provides a more detailed understanding of the social context by exploring how the sender–perceiver interaction affects the memory for the sender.
APA, Harvard, Vancouver, ISO, and other styles
21

Kirsh, Steven J. "Quality of attachment and recognition memory for emotion-laden drawings." Infant Behavior and Development 19 (April 1996): 541. http://dx.doi.org/10.1016/s0163-6383(96)90595-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Oliva, A., and P. G. Schyns. "Diagnostic Colours Influence Speeded Scene Recognition." Perception 25, no. 1_suppl (August 1996): 114. http://dx.doi.org/10.1068/v96l1007.

Full text
Abstract:
A critical aspect of early visual processes is to extract shape data for matching against memory representations for recognition. Many theories of recognition assume that this is achieved by luminance information. However, psychophysical studies have revealed that colour is being used by low-level visual modules such as motion, stereopsis, texture, and 2-D shapes. Should colour really be discarded from theories of recognition? Here we present two studies which seek to throw light on the role of chromatic information for the recognition of real scene pictures. We used three versions of scene pictures (gray levels, normally coloured and abnormally coloured) coming from two broad categories. In the first category, colour was diagnostic of the category (eg beach, forest, and valley). In the second category colour was not diagnostic (eg city, road, and room). Results revealed that chromatic information is being registered and facilitates recognition even after a 30 ms exposure to the scene stimuli. However, influences of colour on speeded categorisations were only observed with the colour-diagnostic categories. No influence of colour was observed with the other categories. A similar pattern of results was observed with 120 ms exposure. However, there was an interference of the wrong colour on recognition in colour-diagnostic categories. In sum, colour, when it is diagnostic of the category, influences speeded scene recognition.
APA, Harvard, Vancouver, ISO, and other styles
23

Vernon, David, and Toby J. Lloyd-Jones. "The Role of Colour in Implicit and Explicit Memory Performance." Quarterly Journal of Experimental Psychology Section A 56, no. 5 (July 2003): 779–802. http://dx.doi.org/10.1080/02724980244000684.

Full text
Abstract:
We present two experiments that examine the effects of colour transformation between study and test (from black and white to colour and vice versa, or from incorrectly coloured to correctly coloured and vice versa) on implicit and explicit measures of memory for diagnostically coloured natural objects (e.g., a yellow banana). For naming and coloured-object decision (i.e., deciding whether an object is correctly coloured), there were shorter response times to correctly coloured objects than to black-and-white and incorrectly coloured objects. Repetition priming was equivalent for the different stimulus types. Colour transformation did not influence priming of picture naming, but for coloured-object decision, priming was evident only for objects remaining the same from study to test. This was the case for both naming and coloured-object decision as study tasks. When participants were asked to consciously recognize objects that they had named or made coloured-object decisions to previously, whilst ignoring their colour, colour transformation reduced recognition efficiency. We discuss these results in terms of the flexibility of object representations that mediate priming and recognition.
APA, Harvard, Vancouver, ISO, and other styles
24

Hamilton-Fletcher, Giles, Thomas D. Wright, and Jamie Ward. "Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device." Multisensory Research 29, no. 4-5 (2016): 337–63. http://dx.doi.org/10.1163/22134808-00002519.

Full text
Abstract:
Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour and when they do encode colour, have assigned sounds to colour in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users who had their device either coded in line with or opposite to sound–colour correspondences. Improved colour memory and reduced colour-errors were made by users who had the correspondence-based mappings. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that also featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.
APA, Harvard, Vancouver, ISO, and other styles
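A correspondence-based colour-to-sound mapping of the kind this abstract evaluates typically ties luminance to pitch (lighter is higher) and saturation to loudness. The sketch below is purely illustrative; the Creole's actual algorithm is not specified in this abstract, and the constants are assumptions.

```python
import colorsys

def colour_to_sound(r, g, b):
    """Map an RGB colour to (frequency_hz, loudness) via simple
    cross-modal correspondences: lighter -> higher pitch,
    more saturated -> louder."""
    h, lightness, sat = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    frequency = 220.0 * (2 ** (lightness * 3))  # 220 Hz up to ~1760 Hz
    loudness = 0.2 + 0.8 * sat                  # normalised amplitude
    return frequency, loudness

print(colour_to_sound(255, 255, 0))  # saturated yellow: mid pitch, loud
print(colour_to_sound(40, 40, 40))   # dark grey: low and quiet
```

Reversing such a mapping, as in the study's "opposite" condition, is then a one-line change, which is what makes the design a clean test of whether the correspondences themselves carry the benefit.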
25

Algarni, Mona, Faisal Saeed, Tawfik Al-Hadhrami, Fahad Ghabban, and Mohammed Al-Sarem. "Deep Learning-Based Approach for Emotion Recognition Using Electroencephalography (EEG) Signals Using Bi-Directional Long Short-Term Memory (Bi-LSTM)." Sensors 22, no. 8 (April 13, 2022): 2976. http://dx.doi.org/10.3390/s22082976.

Full text
Abstract:
Emotions are an essential part of daily human communication. The emotional states and dynamics of the brain can be linked by electroencephalography (EEG) signals that can be used by the Brain-Computer Interface (BCI) to provide better human-machine interactions. Several studies have been conducted in the field of emotion recognition. However, one of the most important issues facing the emotion recognition process, using EEG signals, is the accuracy of recognition. This paper proposes a deep learning-based approach for emotion recognition through EEG signals, which includes data selection, feature extraction, feature selection and classification phases. This research serves the medical field, as the emotion recognition model helps diagnose psychological and behavioral disorders. The research contributes to improving the performance of the emotion recognition model to obtain more accurate results, which, in turn, aids in making the correct medical decisions. The standard pre-processed Database for Emotion Analysis using Physiological Signals (DEAP) was used in this work. Statistical features, wavelet features, and the Hurst exponent were extracted from the dataset. The feature selection task was implemented through the Binary Gray Wolf Optimizer. At the classification stage, a stacked bi-directional Long Short-Term Memory (Bi-LSTM) model was used to recognize human emotions. In this paper, emotions are classified into three main classes: arousal, valence and liking. The proposed approach achieved high accuracy compared to the methods used in past studies, with an average accuracy of 99.45%, 96.87% and 99.68% for valence, arousal, and liking, respectively, which is considered a high performance for the emotion recognition model.
APA, Harvard, Vancouver, ISO, and other styles
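The feature stage named in this abstract (statistical features, wavelet features, Hurst exponent) can be sketched for a single EEG channel as below. The wavelet family, decomposition level and exact feature list are assumptions, and the Hurst-exponent and Binary Gray Wolf Optimizer steps are omitted for brevity.

```python
import numpy as np
import pywt  # PyWavelets

def eeg_features(signal, wavelet="db4", level=4):
    """Statistical and wavelet-energy features for one EEG channel."""
    feats = [signal.mean(), signal.std(),
             np.ptp(signal), np.median(signal)]          # statistical
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # wavelet bands
    feats += [float(np.sum(c ** 2)) for c in coeffs]     # band energies
    return np.array(feats)

# Toy one-second segment at 128 Hz, DEAP's preprocessed sampling rate.
rng = np.random.default_rng(2)
segment = rng.normal(size=128)
print(eeg_features(segment).shape)  # (9,): 4 statistical + 5 energies
```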
26

Fotios, S., H. Castleton, C. Cheal, and B. Yang. "Investigating the chromatic contribution to recognition of facial expression." Lighting Research & Technology 49, no. 2 (August 3, 2016): 243–58. http://dx.doi.org/10.1177/1477153515616166.

Full text
Abstract:
A pedestrian may judge the intentions of another person by their facial expression amongst other cues and aiding such evaluation after dark is one aim of road lighting. Previous studies give mixed conclusions as to whether lamp spectrum affects the ability to make such judgements. An experiment was carried out using conditions better resembling those of pedestrian behaviour, using as targets photographs of actors portraying facial expressions corresponding to the six universally recognised emotions. Responses were sought using a forced-choice procedure, under two types of lamp and with colour and grey scale photographs. Neither lamp type nor image colour was suggested to have a significant effect on the frequency with which the emotion conveyed by facial expression was correctly identified.
APA, Harvard, Vancouver, ISO, and other styles
27

Vargas Fuentes, Nicole A., Judith F. Kroll, and Julio Torres. "What Heritage Bilinguals Tell Us about the Language of Emotion." Languages 7, no. 2 (June 6, 2022): 144. http://dx.doi.org/10.3390/languages7020144.

Full text
Abstract:
Variation in the language experience of bilinguals has consequences for cognitive and affective processes. In the current study, we examined how bilingual experience influences the relationship between language and emotion in English among a group of Spanish–English heritage bilinguals on an emotion–memory task. Participants rated the emotionality of English taboo, negative and neutral words and then completed an unexpected recognition test. To account for language experience, data were gathered on the participants’ language dominance and proficiency. Results showed emotion–memory effects in the Spanish–English heritage bilinguals’ English (the societal language): taboo words were recognized significantly better than neutral words, while the emotionality of negative words carried over and significantly affected the recognition of preceding neutral words. Furthermore, such effects were modulated by language dominance scores with more pronounced emotion–memory effects in more English-dominant bilinguals. The findings contribute to a growing body of evidence showing that emotions are not necessarily restricted to the first acquired home language. Critically, for heritage speakers, there is often a shift in language dominance from the home language to the societal language. The present study demonstrates that the effects of emotion on memory are seen in the acquired societal language.
APA, Harvard, Vancouver, ISO, and other styles
28

Megreya, Ahmed M., and Robert D. Latzman. "Individual differences in emotion regulation and face recognition." PLOS ONE 15, no. 12 (December 10, 2020): e0243209. http://dx.doi.org/10.1371/journal.pone.0243209.

Full text
Abstract:
Face recognition ability is highly variable among neurologically intact populations. Across three experiments, this study examined for the first time associations between individual differences in a range of adaptive versus maladaptive emotion regulation strategies and face recognition. Using an immediate face-memory paradigm, in which observers had to identify a self-paced learned unfamiliar face from a 10-face target-present/target-absent line-up, Experiment 1 (N = 42) found high levels of expressive suppression (the ongoing efforts to inhibit emotion-expressive behaviors), but not cognitive reappraisal (the cognitive re-evaluation of emotional events to change their emotional consequences), were associated with a lower level of overall face-memory accuracy and higher rates of misidentifications and false positives. Experiment 2 (N = 53) replicated these findings using a range of face-matching tasks, where observers were asked to match pairs of same-race or different-race face images taken on the same day or during different times. Once again, high levels of expressive suppression were associated with a lower level of overall face-matching performance and higher rates of false positives, but cognitive reappraisal did not correlate with any face-matching measure. Finally, Experiment 3 (N = 52) revealed that the higher use of maladaptive cognitive emotion regulation strategies, especially catastrophizing, was associated with lower levels of overall face-matching performance and higher rates of false positives. All told, the current research provides new evidence concerning the important associations between emotion and cognition.
APA, Harvard, Vancouver, ISO, and other styles
29

Gottlob, Lawrence R., and Jonathan M. Golding. "Directed forgetting in the list method affects recognition memory for source." Quarterly Journal of Experimental Psychology 60, no. 11 (October 2007): 1524–39. http://dx.doi.org/10.1080/17470210601100506.

Full text
Abstract:
The effects of list-method directed forgetting on recognition memory were explored. In Experiment 1 (N = 40), observers were instructed to remember words and their type-cases; in Experiment 2 (N = 80), the instruction was to remember words and their colours. Two lists of 10 words were presented; after the first list, half of the observers (forget) were instructed to forget that list, and the other half (remember) were not given the forget instruction. Recognition of items (words) as well as source (encoding list + case/colour) was measured for forget and remember observers. The forget instruction affected case/colour memory more consistently than item and list memory; a multinomial analysis indicated that source information was affected by the forget instructions. The results indicated that recognition of source information may be a more sensitive indicator of forgetting than recognition of items.
APA, Harvard, Vancouver, ISO, and other styles
30

Lee, JeeEun, and Sun K. Yoo. "Recognition of Negative Emotion Using Long Short-Term Memory with Bio-Signal Feature Compression." Sensors 20, no. 2 (January 20, 2020): 573. http://dx.doi.org/10.3390/s20020573.

Full text
Abstract:
Negative emotion is one reason why stress causes negative feedback. Therefore, many studies are being done to recognize negative emotions. However, emotion is difficult to classify because it is subjective and difficult to quantify. Moreover, emotion changes over time and is affected by mood. Therefore, we measured electrocardiogram (ECG), skin temperature (ST), and galvanic skin response (GSR) to detect objective indicators. We also compressed the features associated with emotion using a stacked auto-encoder (SAE). Finally, the compressed features and time information were used in training through long short-term memory (LSTM). As a result, the proposed LSTM used with the feature compression model showed the highest accuracy (99.4%) for recognizing negative emotions. The results of the suggested model were 11.3% higher than with a neural network (NN) and 5.6% higher than with SAE.
APA, Harvard, Vancouver, ISO, and other styles
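The two-stage design in this abstract, a stacked auto-encoder compressing per-window bio-signal features followed by an LSTM over the compressed sequence, can be sketched in Keras as below. All layer sizes and window counts are assumptions, since the abstract gives no dimensions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_FEATURES = 32  # assumed size of the raw ECG/ST/GSR feature vector
COMPRESSED = 8   # assumed bottleneck width of the stacked auto-encoder
TIMESTEPS = 30   # assumed number of windows per classified segment

# Stacked auto-encoder: learns to compress each feature window.
inp = layers.Input(shape=(N_FEATURES,))
code = layers.Dense(16, activation="relu")(inp)
code = layers.Dense(COMPRESSED, activation="relu")(code)
out = layers.Dense(16, activation="relu")(code)
out = layers.Dense(N_FEATURES)(out)
autoencoder = Model(inp, out)
encoder = Model(inp, code)  # reused to compress inputs for the LSTM
autoencoder.compile(optimizer="adam", loss="mse")

# LSTM over the compressed sequence: negative vs non-negative emotion.
classifier = tf.keras.Sequential([
    layers.Input(shape=(TIMESTEPS, COMPRESSED)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])
```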
31

Anderson, Lisa, and Arthur P. Shimamura. "Influences of Emotion on Context Memory while Viewing Film Clips." American Journal of Psychology 118, no. 3 (October 1, 2005): 323–37. http://dx.doi.org/10.2307/30039069.

Full text
Abstract:
Participants listened to words while viewing film clips (audio off). Film clips were classified as neutral, positively valenced, negatively valenced, and arousing. Memory was assessed in three ways: recall of film content, recall of words, and context recognition. In the context recognition test, participants were presented a word and determined which film clip was showing when the word was originally presented. In two experiments, context memory performance was disrupted when words were presented during negatively valenced film clips, whereas it was enhanced when words were presented during arousing film clips. Free recall of words presented during the negatively valenced films was also disrupted. These findings suggest multiple influences of emotion on memory performance.
APA, Harvard, Vancouver, ISO, and other styles
32

Mammarella, Nicola, Beth Fairfield, Alberto Di Domenico, and Peter Walla. "Does emotion modulate the efficacy of spaced learning in recognition memory?" Cogent Psychology 1, no. 1 (November 24, 2014): 986922. http://dx.doi.org/10.1080/23311908.2014.986922.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Meng, Xianxin, Ling Zhang, Wenwen Liu, XinSheng Ding, Hong Li, Jiemin Yang, and JiaJin Yuan. "The impact of emotion intensity on recognition memory: Valence polarity matters." International Journal of Psychophysiology 116 (June 2017): 16–25. http://dx.doi.org/10.1016/j.ijpsycho.2017.01.014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Xiang, Zhenglong, Xialei Dong, Yuanxiang Li, Fei Yu, Xing Xu, and Hongrun Wu. "Bimodal Emotion Recognition Model for Minnan Songs." Information 11, no. 3 (March 4, 2020): 145. http://dx.doi.org/10.3390/info11030145.

Full text
Abstract:
Most of the existing research papers study the emotion recognition of Minnan songs from the perspectives of music analysis theory and music appreciation. However, these investigations do not explore any possibility of carrying out an automatic emotion recognition of Minnan songs. In this paper, we propose a model that consists of four main modules to classify the emotion of Minnan songs by using the bimodal data—song lyrics and audio. In the proposed model, an attention-based Long Short-Term Memory (LSTM) neural network is applied to extract lyrical features, and a Convolutional Neural Network (CNN) is used to extract the audio features from the spectrum. Then, two kinds of extracted features are concatenated by multimodal compact bilinear pooling, and finally, the concatenated features are input to the classifying module to determine the song emotion. We designed three experiment groups to investigate the classifying performance of combinations of the four main parts, the comparisons of proposed model with the current approaches and the influence of a few key parameters on the performance of emotion recognition. The results show that the proposed model exhibits better performance over all other experimental groups. The accuracy, precision and recall of the proposed model exceed 0.80 in a combination of appropriate parameters.
APA, Harvard, Vancouver, ISO, and other styles
35

Johannsdottir, Kamilla Run, Halldora Bjorg Rafnsdottir, Andri Haukstein Oddsson, and Haukur Freyr Gylfason. "The Impact of Emotion and Sex on Fabrication and False Memory Formation." International Journal of Environmental Research and Public Health 18, no. 22 (November 20, 2021): 12185. http://dx.doi.org/10.3390/ijerph182212185.

Full text
Abstract:
The aim of the present study was to examine how negative emotion and sex affect self-generated errors in a fabrication set-up and the later false recognition of those errors. In total, 120 university students volunteered to take part in the study. Participants were assigned at random to two equal-sized groups (N = 60) depending on the type of event they received (negative emotional or neutral). We expected that fabrication and false recognition would be enhanced for the emotional event compared to the neutral one. We further hypothesized that both the willingness to fabricate and later false recognition would be enhanced for women compared with men. The results partly confirmed the hypotheses. They showed that emotional valence (negative) affects both the willingness to fabricate about events that never took place, and the recognition of the fabrication as true at a later point. Women and men were equally likely to fabricate, but women were more likely to recognize their fabrication, particularly for the emotional event. The results are discussed in the context of prior work.
APA, Harvard, Vancouver, ISO, and other styles
36

Jo, A.-Hyeon, and Keun-Chang Kwak. "Speech Emotion Recognition Based on Two-Stream Deep Learning Model Using Korean Audio Information." Applied Sciences 13, no. 4 (February 8, 2023): 2167. http://dx.doi.org/10.3390/app13042167.

Full text
Abstract:
Identifying a person’s emotions is an important element in communication. In particular, voice is a means of communication for easily and naturally expressing emotions. Speech emotion recognition technology is a crucial component of human–computer interaction (HCI), in which accurately identifying emotions is key. Therefore, this study presents a two-stream emotion recognition model based on bidirectional long short-term memory (Bi-LSTM) and convolutional neural networks (CNNs) using a Korean speech emotion database, and the performance is comparatively analyzed. The data used in the experiment were obtained from the Korean speech emotion recognition database built by Chosun University. Two deep learning models, Bi-LSTM and YAMNet, a CNN-based transfer learning model, were connected in a two-stream architecture to design the emotion recognition model. Various speech feature extraction methods and deep learning models were compared in terms of performance. The speech emotion recognition performance of Bi-LSTM and YAMNet was 90.38% and 94.91%, respectively, whereas the two-stream model reached 96%, an improvement of between 1.09% and 5.62% over the single models.
APA, Harvard, Vancouver, ISO, and other styles
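YAMNet, the CNN-based transfer-learning stream named above, is available on TensorFlow Hub; it takes 16 kHz mono audio as float32 in [-1, 1] and returns per-frame scores, 1024-dimensional embeddings, and a log-mel spectrogram. The sketch below extracts embeddings and attaches an assumed dense head; the paper's actual head, Korean dataset, and Bi-LSTM stream fusion are not reproduced here.

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

# One second of placeholder audio standing in for a Korean utterance.
waveform = np.random.uniform(-1, 1, 16000).astype(np.float32)
scores, embeddings, spectrogram = yamnet(waveform)
print(embeddings.shape)  # (frames, 1024): features for an emotion head

# Assumed classification head for the transfer-learning stream.
head = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # assumed 4 emotions
])
```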
37

Bruzzone, Matteo, Elia Gatto, Tyrone Lucon Xiccato, Luisa Dalla Valle, Camilla Maria Fontana, Giacomo Meneghetti, and Angelo Bisazza. "Measuring recognition memory in zebrafish larvae: issues and limitations." PeerJ 8 (April 27, 2020): e8890. http://dx.doi.org/10.7717/peerj.8890.

Full text
Abstract:
Recognition memory is the capacity to recognize previously encountered objects, events or places. This ability is crucial for many fitness-related activities, and it appears very early in the development of several species. In the laboratory, recognition memory is most often investigated using the novel object recognition test (NORt), which exploits the tendency of most vertebrates to explore novel objects over familiar ones. Although the use of larval zebrafish is rapidly increasing in research on the brain, cognition and neuropathologies, it is unknown whether larvae possess recognition memory and whether the NORt can be used to assess it. Here, we tested a NOR procedure in zebrafish larvae of 7, 14 and 21 days post-fertilization (dpf) to investigate when recognition memory first appears during ontogeny. Overall, we found that larvae explored a novel stimulus longer than a familiar one. This response was fully significant only for 14-dpf larvae. A control experiment showed that larvae become neophobic at 21 dpf, which may explain the poor performance at this age. The preference for the novel stimulus was also affected by the type of stimulus, being significant with three-dimensional objects varying in shape and two-dimensional geometrical figures but not with objects differing in colour. Further analyses suggest that the lack of effect for objects with different colours was due to a spontaneous preference for one colour. This study highlights the presence of recognition memory in zebrafish larvae but also reveals non-cognitive factors that may hinder the application of NORt paradigms in the early developmental stages of zebrafish.
APA, Harvard, Vancouver, ISO, and other styles
39

Yang, Lingzhi, Xiaojuan Ban, Michele Mukeshimana, and Zhe Chen. "Multimodal Emotion Recognition Using the Symmetric S-ELM-LUPI Paradigm." Symmetry 11, no. 4 (April 4, 2019): 487. http://dx.doi.org/10.3390/sym11040487.

Full text
Abstract:
Multimodal emotion recognition has become one of the new research fields of human–machine interaction. This paper focuses on feature extraction and data fusion in audio-visual emotion recognition, aiming to improve recognition performance and save storage space. A semi-serial symmetric fusion method is proposed to fuse the audio and visual patterns for emotion recognition, adopting Symmetric S-ELM-LUPI (Symmetric Sparse Extreme Learning Machine – Learning Using Privileged Information). The method inherits the speed and generalization ability of the Extreme Learning Machine, combines this with the faster recognition afforded by Learning Using Privileged Information, and retains the memory savings of the Sparse Extreme Learning Machine. It improves on traditional methods that learn from examples and targets only by introducing a teacher-like role: additional information that enhances recognition at test time without complicating the learning process. One modality is regarded as the standard information source while the other serves as the privileged information source, and each modality can in turn be treated as privileged information for the other. The proposed method is tested on publicly available datasets and yields promising results: for hundreds of samples, the execution time is under one-hundredth of a second; the sparsity of the method keeps memory usage economical; and compared with other machine learning methods, it is more accurate and stable, suggesting it is well suited to multimodal emotion recognition.
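The closed-form training that makes Extreme Learning Machines fast is easy to show in isolation. Below is a minimal NumPy sketch, assuming one-hot labels and a tanh hidden layer; it implements only the base ELM (fixed random hidden weights, ridge-regression readout), not the paper's symmetric sparse LUPI extension.

import numpy as np

def train_elm(X, y_onehot, n_hidden=256, reg=1e-3, seed=0):
    """Minimal ELM: random hidden layer, closed-form ridge readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    # Output weights via regularised least squares (the only "training")
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y_onehot)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy usage with assumed fused audio-visual feature vectors
X = np.random.randn(200, 64)
y = np.eye(4)[np.random.randint(0, 4, 200)]       # 4 emotion classes, one-hot
W, b, beta = train_elm(X, y)
pred = predict_elm(X, W, b, beta)
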
APA, Harvard, Vancouver, ISO, and other styles
40

Yuan, Qinying. "A Classroom Emotion Recognition Model Based on a Convolutional Neural Network Speech Emotion Algorithm." Occupational Therapy International 2022 (July 7, 2022): 1–12. http://dx.doi.org/10.1155/2022/9563877.

Full text
Abstract:
In this paper, we construct a convolutional neural network speech emotion model, analyze classrooms identified by the network with a given degree of confidence together with the schools in the dataset, use big data to find the characteristics and patterns of how teachers manage classroom emotion, and design a classroom emotion recognition model based on these characteristics. The existing neural network is adapted and improved, and the dataset is preprocessed before training. The network combines a convolutional neural network (CNN) with a recurrent neural network (RNN), exploiting the CNN's strength in feature extraction and the RNN's memory capability in sequence models; this combination performs well on both object labeling and speech recognition. To extract emotion features from whole-sentence speech, we propose an attention-based recognition algorithm for variable-length speech, designing a spatiotemporal attention module for the speech features and a channel attention module for the CNN, so that the attention mechanism reduces the contribution of unimportant spatiotemporal data and convolutional channel features in subsequent recognition. The weight of core data and features is thereby increased, improving the model's recognition accuracy.
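The attention idea here, down-weighting uninformative frames so core features dominate, can be sketched as learned pooling over recurrent outputs. A minimal PyTorch sketch follows; all module names and shapes are assumptions for illustration, not the paper's modules.

import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    """Learned attention over time steps: informative frames contribute
    more to the utterance-level emotion embedding."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h, mask):              # h: (B, T, D), mask: (B, T) bool
        w = self.score(h).squeeze(-1)        # unnormalised frame scores
        w = w.masked_fill(~mask, -1e9)       # ignore padded frames
        a = torch.softmax(w, dim=1)          # attention weights over time
        return (a.unsqueeze(-1) * h).sum(1)  # weighted sum -> (B, D)

# Usage: pool the outputs of any recurrent front end over variable lengths
gru = nn.GRU(40, 128, batch_first=True, bidirectional=True)
x = torch.randn(4, 120, 40)                               # padded batch
lengths = torch.tensor([120, 90, 60, 110])
mask = torch.arange(120).unsqueeze(0) < lengths.unsqueeze(1)
h, _ = gru(x)
utterance = AttentivePooling(256)(h, mask)                # (4, 256)
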
APA, Harvard, Vancouver, ISO, and other styles
41

Bowen, Holly J., Eric C. Fields, and Elizabeth A. Kensinger. "Prior Emotional Context Modulates Early Event-Related Potentials to Neutral Retrieval Cues." Journal of Cognitive Neuroscience 31, no. 11 (November 2019): 1755–67. http://dx.doi.org/10.1162/jocn_a_01451.

Full text
Abstract:
Memory retrieval is thought to involve the reactivation of encoding processes. Previous fMRI work has indicated that reactivation processes are modulated by the residual effects of the prior emotional encoding context; different spatial patterns emerge during retrieval of memories previously associated with negative compared with positive or neutral context. Other research suggests that event-related potential (ERP) indicators of memory retrieval processes, like the left parietal old/new effect, can also be modulated by emotional context, but the spatial distribution and temporal dynamics of these effects are unclear. In the current study, we examined “when” emotion affects recognition memory and whether that timing reflects processes that come before and may guide successful retrieval or postrecollection recovery of emotional episodic detail. While recording EEG, participants ( n = 25) viewed neutral words paired with negative, positive, or neutral pictures during encoding, followed by a recognition test for the words. Analyses focused on ERPs during the recognition test. In line with prior ERP studies, we found an early positive-going parietally distributed effect starting around 200 msec after retrieval-cue onset. This effect emerged for words that had been encoded in an emotional compared with neutral context (no valence differences), before the general old/new effect. This emotion-dependent effect occurred in an early time window, suggesting that emotion-related reactivation is a precursor to successful recognition.
APA, Harvard, Vancouver, ISO, and other styles
42

Pietschnig, J., R. Aigner-Wöber, N. Reischenböck, I. Kryspin-Exner, D. Moser, S. Klug, E. Auff, P. Dal-Bianco, G. Pusswald, and J. Lehrner. "Facial emotion recognition in patients with subjective cognitive decline and mild cognitive impairment." International Psychogeriatrics 28, no. 3 (September 17, 2015): 477–85. http://dx.doi.org/10.1017/s1041610215001520.

Full text
Abstract:
Background: Deficits in facial emotion recognition (FER) have been shown to substantially impair several aspects of everyday life of affected individuals (e.g. social functioning). Presently, we aim at assessing differences in emotion recognition performance in three patient groups suffering from mild forms of cognitive impairment compared to healthy controls. Methods: Performance on a concise emotion recognition test battery (VERT-K) of 68 patients with subjective cognitive decline (SCD), 44 non-amnestic (non-aMCI), and 25 amnestic patients (aMCI) with mild cognitive impairment (MCI) was compared with an age-equivalent sample of 138 healthy controls, all of whom were recruited within the framework of the Vienna Conversion to Dementia Study. Additionally, patients and controls underwent individual assessment using a comprehensive neuropsychological test battery examining attention, executive functioning, language, and memory (NTBV), the Beck Depression Inventory (BDI), and a measure of premorbid IQ (WST). Results: Type of diagnosis showed a significant effect on emotion recognition performance, indicating progressively deteriorating results as severity of diagnosis increased. Between-groups effect sizes were substantial, showing non-trivial effects in all comparisons (Cohen's ds from −0.30 to −0.83) except for SCD versus controls. Moreover, emotion recognition performance was higher in women and positively associated with premorbid IQ. Conclusions: Our findings indicate substantial effects of progressive neurological damage on emotion recognition in patients. Importantly, emotion recognition deficits were observable in non-amnestic patients as well, thus conceivably suggesting associations between decreased recognition performance and global cognitive decline. Premorbid IQ appears to act as a protective factor, yielding lesser deficits in patients with higher IQs.
APA, Harvard, Vancouver, ISO, and other styles
43

Qu, Zhihao, and Xiujuan Zheng. "EEG Emotion Recognition Based on Temporal and Spatial Features of Sensitive signals." Journal of Electrical and Computer Engineering 2022 (December 5, 2022): 1–8. http://dx.doi.org/10.1155/2022/5130184.

Full text
Abstract:
Currently, there are problems in electroencephalogram (EEG) emotion recognition research, such as reliance on a single feature and redundant signals, which make it difficult to achieve high recognition accuracy using only a few channel signals. To solve these problems, the authors propose an emotion recognition method based on a long short-term memory (LSTM) neural network and a convolutional neural network (CNN), combined with neurophysiological knowledge. First, the authors selected emotion-sensitive signals based on the physiological function of EEG regions and the active scenario of the band signals, and then merged temporal and spatial features extracted from the sensitive signals by the LSTM and CNN. Finally, the merged features were classified to recognize emotion. The method was evaluated on the DEAP dataset; the average accuracies in the valence and arousal dimensions were 92.87% and 93.23%, respectively. Compared with similar studies, it not only improved recognition accuracy but also greatly reduced the number of channels required, demonstrating the superiority of the method.
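A hedged PyTorch sketch of the temporal-plus-spatial fusion the abstract outlines: an LSTM summarizes the channel time series while a small CNN summarizes a topographic channel map, and the two feature vectors are concatenated for classification. Channel counts, map size, and the concatenation fusion are assumptions, not the authors' exact design.

import torch
import torch.nn as nn

class EEGFusionNet(nn.Module):
    """Illustrative fusion of temporal (LSTM) and spatial (CNN) EEG features
    for binary valence or arousal classification."""
    def __init__(self, n_channels=8, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, 64, batch_first=True)   # temporal stream
        self.cnn = nn.Sequential(                                # spatial stream
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64 + 16, n_classes)

    def forward(self, series, chan_map):
        # series: (B, T, n_channels); chan_map: (B, 1, H, W) topographic map
        _, (h, _) = self.lstm(series)
        fused = torch.cat([h[-1], self.cnn(chan_map)], dim=1)
        return self.head(fused)

net = EEGFusionNet()
out = net(torch.randn(2, 512, 8), torch.randn(2, 1, 9, 9))  # (2, 2) logits
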
APA, Harvard, Vancouver, ISO, and other styles
44

Neumann, Roland, Juliane Völker, Zsuzsanna Hajba, and Sigrid Seiler. "Lesions and reduced working memory impair emotion recognition in self and others." Cognition and Emotion 35, no. 8 (October 8, 2021): 1527–42. http://dx.doi.org/10.1080/02699931.2021.1983521.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Wirkner, Janine, Carlos Ventura-Bort, Lars Schwabe, Alfons O. Hamm, and Mathias Weymar. "Chronic stress and emotion: Differential effects on attentional processing and recognition memory." Psychoneuroendocrinology 107 (September 2019): 93–97. http://dx.doi.org/10.1016/j.psyneuen.2019.05.008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Pascual, Alexander M., Erick C. Valverde, Jeong-in Kim, Jin-Woo Jeong, Yuchul Jung, Sang-Ho Kim, and Wansu Lim. "Light-FER: A Lightweight Facial Emotion Recognition System on Edge Devices." Sensors 22, no. 23 (December 6, 2022): 9524. http://dx.doi.org/10.3390/s22239524.

Full text
Abstract:
Facial emotion recognition (FER) systems are imperative in recent advanced artificial intelligence (AI) applications to realize better human–computer interactions. Most deep learning-based FER systems have issues with low accuracy and high resource requirements, especially when deployed on edge devices with limited computing resources and memory. To tackle these problems, a lightweight FER system, called Light-FER, is proposed in this paper, which is obtained from the Xception model through model compression. First, pruning is performed during the network training to remove the less important connections within the architecture of Xception. Second, the model is quantized to half-precision format, which could significantly reduce its memory consumption. Third, different deep learning compilers performing several advanced optimization techniques are benchmarked to further accelerate the inference speed of the FER system. Lastly, to experimentally demonstrate the objectives of the proposed system on edge devices, Light-FER is deployed on NVIDIA Jetson Nano.
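The first two compression steps, magnitude pruning and half-precision conversion, are standard and can be sketched with PyTorch's built-in utilities. The tiny stand-in network below is an assumption for brevity; the paper compresses Xception.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Tiny stand-in classifier (the paper compresses Xception)
model = nn.Sequential(
    nn.Conv2d(1, 16, 3), nn.ReLU(), nn.Flatten(),
    nn.Linear(16 * 46 * 46, 7),
)

# Step 1: magnitude pruning -- zero out the 50% smallest weights per layer
for m in model.modules():
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(m, name="weight", amount=0.5)
        prune.remove(m, "weight")            # bake the zeroed weights in

print(f"linear sparsity: {(model[3].weight == 0).float().mean().item():.0%}")

# Step 2: half-precision conversion halves the memory footprint; fp16
# inference is best run on the target accelerator (e.g. a Jetson GPU)
if torch.cuda.is_available():
    model = model.half().cuda()
    with torch.no_grad():
        logits = model(torch.randn(1, 1, 48, 48, device="cuda").half())
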
APA, Harvard, Vancouver, ISO, and other styles
47

Johansson, Mikael, Axel Mecklinger, and Anne-Cécile Treese. "Recognition Memory for Emotional and Neutral Faces: An Event-Related Potential Study." Journal of Cognitive Neuroscience 16, no. 10 (December 2004): 1840–53. http://dx.doi.org/10.1162/0898929042947883.

Full text
Abstract:
This study examined emotional influences on the hypothesized event-related potential (ERP) correlates of familiarity and recollection (Experiment 1) and the states of awareness (Experiment 2) accompanying recognition memory for faces differing in facial affect. Participants made gender judgments to positive, negative, and neutral faces at study and were in the test phase instructed to discriminate between studied and nonstudied faces. Whereas old–new discrimination was unaffected by facial expression, negative faces were recollected to a greater extent than both positive and neutral faces as reflected in the parietal ERP old–new effect and in the proportion of remember judgments. Moreover, emotion-specific modulations were observed in frontally recorded ERPs elicited by correctly rejected new faces that concurred with a more liberal response criterion for emotional as compared to neutral faces. Taken together, the results are consistent with the view that processes promoting recollection are facilitated for negative events and that emotion may affect recognition performance by influencing criterion setting mediated by the prefrontal cortex.
APA, Harvard, Vancouver, ISO, and other styles
48

Salian, Beenaa, Omkar Narvade, Rujuta Tambewagh, and Smita Bharne. "Speech Emotion Recognition using Time Distributed CNN and LSTM." ITM Web of Conferences 40 (2021): 03006. http://dx.doi.org/10.1051/itmconf/20214003006.

Full text
Abstract:
Speech has several distinguishing characteristic features, which have made it a state-of-the-art medium for extracting valuable information from audio samples. Our aim is to develop an emotion recognition system using these speech features that can accurately and efficiently recognize emotions through audio analysis. In this article, we employ a hybrid neural network comprising four blocks of time-distributed convolutional layers followed by a long short-term memory (LSTM) layer. The audio samples for the speech dataset are assembled from the RAVDESS, TESS and SAVEE audio datasets and are further augmented by injecting noise. Mel spectrograms are computed from the audio samples and used to train the neural network. We achieved a testing accuracy of about 89.26%.
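In PyTorch, the time-distributed pattern amounts to folding the chunk dimension into the batch before the CNN and unfolding it before the LSTM. A minimal sketch with assumed shapes follows; it is not the authors' exact four-block network.

import torch
import torch.nn as nn

class TimeDistributedCRNN(nn.Module):
    """Apply the same small CNN to each time chunk of a mel spectrogram,
    then model the chunk sequence with an LSTM (illustrative shapes)."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32 per chunk
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (B, T, 1, n_mels, frames)
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1))      # time-distributed: (B*T, 32)
        _, (h, _) = self.lstm(f.view(b, t, -1))
        return self.fc(h[-1])

model = TimeDistributedCRNN()
print(model(torch.randn(4, 5, 1, 64, 32)).shape)  # torch.Size([4, 8])
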
APA, Harvard, Vancouver, ISO, and other styles
49

Teixeira, Thomas, Éric Granger, and Alessandro Lameiras Koerich. "Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks." Applied Sciences 11, no. 24 (December 10, 2021): 11738. http://dx.doi.org/10.3390/app112411738.

Full text
Abstract:
Facial expressions are one of the most powerful ways to depict specific patterns in human behavior and describe the human emotional state. However, despite the impressive advances of affective computing over the last decade, automatic video-based systems for facial expression recognition still cannot correctly handle variations in facial expression among individuals as well as cross-cultural and demographic aspects. Indeed, recognizing facial expressions is a difficult task even for humans. This paper investigates the suitability of state-of-the-art deep learning architectures based on convolutional neural networks (CNNs) to deal with long video sequences captured in the wild for continuous emotion recognition. For such an aim, several 2D CNN models that were designed to model spatial information are extended to allow spatiotemporal representation learning from videos, considering a complex and multi-dimensional emotion space, where continuous values of valence and arousal must be predicted. We have developed and evaluated convolutional recurrent neural networks combining 2D CNNs with long short-term memory units, as well as inflated 3D CNN models, which are built by inflating the weights of a pre-trained 2D CNN during fine-tuning on application-specific videos. Experimental results on the challenging SEWA-DB dataset have shown that these architectures can effectively be fine-tuned to encode spatiotemporal information from successive raw pixel images and achieve state-of-the-art results on such a dataset.
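The convolutional recurrent variant, per-frame 2D CNN features fed to an LSTM that regresses continuous valence and arousal, can be sketched as follows. The backbone, feature sizes, and tanh output range are illustrative assumptions, not the paper's fine-tuned models.

import torch
import torch.nn as nn

class VideoValenceArousal(nn.Module):
    """Illustrative 2D-CNN + LSTM regressor: per-frame features are modeled
    over time and mapped to continuous valence/arousal values."""
    def __init__(self):
        super().__init__()
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 16 per frame
        )
        self.lstm = nn.LSTM(16, 32, batch_first=True)
        self.head = nn.Linear(32, 2)       # [valence, arousal]

    def forward(self, frames):             # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.frame_cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return torch.tanh(self.head(out))  # per-frame values in [-1, 1]

print(VideoValenceArousal()(torch.randn(2, 16, 3, 64, 64)).shape)  # (2, 16, 2)
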
APA, Harvard, Vancouver, ISO, and other styles
50

Zheng, Chunjun, Chunli Wang, and Ning Jia. "An Ensemble Model for Multi-Level Speech Emotion Recognition." Applied Sciences 10, no. 1 (December 26, 2019): 205. http://dx.doi.org/10.3390/app10010205.

Full text
Abstract:
Speech emotion recognition is a challenging and widely examined research topic in the field of speech processing. The accuracy of existing models in speech emotion recognition tasks is not high, and their generalization ability is not strong. Since the feature set and model design directly affect the accuracy of speech emotion recognition, research on features and models is important. Because emotional expression is often correlated with the global features, local features, and model design of speech, it is difficult to find a universal solution for effective speech emotion recognition. On this basis, the main purpose of this paper is to generate general emotion features in speech signals from different angles and to use an ensemble learning model for the emotion recognition task. The work comprises the following aspects: (1) Three expert roles for speech emotion recognition are designed. Expert 1 focuses on three-dimensional feature extraction of local signals; expert 2 focuses on extraction of comprehensive information in local data; and expert 3 emphasizes global features: acoustic feature descriptors (low-level descriptors, LLDs), high-level statistics functionals (HSFs), and local features and their timing relationships. A single- or multiple-level deep learning model matching each expert's characteristics is designed, drawing on convolutional neural networks (CNN), bi-directional long short-term memory (BLSTM), and gated recurrent units (GRU); a convolutional recurrent neural network (CRNN) combined with an attention mechanism is used for the internal training of the experts. (2) An ensemble learning model is designed so that each expert can play to its own advantages and evaluate speech emotions from a different focus. (3) Through experiments on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus, the performance of the individual experts and the ensemble learning model is compared and the validity of the proposed model is verified.
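The ensemble step itself can be as simple as a weighted soft vote over the experts' class probabilities. A minimal PyTorch sketch, with toy linear stand-ins for the three experts and assumed weights (the paper's experts are CNN, BLSTM, and GRU/CRNN models over different feature views):

import torch
import torch.nn as nn

def soft_vote(experts, inputs, weights):
    """Weighted soft-vote: average the experts' class probabilities.
    `experts` are trained models, `inputs` the matching feature view for
    each, `weights` e.g. validation accuracies normalised to sum to 1."""
    with torch.no_grad():
        probs = [w * torch.softmax(m(x), dim=1)
                 for m, x, w in zip(experts, inputs, weights)]
    return torch.stack(probs).sum(dim=0).argmax(dim=1)

# Toy stand-ins for the three experts, all fed the same 40-dim features here
e1, e2, e3 = (nn.Linear(40, 4) for _ in range(3))
x = torch.randn(8, 40)
labels = soft_vote([e1, e2, e3], [x, x, x], [0.4, 0.3, 0.3])
print(labels.shape)  # torch.Size([8])
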
APA, Harvard, Vancouver, ISO, and other styles