Journal articles on the topic 'Affective state recognition'

Consult the top 50 journal articles for your research on the topic 'Affective state recognition.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1. Murali Krishna, P., R. Pradeep Reddy, Veena Narayanan, S. Lalitha, and Deepa Gupta. "Affective state recognition using audio cues." Journal of Intelligent & Fuzzy Systems 36, no. 3 (March 26, 2019): 2147–54. http://dx.doi.org/10.3233/jifs-169926.

2. Blanco, M. J., F. Valle-Inclán, and J. Lamas. "Affective state dependence in a recognition task." Revista de Psicología Social 1, no. 1 (January 1986): 79–82. http://dx.doi.org/10.1080/02134748.1986.10821545.

3. Neethirajan, Suresh. "Affective State Recognition in Livestock—Artificial Intelligence Approaches." Animals 12, no. 6 (March 17, 2022): 759. http://dx.doi.org/10.3390/ani12060759.

Abstract:
Farm animals, numbering over 70 billion worldwide, are increasingly managed in large-scale, intensive farms. With both public awareness and scientific evidence growing that farm animals experience suffering, as well as affective states such as fear, frustration and distress, there is an urgent need to develop efficient and accurate methods for monitoring their welfare. At present, there are no scientifically validated ‘benchmarks’ for quantifying transient emotional (affective) states in farm animals, and no established measures of good welfare, only indicators of poor welfare, such as injury, pain and fear. Conventional approaches to monitoring livestock welfare are time-consuming, interrupt farming processes and involve subjective judgments. Biometric sensor data enabled by artificial intelligence is an emerging smart solution for unobtrusively monitoring livestock, but its potential for quantifying affective states and ground-breaking solutions in their application are yet to be realized. This review provides innovative methods for collecting big data on farm animal emotions, which can be used to train artificial intelligence models to classify, quantify and predict affective states in individual pigs and cows. Extending this to the group level, social network analysis can be applied to model emotional dynamics and contagion among animals. Finally, ‘digital twins’ of animals capable of simulating and predicting their affective states and behaviour in real time are a near-term possibility.
4. Chen, Zhimin, and David Whitney. "Tracking the affective state of unseen persons." Proceedings of the National Academy of Sciences 116, no. 15 (February 27, 2019): 7559–64. http://dx.doi.org/10.1073/pnas.1812250116.

Abstract:
Emotion recognition is an essential human ability critical for social functioning. It is widely assumed that identifying facial expression is the key to this, and models of emotion recognition have mainly focused on facial and bodily features in static, unnatural conditions. We developed a method called affective tracking to reveal and quantify the enormous contribution of visual context to affect (valence and arousal) perception. When characters’ faces and bodies were masked in silent videos, viewers inferred the affect of the invisible characters successfully and in high agreement based solely on visual context. We further show that the context is not only sufficient but also necessary to accurately perceive human affect over time, as it provides a substantial and unique contribution beyond the information available from face and body. Our method (which we have made publicly available) reveals that emotion recognition is, at its heart, an issue of context as much as it is about faces.
5. Al Qudah, Mustafa M. M., Ahmad S. A. Mohamed, and Syaheerah L. Lutfi. "Affective State Recognition Using Thermal-Based Imaging: A Survey." Computer Systems Science and Engineering 37, no. 1 (2021): 47–62. http://dx.doi.org/10.32604/csse.2021.015222.

6. Meng, Hongying, and Nadia Bianchi-Berthouze. "Affective State Level Recognition in Naturalistic Facial and Vocal Expressions." IEEE Transactions on Cybernetics 44, no. 3 (March 2014): 315–28. http://dx.doi.org/10.1109/tcyb.2013.2253768.

7. Ab. Aziz, Nor Azlina, Tawsif K., Sharifah Noor Masidayu Sayed Ismail, Muhammad Anas Hasnul, Kamarulzaman Ab. Aziz, Siti Zainab Ibrahim, Azlan Abd. Aziz, and J. Emerson Raja. "Asian Affective and Emotional State (A2ES) Dataset of ECG and PPG for Affective Computing Research." Algorithms 16, no. 3 (February 27, 2023): 130. http://dx.doi.org/10.3390/a16030130.

Abstract:
Affective computing focuses on instilling emotion awareness in machines. This area has attracted many researchers globally. However, the lack of an affective database based on physiological signals from the Asian continent has been reported. This is an important issue for ensuring inclusiveness and avoiding bias in this field. This paper introduces an emotion recognition database, the Asian Affective and Emotional State (A2ES) dataset, for affective computing research. The database comprises electrocardiogram (ECG) and photoplethysmography (PPG) recordings from 47 Asian participants of various ethnicities. The subjects were exposed to 25 carefully selected audio–visual stimuli to elicit specific targeted emotions. An analysis of the participants’ self-assessment and a list of the 25 stimuli utilised are also presented in this work. Emotion recognition systems are built using ECG and PPG data; five machine learning algorithms: support vector machine (SVM), k-nearest neighbour (KNN), naive Bayes (NB), decision tree (DT), and random forest (RF); and deep learning techniques. The performance of the systems built is presented and compared. The SVM was found to be the best learning algorithm for the ECG data, while RF was the best for the PPG data. The proposed database is available to other researchers.
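To make the comparison above concrete, here is a minimal sketch (not the A2ES authors' code) of pitting two of the named learners, SVM and random forest, against each other with scikit-learn; the feature dimensions and label set below are placeholder assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(47 * 25, 12))    # hypothetical: one 12-D feature vector per trial
y = rng.integers(0, 4, size=47 * 25)  # hypothetical four-class emotion labels

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
rf = RandomForestClassifier(n_estimators=200, random_state=0)

for name, clf in [("SVM", svm), ("RF", rf)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```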
8. Mavromoustakos Blom, Paris, Sander Bakkes, Chek Tan, Shimon Whiteson, Diederik Roijers, Roberto Valenti, and Theo Gevers. "Towards Personalised Gaming via Facial Expression Recognition." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 10, no. 1 (June 29, 2021): 30–36. http://dx.doi.org/10.1609/aiide.v10i1.12707.

Abstract:
In this paper we propose an approach for personalising the space in which a game is played (i.e., levels) dependent on classifications of the user's facial expression — to the end of tailoring the affective game experience to the individual user. Our approach is aimed at online game personalisation, i.e., the game experience is personalised during actual play of the game. A key insight of this paper is that game personalisation techniques can leverage novel computer vision-based techniques to unobtrusively infer player experiences automatically based on facial expression analysis. Specifically, to the end of tailoring the affective game experience to the individual user, in this paper we (1) leverage the proven InSight facial expression recognition SDK as a model of the user's affective state, and (2) employ this model for guiding the online game personalisation process. User studies that validate the game personalisation approach in the actual video game Infinite Mario Bros. reveal that it provides an effective basis for converging to an appropriate affective state for the individual human player.
9. Erle, Thorsten M., and Friederike Funk. "Visuospatial and Affective Perspective-Taking." Social Psychology 53, no. 5 (September 2022): 315–26. http://dx.doi.org/10.1027/1864-9335/a000504.

Abstract:
Perspective-taking is the ability to intuit another person’s mental state. Historically, cognitive and affective perspective-taking are distinguished from visuospatial perspective-taking because the content these processes operate on is too dissimilar. However, all three share functional similarities. Following recent research showing relations between cognitive and visuospatial perspective-taking, this article explores links between visuospatial and affective perspective-taking. Data of three preregistered experiments suggest that visuospatial perspective-taking does not improve emotion recognition speed and only slightly increases emotion recognition accuracy (Experiment 1), yet visuospatial perspective-taking increases the perceived intensity of emotional expressions (Experiment 2), as well as the emotional contagiousness of negative emotions (Experiment 3). The implications of these findings for content-based, cognitive, and functional taxonomies of perspective-taking and related processes are discussed.
10. Schmidt, Philip, Attila Reiss, Robert Dürichen, and Kristof Van Laerhoven. "Wearable-Based Affect Recognition—A Review." Sensors 19, no. 19 (September 20, 2019): 4079. http://dx.doi.org/10.3390/s19194079.

Abstract:
Affect recognition is an interdisciplinary research field bringing together researchers from natural and social sciences. Affect recognition research aims to detect the affective state of a person based on observables, with the goal of, for example, providing reasoning for the person’s decision making or supporting mental wellbeing (e.g., stress monitoring). Recently, besides approaches based on audio, visual or text information, solutions relying on wearable sensors as observables, recording mainly physiological and inertial parameters, have received increasing attention. Wearable systems provide an ideal platform for long-term affect recognition applications due to their rich functionality and form factor, while providing valuable insights during everyday life through integrated sensors. However, existing literature surveys lack a comprehensive overview of state-of-the-art research in wearable-based affect recognition. Therefore, the aim of this paper is to provide a broad overview and in-depth understanding of the theoretical background, methods and best practices of wearable affect and stress recognition. Following a summary of different psychological models, we detail the influence of affective states on the human physiology and the sensors commonly employed to measure physiological changes. Then, we outline lab protocols eliciting affective states and provide guidelines for ground truth generation in field studies. We also describe the standard data processing chain and review common approaches related to the preprocessing, feature extraction and classification steps. By providing a comprehensive summary of the state of the art and guidelines on various aspects, we would like to enable other researchers in the field to conduct and evaluate user studies and develop wearable systems.
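The "standard data processing chain" the review refers to (preprocessing, feature extraction, classification) can be sketched as follows; the sampling rate, filter cutoff, window length and feature choices are illustrative assumptions, not recommendations from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 32.0                                # assumed wearable sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)             # one minute of synthetic signal
raw = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

# (1) preprocessing: low-pass filter to suppress high-frequency sensor noise
b, a = butter(4, 1.0, btype="low", fs=fs)
clean = filtfilt(b, a, raw)

# (2) feature extraction: simple statistics over 10-second windows
win = int(10 * fs)
windows = clean[: clean.size // win * win].reshape(-1, win)
features = np.column_stack([windows.mean(axis=1),
                            windows.std(axis=1),
                            np.ptp(windows, axis=1)])
print(features.shape)  # (6, 3): six windows, three features each

# (3) classification: any standard classifier (e.g. sklearn's SVC)
# would be fitted on labelled feature windows at this point.
```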
11. Verma, Bindu, and Ayesha Choudhary. "Affective state recognition from hand gestures and facial expressions using Grassmann manifolds." Multimedia Tools and Applications 80, no. 9 (January 20, 2021): 14019–40. http://dx.doi.org/10.1007/s11042-020-10341-6.

12. Beyers, Christiaan. "Moral Subjectivity and Affective Deficit in the Transitional State." Social Analysis 59, no. 4 (December 1, 2015): 66–82. http://dx.doi.org/10.3167/sa.2015.590405.

Abstract:
In the context of transitional justice, how does the reinvented state come to be assumed as a social fact? South African land restitution interpellates victims of apartheid- and colonial-era forced removals as claimants, moral and legal subjects of a virtuous 'new' state. In the emotional narratives of loss and suffering called forth in land claim forms, the state is addressed as a subject capable of moral engagement. Claim forms also 'capture' affects related to the event of forced removals as an unstable political resource. However, within an ultimately legal and bureaucratic process, the desire for recognition is typically not reciprocated. Moreover, material settlements are indefinitely delayed due to political and institutional complications. The resulting disillusionment is counterweighed by persistent aspirations for state redress.
13. Sapiński, Tomasz, Dorota Kamińska, Adam Pelikant, and Gholamreza Anbarjafari. "Emotion Recognition from Skeletal Movements." Entropy 21, no. 7 (June 29, 2019): 646. http://dx.doi.org/10.3390/e21070646.

Abstract:
Automatic emotion recognition has become an important trend in many artificial intelligence (AI) based applications and has been widely explored in recent years. Most research in the area of automated emotion recognition is based on facial expressions or speech signals. Although the influence of the emotional state on body movements is undeniable, this source of expression is still underestimated in automatic analysis. In this paper, we propose a novel method to recognise seven basic emotional states—namely, happy, sad, surprise, fear, anger, disgust and neutral—utilising body movement. We analyse motion capture data under seven basic emotional states recorded by professional actors and actresses using a Microsoft Kinect v2 sensor. We propose a new representation of affective movements, based on sequences of body joints. The proposed algorithm creates a sequential model of affective movement based on low level features inferred from the spatial location and the orientation of joints within the tracked skeleton. In the experiments conducted in this work, different deep neural networks were employed and compared to recognise the emotional state of the acquired motion sequences. The results show the feasibility of automatic emotion recognition from sequences of body gestures, which can serve as an additional source of information in multimodal emotion recognition.
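A minimal sketch of the kind of low-level, joint-based descriptors the abstract mentions, computed from a Kinect-style skeleton sequence; the joint indices and feature choices below are hypothetical, not the authors' exact representation.

```python
import numpy as np

rng = np.random.default_rng(2)
seq = rng.normal(size=(120, 25, 3))        # 120 frames, 25 Kinect v2 joints, xyz

velocity = np.diff(seq, axis=0)            # per-joint frame-to-frame displacement
speed = np.linalg.norm(velocity, axis=2)   # (119, 25) scalar joint speeds

head, hand = 3, 11                         # hypothetical joint indices
head_hand = np.linalg.norm(seq[:, head] - seq[:, hand], axis=1)

# stack per-frame descriptors into the sequential representation that a
# recurrent or convolutional network could consume downstream
frame_features = np.column_stack([speed, head_hand[1:]])
print(frame_features.shape)                # (119, 26)
```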
14. Zhang, Yunxia, Xin Li, Changming Zhao, Wenyin Zheng, Manqing Wang, Yongqing Zhang, Hongjiang Ma, and Dongrui Gao. "Affective EEG-Based Person Identification Using Channel Attention Convolutional Neural Dense Connection Network." Security and Communication Networks 2021 (November 22, 2021): 1–10. http://dx.doi.org/10.1155/2021/7568460.

Abstract:
In the biometric recognition mode, the use of electroencephalogram (EEG) signals for biometric recognition has many advantages, such as resistance to counterfeiting and theft. Compared with traditional biometrics, EEG biometric recognition is safer and more concealed. Generally, EEG-based biometric recognition performs person identification (PI) through EEG signals collected during motor imagery and visually evoked tasks. The aim of this paper is to improve the performance of affective EEG-based PI using a channel attention convolutional neural dense connection network (CADCNN net) approach. The channel attention mechanism (CA) is used to handle the channel information from the EEG, while the convolutional neural dense connection network (DCNN net) extracts the unique biological characteristics information for PI. The proposed method is evaluated on the state-of-the-art affective dataset HEADIT. The results indicate that CADCNN net can perform PI from different affective states and reaches a mean correct recognition rate of up to 95–96%. This significantly outperformed a random forest (RF) and a multilayer perceptron (MLP). We compared our method with state-of-the-art EEG classifiers and models of EEG biometrics. The results show that the further extraction of the feature matrix is more robust than the direct use of the feature matrix. Moreover, the CADCNN net can effectively and efficiently capture discriminative traits, thus generalizing better over diverse human states.
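The paper's exact CADCNN architecture is not reproduced in this listing, so the following is only a squeeze-and-excitation style sketch of the channel attention idea it names, with assumed tensor shapes.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style re-weighting of EEG channels."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, EEG channels, time) -> per-channel weights in [0, 1]
        w = self.fc(x.mean(dim=2))   # squeeze the time axis
        return x * w.unsqueeze(2)    # re-weight the channels

x = torch.randn(8, 32, 256)          # 8 trials, 32 EEG channels, 256 samples
print(ChannelAttention(32)(x).shape) # torch.Size([8, 32, 256])
```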
15. Rani, Pramila, Nilanjan Sarkar, Craig A. Smith, and Leslie D. Kirby. "Anxiety detecting robotic system – towards implicit human-robot collaboration." Robotica 22, no. 1 (January 2004): 85–95. http://dx.doi.org/10.1017/s0263574703005319.

Abstract:
A novel affect-sensitive human-robot cooperative framework is presented in this paper. Peripheral physiological indices are measured through wearable biofeedback sensors to detect the affective state of the human. Affect recognition is performed through both quantitative and qualitative analyses. A subsumption control architecture sensitive to the affective state of the human is proposed for a mobile robot. Human-robot cooperation experiments are performed where the robot senses the affective state of the human and responds appropriately. The results presented here validate the proposed framework and demonstrate a new way of achieving implicit communication between a human and a robot.
16. Djara, Tahirou, Abdoul Matine Ousmane, and Antoine Vianou. "Emotional State Recognition Using Facial Expression, Voice, and Physiological Signal." International Journal of Robotics Applications and Technologies 6, no. 1 (January 2018): 1–20. http://dx.doi.org/10.4018/ijrat.2018010101.

Abstract:
Emotion recognition is an important aspect of affective computing, one of whose aims is the study and development of behavioral and emotional interaction between humans and machines. In this context, another important point concerns acquisition devices and signal processing tools which lead to an estimation of the emotional state of the user. This article presents a survey of concepts around emotion, multimodality in recognition, physiological activity and emotional induction, and methods and tools for acquisition and signal processing, with a focus on processing algorithms and their degree of reliability.
17. Brown, Terry M., Robert N. Golden, and Dwight L. Evans. "Organic Affective Psychosis Associated with the Routine Use of Non-prescription Cold Preparations." British Journal of Psychiatry 156, no. 4 (April 1990): 572–75. http://dx.doi.org/10.1192/bjp.156.4.572.

Abstract:
A patient experienced an organic affective psychosis on three separate occasions after taking recommended doses of non-prescription cold/sinus preparations. The possible underlying pharmacological mechanisms of this clinical reaction lend support to the cholinergic-adrenergic balance hypothesis of affective disorders. Recognition of this acute drug-induced state can lead to appropriate short-term pharmacotherapy and can prevent misdiagnosis of a major affective disorder or schizophrenia.
18. Yu, Dong Mei. "Affective-Cognitive Reward Model Based on Emotional Interactions." Applied Mechanics and Materials 333-335 (July 2013): 1357–60. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.1357.

Abstract:
In this paper, the author presents a new computational model in which both the results of emotional interactions from intrinsic emotion and recognition and the extrinsic environmental stimulation (another agent) play an important role in everyday life. We take six basic emotion states (happiness, surprise, anger, fear, sadness, disgust), and the agent updates its state depending on its current emotional and cognitive state and on meetings with another agent in a random environment. Finally, we design experiments to verify the effects of the affective-cognitive algorithm. The experimental results are in accordance with the emotion principles of human beings.
19. Al Qudah, Mustafa, Ahmad Mohamed, and Syaheerah Lutfi. "Analysis of Facial Occlusion Challenge in Thermal Images for Human Affective State Recognition." Sensors 23, no. 7 (March 27, 2023): 3513. http://dx.doi.org/10.3390/s23073513.

Abstract:
Several studies have been conducted using both visual and thermal facial images to identify human affective states. Despite the advantages of thermal facial images in recognizing spontaneous human affects, few studies have focused on facial occlusion challenges in thermal images, particularly eyeglasses and facial hair occlusion. As a result, three classification models are proposed in this paper to address the problem of thermal occlusion in facial images, with six basic spontaneous emotions being classified. The first proposed model in this paper is based on six main facial regions, including the forehead, tip of the nose, cheeks, mouth, and chin. The second model deconstructs the six main facial regions into multiple subregions to investigate the efficacy of subregions in recognizing the human affective state. The third proposed model in this paper uses selected facial subregions, free of eyeglasses and facial hair (beard, mustaches). Nine statistical features on apex and onset thermal images are implemented. Furthermore, four feature selection techniques with two classification algorithms are proposed for a further investigation. According to the comparative analysis presented in this paper, the results obtained from the three proposed modalities were promising and comparable to those of other studies.
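A rough sketch of the statistics-per-region plus feature-selection pipeline the abstract outlines; the regions are stand-in pixel blocks, and the nine statistics and the univariate selector are assumptions rather than the paper's exact choices.

```python
import numpy as np
from scipy import stats
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(6)
regions = rng.normal(size=(100, 6, 32, 32))  # 100 samples, 6 thermal face regions

def region_stats(r):
    """Nine simple statistics for one region of thermal pixels."""
    flat = r.reshape(-1)
    return [flat.mean(), flat.std(), flat.min(), flat.max(), np.median(flat),
            stats.skew(flat), stats.kurtosis(flat), np.ptp(flat),
            np.percentile(flat, 75)]

X = np.array([[s for reg in sample for s in region_stats(reg)]
              for sample in regions])
y = rng.integers(0, 6, size=100)             # six basic spontaneous emotions
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
print(X.shape, "->", X_sel.shape)            # (100, 54) -> (100, 20)
```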
20. Dvoynikova, Anastasia, Maxim Markitantov, Elena Ryumina, Mikhail Uzdiaev, Alena Velichko, Dmitry Ryumin, Elena Lyakso, and Alexey Karpov. "Analysis of infoware and software for human affective states recognition." Informatics and Automation 21, no. 6 (November 24, 2022): 1097–144. http://dx.doi.org/10.15622/ia.21.6.2.

Abstract:
The article presents an analytical review of research in the affective computing field. This research direction is a component of artificial intelligence, and it studies methods, algorithms and systems for analyzing human affective states during interactions with other people, computer systems or robots. In the field of data mining, the definition of affect means the manifestation of psychological reactions to an exciting event, which can occur both in the short and long term, and also have different intensity. The affects in this field are divided into four types: affective emotions, basic emotions, sentiment and affective disorders. The manifestation of affective states is reflected in verbal data and non-verbal characteristics of behavior: acoustic and linguistic characteristics of speech, facial expressions, gestures and postures of a person. The review provides a comparative analysis of the existing infoware for automatic recognition of a person’s affective states on the example of emotions, sentiment, aggression and depression. The few Russian-language affective databases are still significantly inferior in volume and quality to electronic resources in other world languages. Thus, there is a need to consider a wide range of additional approaches, methods and algorithms used with a limited amount of training and testing data, and to set the task of developing new approaches to data augmentation, model transfer learning and adaptation of foreign-language resources. The article describes methods for analyzing unimodal visual, acoustic and linguistic information, as well as multimodal approaches for affective state recognition. A multimodal approach to automatic affective state analysis makes it possible to increase recognition accuracy compared to unimodal solutions. The review notes the trend in modern research whereby neural network methods are gradually replacing classical deterministic methods thanks to better recognition quality and fast processing of large amounts of data. The article discusses the methods for affective state analysis. The advantage of multitask hierarchical approaches is the ability to extract new types of knowledge, including the influence, correlation and interaction of several affective states on each other, which potentially leads to improved recognition quality. The potential requirements for the developed systems for affective state analysis and the main directions of further research are given.
21. Hudlicka, Eva, and Jonathan Pfautz. "Once More with Feeling: Augmenting Recognition Primed Decision Making with Affective Factors." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 3 (September 2002): 303–7. http://dx.doi.org/10.1177/154193120204600319.

Abstract:
Although quintessentially human, emotions have, until recently, been largely ignored in the human factors cognitive engineering / decision-making area. This is surprising, as extensive empirical evidence indicates that emotions, and personality traits, influence human perception and decision-making. This is particularly the case in crisis situations, when extreme affective states may arise (e.g., anxiety). The development of more complete and realistic theories of human perception and decision-making, and associated computational models, will require the inclusion of personality and affective considerations. In this paper, we propose an augmented version of the recognition-primed decision-making theory, which takes into consideration trait and state effects on decision-making. We describe a cognitive architecture that implements this theory, and a generic methodology for modeling trait and state effects within this architecture. Following an initial prototype demonstration, the full architecture is currently being implemented in the context of a military peacekeeping scenario.
22. Barabanschikov, V. A., and E. V. Suvorova. "Human Emotional State Assessment Based on a Video Portrayal." Experimental Psychology (Russia) 13, no. 4 (2020): 4–24. http://dx.doi.org/10.17759/exppsy.2020130401.

Abstract:
The article is devoted to the results of approbation of the Geneva Emotion Recognition Test (GERT), a Swiss method for assessing dynamic emotional states, on a Russian sample. Identification accuracy and the categorical field structure of emotional expressions of a “living” face are analysed. Similarities and differences in the perception of affective groups of dynamic emotions in the Russian and Swiss samples are considered. A number of patterns of recognition of multi-modal expressions with changes in valence and arousal of emotions are described. Differences in the perception of dynamics and statics of emotional expressions are revealed. The GERT method confirmed its high potential for solving a wide range of academic and applied problems.
23. Ramya, T., Y. Anurag, K. S. S. Pranathi, M. Samba Raju, and P. Rohith. "Facial Emotion Recognition." YMER Digital 21, no. 04 (April 29, 2022): 543–57. http://dx.doi.org/10.37896/ymer21.04/55.

Abstract:
Facial emotion is the visible indication of the affective state, cognitive activity, intention, personality, and psychopathology of an individual, and it plays a communicative role in interpersonal relations. Automatic recognition of facial emotion can be a significant part of natural human-machine interfaces; it might likewise be utilized in behavioural science and in clinical practice. An automatic facial emotion recognition framework needs to perform detection and localization of faces in a cluttered scene, facial feature extraction, and expression classification. The facial emotion recognition framework is implemented using a Deep Convolutional Neural Network (DCNN). The CNN model of the project is based on the LeNet architecture. The Kaggle Facial Expression Dataset (FER-2013), with seven facial emotions labelled happy, sad, surprise, fear, anger, disgust, and neutral, is utilized in this project. The framework achieved 65% ± 5% accuracy.
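Since the abstract names a LeNet-based CNN trained on FER-2013 (48x48 grayscale images, seven classes), a minimal Keras sketch of such a model might look as follows; the layer sizes are illustrative guesses, not the authors' configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(48, 48, 1)),            # FER-2013 images: 48x48 grayscale
    layers.Conv2D(6, 5, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 5, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(7, activation="softmax"),     # happy, sad, surprise, fear, anger, disgust, neutral
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```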
24. Oikonomou, Vangelis P., Kostas Georgiadis, Fotis Kalaganis, Spiros Nikolopoulos, and Ioannis Kompatsiaris. "A Sparse Representation Classification Scheme for the Recognition of Affective and Cognitive Brain Processes in Neuromarketing." Sensors 23, no. 5 (February 23, 2023): 2480. http://dx.doi.org/10.3390/s23052480.

Abstract:
In this work, we propose a novel framework to recognize the cognitive and affective processes of the brain during neuromarketing-based stimuli using EEG signals. The most crucial component of our approach is the proposed classification algorithm that is based on a sparse representation classification scheme. The basic assumption of our approach is that EEG features from a cognitive or affective process lie on a linear subspace. Hence, a test brain signal can be represented as a linear (or weighted) combination of brain signals from all classes in the training set. The class membership of the brain signals is determined by adopting the Sparse Bayesian Framework with graph-based priors over the weights of linear combination. Furthermore, the classification rule is constructed by using the residuals of linear combination. The experiments on a publicly available neuromarketing EEG dataset demonstrate the usefulness of our approach. For the two classification tasks offered by the employed dataset, namely affective state recognition and cognitive state recognition, the proposed classification scheme manages to achieve a higher classification accuracy compared to the baseline and state-of-the-art methods (more than 8% improvement in classification accuracy).
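A toy residual-based subspace classifier in the spirit of the scheme described above: each class's training features span a subspace, and a test sample is assigned to the class with the smallest reconstruction residual. For brevity, plain least squares stands in for the paper's Sparse Bayesian weights with graph-based priors.

```python
import numpy as np

def src_predict(X_train, y_train, x_test):
    """Assign x_test to the class whose training subspace reconstructs it best."""
    residuals = {}
    for c in np.unique(y_train):
        A = X_train[y_train == c].T                    # columns = class-c samples
        w, *_ = np.linalg.lstsq(A, x_test, rcond=None) # least-squares coefficients
        residuals[c] = np.linalg.norm(x_test - A @ w)  # class reconstruction error
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (6, 12)),   # class 0: 6 samples, 12-D features
               rng.normal(3.0, 1.0, (6, 12))])  # class 1
y = np.array([0] * 6 + [1] * 6)
print(src_predict(X, y, rng.normal(3.0, 1.0, 12)))  # almost surely prints 1
```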
25. Rossi, Daniela. "Lexical reduplication and affective contents." Cognitive and Empirical Pragmatics 25 (December 5, 2011): 148–75. http://dx.doi.org/10.1075/bjl.25.07ros.

Abstract:
The purpose of this paper is to investigate how the use of a linguistic form (lexical reduplication) can communicate affective contents. Lexical reduplication, understood as the intentional repetition of a word, is defined as a pattern XX used to convey, on the one hand, a content which differs from the “basic” meaning of X by involving, for instance, intensification, narrowing, or expansion, and, on the other hand, an affective content that results from the evaluation of the state of affairs at hand. To test reduplication as well as the derivation of affective contents linked to its use, I have relied on a recognition task: after hearing a short story, participants were asked if the items presented on the screen occurred in the story or not. The results obtained suggest that the formal pattern of reduplication plays the role of a trigger.
26. Yang, Zhuqing, Liya Zhou, and Zhengjun Jing. "A Novel Affective Analysis System Modeling Method Integrating Affective Cognitive Model and Bi-LSTM Neural Network." Computational Intelligence and Neuroscience 2022 (October 7, 2022): 1–11. http://dx.doi.org/10.1155/2022/1856496.

Abstract:
The severity of mental health issues among college students has increased over the past few years, with a significant negative impact not only on their academic performance but also on their families and even society as a whole. Therefore, one of the pressing issues facing college administrators right now is finding a method that is both scientific and useful for determining the mental health of college students. In step with the advancement of Internet technology, the Internet has become an important communication channel for contemporary college students. As one of the main forces in the huge Internet population, college students are at the stage of growing knowledge and being most enthusiastic about new things; they like to express their opinions and views on student life and social issues and are unafraid to express their emotions. These subjective text data often contain affective tendencies and psychological characteristics of college students, and mining these affective tendencies helps to further understand what they think and expect and to grasp their mental health as early as possible. In order to address the issue of assessing the mental health of college students, this study makes an effort to use public opinion data from the university network and suggests a college student sentiment analysis model based on the OCC affective cognitive model and a Bi-LSTM neural network. We first design a sentiment rule system based on the OCC affective cognition elicitation mechanism to label microblog texts from college network public opinion with three sentiment types: positive, negative, and neutral. In order to effectively and automatically identify the sentiment state of college students in network public opinion, this study uses a Bi-LSTM neural network to classify the preprocessed college network public opinion data. Finally, this study performs comparison experiments to confirm the validity of the Bi-LSTM neural network sentiment recognition algorithm and the accuracy of the OCC sentiment rule labeling system. The findings show that the college student sentiment recognition effect of the model is significantly enhanced when the OCC sentiment rule system is used to label the college network public opinion data set as opposed to the naturally labeled data set. In contrast to SVM and other classification models like CNN and LSTM, the Bi-LSTM neural network-based classification model achieves more satisfactory classification results in the recognition of college opinion sentiment.
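A minimal Keras sketch of the Bi-LSTM sentiment classifier the study describes, with a three-way softmax for positive, negative and neutral; the vocabulary size, sequence length and layer widths are assumptions, not the paper's settings.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(100,)),                  # token ids, padded to length 100
    layers.Embedding(input_dim=20000, output_dim=128),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(3, activation="softmax"),      # positive / negative / neutral
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```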
27. Hazer-Rau, Dilana, Lin Zhang, and Harald C. Traue. "A Workflow for Affective Computing and Stress Recognition from Biosignals." Engineering Proceedings 2, no. 1 (November 14, 2020): 85. http://dx.doi.org/10.3390/ecsa-7-08227.

Abstract:
Affective computing and stress recognition from biosignals have a high potential in various medical applications such as early intervention, stress management and risk prevention, as well as monitoring individuals’ mental health. This paper presents an automated processing workflow for the psychophysiological recognition of emotion and stress states. Our proposed workflow allows the processing of biosignals in their raw state as obtained from wearable sensors. It consists of five stages: (1) Biosignal Preprocessing—raw data conversion and physiological data triggering, relevant information selection, artifact and noise filtering; (2) Feature Extraction—using different mathematical groups including amplitude, frequency, linearity, stationarity, entropy and variability, as well as cardiovascular-specific characteristics; (3) Feature Selection—dimension reduction and computation optimization using Forward Selection, Backward Elimination and Brute Force methods; (4) Affect Classification—machine learning using Support Vector Machine, Random Forest and k-Nearest Neighbor algorithms; (5) Model Validation—performance matrix computation using k-Cross, Leave-One-Subject-Out and Split Validations. All workflow stages are integrated into embedded functions and operators, allowing an automated execution of the recognition process. The next steps include further development of the algorithms and the integration of the developed tools into an easy-to-use system, thereby satisfying the needs of medical and psychological staff. Our automated workflow was evaluated using our uulmMAC database, previously developed for affective computing and machine learning applications in human–computer interaction.
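Stage (5) of the workflow, Leave-One-Subject-Out validation, combined with one of the stage (4) classifiers, can be sketched with scikit-learn; the data and subject grouping below are synthetic stand-ins for labelled biosignal feature windows.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))           # 200 feature windows x 10 features
y = rng.integers(0, 2, size=200)         # e.g. stress vs. no-stress labels
subjects = np.repeat(np.arange(20), 10)  # 20 subjects, 10 windows each

# each fold trains on 19 subjects and tests on the held-out one
scores = cross_val_score(SVC(), X, y, groups=subjects, cv=LeaveOneGroupOut())
print(f"LOSO accuracy: {scores.mean():.3f}")
```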
28. Ali, Kamran, Sachin Shah, and Charles E. Hughes. "In-the-Wild Affect Analysis of Children with ASD Using Heart Rate." Sensors 23, no. 14 (July 21, 2023): 6572. http://dx.doi.org/10.3390/s23146572.

Abstract:
Recognizing the affective state of children with autism spectrum disorder (ASD) in real-world settings poses challenges due to the varying head poses, illumination levels, occlusion and a lack of datasets annotated with emotions in in-the-wild scenarios. Understanding the emotional state of children with ASD is crucial for providing personalized interventions and support. Existing methods often rely on controlled lab environments, limiting their applicability to real-world scenarios. Hence, a framework that enables the recognition of affective states in children with ASD in uncontrolled settings is needed. This paper presents a framework for recognizing the affective state of children with ASD in an in-the-wild setting using heart rate (HR) information. More specifically, an algorithm is developed that can classify a participant’s emotion as positive, negative, or neutral by analyzing the heart rate signal acquired from a smartwatch. The heart rate data are obtained in real time using a smartwatch application while the child learns to code a robot and interacts with an avatar. The avatar assists the child in developing communication skills and programming the robot. In this paper, we also present a semi-automated annotation technique based on facial expression recognition for the heart rate data. The HR signal is analyzed to extract features that capture the emotional state of the child. Additionally, in this paper, the performance of a raw HR-signal-based emotion classification algorithm is compared with a classification approach based on features extracted from HR signals using discrete wavelet transform (DWT). The experimental results demonstrate that the proposed method achieves comparable performance to state-of-the-art HR-based emotion recognition techniques, despite being conducted in an uncontrolled setting rather than a controlled lab environment. The framework presented in this paper contributes to the real-world affect analysis of children with ASD using HR information. By enabling emotion recognition in uncontrolled settings, this approach has the potential to improve the monitoring and understanding of the emotional well-being of children with ASD in their daily lives.
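The DWT feature path that the paper compares against raw HR input can be sketched with PyWavelets; the wavelet, decomposition level and summary statistics are assumptions, not the authors' exact configuration.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
hr = 80 + 5 * np.sin(np.linspace(0, 8 * np.pi, 256)) + rng.normal(0, 1, 256)

coeffs = pywt.wavedec(hr, "db4", level=4)  # approximation + 4 detail sub-bands
features = np.array([stat(c) for c in coeffs for stat in (np.mean, np.std)])
print(features.shape)                      # (10,): ready for any classifier
```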
29. Pólya, Tibor, and István Csertő. "Emotion Recognition Based on the Structure of Narratives." Electronics 12, no. 4 (February 11, 2023): 919. http://dx.doi.org/10.3390/electronics12040919.

Abstract:
One important application of natural language processing (NLP) is the recognition of emotions in text. Most current emotion analyzers use a set of linguistic features such as emotion lexicons, n-grams, word embeddings, and emoticons. This study proposes a new strategy to perform emotion recognition, which is based on the homologous structure of emotions and narratives. It is argued that emotions and narratives share both a goal-based structure and an evaluation structure. The new strategy was tested in an empirical study with 117 participants who recounted two narratives about their past emotional experiences, including one positive and one negative episode. Immediately after narrating each episode, the participants reported their current affective state using the Affect Grid. The goal-based structure and evaluation structure of the narratives were analyzed with a hybrid method. First, a linguistic analysis of the texts was carried out, including tokenization, lemmatization, part-of-speech tagging, and morphological analysis. Second, an extensive set of rule-based algorithms was used to analyze the goal-based structure of, and evaluations in, the narratives. Third, the output was fed into machine learning classifiers of narrative structural features that previously proved to be effective predictors of the narrator’s current affective state. This hybrid procedure yielded a high average F1 score (0.72). The results are discussed in terms of the benefits of employing narrative structure analysis in NLP-based emotion recognition.
30. Santana, Maíra Araújo de, Clarisse Lins de Lima, Arianne Sarmento Torcate, Flávio Secco Fonseca, and Wellington Pinheiro dos Santos. "Affective computing in the context of music therapy: a systematic review." Research, Society and Development 10, no. 15 (November 28, 2021): e392101522844. http://dx.doi.org/10.33448/rsd-v10i15.22844.

Abstract:
Music therapy is an effective tool to slow down the progress of dementia, since interaction with music may evoke emotions that stimulate brain areas responsible for memory. This therapy is most successful when therapists provide adequate and personalized stimuli for each patient. This personalization is often hard. Thus, Artificial Intelligence (AI) methods may help in this task. This paper presents a systematic review of the literature in the field of affective computing in the context of music therapy. We particularly aim to assess AI methods to perform automatic emotion recognition applied to Human-Machine Musical Interfaces (HMMI). To perform the review, we conducted an automatic search in five of the main scientific databases in the fields of intelligent computing, engineering, and medicine. We searched all papers released between 2016 and 2020 whose metadata, title or abstract contains the terms defined in the search string. The systematic review protocol resulted in the inclusion of 144 works from the 290 publications returned by the search. Through this review of the state of the art, it was possible to list the current challenges in the automatic recognition of emotions. It was also possible to see the potential of automatic emotion recognition to build non-invasive assistive solutions based on human-machine musical interfaces, as well as the artificial intelligence techniques in use for emotion recognition from multimodal data. Thus, machine learning for recognition of emotions from different data sources can be an important approach to optimize the clinical goals to be achieved through music therapy.
31. Stassen, H. H. "Affective State and Voice: The Specific Properties of Overtone Distributions." Methods of Information in Medicine 30, no. 01 (1991): 44–52. http://dx.doi.org/10.1055/s-0038-1634812.

Abstract:
Motivated by psychiatric interests and as part of our investigations into the basic properties of human speech, we carried out a normative study with 192 healthy subjects - stratified according to sex, age and education - in order to derive reference values of the general population and to learn to distinguish between normal fluctuations and significant changes over time. In the present investigation, our interest focused on the individual sound characteristics of speakers (“timbre”) rather than on speech behavior. Accordingly, we determined the optimum parameter setting for a problem-specific, reliable estimation of time-dependent spectra. An interval of one second in length was found to be optimum for reproducibly assessing formants and corresponding band-widths for more than 95% of the cases. Based on these findings, we adapted the concept of “spectral patterns” to speech analysis. It turned out that spectral voice patterns are stable over time and measure the fine gradations of mutual differences between human voices. A highly reliable computerized recognition of persons was possible by means of these quantities, on the basis of 16-32 s time series: 93% of persons could be uniquely recognized after a 14-day interval. Hence, we succeeded in developing specific means for modelling intra-individual changes of voice timbres over time. This is of particular interest for investigations of the speech characteristics of affectively disturbed patients, since the tonal expressiveness of human voices, or the lack thereof, essentially depends on the actual distribution of overtones and the corresponding variabilities.
32. Chowanda, Andry. "Emowars: Interactive Game Input Menggunakan Ekspresi Wajah [Emowars: Interactive Game Input Using Facial Expressions]." ComTech: Computer, Mathematics and Engineering Applications 4, no. 2 (December 1, 2013): 1009. http://dx.doi.org/10.21512/comtech.v4i2.2542.

Abstract:
Research on affective games has received attention from the research communities over the last five years. As a crucial aspect of a game, emotions play an important role in user experience, and game design can emphasize the user's emotional state. This will improve the user's interactivity while they play the game. This research aims to discuss and analyze whether emotions can replace traditional user game inputs (keyboard, mouse, and others). The methodology used in this research is divided into two main phases: game design and facial expression recognition. The results of this research indicate that users preferred to use a traditional input such as the mouse. Moreover, users' interactivity with the game is still slightly low. However, this is a great opportunity for researchers in affective games to design more interactive gameplay as well as rich and complex stories. Hopefully this will improve the user's affective state and emotions in the game. The results of this research imply that the happy emotion obtains a detection rate of 78%, while the anger emotion has the lowest detection rate at 44.4%. Moreover, users prefer the mouse and FER (facial expression recognition) as the best inputs for this game.
33. Ryumina, Elena, Maxim Markitantov, and Alexey Karpov. "Multi-Corpus Learning for Audio–Visual Emotions and Sentiment Recognition." Mathematics 11, no. 16 (August 15, 2023): 3519. http://dx.doi.org/10.3390/math11163519.

Abstract:
Recognition of emotions and sentiment (affective states) from human audio–visual information is widely used in healthcare, education, entertainment, and other fields; therefore, it has become a highly active research area. The large variety of corpora with heterogeneous data available for the development of single-corpus approaches for recognition of affective states may lead to approaches trained on one corpus being less effective on another. In this article, we propose a multi-corpus learned audio–visual approach for emotion and sentiment recognition. It is based on the extraction of mid-level features at the segment level using two multi-corpus temporal models (a pretrained transformer with GRU layers for the audio modality and pre-trained 3D CNN with BiLSTM-Former for the video modality) and on predicting affective states using two single-corpus cross-modal gated self-attention fusion (CMGSAF) models. The proposed approach was tested on the RAMAS and CMU-MOSEI corpora. To date, our approach has outperformed state-of-the-art audio–visual approaches for emotion recognition by 18.2% (78.1% vs. 59.9%) for the CMU-MOSEI corpus in terms of the Weighted Accuracy and by 0.7% (82.8% vs. 82.1%) for the RAMAS corpus in terms of the Unweighted Average Recall.
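The full cross-modal gated self-attention fusion (CMGSAF) design is not reproduced here; the following is only a minimal gated audio-visual fusion layer illustrating the underlying idea of learning per-dimension modality weights.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Blend audio and video embeddings with a learned per-dimension gate."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        z = torch.sigmoid(self.gate(torch.cat([audio, video], dim=-1)))
        return z * audio + (1 - z) * video

a, v = torch.randn(4, 256), torch.randn(4, 256)   # batch of segment embeddings
print(GatedFusion(256)(a, v).shape)               # torch.Size([4, 256])
```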
34. Guo, Yuanyuan, Yifan Xia, Jing Wang, Hui Yu, and Rung-Ching Chen. "Real-Time Facial Affective Computing on Mobile Devices." Sensors 20, no. 3 (February 6, 2020): 870. http://dx.doi.org/10.3390/s20030870.

Abstract:
Convolutional Neural Networks (CNNs) have become one of the state-of-the-art methods for various computer vision and pattern recognition tasks including facial affective computing. Although impressive results have been obtained in facial affective computing using CNNs, the computational complexity of CNNs has also increased significantly. This means high performance hardware is typically indispensable. Most existing CNNs are thus not generalizable enough for mobile devices, where the storage, memory and computational power are limited. In this paper, we focus on the design and implementation of CNNs on mobile devices for real-time facial affective computing tasks. We propose a light-weight CNN architecture which well balances the performance and computational complexity. The experimental results show that the proposed architecture achieves high performance while retaining the low computational complexity compared with state-of-the-art methods. We demonstrate the feasibility of a CNN architecture in terms of speed, memory and storage consumption for mobile devices by implementing a real-time facial affective computing application on an actual mobile device.
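The paper's exact light-weight architecture is not given in this listing; as one common way to balance accuracy against parameter count on mobile devices, a depthwise-separable convolutional block can be sketched in Keras as follows.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    layers.SeparableConv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.SeparableConv2D(64, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(7, activation="softmax"),     # e.g. seven affect classes
])
print(f"{model.count_params():,} parameters")  # small enough for on-device use
```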
35. Harmer, C. J., M. Charles, S. McTavish, E. Favaron, and P. J. Cowen. "Negative ion treatment increases positive emotional processing in seasonal affective disorder." Psychological Medicine 42, no. 8 (December 13, 2011): 1605–12. http://dx.doi.org/10.1017/s0033291711002820.

Abstract:
Background: Antidepressant drug treatments increase the processing of positive compared to negative affective information early in treatment. Such effects have been hypothesized to play a key role in the development of later therapeutic responses to treatment. However, it is unknown whether these effects are a common mechanism of action for different treatment modalities. High-density negative ion (HDNI) treatment is an environmental manipulation that has efficacy in randomized clinical trials in seasonal affective disorder (SAD). Method: The current study investigated whether a single session of HDNI treatment could reverse negative affective biases seen in seasonal depression using a battery of emotional processing tasks in a double-blind, placebo-controlled randomized study. Results: Under placebo conditions, participants with seasonal mood disturbance showed reduced recognition of happy facial expressions, increased recognition memory for negative personality characteristics and increased vigilance to masked presentation of negative words in a dot-probe task compared to matched healthy controls. Negative ion treatment increased the recognition of positive compared to negative facial expression and improved vigilance to unmasked stimuli across participants with seasonal depression and healthy controls. Negative ion treatment also improved recognition memory for positive information in the SAD group alone. These effects were seen in the absence of changes in subjective state or mood. Conclusions: These results are consistent with the hypothesis that early change in emotional processing may be an important mechanism for treatment action in depression and suggest that these effects are also apparent with negative ion treatment in seasonal depression.
36. Schipor, O. A., S. G. Pentiuc, and M. D. Schipor. "Toward Automatic Recognition of Children's Affective State Using Physiological Parameters and Fuzzy Model of Emotions." Advances in Electrical and Computer Engineering 12, no. 2 (2012): 47–50. http://dx.doi.org/10.4316/aece.2012.02008.

37. Neethirajan, Suresh. "Correction: Neethirajan, S. Affective State Recognition in Livestock—Artificial Intelligence Approaches. Animals 2022, 12, 759." Animals 12, no. 14 (July 21, 2022): 1856. http://dx.doi.org/10.3390/ani12141856.

38. Hadjidimitriou, Stelios, Vasileios Charisis, and Leontios Hadjileontiadis. "Towards a Practical Subject-Independent Affective State Recognition Based on Time-Domain EEG Feature Extraction." International Journal of Heritage in the Digital Era 4, no. 2 (June 2015): 165–77. http://dx.doi.org/10.1260/2047-4970.4.2.165.

39. Abdullahi, Bashir Eseyin, Emeka Ogbuju, Taiwo Abiodun, and Francisca Oladipo. "Techniques for facial affective computing: A review." Ukrainian Journal of Educational Studies and Information Technology 11, no. 3 (September 30, 2023): 211–26. http://dx.doi.org/10.32919/uesit.2023.03.05.

Abstract:
Facial affective computing has gained popularity and become a progressive research area, as it plays a key role in human-computer interaction. However, many researchers lack the right technique to carry out reliable facial affective computing effectively. To address this issue, we present a review of the state-of-the-art artificial intelligence techniques that are being used for facial affective computing. Three research questions were answered by studying and analysing related papers collected from well-established scientific databases based on exclusion and inclusion criteria. The results present the common artificial intelligence approaches for face detection, face recognition and emotion detection. The paper finds that the Haar cascade algorithm outperformed all other algorithms used for face detection, Convolutional Neural Network (CNN) based algorithms performed best in face recognition, and the neural network algorithm with multiple layers has the best performance in emotion detection. A limitation of this research is access to some research papers, as some documents require a costly subscription. Practical implication: the paper provides a comprehensive and unbiased analysis of existing literature, identifying knowledge gaps and future research directions, and supports evidence-based decision-making. We considered articles and conference papers from well-established databases. The method presents a novel scope for facial affective computing and provides decision support for researchers when selecting plans for facial affective computing.
40. Reddy, K. Shirisha. "Emotion Recognition through Text, Speech and Image." International Journal for Research in Applied Science and Engineering Technology 11, no. 12 (December 31, 2023): 1493–506. http://dx.doi.org/10.22214/ijraset.2023.57652.

Abstract:
Emotion recognition plays a vital role in various applications, such as human-computer interaction, affective computing, and psychological assessment. This comparative study investigates the effectiveness of emotion recognition through text, image, and speech modalities. Our research aims to analyze and compare state-of-the-art approaches in each modality and identify their strengths and limitations. Conducting an exhaustive literature review enabled us to understand existing methodologies, datasets, and evaluation metrics. The research methodology and implementation include data collection, preprocessing, feature extraction, and the application of machine learning and deep learning models. The results provide insights into the performance of different modalities, paving the way for advancements in emotion recognition research.
41. Schuller, Björn. "Affective speaker state analysis in the presence of reverberation." International Journal of Speech Technology 14, no. 2 (January 27, 2011): 77–87. http://dx.doi.org/10.1007/s10772-011-9090-8.

42. Bhalerao, Ayush. "A Comprehensive Survey on Predicting Human Psychological State through Facial Feature Analysis." International Journal for Research in Applied Science and Engineering Technology 12, no. 3 (March 31, 2024): 731–38. http://dx.doi.org/10.22214/ijraset.2024.58777.

Abstract:
This study uses convolutional neural networks (CNNs) and image edge computing techniques to analyze facial features and forecast human psychological states in a novel way. Real-time or almost real-time face expression identification is the goal of the suggested approach, which will advance the developing field of affective computing and have potential uses in mental health evaluation and human-computer interaction. In this study, a variety of datasets with a broad range of facial expressions are used to train CNN models, guaranteeing consistent performance across a range of users and cultural backgrounds. Furthermore, the effective detection of human faces in photos is achieved by the use of the Haar Cascade Classifier, which improves the overall accuracy and dependability of the emotion recognition system. The algorithms’ efficiency is further increased by the addition of picture edge computing techniques, which makes them appropriate for deployment in contexts with limited resources. The suggested method’s accuracy in identifying and categorizing facial emotions is demonstrated by the experimental findings, indicating its potential practical applications. This research has implications for building mental health monitoring systems and enhancing user experience through technology, which goes beyond affective computing. This research fills important gaps in mental health screening and assistance while also enhancing the capabilities of facial expression recognition systems and making human-computer interaction interfaces more responsive and intuitive.
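The Haar cascade detection step described above can be sketched with OpenCV's bundled frontal-face model; the blank frame is a stand-in for camera input, and the 48x48 crop size is an assumption about the downstream CNN.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                       # empty on the blank frame
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    # `face` would be normalised and fed to the CNN emotion classifier
print(len(faces), "face(s) detected")
```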
43

Ríos, Ulises, Marcelo Arancibia, Juan Pablo Jiménez, and Felix Bermpohl. "The forgotten affective route of social cognition in patients with bipolar disorders." Journal of Experimental Psychopathology 13, no. 4 (October 2022): 204380872211354. http://dx.doi.org/10.1177/20438087221135422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Social cognition (SC) research in bipolar disorders (BD) has provided evidence of deficits in different phases of the illness. Most studies have focused on two aspects of SC: theory of mind and emotion recognition. However, according to influential models of social neuroscience, two aspects of understanding others need to be distinguished: the cognitive route (theory of mind and emotion recognition) and the affective route (empathy and compassion) of SC. We aimed to determine whether individuals with BD significantly differ from healthy controls on measures of the affective route of SC according to the available evidence. We conducted a narrative review of original research based on a social neuroscience model of SC. BD is associated with alterations of the affective route of SC during acute episodes and remission. During mania and subthreshold depression, increases in empathy (“over-empathizing”) and empathic discomfort, respectively, have been reported. A pattern of high empathic distress and low compassion appears during remission. This article is the first to review the evidence on the affective route of SC in BD, revealing both trait and state alterations. We emphasize the need to consider this affective dimension of SC in future research in order to design more specific interventions for BD patients.
44

Dolan, Jeremich D., and Rifaat Kamil. "Atypical Affective Disorder with Episodic Dyscontrol: A Case of von Economo's Disease (Encephalitis Lethargica)*." Canadian Journal of Psychiatry 37, no. 2 (March 1992): 140–42. http://dx.doi.org/10.1177/070674379203700214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The case is described of a patient with atypical affective disorder, episodic behavioural dyscontrol and parkinsonism resulting from presumed encephalitis lethargica. EEG abnormalities were found which were compatible with a post-encephalitic state and suggestive of epileptiform complications. Poor or deleterious response to neuroleptics, sleep disorder, and parkinsonism are features that may allow recognition of this illness in a psychiatric setting.
45

Hill, W. Trey, and Jack A. Palmer. "Affective Response to a Set of New Musical Stimuli." Psychological Reports 106, no. 2 (April 2010): 581–88. http://dx.doi.org/10.2466/pr0.106.2.581-588.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recently, a novel set of musical stimuli was developed in an attempt to bring more rigor to a paradigm which often falls under scientific scrutiny. Although these musical clips were validated in terms of recognition for emotion, valence, and arousal, the clips were not specifically tested for their ability to elicit certain affective responses. The present study examined self-reported “elation” among 82 participants after listening to one of two types of the musical clips; 47 listened to happy music and 35 listened to sad music. Individuals who listened to happy music reported significantly higher “elation” than individuals who listened to the sad music. These results support the idea that music can elicit certain affective state responses.
46

Salazar, Camilo, Edwin Montoya-Múnera, and Jose Aguilar. "Analysis of different affective state multimodal recognition approaches with missing data-oriented to virtual learning environments." Heliyon 7, no. 6 (June 2021): e07253. http://dx.doi.org/10.1016/j.heliyon.2021.e07253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Szipl, Georgine, Markus Boeckle, Sinja A. B. Werner, and Kurt Kotrschal. "Mate Recognition and Expression of Affective State in Croop Calls of Northern Bald Ibis (Geronticus eremita)." PLoS ONE 9, no. 2 (February 5, 2014): e88265. http://dx.doi.org/10.1371/journal.pone.0088265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

López-Hernández, Jesús Leonardo, Israel González-Carrasco, José Luis López-Cuadrado, and Belén Ruiz-Mezcua. "Towards the Recognition of the Emotions of People with Visual Disabilities through Brain–Computer Interfaces." Sensors 19, no. 11 (June 9, 2019): 2620. http://dx.doi.org/10.3390/s19112620.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A brain–computer interface is an alternative for communication between people and computers, through the acquisition and analysis of brain signals. Research related to this field has focused on serving people with different types of motor, visual or auditory disabilities. On the other hand, affective computing studies and extracts information about the emotional state of a person in certain situations, an important aspect for the interaction between people and the computer. In particular, this manuscript considers people with visual disabilities and their need for personalized systems that prioritize their disability and the degree that affects them. In this article, a review of the state of the techniques is presented, where the importance of the study of the emotions of people with visual disabilities, and the possibility of representing those emotions through a brain–computer interface and affective computing, are discussed. Finally, the authors propose a framework to study and evaluate the possibility of representing and interpreting the emotions of people with visual disabilities for improving their experience with the use of technology and their integration into today’s society.
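As context for how such a brain-computer interface might quantify affect, below is a hedged sketch of one feature family common in EEG-based emotion work: spectral band power via Welch's method. The paper proposes a framework rather than code, so the sampling rate, band edges, and synthetic signal here are assumptions.

```python
# Hedged sketch (not from the paper): band-power features from one EEG
# channel, a common input to affective BCI classifiers. The random signal
# stands in for real EEG; fs and the band edges are assumed values.
import numpy as np
from scipy.signal import welch

fs = 256                         # assumed sampling rate (Hz)
eeg = np.random.randn(10 * fs)   # placeholder for 10 s of one EEG channel

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(low, high):
    mask = (freqs >= low) & (freqs < high)
    return np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band

features = {
    "theta": band_power(4, 8),
    "alpha": band_power(8, 13),   # frontal alpha asymmetry is a classic affect marker
    "beta":  band_power(13, 30),
}
```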
49

Abdullah, Sharmeen M. Saleem Abdullah, Siddeeq Y. Ameen Ameen, Mohammed A. M. Sadeeq, and Subhi Zeebaree. "Multimodal Emotion Recognition using Deep Learning." Journal of Applied Science and Technology Trends 2, no. 02 (April 16, 2021): 52–58. http://dx.doi.org/10.38094/jastt20291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
New research into human-computer interaction seeks to take the user's emotional state into account in order to provide a seamless human-computer interface. Such systems could be deployed in widespread fields, including education and medicine. Human feelings can be inferred through multiple channels, including facial expressions and images, physiological signals, and neuroimaging techniques. This paper presents a review of multimodal emotion recognition using deep learning and compares applications of these methods based on current studies. Multimodal affective computing systems are studied alongside unimodal solutions, as they offer higher classification accuracy. Accuracy varies according to the number of emotions observed, the features extracted, the classification method, and database consistency. The review also surveys theories and methodologies of emotion detection in recent affective science, with the aim of giving researchers a better understanding of physiological signals, the current state of the science, and its open problems in emotional awareness.
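One fusion strategy such reviews compare is decision-level (late) fusion; a minimal sketch follows, in which class probabilities from independent unimodal models are averaged. The probability vectors below are invented for illustration.

```python
# Hedged sketch of late fusion: average the softmax outputs of unimodal
# recognizers (face, speech, physiological). All numbers below are made up.
import numpy as np

classes = ["anger", "happiness", "sadness"]

face_probs   = np.array([0.10, 0.70, 0.20])
speech_probs = np.array([0.25, 0.55, 0.20])
physio_probs = np.array([0.15, 0.60, 0.25])

fused = (face_probs + speech_probs + physio_probs) / 3
print(classes[int(np.argmax(fused))])  # -> "happiness"
```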
50

Mihalache, Serban, and Dragos Burileanu. "Speech Emotion Recognition Using Deep Neural Networks, Transfer Learning, and Ensemble Classification Techniques." Romanian Journal of Information Science and Technology 2023, no. 3-4 (September 28, 2023): 375–87. http://dx.doi.org/10.59277/romjist.2023.3-4.10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Speech emotion recognition (SER) is the task of determining the affective content present in speech, a promising research area of great interest in recent years, with important applications especially in forensic speech analysis and law enforcement operations, among others. In this paper, systems based on deep neural networks (DNNs) spanning five levels of complexity are proposed, developed, and tested, including systems leveraging transfer learning (TL) from top modern deep learning models for image recognition, as well as several ensemble classification techniques that lead to significant performance increases. The systems were tested on the most relevant SER datasets: EMODB, CREMAD, and IEMOCAP, in the context of (i) classification, using the standard full sets of emotion classes as well as additional negative-emotion subsets relevant for forensic speech applications; and (ii) regression, using the continuously valued 2D arousal-valence affect space. The proposed systems achieved state-of-the-art results for the full class set for EMODB (up to 83% accuracy) and performance comparable to other published research for the full class sets for CREMAD and IEMOCAP (up to 55% and 62% accuracy, respectively). For the class subsets focusing only on negative affective content, the proposed solutions offered top performance versus previously published state-of-the-art results.
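To illustrate the transfer-learning idea described above, here is a hedged sketch that treats a speech mel spectrogram as an image and reuses a pretrained vision backbone. The audio file, input resizing, frozen backbone, and 4-class head are simplifications assumed for illustration, not the authors' exact pipeline.

```python
# Hedged sketch: TL from an ImageNet-pretrained backbone to SER by feeding
# mel spectrograms as images. "utterance.wav" and the 4 emotion classes are
# placeholders; the paper's systems and training setup differ in detail.
import numpy as np
import librosa
import tensorflow as tf

y, sr = librosa.load("utterance.wav", sr=16000)      # placeholder audio
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)

# Resize to the backbone's expected input and tile to 3 channels.
x = tf.image.resize(mel_db[..., np.newaxis], (224, 224))
x = tf.repeat(x, 3, axis=-1)[tf.newaxis, ...]

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False                               # freeze: use as a feature extractor

head = tf.keras.Sequential([base, tf.keras.layers.Dense(4, activation="softmax")])
probs = head(x)                                      # head is untrained here: illustrative only
```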
