Journal articles on the topic "Facial expression manipulation"

To see other types of publications on this topic, follow the link: Facial expression manipulation.

Format your source in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Facial expression manipulation."

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, if the corresponding metadata are available.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Kobai, Ryota, and Hiroki Murakami. "Effects of interactions between facial expressions and self-focused attention on emotion." PLOS ONE 16, no. 12 (December 23, 2021): e0261666. http://dx.doi.org/10.1371/journal.pone.0261666.

Full text
Abstract:
Self-focus is a type of cognitive processing that maintains negative emotions. Moreover, bodily feedback is also essential for maintaining emotions. This study investigated the effect of interactions between self-focused attention and facial expressions on emotions. The results indicated that the control facial expression manipulation after self-focus reduced happiness scores. By contrast, the smiling facial expression manipulation after self-focus marginally increased happiness scores. However, facial expressions did not affect positive emotions after the other-focus manipulation. These findings suggest that self-focus plays a pivotal role in facial expressions' effect on positive emotions. However, self-focus alone is insufficient for decreasing positive emotions, and the interaction between self-focus and facial expressions is crucial for developing positive emotions.
2

Geng, Zhenglin, Chen Cao, and Sergey Tulyakov. "Towards Photo-Realistic Facial Expression Manipulation." International Journal of Computer Vision 128, no. 10-11 (August 28, 2020): 2744–61. http://dx.doi.org/10.1007/s11263-020-01361-8.

Full text
3

Kuehne, Maria, Isabelle Siwy, Tino Zaehle, Hans-Jochen Heinze, and Janek S. Lobmaier. "Out of Focus: Facial Feedback Manipulation Modulates Automatic Processing of Unattended Emotional Faces." Journal of Cognitive Neuroscience 31, no. 11 (November 2019): 1631–40. http://dx.doi.org/10.1162/jocn_a_01445.

Full text
Abstract:
Facial expressions provide information about an individual's intentions and emotions and are thus an important medium for nonverbal communication. Theories of embodied cognition assume that facial mimicry and the resulting facial feedback play an important role in the perception of facial emotional expressions. Although behavioral and electrophysiological studies have confirmed the influence of facial feedback on the perception of facial emotional expressions, the influence of facial feedback on the automatic processing of such stimuli is largely unexplored. The automatic processing of unattended facial expressions can be investigated with the visual expression-related mismatch negativity (MMN). The expression-related MMN is a differential ERP reflecting the automatic detection of emotional changes, elicited by rarely presented facial expressions (deviants) among frequently presented facial expressions (standards). In this study, we investigated the impact of facial feedback on the automatic processing of facial expressions. For this purpose, participants (n = 19) performed a centrally presented visual detection task while neutral (standard), happy, and sad faces (deviants) were presented peripherally. During the task, facial feedback was manipulated by different pen-holding conditions (holding the pen with teeth, lips, or nondominant hand). Our results indicate that the automatic processing of facial expressions is influenced by, and thus dependent on, one's own facial feedback.
4

Lewczuk, Joanna. "Change of social value orientation affected by the observed mimical expression of the interaction partner." Studia z Teorii Wychowania X, no. 4 (29) (December 25, 2019): 85–106. http://dx.doi.org/10.5604/01.3001.0014.1075.

Full text
Abstract:
The issues addressed in this paper relate to a possible change in the observer's social value orientation under the influence of a specific emotional expression perceived on another individual's face. The paper fits into the line of research on the link between social value orientations and the perception of facial emotional expressions. An "omnibus"-type representative survey was carried out according to the experimental scheme, entirely via the Internet (N = 972). The following tools were used: for the measurement of social value orientations, a modified version of the Ring Measure of Social Values; for the experimental manipulation, photographs of facial expressions (happiness, anger, neutrality). In the light of the data obtained, one may, for the first time, speak of social value orientations as a dimension susceptible to change under the influence of a facial expression. The indicators of orientation on others, and the distribution of the groups of dominant social value orientations, were shown to differ before and after the experimental manipulation, depending on the type of basic facial emotional expression presented (happiness vs anger). Directional predictions were confirmed with regard to the negative manipulation (expression of anger), which was followed by a reduction in the orientation on others and in the total number of altruists, while the positive manipulation (expression of happiness) resulted in a general increase in the number of altruists. This is in line with the prediction that observation of a positive facial expression triggers prosocial tendencies, while observation of a negative facial expression undermines them.
Keywords: social value orientations; prosociality; orientation on the self/orientation on the others; variability of social value orientations; Ring Measure of Social Values; facial emotional expressions
5

Dethier, Marie, Sylvie Blairy, Hannah Rosenberg, and Skye McDonald. "Emotional Regulation Impairments Following Severe Traumatic Brain Injury: An Investigation of the Body and Facial Feedback Effects." Journal of the International Neuropsychological Society 19, no. 4 (January 28, 2013): 367–79. http://dx.doi.org/10.1017/s1355617712001555.

Full text
Abstract:
The objective of this study was to evaluate the combined effect of body and facial feedback in adults who had suffered from a severe traumatic brain injury (TBI), to gain some understanding of their difficulties in the regulation of negative emotions. Twenty-four participants with TBI and 28 control participants adopted facial expressions and body postures according to specific instructions and maintained these positions for 10 s. Expressions and postures entailed anger, sadness, and happiness as well as a neutral (baseline) condition. After each expression/posture manipulation, participants evaluated their subjective emotional state (including cheerfulness, sadness, and irritation). TBI participants were globally less responsive to the effects of body and facial feedback than control participants, F(1,50) = 5.89, p = .02, η2 = .11. More interestingly, the TBI group differed from the control group across emotions, F(8,400) = 2.51, p = .01, η2 = .05. Specifically, participants with TBI were responsive to happy but not to negative expression/posture manipulations, whereas control participants were responsive to happy, angry, and sad expression/posture manipulations. In conclusion, TBI appears to impair the ability to recognize both the physical configuration of a negative emotion and its associated subjective feeling.
6

Pollick, Frank E., Harold Hill, Andrew Calder, and Helena Paterson. "Recognising Facial Expression from Spatially and Temporally Modified Movements." Perception 32, no. 7 (July 2003): 813–26. http://dx.doi.org/10.1068/p3319.

Full text
Abstract:
We examined how the recognition of facial emotion was influenced by manipulation of both spatial and temporal properties of 3-D point-light displays of facial motion. We started with the measurement of 3-D position of multiple locations on the face during posed expressions of anger, happiness, sadness, and surprise, and then manipulated the spatial and temporal properties of the measurements to obtain new versions of the movements. In two experiments, we examined recognition of these original and modified facial expressions: in experiment 1, we manipulated the spatial properties of the facial movement, and in experiment 2 we manipulated the temporal properties. The results of experiment 1 showed that exaggeration of facial expressions relative to a fixed neutral expression resulted in enhanced ratings of the intensity of that emotion. The results of experiment 2 showed that changing the duration of an expression had a small effect on ratings of emotional intensity, with a trend for expressions with shorter durations to have lower ratings of intensity. The results are discussed within the context of theories of encoding as related to caricature and emotion.
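The spatial exaggeration described here amounts to scaling each measured point's displacement from the neutral face. A minimal sketch of that idea in Python, assuming the 3-D marker data are stored as NumPy arrays; the array shapes, variable names, and gain value are illustrative placeholders, not the authors' implementation:

```python
import numpy as np

# neutral: (markers, 3) resting-face positions; expression: (frames, markers, 3) posed motion.
neutral = np.zeros((30, 3))
expression = np.random.rand(120, 30, 3)  # placeholder trajectory

def exaggerate(expression, neutral, gain=1.5):
    """Scale each frame's displacement from the neutral face by `gain`.

    gain > 1 exaggerates the expression, gain < 1 attenuates it,
    and gain = 1 returns the original movement.
    """
    displacement = expression - neutral  # broadcasts over frames
    return neutral + gain * displacement

caricatured = exaggerate(expression, neutral, gain=1.5)
```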
7

Lin, Wenyi, Jing Hu, and Yanfei Gong. "Is it Helpful for Individuals with Minor Depression to Keep Smiling? An Event-Related Potentials Analysis." Social Behavior and Personality: an international journal 43, no. 3 (April 23, 2015): 383–96. http://dx.doi.org/10.2224/sbp.2015.43.3.383.

Full text
Abstract:
We used event-related potentials (ERPs) to explore the influence of manipulating facial expression on error monitoring in individuals with minor depression (MinD). The participants were 11 undergraduate students who had been diagnosed with MinD. We recorded error-related negativity (ERN) as the participants performed a modified flanker task in 3 conditions: Duchenne smile, standard smile, and no smile. The behavioral data showed that, in both the Duchenne smile and standard smile conditions, error rates were significantly lower than in the no-smile condition. The ERP analysis indicated that, compared to the no-smile condition, both Duchenne and standard smiling facial expressions decreased ERN amplitude, and ERN amplitudes were smallest in the Duchenne smile condition. Our findings suggested that even a brief smile manipulation may improve long-term negative mood states of people with MinD.
8

Guo, Zixin, and Ruizhi Yang. "A channel attention and feature manipulation network for facial expression recognition." Applied and Computational Engineering 6, no. 1 (June 14, 2023): 1344–54. http://dx.doi.org/10.54254/2755-2721/6/20230751.

Full text
Abstract:
Facial expression conveys a variety of emotional and intentional messages from human beings, and automated facial expression recognition (FER) has become an ongoing and promising research topic in the field of computer vision. However, the primary challenge of FER is learning to discriminate similar features among different emotion categories. In this paper, a hybrid architecture using an Efficient Channel Attention (ECA) residual network, ResNet-18, and a feature manipulation network is proposed to tackle this challenge. First, the ECA residual network effectively extracts input features with local cross-channel interaction. Then, feature decomposition network (FDN) and feature reconstruction network (FRN) modules are added to decompose and aggregate latent features, enhancing the compactness of intra-category features and the discrimination of inter-category features. Finally, an expression prediction network is connected to the FRN to produce the final expression classification result. To examine the efficacy of the suggested approach, the model is trained independently on the in-the-lab (CK+) and in-the-wild (RAF-DB) datasets. Several important evaluation artifacts, such as confusion matrices and Grad-CAM visualizations, are reported, and an ablation study is conducted to demonstrate the efficacy and interpretability of the proposed network. The model achieves state-of-the-art accuracy compared with existing facial expression recognition work: 99.70% on CK+ and 89.17% on RAF-DB.
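For readers unfamiliar with the ECA mechanism this architecture builds on, the sketch below shows its core idea: global average pooling followed by a lightweight 1-D convolution across channels, whose output reweights the feature maps. This is a generic PyTorch illustration; the kernel size and layer names are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    """Efficient Channel Attention: local cross-channel interaction without dimensionality reduction."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # squeeze H x W to 1 x 1 per channel
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        y = self.avg_pool(x)                                  # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(-1, -2)                   # (B, 1, C)
        y = self.conv(y)                                      # 1-D conv across channels
        y = self.sigmoid(y).transpose(-1, -2).unsqueeze(-1)   # (B, C, 1, 1) attention weights
        return x * y                                          # channel-wise reweighting

features = torch.randn(8, 64, 28, 28)
print(ECABlock()(features).shape)  # torch.Size([8, 64, 28, 28])
```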
9

Yi, Jonathan, Philip Pärnamets, and Andreas Olsson. "The face value of feedback: facial behaviour is shaped by goals and punishments during interaction with dynamic faces." Royal Society Open Science 8, no. 7 (July 2021): 202159. http://dx.doi.org/10.1098/rsos.202159.

Full text
Abstract:
Responding appropriately to others' facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography signals from the participants' face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruent condition) or responding with the opposite expression (incongruent condition) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behaviour, and replicated earlier findings of faster and more accurate responses in congruent versus incongruent conditions. Moreover, participants performed better on trials when confronted with smiling faces than with frowning faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the underlying decision-making processes of our experimental manipulation. Our results introduce a new method to study learning and decision-making in facial expression exchange, in which there is a need to gradually adapt facial expression selection to both social and non-social reinforcements.
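To illustrate the kind of reinforcement-learning account the authors fit, the sketch below implements a simple action-value update for choosing between reciprocating and opposing a displayed expression. The learning rate, reward coding, and trial structure are assumptions made for exposition, not the paper's fitted model.

```python
# Action values for responding to a 'happy' or 'angry' target face.
q = {('happy', 'reciprocate'): 0.0, ('happy', 'oppose'): 0.0,
     ('angry', 'reciprocate'): 0.0, ('angry', 'oppose'): 0.0}
alpha = 0.3  # assumed learning rate

def update(face, action, shocked):
    """Rescorla-Wagner style update: reward is 1 when aversive stimulation is avoided."""
    reward = 0.0 if shocked else 1.0
    q[(face, action)] += alpha * (reward - q[(face, action)])

# Toy trial: reciprocating a smile avoids the shock in the congruent condition.
update('happy', 'reciprocate', shocked=False)
print(q[('happy', 'reciprocate')])  # 0.3
```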
10

Boker, Steven M., Jeffrey F. Cohn, Barry-John Theobald, Iain Matthews, Timothy R. Brick, and Jeffrey R. Spies. "Effects of damping head movement and facial expression in dyadic conversation using real–time facial expression tracking and synthesized avatars." Philosophical Transactions of the Royal Society B: Biological Sciences 364, no. 1535 (December 12, 2009): 3485–95. http://dx.doi.org/10.1098/rstb.2009.0152.

Full text
Abstract:
When people speak with one another, they tend to adapt their head movements and facial expressions in response to each other's head movements and facial expressions. We present an experiment in which confederates' head movements and facial expressions were motion tracked during videoconference conversations, an avatar face was reconstructed in real time, and naive participants spoke with the avatar face. No naive participant guessed that the computer-generated face was not video. Confederates' facial expressions, vocal inflections and head movements were attenuated at 1 min intervals in a fully crossed experimental design. Attenuated head movements led to increased head nods and lateral head turns, and attenuated facial expressions led to increased head nodding in both naive participants and confederates. Together, these results are consistent with the hypothesis that the dynamics of head movements in dyadic conversation include a shared equilibrium. Although both conversational partners were blind to the manipulation, when apparent head movement of one conversant was attenuated, both partners responded by increasing the velocity of their head movements.
11

Li, Shyue-Ran, Yi-Chen Chen, Kuohsiang Chen, and Chun-Heng Ho. "Quantifying influence from form manipulation of artificial facial expression to viewers." Digital Creativity 25, no. 4 (May 14, 2014): 313–29. http://dx.doi.org/10.1080/14626268.2014.882847.

Full text
12

Luna-Jiménez, Cristina, Jorge Cristóbal-Martín, Ricardo Kleinlein, Manuel Gil-Martín, José M. Moya, and Fernando Fernández-Martínez. "Guided Spatial Transformers for Facial Expression Recognition." Applied Sciences 11, no. 16 (August 5, 2021): 7217. http://dx.doi.org/10.3390/app11167217.

Full text
Abstract:
Spatial Transformer Networks are considered a powerful algorithm for learning the main areas of an image, but they could still be more efficient if they received images with embedded expert knowledge. This paper aims to improve the performance of conventional Spatial Transformers when applied to Facial Expression Recognition. Based on the Spatial Transformers' capacity for spatial manipulation within networks, we propose different extensions to these models in which effective attentional regions are captured employing facial landmarks or facial visual saliency maps. This specific attentional information is then hardcoded to guide the Spatial Transformers to learn the spatial transformations that best fit the proposed regions for better recognition results. For this study, we use two datasets: AffectNet and FER-2013. For AffectNet, we achieve an absolute improvement of 0.35 percentage points relative to the traditional Spatial Transformer, whereas for FER-2013 our solution achieves an increase of 1.49% when models are fine-tuned with the AffectNet pre-trained weights.
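A minimal sketch of the spatial transformation step that such models apply, using PyTorch's affine grid sampling; here the affine parameters are fixed by hand rather than predicted by a localisation network, and nothing of the landmark or saliency guidance proposed in the paper is reproduced.

```python
import torch
import torch.nn.functional as F

images = torch.randn(4, 1, 48, 48)  # batch of face crops (placeholder data)

# One 2x3 affine matrix per image: here a mild zoom toward the image centre.
theta = torch.tensor([[0.8, 0.0, 0.0],
                      [0.0, 0.8, 0.0]]).repeat(4, 1, 1)

grid = F.affine_grid(theta, size=images.shape, align_corners=False)
warped = F.grid_sample(images, grid, align_corners=False)
print(warped.shape)  # torch.Size([4, 1, 48, 48])
```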
13

Liu, Feng-Lin, Shu-Yu Chen, Yu-Kun Lai, Chunpeng Li, Yue-Ren Jiang, Hongbo Fu, and Lin Gao. "DeepFaceVideoEditing." ACM Transactions on Graphics 41, no. 4 (July 2022): 1–16. http://dx.doi.org/10.1145/3528223.3530056.

Full text
Abstract:
Sketches, which are simple and concise, have been used in recent deep image synthesis methods to allow intuitive generation and editing of facial images. However, it is nontrivial to extend such methods to video editing due to various challenges, ranging from appropriate propagation of manipulations and fusion of multiple editing operations to ensuring temporal coherence and visual quality. To address these issues, we propose a novel sketch-based facial video editing framework, in which we represent editing manipulations in latent space and propose specific propagation and fusion modules to generate high-quality video editing results based on StyleGAN3. Specifically, we first design an optimization approach to represent sketch editing manipulations by editing vectors, which are propagated to the whole video sequence using a proper strategy to cope with different editing needs. Input editing operations are classified into two categories: temporally consistent editing and temporally variant editing. The former (e.g., change of face shape) is applied to the whole video sequence directly, while the latter (e.g., change of facial expression or dynamics) is propagated with the guidance of expression or only affects adjacent frames in a given time window. Since users often perform different editing operations in multiple frames, we further present a region-aware fusion approach to fuse diverse editing effects. Our method supports sketch-based video editing of facial structure and expression movement, which cannot be achieved by previous works. Both qualitative and quantitative evaluations show the superior editing ability of our system over existing and alternative solutions.
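The propagation idea can be illustrated with a toy sketch: an editing vector is added to every frame's latent code, either uniformly (temporally consistent edits) or faded in and out around one frame (temporally variant edits). The array shapes and the triangular weighting are assumptions for illustration only; the actual system operates on StyleGAN3 latents with learned propagation strategies.

```python
import numpy as np

num_frames, latent_dim = 100, 512
latents = np.random.randn(num_frames, latent_dim)  # per-frame latent codes (placeholder)
edit_vector = np.random.randn(latent_dim)          # direction encoding a sketch edit

def propagate(latents, edit_vector, frame=None, window=10):
    """Apply an edit to all frames, or fade it in and out around one frame."""
    if frame is None:                                # temporally consistent edit
        weights = np.ones(len(latents))
    else:                                            # temporally variant edit
        distance = np.abs(np.arange(len(latents)) - frame)
        weights = np.clip(1.0 - distance / window, 0.0, 1.0)
    return latents + weights[:, None] * edit_vector

edited_all = propagate(latents, edit_vector)              # e.g. a face-shape change
edited_local = propagate(latents, edit_vector, frame=42)  # e.g. a brief expression change
```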
14

Mehra, Aman, Akshay Agarwal, Mayank Vatsa, and Richa Singh. "Detection of Digital Manipulation in Facial Images (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15845–46. http://dx.doi.org/10.1609/aaai.v35i18.17919.

Full text
Abstract:
Advances in deep learning have enabled the creation of photo-realistic DeepFakes by switching the identity or expression of individuals. Such technology in the wrong hands can seed chaos through blackmail, extortion, and the forging of false statements by influential individuals. This work proposes a novel approach to detect forged videos by magnifying their temporal inconsistencies. A study is also conducted to understand the role of ethnicity bias, arising from skewed datasets, in deepfake detection. A new dataset comprising forged videos of individuals of Indian ethnicity is presented to facilitate this study.
15

Parkinson, Brian. "Do Facial Movements Express Emotions or Communicate Motives?" Personality and Social Psychology Review 9, no. 4 (November 2005): 278–311. http://dx.doi.org/10.1207/s15327957pspr0904_1.

Full text
Abstract:
This article addresses the debate between emotion-expression and motive-communication approaches to facial movements, focusing on Ekman's (1972) and Fridlund's (1994) contrasting models and their historical antecedents. Available evidence suggests that the presence of others either reduces or increases facial responses, depending on the quality and strength of the emotional manipulation and on the nature of the relationship between interactants. Although both display rules and social motives provide viable explanations of audience "inhibition" effects, some audience facilitation effects are less easily accommodated within an emotion-expression perspective. In particular, emotion is not a sufficient condition for a corresponding "expression," even discounting explicit regulation, and, apparently, "spontaneous" facial movements may be facilitated by the presence of others. Further, there is no direct evidence that any particular facial movement provides an unambiguous expression of a specific emotion. However, information communicated by facial movements is not necessarily extrinsic to emotion. Facial movements not only transmit emotion-relevant information but also contribute to ongoing processes of emotional action in accordance with pragmatic theories.
16

Meister, Hartmut, Isa Winter, Moritz Waechtler, Pascale Sandmann, and Khaled Abdellatif. "Examination of audiovisual prosody in cochlear implant recipients." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A285. http://dx.doi.org/10.1121/10.0018860.

Full text
Abstract:
Prosody plays a vital role in verbal communication. It is important for the expression of emotions but also carries information on sentence stress or the distinction between questions and statements. Cochlear implant (CI) recipients are restricted in the use of acoustic prosody cues, especially in terms of the voice fundamental frequency. However, prosody is also perceived visually, as head and facial movements accompany the vocal expression. To date, few studies have addressed multimodal prosody perception in CI users. Controlled manipulation of acoustic cues is a valuable method to uncover and quantify prosody perception. For visual prosody, however, such a technique is more complicated. We describe a novel approach based on animations of virtual humans. Such a method has the advantage that, in parallel to acoustic manipulations, head and facial movements can be parametrized. It is shown that animations based on a virtual human generally provide motion cues similar to those of video recordings of a real talker. Parametrization yields fine-grained manipulation of visual prosody, which can be combined with modifications of acoustic features. This allows generating both congruent and incongruent stimuli with different salience. Initial results of using this method with CI recipients are presented and discussed.
17

Romero-Martínez, Ángel, Carolina Sarrate-Costa, and Luis Moya-Albiol. "A Systematic Review of the Role of Oxytocin, Cortisol, and Testosterone in Facial Emotional Processing." Biology 10, no. 12 (December 15, 2021): 1334. http://dx.doi.org/10.3390/biology10121334.

Full text
Abstract:
A topic of interest is the way decoding and interpreting facial emotional expressions can lead to mutual understanding. Facial emotional expression is a basic source of information that guarantees the functioning of other higher cognitive processes (e.g., empathy, cooperativity, prosociality, or decision-making, among others). In this regard, hormones such as oxytocin, cortisol, and/or testosterone have been found to be important in modifying facial emotion processing. In fact, brain structures that participate in facial emotion processing have been shown to be rich in receptors for these hormones. Nonetheless, much of this research has been based on correlational designs. In recent years, a growing number of researchers have tried to carry out controlled laboratory manipulation of these hormones by administering synthetic forms of these hormones. The main objective of this study was to carry out a systematic review of studies that assess whether manipulation of these three hormones effectively promotes significant alterations in facial emotional processing. To carry out this review, PRISMA quality criteria for reviews were followed, using the following digital databases: PsycINFO, PubMed, Dialnet, Psicodoc, Web of Knowledge, and the Cochrane Library, and focusing on manuscripts with a robust research design (e.g., randomized, single- or double-blind, and/or placebo-controlled) to increase the value of this systematic review. An initial identification of 6340 abstracts and retrieval of 910 full texts led to the final inclusion of 101 papers that met all the inclusion criteria. Only about 18% of the manuscripts included reported a direct effect of hormone manipulation. In fact, emotional accuracy seemed to be enhanced after oxytocin increases, but it diminished when cortisol and/or testosterone increased. Nonetheless, when emotional valence and participants’ gender were included, hormonal manipulation reached significance (in around 53% of the articles). In fact, these studies offered a heterogeneous pattern in the way these hormones altered speed processing, attention, and memory. This study reinforces the idea that these hormones are important, but not the main modulators of facial emotion processing. As our comprehension of hormonal effects on emotional processing improves, the potential to design good treatments to improve this ability will be greater.
18

Wu, Peng, Isabel Gonzalez, Georgios Patsis, Dongmei Jiang, Hichem Sahli, Eric Kerckhofs, and Marie Vandekerckhove. "Objectifying Facial Expressivity Assessment of Parkinson’s Patients: Preliminary Study." Computational and Mathematical Methods in Medicine 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/427826.

Full text
Abstract:
Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as "facial masking," a symptom in which facial muscles become rigid. To improve the clinical assessment of facial expressivity in PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD patients by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To voluntarily produce spontaneous facial expressions that resemble those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were elicited using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants' self-reports. Disgust-induced emotions were rated significantly higher than the other emotions. Thus, we focused on analyzing the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Differences between PD patients at different stages of disease progression were also observed.
19

Lee, Isack, and Seok Bong Yoo. "Latent-PER: ICA-Latent Code Editing Framework for Portrait Emotion Recognition." Mathematics 10, no. 22 (November 14, 2022): 4260. http://dx.doi.org/10.3390/math10224260.

Full text
Abstract:
Although real-image emotion recognition has been developed in several studies, an acceptable accuracy level has not been achieved in portrait drawings. This paper proposes a portrait emotion recognition framework based on independent component analysis (ICA) and latent codes to overcome the performance degradation problem in drawings. This framework employs latent code extracted through a generative adversarial network (GAN)-based encoder. It learns independently from factors that interfere with expression recognition, such as color, small occlusion, and various face angles. It is robust against environmental factors since it filters latent code by adding an emotion-relevant code extractor to extract only information related to facial expressions from the latent code. In addition, an image is generated by changing the latent code to the direction of the eigenvector for each emotion obtained through the ICA method. Since only the position of the latent code related to the facial expression is changed, there is little external change and the expression changes in the desired direction. This technique is helpful for qualitative and quantitative emotional recognition learning. The experimental results reveal that the proposed model performs better than the existing models, and the latent editing used in this process suggests a novel manipulation method through ICA. Moreover, the proposed framework can be applied for various portrait emotion applications from recognition to manipulation, such as automation of emotional subtitle production for the visually impaired, understanding the emotions of objects in famous classic artwork, and animation production assistance.
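A toy version of the ICA step, assuming scikit-learn is available: independent components are estimated from a collection of latent codes, and a code is shifted along one component to alter a single factor. The dimensions and the mapping of components to emotions are placeholders, not the paper's trained pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

latent_codes = np.random.randn(1000, 512)   # GAN-encoder latents (placeholder data)

ica = FastICA(n_components=16, random_state=0)
ica.fit(latent_codes)

# Each row of components_ is a direction in latent space; suppose inspection of
# generated images suggests that one of them correlates with happiness.
happy_direction = ica.components_[3]
happy_direction = happy_direction / np.linalg.norm(happy_direction)

code = latent_codes[0]
edited_code = code + 2.0 * happy_direction  # move the latent along that factor only
```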
20

Henry, Tyson R., Andrey K. Yeatts, Scott E. Hudson, Brad A. Myers, and Steven Feiner. "A Nose Gesture Interface Device: Extending Virtual Realities." Presence: Teleoperators and Virtual Environments 1, no. 2 (January 1992): 258–61. http://dx.doi.org/10.1162/pres.1992.1.2.258.

Full text
Abstract:
This paper reports on the development of a nose-machine interface device that provides real-time gesture, position, smell and facial expression information. The DATANOSE™ (Data AtomaTa CORNUCOPIA pNeumatic Olfactory I/O-deviSE Tactile Manipulation; Olsen, 1986; Myers, 1991) allows novice users without any formal nose training to perform complex interactive tasks.
21

Lima, Camila Moura de, Caroline Xavier Grala, Gustavo Antônio Boff, Mariana Cristina Hoeppner Rondelli, and Márcia de Oliveira Nobre. "The importance of the facial and body expressions interpretation of domestic feline in clinical practice." Research, Society and Development 9, no. 11 (November 20, 2020): e4269119875. http://dx.doi.org/10.33448/rsd-v9i11.9875.

Full text
Abstract:
Felines can express several emotions through their behavior. Based on this, it is essential to understand the meaning of feline body and facial expressions, since this understanding helps carry out procedures safely and without causing discomfort to the patient. Therefore, this article aimed to evaluate the behavior and welfare of felines by observing their body language during clinical care and the procedures performed. Thirty adult felines, 16 females and 14 males, participated in this study, all neutered and of mixed breed. The felines were exposed to several clinical procedures: the physical examination, blood pressure measurement by the non-invasive method, assessment of the body condition score and morphometric measurements, electrocardiogram examination, and blood collection. The behavioral assessment recorded the most prevalent facial and postural expressions. In general, most felines reacted well to the procedures performed and felt comfortable, with relaxed and alert facial and body expressions. Few animals showed signs of fear. The use of practices aimed at welfare, together with the use of feline synthetic facial pheromone, contributed positively to the results found. Therefore, it is concluded that most felines show signs of relaxation and welfare during the manipulation involved in the clinical examination and the performance of the procedures, given a quiet and safe environment for care and the use of feline synthetic facial pheromone.
22

Arshed, Muhammad Asad, Ayed Alwadain, Rao Faizan Ali, Shahzad Mumtaz, Muhammad Ibrahim, and Amgad Muneer. "Unmasking Deception: Empowering Deepfake Detection with Vision Transformer Network." Mathematics 11, no. 17 (August 29, 2023): 3710. http://dx.doi.org/10.3390/math11173710.

Full text
Abstract:
With the development of image-generating technologies, significant progress has been made in the field of facial manipulation techniques. These techniques allow people to easily modify media information, such as videos and images, by substituting the identity or facial expression of one person with the face of another. This has significantly increased the availability and accessibility of such tools and of the manipulated content termed 'deepfakes'. An accurate method for detecting fake images is needed in time to prevent their misuse and manipulation. This paper examines the capability of the Vision Transformer (ViT), which extracts global features, to detect deepfake images effectively. After conducting comprehensive experiments, our method demonstrates a high level of effectiveness, achieving detection accuracy, precision, recall, and F1 rates of 99.5 to 100% for both the original and the mixture data set. To the best of our understanding, this study is a research endeavor incorporating real-world applications, specifically examining Snapchat-filtered images.
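A minimal sketch of fine-tuning a ViT for binary real-versus-fake classification, using the torchvision implementation; the backbone choice, head size, and optimiser settings are assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Pretrained ViT backbone; replace the classification head with a 2-way output.
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads = nn.Linear(model.hidden_dim, 2)  # 0 = real, 1 = fake

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)  # placeholder face crops
labels = torch.randint(0, 2, (8,))

logits = model(images)                # one training step on the toy batch
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```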
23

Zarrad, Anis. "An Extensible Game Engine to Develop Animated Facial Avatars in 3D Virtual Environment." International Journal of Virtual Communities and Social Networking 8, no. 2 (April 2016): 12–27. http://dx.doi.org/10.4018/ijvcsn.2016040102.

Full text
Abstract:
Avatar facial expression and animation in 3D Collaborative Virtual Environment (CVE) systems are reconstructed through a complex manipulation of all the details that compose the face, such as muscles, bones and wrinkles, in 3D space. The need for a fast and easy reconstruction approach has emerged in recent years due to its application in various domains: 3D disaster management, military training, etc. The simulation of these details must be as realistic as possible to convey different emotions according to the constantly changing situations in the CVE during runtime. For example, in 3D disaster management it is important to use dynamic avatar emotions: firefighters should be frightened when dealing with a fire disaster and smiling when treating injuries and evacuating inhabitants. However, facial animation remains a challenge that restricts the rapid and easy development of facial animation systems. In this work, the author presents an extensible game engine architecture to easily produce real-time facial animations using script atomic actions, without having to deal with control structures and a 3D programming language. The proposed architecture defines various controllers, object behaviors, tactical and graphics rendering, and collision effects to quickly design a 3D virtual environment. First, the author gives the concept of an atomic expression and the method for building a parametrized script file according to the atomic expression. Then the author shows the validity of the generated expressions based on the MPEG-4 facial animation framework. Finally, the feasibility of the proposed architecture is tested via a firefighter scenario. The author's approach has the advantage over previous techniques of directly providing an easier and faster technology with a high degree of programming independence. The author also minimizes interaction with the game engine during runtime by dynamically injecting the XML file into the game engine without stopping or restarting the engine.
24

Medin, Safa C., Bernhard Egger, Anoop Cherian, Ye Wang, Joshua B. Tenenbaum, Xiaoming Liu, and Tim K. Marks. "MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1962–71. http://dx.doi.org/10.1609/aaai.v36i2.20091.

Full text
Abstract:
Recent advances in generative adversarial networks (GANs) have led to remarkable achievements in face image synthesis. While methods that use style-based GANs can generate strikingly photorealistic face images, it is often difficult to control the characteristics of the generated faces in a meaningful and disentangled way. Prior approaches aim to achieve such semantic control and disentanglement within the latent space of a previously trained GAN. In contrast, we propose a framework that a priori models physical attributes of the face such as 3D shape, albedo, pose, and lighting explicitly, thus providing disentanglement by design. Our method, MOST-GAN, integrates the expressive power and photorealism of style-based GANs with the physical disentanglement and flexibility of nonlinear 3D morphable models, which we couple with a state-of-the-art 2D hair manipulation network. MOST-GAN achieves photorealistic manipulation of portrait images with fully disentangled 3D control over their physical attributes, enabling extreme manipulation of lighting, facial expression, and pose variations up to full profile view.
25

Xue, Ziyu, Qingtong Liu, Haichao Shi, Ruoyu Zou, and Xiuhua Jiang. "A Transformer-Based DeepFake-Detection Method for Facial Organs." Electronics 11, no. 24 (December 12, 2022): 4143. http://dx.doi.org/10.3390/electronics11244143.

Full text
Abstract:
Nowadays, deepfake detection for subtle expression manipulation, facial-detail modification, and smeared images has become a research hotspot. Existing deepfake-detection methods that operate on the whole face are coarse-grained: details are missed because the manipulated region of the image can be negligibly small. To address these problems, we propose to build a transformer model for a deepfake-detection method that works organ by organ to obtain the deepfake features. We reduce the detection weight of defaced or unclear organs to prioritize the detection of clear and intact organs. Meanwhile, to simulate the real-world environment, we build a Facial Organ Forgery Detection Test Dataset (FOFDTD), which includes images of masked faces, faces with sunglasses, and undecorated faces collected from the network. Experimental results on four benchmarks, i.e., FF++, DFD, DFDC-P, and Celeb-DF, and on the FOFDTD dataset demonstrate the effectiveness of our proposed method.
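The organ-weighting idea can be illustrated with a small sketch: per-organ forgery scores are combined with weights that down-rank occluded or blurry regions. The scores, visibility estimates, and weighting rule below are invented placeholders, not outputs of the proposed model.

```python
# Hypothetical per-organ forgery scores (0 = real, 1 = fake) and visibility
# estimates (0 = fully occluded or blurred, 1 = clear) for one face image.
organ_scores = {"eyes": 0.82, "nose": 0.40, "mouth": 0.75}
visibility = {"eyes": 0.2, "nose": 0.9, "mouth": 1.0}  # e.g. sunglasses hide the eyes

def fuse(scores, visibility):
    """Visibility-weighted average: unclear organs contribute less to the verdict."""
    total_weight = sum(visibility.values())
    return sum(scores[organ] * visibility[organ] for organ in scores) / total_weight

print(round(fuse(organ_scores, visibility), 3))  # the verdict is dominated by nose and mouth
```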
26

Gentsch, Kornelia, Ursula Beermann, Lingdan Wu, Stéphanie Trznadel, and Klaus R. Scherer. "Temporal Unfolding of Micro-valences in Facial Expression Evoked by Visual, Auditory, and Olfactory Stimuli." Affective Science 1, no. 4 (November 13, 2020): 208–24. http://dx.doi.org/10.1007/s42761-020-00020-y.

Full text
Abstract:
Appraisal theories suggest that valence appraisal should be differentiated into micro-valences, such as intrinsic pleasantness and goal-/need-related appraisals. In contrast to a macro-valence approach, this dissociation explains, among other things, the emergence of mixed or blended emotions. Here, we extend earlier research that showed that these valence types can be empirically dissociated. We examine the timing and the response patterns of these two micro-valences via measuring facial muscle activity changes (electromyography, EMG) over the brow and the cheek regions. In addition, we explore the effects of the sensory stimulus modality (vision, audition, and olfaction) on these patterns. The two micro-valences were manipulated in a social judgment task: first, intrinsic un/pleasantness (IP) was manipulated by exposing participants to appropriate stimuli presented in different sensory domains followed by a goal conduciveness/obstruction (GC) manipulation consisting of feedback on participants' judgments that were congruent or incongruent with their task-related goal. The results show significantly different EMG responses and timing patterns for both types of micro-valence, confirming the prediction that they are independent, consecutive parts of the appraisal process. Moreover, the lack of interaction effects with the sensory stimulus modality suggests high generalizability of the underlying appraisal mechanisms across different perception channels.
27

Zarrad, Anis. "A Dynamic Platform for Developing 3D Facial Avatars in a Networked Virtual Environment." International Journal of Computer Games Technology 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/8489278.

Full text
Abstract:
Avatar facial expression and animation in 3D collaborative virtual environment (CVE) systems are reconstructed through a complex manipulation of muscles, bones, and wrinkles in 3D space. The need for a fast and easy reconstruction approach has emerged in recent years due to its application in various domains: 3D disaster management, virtual shopping, and military training. In this work we propose a new script language based on atomic parametric actions to easily produce real-time facial animation. To minimize use of the game engine, we introduce a script-based component in which the user provides simple short script fragments to feed the engine with a new animation on the fly. During runtime, when an embedded animation is required, an XML file is created and injected into the game engine without stopping or restarting the engine. The resulting animation method preserves real-time performance because the modification occurs not through modification of the 3D code that describes the CVE and its objects but rather through modification of the action scenario that rules when an animation happens or might happen in that specific situation.
28

Grave, J., S. Soares, N. Madeira, P. Rodrigues, T. Santos, C. Roque, S. Morais, C. Pereira, and V. Santos. "Control of attention in bipolar disorder: Effects of perceptual load in processing task-irrelevant facial expressions." European Psychiatry 33, S1 (March 2016): S335. http://dx.doi.org/10.1016/j.eurpsy.2016.01.1172.

Full text
Abstract:
Bipolar disorder (BD), along with schizophrenia, is one of the most severe psychiatric conditions and is correlated with attentional deficits and emotion dysregulation. Bipolar patients appear to be highly sensitive to the presence of emotional distractors. Yet, no study has investigated whether perceptual load modulates the interference of emotionally distracting information. Our main goal was to test whether bipolar patients are more sensitive to task-irrelevant emotional stimuli, even when the task demands a large amount of attentional resources. Fourteen participants with BD I or BD II and 14 controls, age- and gender-matched, performed a target-letter discrimination task with emotional task-irrelevant stimuli (angry, happy and neutral facial expressions). Target letters were presented among five distractor letters, which could be the same (low perceptual load) or different (high perceptual load). Participants had to discriminate the target letter and ignore the facial expression. Response time and accuracy rate were analyzed. Results showed a greater interference of facial stimuli at high load than at low load, confirming the effectiveness of the perceptual load manipulation. More importantly, patients were significantly slower at high load. This is consistent with deficits in the control of attention, showing that bipolar patients are more prone to distraction by task-irrelevant stimuli only when the task is more demanding. Moreover, for bipolar patients, neutral and angry faces resulted in a higher interference with the task (longer response times), compared to controls, suggesting an attentional bias for neutral and threatening social cues. Nevertheless, a more detailed investigation of the attentional impairments in social contexts in BD is needed. Disclosure of interest: The authors have not supplied their declaration of competing interest.
29

Harmer, C. J., M. Charles, S. McTavish, E. Favaron, and P. J. Cowen. "Negative ion treatment increases positive emotional processing in seasonal affective disorder." Psychological Medicine 42, no. 8 (December 13, 2011): 1605–12. http://dx.doi.org/10.1017/s0033291711002820.

Full text
Abstract:
Background: Antidepressant drug treatments increase the processing of positive compared to negative affective information early in treatment. Such effects have been hypothesized to play a key role in the development of later therapeutic responses to treatment. However, it is unknown whether these effects are a common mechanism of action for different treatment modalities. High-density negative ion (HDNI) treatment is an environmental manipulation that has efficacy in randomized clinical trials in seasonal affective disorder (SAD). Method: The current study investigated whether a single session of HDNI treatment could reverse negative affective biases seen in seasonal depression using a battery of emotional processing tasks in a double-blind, placebo-controlled randomized study. Results: Under placebo conditions, participants with seasonal mood disturbance showed reduced recognition of happy facial expressions, increased recognition memory for negative personality characteristics and increased vigilance to masked presentation of negative words in a dot-probe task compared to matched healthy controls. Negative ion treatment increased the recognition of positive compared to negative facial expression and improved vigilance to unmasked stimuli across participants with seasonal depression and healthy controls. Negative ion treatment also improved recognition memory for positive information in the SAD group alone. These effects were seen in the absence of changes in subjective state or mood. Conclusions: These results are consistent with the hypothesis that early change in emotional processing may be an important mechanism for treatment action in depression and suggest that these effects are also apparent with negative ion treatment in seasonal depression.
30

Abualula, Yosra, and Eric Allard. "Age Similarity in Emotion Perception Based on Eye Gaze Manipulation." Innovation in Aging 4, Supplement_1 (December 1, 2020): 455–56. http://dx.doi.org/10.1093/geroni/igaa057.1474.

Full text
Abstract:
The purpose of this study was to examine age differences in emotion perception as a function of emotion type and gaze direction. Old and young adult participants were presented with facial images showing happiness, sadness, fear, anger and disgust while having their eyes tracked. The image stimuli included a manipulation of eye gaze. Half of the facial expressions had a directed eye gaze while the other half showed an averted gaze. A 2 (age) x 2 (gaze) x 5 (emotion) repeated measures ANOVA was used to analyze emotion perception scores and fixation to eye and mouth regions of the face. The manipulation of eye gaze yielded more age similarities than differences in emotion perception. Overall, we did not detect age differences in recognition ability. However, we found that certain emotion categories differentially impacted emotion perception. Interestingly, we observed that an averted gaze led to beneficial performance for fear and disgust faces. Additionally, participants spent more time fixating on the eye regions of sad facial expressions. We discuss how naturalistic manipulations of various facial features could impact age-related differences (or similarities) in emotion perception.
31

Mendolia, Marilyn. "Facial Identity Memory Is Enhanced When Sender’s Expression Is Congruent to Perceiver’s Experienced Emotion." Psychological Reports 121, no. 5 (November 24, 2017): 892–908. http://dx.doi.org/10.1177/0033294117741655.

Full text
Abstract:
The role of the social context in facial identity recognition and expression recall was investigated by manipulating the sender’s emotional expression and the perceiver’s experienced emotion during encoding. A mixed-design with one manipulated between-subjects factor (perceiver’s experienced emotion) and two within-subjects factors (change in experienced emotion and sender’s emotional expression) was used. Senders’ positive and negative expressions were implicitly encoded while perceivers experienced their baseline emotion and then either a positive or a negative emotion. Facial identity recognition was then tested using senders’ neutral expressions. Memory for senders previously seen expressing positive or negative emotion was facilitated if the perceiver initially encoded the expression while experiencing a positive or a negative emotion, respectively. Furthermore, perceivers were confident of their decisions. This research provides a more detailed understanding of the social context by exploring how the sender–perceiver interaction affects the memory for the sender.
32

Theobald, Barry-John, Iain Matthews, Michael Mangini, Jeffrey R. Spies, Timothy R. Brick, Jeffrey F. Cohn, and Steven M. Boker. "Mapping and Manipulating Facial Expression." Language and Speech 52, no. 2-3 (June 2009): 369–86. http://dx.doi.org/10.1177/0023830909103181.

Full text
33

Kinchella, Jade, and Kun Guo. "Facial Expression Ambiguity and Face Image Quality Affect Differently on Expression Interpretation Bias." Perception 50, no. 4 (March 12, 2021): 328–42. http://dx.doi.org/10.1177/03010066211000270.

Full text
Abstract:
We often show an invariant or comparable recognition performance for perceiving prototypical facial expressions, such as happiness and anger, under different viewing settings. However, it is unclear to what extent the categorisation of ambiguous expressions and associated interpretation bias are invariant in degraded viewing conditions. In this exploratory eye-tracking study, we systematically manipulated both facial expression ambiguity (via morphing happy and angry expressions in different proportions) and face image clarity/quality (via manipulating image resolution) to measure participants’ expression categorisation performance, perceived expression intensity, and associated face-viewing gaze distribution. Our analysis revealed that increasing facial expression ambiguity and decreasing face image quality induced the opposite direction of expression interpretation bias (negativity vs. positivity bias, or increased anger vs. increased happiness categorisation), the same direction of deterioration impact on rating expression intensity, and qualitatively different influence on face-viewing gaze allocation (decreased gaze at eyes but increased gaze at mouth vs. stronger central fixation bias). These novel findings suggest that in comparison with prototypical facial expressions, our visual system has less perceptual tolerance in processing ambiguous expressions which are subject to viewing condition-dependent interpretation bias.
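The two stimulus manipulations have simple image-processing analogues: expression ambiguity by alpha-blending aligned happy and angry photographs of the same model, and image quality by down- and up-sampling. A rough sketch assuming Pillow; the file names and parameter values are placeholders, and the study itself used properly controlled morphing rather than plain pixel blending.

```python
from PIL import Image

happy = Image.open("model_happy.png").convert("L")
angry = Image.open("model_angry.png").convert("L").resize(happy.size)

# Expression ambiguity: alpha = 0.5 gives a maximally ambiguous 50/50 blend.
# (True morphing also warps geometry; blending only approximates the idea.)
morph = Image.blend(happy, angry, alpha=0.5)

# Image quality: reduce resolution, then scale back up to the original size.
width, height = morph.size
degraded = morph.resize((width // 8, height // 8)).resize((width, height), Image.BILINEAR)
degraded.save("stimulus_morph50_lowres.png")
```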
34

Mermier, Julia, Ermanno Quadrelli, Hermann Bulf, and Chiara Turati. "Ostracism modulates children’s recognition of emotional facial expressions." PLOS ONE 18, no. 6 (June 15, 2023): e0287106. http://dx.doi.org/10.1371/journal.pone.0287106.

Full text
Abstract:
Ostracism has been shown to induce considerable physiological, behavioral and cognitive changes in adults. Previous research demonstrated its effects on children’s cognitive and behavioral abilities, but less is known about its impact on their capacity to recognize subtle variations in social cues. The present study aimed at investigating whether social manipulations of inclusion and ostracism modulate emotion recognition abilities in children, and whether this modulation varies across childhood. To do so, 5- and 10-year-old children participated in a computer-based ball tossing game called Cyberball during which they were either included or ostracized. Then, they completed a facial emotion recognition task in which they were required to identify neutral facial expressions, or varying levels of intensity of angry and fearful facial expressions. Results indicated lower misidentification rates for children who were previously ostracized as compared to children who were previously included, both at 5 and 10 years of age. Moreover, when looking at children’s accuracy and sensitivity to facial expressions, 5-year-olds’ decoding abilities were affected by the social manipulation, while no difference between included and ostracized participants was observed for 10-year-olds. In particular, included and ostracized 10-year-old children as well as ostracized 5-year-olds showed higher accuracy and sensitivity for expressions of fear as compared to anger, while no such difference was observed for included 5-year-olds. Overall, the current study presents evidence that Cyberball-induced inclusion and ostracism modulate children’s recognition of emotional faces.
35

King, R., D. Jecmen, J. Mitchell, K. Ralston, J. Gould, A. Burns, A. Bullock, M. A. Grandner, A. Alkozei, and W. D. Killgore. "0081 Habitual Sleep Duration is Negatively Correlated with Emotional Reactivity within the Rostral Anterior Cingulate Cortex in Individuals with PTSD." Sleep 43, Supplement_1 (April 2020): A32—A33. http://dx.doi.org/10.1093/sleep/zsaa056.079.

Full text
Abstract:
Introduction: Sleep difficulties, such as insomnia, are highly prevalent in individuals with Post-Traumatic Stress Disorder (PTSD). However, sleep deprivation can also increase emotional reactivity to positive (as well as negative) stimuli. While the effects of sleep loss on emotional perception in healthy individuals have been documented, it remains unclear how lack of sleep in individuals with PTSD may affect their emotional reactivity to positive stimuli. We hypothesized that lower habitual sleep duration would be associated with greater functional brain activation changes in response to subliminally presented happy faces in brain areas of the reward network, such as the rostral anterior cingulate cortex (rACC). Methods: Thirty-nine individuals with DSM-5 confirmed PTSD were administered the Pittsburgh Sleep Quality Index (PSQI) as a measure of their average nightly sleep duration over the past month. Participants then underwent fMRI imaging while viewing subliminal presentations of faces displaying happiness, using a backward-masked facial affect paradigm to minimize conscious awareness of the expressed emotion. Brain activation to masked happy expressions was regressed against sleep duration in SPM12. Results: There was a negative correlation between habitual sleep duration and activation within the rACC in response to the masked happy faces (x = 14, y = 40, z = 0; k = 102, pFWE-corr = 0.008). Conclusion: Individuals with PTSD who average less sleep at night showed greater emotional reactivity, as indexed by greater functional brain activation changes within an area of the reward network, than individuals who obtained more sleep per night. Future research involving actual manipulation of sleep duration will be necessary to determine whether this finding reflects the well-known antidepressant effect of sleep deprivation or a form of greater emotional expression error monitoring among traumatized patients when lacking sleep. Regardless, these findings suggest that insufficient sleep could affect unconsciously perceived emotion in faces and potentially affect social and emotional responses among individuals with PTSD. Support: US Army Medical Research and Materiel Command: W81XWH-14-1-0570.
36

Duong, Van Chien, Emma Regolini, Billy Sung, Min Teah, and Siobhan Hatton-Jones. "Is more really better for in-store experience? A psychophysiological experiment on sensory modalities." Journal of Consumer Marketing 39, no. 2 (February 8, 2022): 218–29. http://dx.doi.org/10.1108/jcm-02-2020-3656.

Abstract:
Purpose The purpose of this study is to understand whether increasing the number of sensory modalities being stimulated impacts consumers’ in-store emotional responses (i.e. in-store enjoyment and arousal), store image perception and brand attitude. Design/methodology/approach The study used a between-subjects experimental design to examine 551 individuals’ perceptions and emotional responses in four sensory modalities stimulation conditions (i.e. visual, visual-smell, visual-taste and visual-smell-taste). The study used virtual reality visualisation technology and psychophysiological measurements (i.e. skin conductance and facial expression) to improve the ecological validity of the study design. Findings The current study supports the importance of multisensory in-store atmospheric design. When the number of sensory modalities being stimulated was increased, more positive emotional responses and perceptions were recorded. Additionally, increasing the number of sensory modalities increased perceived intensity, and perceived intensity mediated the relationship between the stimulation of multisensory modalities and perception. Research limitations/implications The study is not without its limitations. For instance, the scope of the study was limited by the exclusion of auditory and haptic stimulation, the lack of manipulation of sensory intensity and the absence of a sensory congruency examination. Practical implications This study contributes to retail and marketing practices by providing evidence to assist the retail design of in-store sensory cues and customer experiences. Originality/value This research uses both self-reported measures and biometric measures to test the sole effect of sensory modalities being stimulated on consumer evaluation. To the best of the authors’ knowledge, this study is the first to examine store atmospheric designs with psychophysiological methodologies and an immersive, two-story-high, 180-degree-visual-field and dome-shaped display.
37

Yano, Ken, and Koichi Harada. "A Facial Expression Parameterization by Elastic Surface Model." International Journal of Computer Games Technology 2009 (2009): 1–11. http://dx.doi.org/10.1155/2009/397938.

Abstract:
We introduce a novel parameterization of facial expressions using an elastic surface model. The elastic surface model has been used as a deformation tool, especially for nonrigid organic objects. The expression parameters are either retrieved from existing articulated face models or obtained indirectly by manipulating facial muscles. The obtained parameters can be applied to target face models dissimilar to the source model to create novel expressions. Because of the limited number of control points, animation data created with this parameterization require less storage without reducing the range of deformation provided. The proposed method can be utilized in many ways: (1) creating a novel facial expression from scratch, (2) parameterizing existing articulation data, (3) parameterizing indirectly by muscle construction, and (4) providing a new animation data format which requires less storage.
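
To make the control-point idea concrete, the following is a minimal sketch, not the authors' elastic surface model: it only illustrates how a small set of control-point displacements could drive a whole face mesh. The function names, the quadratic falloff, and the default radius are assumptions made for illustration.

    import numpy as np

    def deform_mesh(vertices, control_points, displacements, radius=0.15):
        # vertices:       (N, 3) face-mesh vertex positions
        # control_points: (K, 3) control-point positions (the expression "parameters")
        # displacements:  (K, 3) displacement of each control point for the target expression
        # radius:         assumed influence radius of a control point
        deformed = vertices.astype(float).copy()
        for cp, d in zip(control_points, displacements):
            dist = np.linalg.norm(vertices - cp, axis=1)
            # Quadratic falloff: full displacement at the control point, zero beyond the radius.
            w = np.clip(1.0 - dist / radius, 0.0, 1.0) ** 2
            deformed += w[:, None] * d
        return deformed

Storing only the K control-point displacements per frame, rather than every vertex position, is what keeps animation data in such a format compact, which matches the storage argument made in the abstract.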
38

Luo, Yifan, Feng Ye, Bin Weng, Shan Du, and Tianqiang Huang. "A Novel Defensive Strategy for Facial Manipulation Detection Combining Bilateral Filtering and Joint Adversarial Training." Security and Communication Networks 2021 (August 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/4280328.

Abstract:
Facial manipulation enables facial expressions to be tampered with or facial identities to be replaced in videos. The fake videos are so realistic that they are difficult even for human eyes to distinguish. This poses a great threat to social and public information security. A number of facial manipulation detectors have been proposed to address this threat. However, previous studies have shown that the accuracy of these detectors is sensitive to adversarial examples, and existing defense methods are very limited in terms of applicable scenarios and defensive effectiveness. This paper proposes a new defense strategy for facial manipulation detectors, which combines a passive defense method, bilateral filtering, with a proactive defense method, joint adversarial training, to mitigate the vulnerability of facial manipulation detectors to adversarial examples. Bilateral filtering is applied in the preprocessing stage, without modifying the model, to denoise input adversarial examples. Joint adversarial training starts at the training stage and mixes various adversarial examples with original examples to train the model, yielding a model that defends against multiple adversarial attacks. Experimental results show that the proposed defense strategy effectively helps facial manipulation detectors counter adversarial examples.
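
As a rough illustration of the passive half of such a defense, the sketch below denoises an input image with OpenCV's bilateral filter before it reaches a detector. The filter parameters and the detector object are placeholders, not the paper's actual settings.

    import cv2

    def denoise_input(image_bgr, d=5, sigma_color=50, sigma_space=50):
        # Bilateral filtering smooths small adversarial perturbations while preserving facial edges.
        # d, sigma_color and sigma_space are illustrative values, not the paper's settings.
        return cv2.bilateralFilter(image_bgr, d, sigma_color, sigma_space)

    # Hypothetical usage in front of a trained manipulation detector ("detector" is assumed):
    # frame = cv2.imread("suspect_frame.png")
    # score = detector.predict(denoise_input(frame))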
39

Ho, Hao Tam, Erich Schröger, and Sonja A. Kotz. "Selective Attention Modulates Early Human Evoked Potentials during Emotional Face–Voice Processing." Journal of Cognitive Neuroscience 27, no. 4 (April 2015): 798–818. http://dx.doi.org/10.1162/jocn_a_00734.

Abstract:
Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face–voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face–voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 modulation showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face–voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective—one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.
40

Maglieri, Veronica, Marco Germain Riccobono, Dimitri Giunchi, and Elisabetta Palagi. "Navigating from live to virtual social interactions: looking at but not manipulating smartphones provokes a spontaneous mimicry response in the observers." Journal of Ethology 39, no. 3 (April 17, 2021): 287–96. http://dx.doi.org/10.1007/s10164-021-00701-6.

Abstract:
By gathering data on people during their ordinary daily activities, we tested whether looking at, but not manipulating, smartphones led to a mimicry response in the observer. Manipulating and looking at the device (experimental condition), more than its mere manipulation (control condition), was critical for eliciting a mimicry response in the observer. Sex, age and relationship quality between the experimenter and the observer had no effect on the smartphone mimicry response, which tended to decrease during social meals. Given the role of food as a tool for increasing social affiliation, it is possible that during communal eating people engage in other forms of mimicry involving facial expressions and postures rather than the use of objects. Understanding the ethological mechanisms of smartphone use at an everyday social scale could unveil the processes underlying the widespread and increasing use of these devices at a large scale.
41

Ueda, Yoshiyuki. "Understanding Mood of the Crowd with Facial Expressions: Majority Judgment for Evaluation of Statistical Summary Perception." Attention, Perception, & Psychophysics 84, no. 3 (March 15, 2022): 843–60. http://dx.doi.org/10.3758/s13414-022-02449-8.

Abstract:
We intuitively perceive the mood, or collective information, of facial expressions without much effort. Although it is known that statistical summarization occurs instantaneously even for faces, it might be hard to perceive precise summary statistics of facial expressions (i.e., using all of them equally), since recognizing them requires binding multiple features of a face. This study assessed which information is extracted from the crowd to understand mood. In a series of experiments, twelve individual faces with happy and neutral expressions (or angry and neutral expressions) were presented simultaneously, and participants reported which expression appeared more frequently. To perform this task correctly, participants must perceive the precise distribution of facial expressions in the crowd. If participants could perceive ensembles based on every face instantaneously, expressions presented on more than half of the faces (in a single ensemble/trial) would have been identified as more frequently presented and the just noticeable difference would be small. The results showed that participants did not report the emotional expression as more frequent until there were considerably more emotional than neutral faces, suggesting that facial expression ensembles were not perceived from all faces. Manipulating the presentation layout revealed that participants’ judgments heavily weight only a subset of the faces in the center of the crowd, regardless of their visual size. Moreover, individual differences in the precision of summary statistical perception were related to visual working memory. Based on these results, this study provides a speculative explanation of summary perception of real distinctive faces.
42

Dolensek, Nejc, Daniel A. Gehrlach, Alexandra S. Klein, and Nadine Gogolla. "Facial expressions of emotion states and their neuronal correlates in mice." Science 368, no. 6486 (April 2, 2020): 89–94. http://dx.doi.org/10.1126/science.aaz9468.

Abstract:
Understanding the neurobiological underpinnings of emotion relies on objective readouts of the emotional state of an individual, which remains a major challenge especially in animal models. We found that mice exhibit stereotyped facial expressions in response to emotionally salient events, as well as upon targeted manipulations in emotion-relevant neuronal circuits. Facial expressions were classified into distinct categories using machine learning and reflected the changing intrinsic value of the same sensory stimulus encountered under different homeostatic or affective conditions. Facial expressions revealed emotion features such as intensity, valence, and persistence. Two-photon imaging uncovered insular cortical neuron activity that correlated with specific facial expressions and may encode distinct emotions. Facial expressions thus provide a means to infer emotion states and their neuronal correlates in mice.
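
The abstract does not specify the classification pipeline, so the following is only a generic, hypothetical sketch of how facial-expression video frames could be turned into descriptors and classified into categories. The feature choice (HOG), the classifier, and all variable names are assumptions, not the study's actual method.

    import numpy as np
    from skimage.feature import hog
    from sklearn.ensemble import RandomForestClassifier

    def frame_descriptor(gray_frame):
        # Histogram-of-oriented-gradients descriptor of one grayscale face frame.
        return hog(gray_frame, orientations=8, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

    # "frames" and "labels" are placeholders for annotated face-video frames:
    # X = np.array([frame_descriptor(f) for f in frames])
    # clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
    # predicted_emotions = clf.predict(X)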
43

Vacaru, Stefania V., Johanna E. van Schaik, Erik de Water, and Sabine Hunnius. "Five-year-olds’ facial mimicry following social ostracism is modulated by attachment security." PLOS ONE 15, no. 12 (December 29, 2020): e0240680. http://dx.doi.org/10.1371/journal.pone.0240680.

Abstract:
Social ostracism triggers an increase in affiliative behaviours. One such behaviour is the rapid copying of others’ facial expressions, called facial mimicry. Thus far, it remains unknown how individual differences in intrinsic affiliation motivation regulate responses to social ostracism during early development. We examined children’s facial mimicry following ostracism as modulated by individual differences in affiliation motivation, expressed in their attachment tendencies. Resistant and avoidant tendencies are characterized by high and low affiliation motivation, and were hypothesized to lead to enhancement or suppression of facial mimicry towards an ostracizing partner, respectively. Following an ostracism manipulation in which children played a virtual game (Cyberball) with an includer and an excluder peer, mimicry of the two peers’ happy and sad facial expressions was recorded with electromyography (EMG). Attachment was assessed via a parent-report questionnaire. We found that 5-year-olds smiled in response to sad facial expressions of the excluder peer, while showing no facial reactions to the includer peer. Neither resistant nor avoidant tendencies predicted facial mimicry to the excluder peer. Yet securely attached children smiled towards the excluder peer when sad facial expressions were displayed. In conclusion, these findings suggest that facial reactions following ostracism are modulated by early attachment.
44

M., Murugappan, and Mutawa A. "Facial geometric feature extraction based emotional expression classification using machine learning algorithms." PLOS ONE 16, no. 2 (February 18, 2021): e0247131. http://dx.doi.org/10.1371/journal.pone.0247131.

Abstract:
Emotion plays a significant role in interpersonal communication and in improving social life. In recent years, facial emotion recognition has been widely adopted in developing human-computer interfaces (HCI) and humanoid robots. In this work, a triangulation method for extracting a novel set of geometric features is proposed to classify six emotional expressions (sadness, anger, fear, surprise, disgust, and happiness) using computer-generated markers. The subject’s face is recognized using Haar-like features. A mathematical model is applied to automatically position eight virtual markers at defined locations on the subject’s face. Five triangles are formed from the positions of these eight markers. The markers are then continuously tracked by the Lucas-Kanade optical flow algorithm while the subjects articulate facial expressions. The movement of the markers during a facial expression directly changes the properties of each triangle. The area of the triangle (AoT), the inscribed circle circumference (ICC), and the inscribed circle area of the triangle (ICAT) are extracted as features to classify the facial emotions. These features are used to distinguish six different facial emotions using various types of machine learning algorithms. The ICAT feature gives the maximum mean classification rate of 98.17% with a Random Forest (RF) classifier, compared to the other features and classifiers, in distinguishing emotional expressions.
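
The three geometric features named in the abstract can be computed directly from the coordinates of one marker triangle. The sketch below does so with Heron's formula; the marker coordinates in the usage comment are placeholders, not values from the study.

    import numpy as np

    def triangle_features(p1, p2, p3):
        # AoT, ICC and ICAT for one marker triangle given three 2-D marker positions.
        a = np.linalg.norm(p2 - p3)
        b = np.linalg.norm(p1 - p3)
        c = np.linalg.norm(p1 - p2)
        s = (a + b + c) / 2.0                                      # semi-perimeter
        aot = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))   # area via Heron's formula
        r = aot / s                                                # inradius
        icc = 2.0 * np.pi * r                                      # inscribed circle circumference
        icat = np.pi * r ** 2                                      # inscribed circle area
        return aot, icc, icat

    # Placeholder marker coordinates:
    # aot, icc, icat = triangle_features(np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0]))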
45

Roesch, Etienne B., Lucas Tamarit, Lionel Reveret, Didier Grandjean, David Sander, and Klaus R. Scherer. "FACSGen: A Tool to Synthesize Emotional Facial Expressions Through Systematic Manipulation of Facial Action Units." Journal of Nonverbal Behavior 35, no. 1 (November 19, 2010): 1–16. http://dx.doi.org/10.1007/s10919-010-0095-9.

46

Kobayashi, Kazuki, Seiji Yamada, Shinobu Nakagawa, and Yasunori Saito. "Rebo: A Pet-Like Strokable Remote Control." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 7 (November 20, 2012): 771–83. http://dx.doi.org/10.20965/jaciii.2012.p0771.

Abstract:
This paper describes a pet-like remote control called Rebo for home appliances and TVs. Rebo has three new advantages over conventional remote controls: user-friendliness, function awareness, and functional manipulation by stroking touch panels. Its pet-like presence and facial expressions make it seem friendly to users. Its function awareness makes users easily aware of its functions through expressive feedback that informs them of the meaning of their manipulations by showing part of the function that is to be executed. The ability to manipulate its functions by stroking it as one would a pet also enables users to operate Rebo without having to look for buttons to push. We conducted experiments in which we monitored the eye movements of users while they operated Rebo and another remote control, and administered questionnaires to users afterwards. The experimental results revealed significant aspects of Rebo and confirmed its advantages.
47

Bradley, Brendan P., Karin Mogg, Sara J. Falla, and Lucy R. Hamilton. "Attentional Bias for Threatening Facial Expressions in Anxiety: Manipulation of Stimulus Duration." Cognition & Emotion 12, no. 6 (November 1998): 737–53. http://dx.doi.org/10.1080/026999398379411.

48

Liu, Jianyi, Yang Liu, Heng Jiang, Jingjing Zhao, and Xiaobin Ding. "Facial feedback manipulation influences the automatic detection of unexpected emotional body expressions." Neuropsychologia 195 (March 2024): 108802. http://dx.doi.org/10.1016/j.neuropsychologia.2024.108802.

49

Vieira, Roberto Cesar Cavalcante, Creto Vidal, and Joaquim Bento Cavalcante-Neto. "Expression Cloning Based on Anthropometric Proportions and Deformations by Motion of Spherical Influence Zones." Journal on Interactive Systems 2, no. 1 (May 20, 2011): 1. http://dx.doi.org/10.5753/jis.2011.563.

Abstract:
Virtual three-dimensional creatures are active actors in many types of applications nowadays, such as virtual reality, games, and computer animation. The virtual actors encountered in those applications are very diverse, but usually have humanlike behavior and facial expressions. This paper deals with the mapping of facial expressions between virtual characters, based on anthropometric proportions and geometric manipulations by moving influence zones. The facial proportions of a base model are used to transfer expressions to any other model with similar global characteristics (if the base model is a human, for instance, the other models need to have two eyes, one nose, and one mouth). With this solution, it is possible to insert new virtual characters into real-time applications without going through the tedious process of customizing the characters’ emotions.
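
A minimal sketch of the proportion-based transfer idea follows, assuming inter-ocular distance as the anthropometric reference length and a linear falloff inside a single spherical influence zone. The names, the falloff, and the radius are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def clone_expression_zone(target_vertices, zone_center, source_displacement,
                              source_interocular, target_interocular, radius):
        # Rescale a landmark displacement from the source face by the ratio of an
        # anthropometric reference length (assumed here: inter-ocular distance),
        # then spread it over target vertices inside one spherical influence zone.
        scaled = source_displacement * (target_interocular / source_interocular)
        dist = np.linalg.norm(target_vertices - zone_center, axis=1)
        w = np.clip(1.0 - dist / radius, 0.0, 1.0)   # linear falloff, zero outside the sphere
        return target_vertices + w[:, None] * scaled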
50

Rosado, Pilar, Rubén Fernández, and Ferran Reverter. "GANs and Artificial Facial Expressions in Synthetic Portraits." Big Data and Cognitive Computing 5, no. 4 (November 4, 2021): 63. http://dx.doi.org/10.3390/bdcc5040063.

Abstract:
Generative adversarial networks (GANs) provide powerful architectures for deep generative learning. GANs have enabled us to achieve an unprecedented degree of realism in the creation of synthetic images of human faces, landscapes, and buildings, among others. Not only image generation, but also image manipulation is possible with GANs. Generative deep learning models are inherently limited in their creative abilities because of a focus on learning for perfection. We investigated the potential of GANs’ latent spaces to encode human expressions, highlighting the creative interest of suboptimal solutions rather than perfect reproductions, in pursuit of an artistic concept. We trained a Deep Convolutional GAN (DCGAN) and StyleGAN using a collection of portraits of detained persons, portraits of people who died of violent causes, and portraits of people taken during orgasm. We present results that diverge from the standard usage of GANs, with the specific intention of producing portraits that may assist us in the representation and recognition of otherness in contemporary identity construction.
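
One common way to explore a trained GAN's latent space, for instance to reach the imperfect in-between faces the authors are interested in, is spherical interpolation between two latent vectors. The sketch below assumes a pretrained "generator" callable and 512-dimensional Gaussian latents; both are placeholders rather than StyleGAN's actual API.

    import numpy as np

    def slerp(z0, z1, t):
        # Spherical interpolation between latent vectors z0 and z1, with 0 <= t <= 1.
        u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
        omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
        if np.isclose(omega, 0.0):
            return (1.0 - t) * z0 + t * z1
        return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

    # "generator" is an assumed pretrained model mapping a latent vector to a portrait image:
    # z_a, z_b = np.random.randn(512), np.random.randn(512)
    # frames = [generator(slerp(z_a, z_b, t)) for t in np.linspace(0.0, 1.0, 8)]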