Academic literature on the topic 'Emotional face'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Emotional face.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Emotional face"

1

Föcker, Julia, and Brigitte Röder. "Event-Related Potentials Reveal Evidence for Late Integration of Emotional Prosody and Facial Expression in Dynamic Stimuli: An ERP Study." Multisensory Research 32, no. 6 (2019): 473–97. http://dx.doi.org/10.1163/22134808-20191332.

Full text
Abstract:
The aim of the present study was to test whether multisensory interactions of emotional signals are modulated by intermodal attention and emotional valence. Faces, voices and bimodal emotionally congruent or incongruent face–voice pairs were randomly presented. The EEG was recorded while participants were instructed to detect sad emotional expressions in either faces or voices while ignoring all stimuli with another emotional expression and sad stimuli of the task-irrelevant modality. Participants processed congruent sad face–voice pairs more efficiently than sad stimuli paired with an incongruent emotion, and performance was higher in congruent bimodal compared to unimodal trials, irrespective of which modality was task-relevant. Event-related potentials (ERPs) to congruent emotional face–voice pairs started to differ from ERPs to incongruent emotional face–voice pairs at 180 ms after stimulus onset: irrespective of which modality was task-relevant, ERPs revealed a more pronounced positivity (180 ms post-stimulus) to emotionally congruent trials compared to emotionally incongruent trials if the angry emotion was presented in the attended modality. A larger negativity to incongruent compared to congruent trials was observed in the time range of 400–550 ms (N400) for all emotions (happy, neutral, angry), irrespective of whether faces or voices were task-relevant. These results suggest an automatic interaction of emotion-related information.
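The window analysis this abstract describes can be illustrated with a minimal sketch: extracting mean ERP amplitudes in the 400–550 ms window and comparing congruent and incongruent trials with a paired test. All array shapes, sampling parameters, and values below are assumptions for illustration, not data from the study.

```python
import numpy as np
from scipy import stats

sfreq = 500.0                                # assumed sampling rate in Hz
times = -0.2 + np.arange(600) / sfreq        # 600 samples: -200 ms to ~1000 ms

rng = np.random.default_rng(0)
# Simulated per-participant average ERPs at one electrode: (participants, timepoints)
congruent = rng.normal(0.0, 1.0, size=(20, times.size))
incongruent = rng.normal(-0.3, 1.0, size=(20, times.size))

# Mean amplitude in the 400-550 ms window (the N400-like effect reported above)
window = (times >= 0.400) & (times <= 0.550)
cong_amp = congruent[:, window].mean(axis=1)
incong_amp = incongruent[:, window].mean(axis=1)

# Paired comparison across participants
t_val, p_val = stats.ttest_rel(incong_amp, cong_amp)
print(f"N400 window: t = {t_val:.2f}, p = {p_val:.3f}")
```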
APA, Harvard, Vancouver, ISO, and other styles
2

Peterson, Johnathan Caleb, Carly Jacobs, John Hibbing, and Kevin Smith. "In your face." Politics and the Life Sciences 37, no. 1 (2018): 53–67. http://dx.doi.org/10.1017/pls.2017.13.

Full text
Abstract:
Research suggests that people can accurately predict the political affiliations of others using only information extracted from the face. It is less clear from this research, however, what particular facial physiological processes or features communicate such information. Using a model of emotion developed in psychology that treats emotional expressivity as an individual-level trait, this article provides a theoretical account of why emotional expressivity may provide reliable signals of political orientation, and it tests the theory in four empirical studies. We find statistically significant liberal/conservative differences in self-reported emotional expressivity, in facial emotional expressivity measured physiologically, in the perceived emotional expressivity and ideology of political elites, and in an experiment that finds that more emotionally expressive faces are perceived as more liberal.
APA, Harvard, Vancouver, ISO, and other styles
3

Zhao, Zikang, Yujia Zhang, Tianjun Wu, Hao Guo, and Yao Li. "Emotionally Controllable Talking Face Generation from an Arbitrary Emotional Portrait." Applied Sciences 12, no. 24 (December 14, 2022): 12852. http://dx.doi.org/10.3390/app122412852.

Full text
Abstract:
With the continuous development of cross-modality generation, audio-driven talking face generation has made substantial advances in terms of speech content and mouth shape, but existing research on talking face emotion generation is still relatively unsophisticated. In this work, we present Emotionally Controllable Talking Face Generation from an Arbitrary Emotional Portrait to synthesize lip-sync and an emotionally controllable high-quality talking face. Specifically, we take a facial reenactment perspective, using facial landmarks as an intermediate representation driving the expression generation of talking faces through the landmark features of an arbitrary emotional portrait. Meanwhile, decoupled design ideas are used to divide the model into three sub-networks to improve emotion control. They are the lip-sync landmark animation generation network, the emotional landmark animation generation network, and the landmark-to-animation translation network. The two landmark animation generation networks are responsible for generating content-related lip area landmarks and facial expression landmarks to correct the landmark sequences of the target portrait. Following this, the corrected landmark sequences and the target portrait are fed into the translation network to generate an emotionally controllable talking face. Our method controls the expressions of talking faces by driving the emotional portrait images while ensuring the generation of animated lip-sync, and can handle new audio and portraits not seen during training. A multi-perspective user study and extensive quantitative and qualitative evaluations demonstrate the superiority of the system in terms of visual emotion representation and video authenticity.
APA, Harvard, Vancouver, ISO, and other styles
4

Schaarschmidt, Nadine, and Thomas Koehler. "Experiencing Emotions in Video-Mediated Psychological Counselling Versus to Face-to-Face Settings." Societies 11, no. 1 (March 11, 2021): 20. http://dx.doi.org/10.3390/soc11010020.

Full text
Abstract:
How does using video technology influence the emotional experience of communication in psychological counselling? In this paper, the experience of emotion—as an essential factor in the communication between counsellor and client—is systematically compared for face-to-face and video formats. It is suggested that the research methodology for studying computer-mediated forms of communication links lab and (virtual) reality in an ideal way. Based on a sample of 27 cases, significant differences and their observed effect sizes are presented. The aim of this study is to investigate the emotional experience in direct and mediated interaction and thus to contribute to the systematic search for evidence as to whether and how the emotional experience in psychological counselling interviews changes during video-mediated transmission. The results suggest, among others, that negative emotions are more intense in the video format and positive emotions are intensified in the face-to-face format.
APA, Harvard, Vancouver, ISO, and other styles
5

Homorogan, C., R. Adam, R. Barboianu, Z. Popovici, C. Bredicean, and M. Ienciu. "Emotional Face Recognition in Bipolar Disorder." European Psychiatry 41, S1 (April 2017): S117. http://dx.doi.org/10.1016/j.eurpsy.2017.01.1904.

Full text
Abstract:
Introduction: Emotional face recognition is significant for social communication and is impaired in mood disorders such as bipolar disorder; individuals with bipolar disorder lack the ability to perceive facial expressions. Objectives: To analyse the capacity for emotional face recognition in subjects diagnosed with bipolar disorder. Aims: To establish a correlation between emotion recognition ability and the evolution of bipolar disease. Methods: A sample of 24 subjects diagnosed with bipolar disorder (according to ICD-10 criteria), hospitalised in the Psychiatry Clinic of Timisoara and monitored in the outpatient clinic, was analysed in this trial. Subjects were included in the trial based on inclusion/exclusion criteria. The analysed parameters were: socio-demographic factors (age, gender, education level), the number of relapses, the predominance of manic or depressive episodes, and the ability to identify emotions (Reading the Mind in the Eyes Test). Results: Most of the subjects (79.16%) had a low ability to identify emotions, 20.83% had a normal capacity to recognise emotions, and none of them had a high emotion recognition capacity. The positive emotions (love, joy, surprise) were recognised more easily, by 75% of the subjects, than the negative ones (anger, sadness, fear). There was no evident difference in emotional face recognition between individuals with a predominance of manic episodes and those with mostly depressive episodes, or by the number of relapses. Conclusions: Individuals with bipolar disorder have difficulties in identifying facial emotions, but with no obvious correlation with the analysed parameters. Disclosure of interest: The authors have not supplied their declaration of competing interest.
APA, Harvard, Vancouver, ISO, and other styles
6

Malsert, Jennifer, Khanh Tran, Tu Anh Thi Tran, Tho Ha-Vinh, Edouard Gentaz, and Russia Ha-Vinh Leuchter. "Cross-Cultural and Environmental Influences on Facial Emotional Discrimination Sensitivity in 9-Year-Old Children from Swiss and Vietnamese Schools." Swiss Journal of Psychology 79, no. 3-4 (December 2020): 89–99. http://dx.doi.org/10.1024/1421-0185/a000240.

Full text
Abstract:
The Other Race Effect (ORE), i.e., recognition facilitation for own-race faces, is a well-established phenomenon with broad evidence in adults and infants. Nevertheless, the ORE in older children is poorly understood, and even less so for emotional face processing. This research sampled 87 nine-year-old children from Vietnamese and Swiss schools. In two separate studies, we evaluated the children’s abilities to perceive the disappearance of emotions in Asian and Caucasian faces in an offset task. The first study evaluated an “emotional ORE” in Vietnamese-Asian, Swiss-Caucasian, and Swiss-Multicultural children. Offset times showed an emotional ORE in Vietnamese-Asian children living in an ethnically homogeneous environment, whereas the ethnic mix experienced by Swiss children seems to have balanced performance between face types. The second study compared socioemotionally trained versus untrained Vietnamese-Asian children. Vietnamese children showed a strong emotional ORE and tended to increase their sensitivity to emotion offset after training. Moreover, an effect of emotion consistent with previous observations in adults could suggest a cultural sensitivity to signs of disapproval. Taken together, the results suggest that 9-year-old children can present an emotional ORE, but that a heterogeneous environment or emotional training could strengthen face-processing abilities without reducing skills for their own group.
APA, Harvard, Vancouver, ISO, and other styles
7

Kim, Eunye. "The Effects of Classes on the Application of Emotional Regulation Task in Online Classes." Korean Society of Culture and Convergence 45, no. 2 (February 28, 2023): 175–91. http://dx.doi.org/10.33645/cnc.2023.02.45.02.175.

Full text
Abstract:
The purpose of this study is to examine the effects on learning flow, self-determination motivation, academic emotional regulation, and self-directed learning ability, which are known to affect learners' positive academic achievement, of applying emotion regulation strategies in non-face-to-face classes. Emotion regulation strategies were implemented with 64 college students in a non-face-to-face class for 10 weeks. Next, paired t-tests were conducted to compare learning flow, self-determination motivation, academic emotional regulation, and self-directed learning ability before and after. As a result, learning flow, academic emotional regulation, and self-directed learning ability significantly increased after the application of emotion regulation tasks in non-face-to-face classes. This study shows that if learners focus more on positive emotions in non-face-to-face classes, it is possible to intervene in both the learning and psychological aspects of students.
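A minimal sketch of the pre/post comparison described in this abstract, assuming hypothetical scores: a paired t-test on one of the measures (e.g. learning flow) before and after the intervention.

```python
from scipy import stats

# Hypothetical pre- and post-intervention learning-flow scores for the same learners
pre  = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3, 3.6, 2.7]
post = [3.4, 3.1, 3.6, 3.5, 3.0, 3.7, 3.8, 3.1]

t_val, p_val = stats.ttest_rel(post, pre)
print(f"paired t = {t_val:.2f}, p = {p_val:.3f}")
```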
APA, Harvard, Vancouver, ISO, and other styles
8

Isomura, Tomoko, and Tamami Nakano. "Automatic facial mimicry in response to dynamic emotional stimuli in five-month-old infants." Proceedings of the Royal Society B: Biological Sciences 283, no. 1844 (December 14, 2016): 20161948. http://dx.doi.org/10.1098/rspb.2016.1948.

Full text
Abstract:
Human adults automatically mimic others' emotional expressions, which is believed to contribute to sharing emotions with others. Although this behaviour appears fundamental to social reciprocity, little is known about its developmental process. Therefore, we examined whether infants show automatic facial mimicry in response to others' emotional expressions. Facial electromyographic activity over the corrugator supercilii (brow) and zygomaticus major (cheek) of four- to five-month-old infants was measured while they viewed dynamic clips presenting audiovisual, visual and auditory emotions. The audiovisual bimodal emotion stimuli were a display of a laughing/crying facial expression with an emotionally congruent vocalization, whereas the visual/auditory unimodal emotion stimuli displayed those emotional faces/vocalizations paired with a neutral vocalization/face, respectively. Increased activation of the corrugator supercilii muscle in response to audiovisual cries and the zygomaticus major in response to audiovisual laughter were observed between 500 and 1000 ms after stimulus onset, which clearly suggests rapid facial mimicry. By contrast, both visual and auditory unimodal emotion stimuli did not activate the infants' corresponding muscles. These results revealed that automatic facial mimicry is present as early as five months of age, when multimodal emotional information is present.
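As a rough, simulated illustration of the EMG analysis step described above (mean rectified muscle activity in a 500–1000 ms post-stimulus window, corrected against a pre-stimulus baseline); the sampling rate, trial counts, and values are assumptions, not the study's data.

```python
import numpy as np

sfreq = 1000.0                                  # assumed sampling rate (Hz)
times = -0.5 + np.arange(2000) / sfreq          # -500 ms to 1500 ms around stimulus onset

rng = np.random.default_rng(1)
# Rectified corrugator EMG for one condition: (trials, timepoints)
corrugator_cry = np.abs(rng.normal(1.0, 0.2, size=(30, times.size)))

baseline = (times >= -0.5) & (times < 0.0)
window = (times >= 0.5) & (times <= 1.0)

# Baseline-corrected mean activity in the 500-1000 ms window, per trial
response = (corrugator_cry[:, window].mean(axis=1)
            - corrugator_cry[:, baseline].mean(axis=1))
print(f"mean corrugator change to audiovisual crying: {response.mean():.3f}")
```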
APA, Harvard, Vancouver, ISO, and other styles
9

Curby, Kim M., Kareem J. Johnson, and Alyssa Tyson. "Face to face with emotion: Holistic face processing is modulated by emotional state." Cognition & Emotion 26, no. 1 (January 2012): 93–102. http://dx.doi.org/10.1080/02699931.2011.555752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Evers, Kris, Inneke Kerkhof, Jean Steyaert, Ilse Noens, and Johan Wagemans. "No Differences in Emotion Recognition Strategies in Children with Autism Spectrum Disorder: Evidence from Hybrid Faces." Autism Research and Treatment 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/345878.

Full text
Abstract:
Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eyes region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six-to-eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely, anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Emotional face"

1

Neth, Donald C. "Facial configuration and the perception of facial expression." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1189090729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mattavelli, Giulia Camilla. "Neural correlates of face evaluation: emotional expressions and social traits." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/43782.

Full text
Abstract:
Face processing is a crucial skill for human interaction. Accordingly, it is supported by a widely distributed fronto-temporo-occipital neural circuit (Haxby et al., 2000). The present work investigates the neural correlates of face expression processing by means of different neuroimaging and electrophysiological techniques. Using fMRI I investigated amygdala responses to basic emotions and activations in face-selective regions in response to social cues detected in faces (Study 1 and Study 2). These studies showed that the amygdala is highly responsive to fear expressions but also has a critical role in appraising socially relevant stimuli, and together with the posterior face-selective regions it is sensitive to face distinctiveness as well as the social meaning of face features. In Study 3 I demonstrated by means of TMS that the medial prefrontal cortex (mPFC) contains different neural representations for angry and happy expressions linked to lexical knowledge of emotions. Finally, the combined TMS-EEG experiment reported in Study 4 revealed interconnections between activity in the core and the extended system of face processing, and these interactions were found to be modulated by the type of behavioural task. Taken together, the present results help to clarify the role of different regions as part of the face perception system and suggest that the coupling between cortical areas and the coordinated activity of different regions in the distributed network are crucial to recognize the multiplicity of information that faces convey.
APA, Harvard, Vancouver, ISO, and other styles
3

Shostak, Lisa. "Social information processing, emotional face recognition and emotional response style in offending and non-offending adolescents." Thesis, King's College London (University of London), 2007. https://kclpure.kcl.ac.uk/portal/en/theses/social-information-processing-emotional-face-recognition-and-emotional-response-style-in-offending-and-nonoffending-adolescents(15ff1b2d-1e52-46b7-be1a-736098263ce1).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Siino, Rosanne Marie. "Emotional engagement on geographically distributed teams : exploring interaction challenges in mediated versus face-to-face meetings /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Palumbo, Letizia. "Beyond face value : involuntary emotional anticipation in typical development and Asperger's syndrome." Thesis, University of Hull, 2012. http://hydra.hull.ac.uk/resources/hull:6229.

Full text
Abstract:
Understanding and anticipating the behavior and associated mental/emotional states of mind of others is crucial for successful social interactions. Typically developed (TD) humans rely on the processing and integration of social cues that accompany others’ actions to make, either implicitly or explicitly, inferences about others’ mental states. Interestingly, the attribution of affective or mental states to the agent can in turn (top down) induce distortions in the visual perception of those actions (Hudson, Liu, & Jellema, 2009; Hudson & Jellema, 2011; Jellema, Pecchinenda, Palumbo, & Tan, 2011). The aim of this thesis was to investigate bottom-up and top-down influences on distortions in the perception of dynamic facial expressions and to explore the role those biases may play in action/emotion understanding.
APA, Harvard, Vancouver, ISO, and other styles
6

Merz, Sabine. "Face emotion recognition in children and adolescents; effects of puberty and callous unemotional traits in a community sample." University of New South Wales, Faculty of Science, Psychology, 2008. http://handle.unsw.edu.au/1959.4/41247.

Full text
Abstract:
Previous research suggests that, as well as behavioural difficulties, a small subset of aggressive and antisocial children show callous unemotional (CU) personality traits (i.e., lack of remorse and absence of empathy) that set them apart from their low-CU peers. These children have been identified as being most at risk of following a path of severe and persistent antisocial behaviour, showing distinct behavioural patterns, and have been found to respond less to traditional treatment programs. One particular focus of this thesis is that emerging findings have shown emotion recognition deficits within both groups. Whereas children who only show behavioural difficulties (in the absence of CU traits) have been found to misclassify vague and neutral expressions as anger, the presence of CU traits has been associated with an inability to correctly identify fear and, to a lesser extent, sadness. Furthermore, emotion recognition competence varies with age and development. In general, emotion recognition improves with age, but interestingly there is some evidence that it may become less efficient during puberty. No research could be located, however, that assessed emotion recognition through childhood and adolescence for children high and low on CU traits and antisocial behaviour. The primary focus of this study was to investigate the impact of these personality traits and pubertal development on emotion recognition competence in isolation and in combination. A specific aim was to assess whether puberty would exacerbate deficits in children with pre-existing difficulties in emotion recognition. The effect of gender, emotion type and measure characteristics, in particular the age of the target face, was also examined. A community sample of 703 children and adolescents aged 7-17 were administered the Strengths and Difficulties Questionnaire to assess adjustment and the Antisocial Process Screening Device to assess antisocial traits, and the Pubertal Development Scale was administered to evaluate pubertal stage. Empathy was assessed using the Bryant Index of Empathy for Children and Adolescents. Parents or caregivers completed the parent versions of these measures for their children. Emotion recognition ability was measured using the newly developed UNSW FACES task (Dadds, Hawes & Merz, 2004); a description of the development and validation of this measure is included. Contrary to expectations, emotion recognition accuracy was not negatively affected by puberty. In addition, no overall differences in emotion recognition ability were found due to participants' gender or the age group of the target face. The hypothesis that participants would be better at recognising emotions expressed by their own age group was therefore not supported. In line with expectations, significant negative associations between CU traits and fear recognition were found. However, these were small and, contrary to expectations, were found for girls rather than boys. Also, puberty did not exacerbate emotion recognition deficits in high-CU children. However, the relationship between CU traits and emotion recognition was affected differently by pubertal status. The implications of these results are discussed in relation to future research into emotion recognition deficits within this population. In addition, theoretical and practical implications of these findings for the development of antisocial behaviour and the treatment of children showing CU traits are explored.
APA, Harvard, Vancouver, ISO, and other styles
7

Garrod, Oliver G. B. "Mapping multivariate measures of brain response onto stimulus information during emotional face classification." Thesis, University of Glasgow, 2010. http://theses.gla.ac.uk/1662/.

Full text
Abstract:
The relationship between feature processing and visual classification in the brain has been explored through a combination of reverse correlation methods (i.e., “Bubbles” [22]) and electrophysiological measurements (EEG) taken during a facial emotion categorization task [63]. However, in the absence of any specific model of the brain response measurements, this and other [60] attempts to parametrically relate stimulus properties to measurements of brain activation are difficult to interpret. In this thesis I consider a blind data-driven model of brain response. Statistically independent model parameters are found to minimize the expectation of an objective likelihood function over time [55], and a novel combination of methods is proposed for separating the signal from the noise. The model’s estimated signal parameters are then objectively rated by their ability to explain the subject’s performance during a facial emotion classification task, and also by their ability to explain the stimulus features, as revealed in a Bubbles experiment.
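The blind, data-driven decomposition into statistically independent components that this abstract describes can be loosely illustrated with an off-the-shelf ICA; the sketch below uses scikit-learn's FastICA on simulated multichannel data and is only a generic stand-in, not the thesis's actual model.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_samples, n_sources, n_channels = 2000, 4, 32

sources = rng.laplace(size=(n_samples, n_sources))      # non-Gaussian latent sources
mixing = rng.normal(size=(n_sources, n_channels))       # unknown forward mixing
recordings = sources @ mixing + 0.1 * rng.normal(size=(n_samples, n_channels))

ica = FastICA(n_components=n_sources, random_state=0)
estimated = ica.fit_transform(recordings)               # (n_samples, n_sources)
print(estimated.shape)
```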
APA, Harvard, Vancouver, ISO, and other styles
8

Porto, Juliana Antola. "Neural bases of emotional face processing in infancy: a functional near-infrared spectroscopy study." Pontifícia Universidade Católica do Rio Grande do Sul, 2017. http://tede2.pucrs.br/tede2/handle/tede/7867.

Full text
Abstract:
The neural bases of facial emotion processing in infancy are largely unknown. The environmental factors that may impact facial processing and emotion recognition along the developmental course are also not clearly understood. However, early experiences, particularly involving consistent exposure to familiar caregiver faces, are believed to influence this course. The aim of this study was to investigate the neural correlates of infants’ emotional face processing using functional near-infrared spectroscopy (fNIRS), and examine the potential influence of infants’ early emotional experiences, indirectly measured by investigating maternal anxiety symptoms. Participants were 29 typically developing 5-month-old infants and their mothers, recruited from a community sample from the greater Boston area, MA, USA. Maternal anxiety was assessed using the trait component of the State-Trait Anxiety Inventory. Infants observed static visual images of a female model portraying happy and fearful expressions, while hemodynamic brain responses were measured using fNIRS. The oxyhemoglobin (oxyHb) and deoxyhemoglobin (deoxyHb) responses over frontal, parietal and temporal areas were compared for the emotional expressions in infants of mothers reporting low and high levels of anxiety symptoms. Results revealed a significant main effect of emotion (p=.022), driven by greater oxyHb concentration responses for happy compared to fearful faces. There was also a main effect of region (p=.013) induced by a significantly greater oxyHb concentration in temporal compared to frontal cortical regions (p=.031). Additionally, a significant three-way interaction between emotion, hemisphere and anxiety was observed (p=.037). Planned comparisons revealed that infants of high-anxious mothers showed significantly greater left hemispheric activation of oxyHb to happy faces when compared with right (p=.040) and left (p=.033) hemispheric activation of oxyHb to fearful faces. These findings possibly indicate that 5-month-olds can discriminate happy from fearful faces, evinced by the greater activation for the former. The greater activation in temporal as compared to frontal areas was discussed in relation to the ontogenesis of face processing and emotion recognition neural networks. The enhanced response to happy versus fearful faces observed in infants of high-anxious mothers can be related to the presumed altered emotional environment experienced by these infants, compared to that of infants of low-anxious mothers. Therefore, maternal anxiety levels appeared to moderate infants’ hemodynamic brain responses to emotional faces.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Xu. "A new method for generic three dimensional human face modelling for emotional bio-robots." Thesis, University of Gloucestershire, 2012. http://eprints.glos.ac.uk/4592/.

Full text
Abstract:
Existing 3D human face modelling methods are confronted with difficulties in applying flexible control over all facial features and generating a great number of different face models. The gap between the existing methods and the requirements of emotional bio-robots applications urges the creation of a generic 3D human face model. This thesis focuses on proposing and developing two new methods involved in the research of emotional bio-robots: face detection in complex background images based on skin colour model and establishment of a generic 3D human face model based on NURBS. The contributions of this thesis are: A new skin colour based face detection method has been proposed and developed. The new method consists of skin colour model for skin regions detection and geometric rules for distinguishing faces from detected regions. By comparing to other previous methods, the new method achieved better results of detection rate of 86.15% and detection speed of 0.4-1.2 seconds without any training datasets. A generic 3D human face modelling method is proposed and developed. This generic parametric face model has the abilities of flexible control over all facial features and generating various face models for different applications. It includes: The segmentation of a human face of 21 surface features. These surfaces have 34 boundary curves. This feature-based segmentation enables the independent manipulation of different geometrical regions of human face. The NURBS curve face model and NURBS surface face model. These two models are built up based on cubic NURBS reverse computation. The elements of the curve model and surface model can be manipulated to change the appearances of the models by their parameters which are obtained by NURBS reverse computation. A new 3D human face modelling method has been proposed and implemented based on bi-cubic NURBS through analysing the characteristic features and boundary conditions of NURBS techniques. This model can be manipulated through control points on the NURBS facial features to build any specific face models for any kind of appearances and to simulate dynamic facial expressions for various applications such as emotional bio-robots, aesthetic surgery, films and games, and crime investigation and prevention, etc.
APA, Harvard, Vancouver, ISO, and other styles
10

Ridout, Nathan. "Processing of emotional material in major depression : cognitive and neuropsychological investigations." Thesis, University of St Andrews, 2005. http://hdl.handle.net/10023/13141.

Full text
Abstract:
The aim of this thesis was to expand the existing knowledge base concerning the profile of emotional processing that is associated with major depression, particularly in terms of socially important non-verbal stimuli (e.g. emotional facial expressions). Experiment one utilised a face-word variant of the emotional Stroop task and demonstrated that depressed patients (DP) did not exhibit a selective attention bias for sad faces. Conversely, the healthy controls (HC) were shown to selectively attend to happy faces. At recognition memory testing, DP did not exhibit a memory bias for depression-relevant words, but did demonstrate a tendency to falsely recognise depression-relevant words that had not been presented at encoding. Experiment two examined the pattern of autobiographical memory (ABM) retrieval exhibited by DP and HC in response to verbal (words) and non-verbal (images & faces) affective cues. DP were slower than HC to retrieve positive ABMs, but did not differ from HC in their retrieval times for negative ABMs. Overall, DP retrieved fewer specific ABMs than did the HC. Participants retrieved more specific ABMs to image cues than to words or faces, but this pattern was only demonstrated by the HC. Reduced retrieval of specific ABMs by DP was a consequence of increased retrieval of categorical ABMs; this tendency was particularly marked when the participants were cued with faces. During experiment three, DP and HC were presented with a series of faces and were asked to identify the gender of the person featured in each photograph. Overall, gender identification times were not affected by the emotion portrayed by the faces. Furthermore at subsequent recognition memory testing, DP did not exhibit MCM bias for sad faces. During experiment four, DP and HC were presented with videotaped depictions of 'realistic' social interactions and were asked to identify the emotion portrayed by the characters and to make inferences about the thoughts, intentions and beliefs of these individuals. Overall, DP were impaired in their recognition of happiness and in understanding social interactions involving sarcasm and deception. Correct social inference was significantly related to both executive function and depression severity. Experiment five involved assessing a group of eight patients that had undergone neurosurgery for chronic, treatment-refractory depression on the identical emotion recognition and social perception tasks that were utilised in experiment four. Relative to HC, surgery patients (SP) exhibited general deficits on all emotion recognition and social processing tasks. Notably, depression status did not appear to interact with surgery status to worsen these observed deficits. These findings suggest that the anterior cingulate region of the prefrontal cortex may play a role in correct social inference. Summary: Taken together the findings of the five experimental studies of the thesis demonstrate that, in general, biases that have been observed in DP processing of affective verbal material generalise to non-verbal emotional material (e.g. emotional faces). However, there are a number of marked differences that have been highlighted throughout the thesis. There is also evidence that biased emotional processing in DP requires explicit processing of the emotional content of the stimuli. Furthermore, a central theme of the thesis is that deficits in executive function in DP appear to be implicated in the impairments of emotional processing that are exhibited by these patients.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Emotional face"

1

Emotional face comprehension: Neuropsychological perspectives. New York: Nova Science, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Groc, Isabelle. Gentle giants: An emotional face to face with dolphins and whales. [Vercelli, Italy]: White Star Publishers, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Trust, Doreen. Overcoming disfigurement: Defeating the problems--physical, social, and emotional. Wellingborough: Thorsons Pub. Group, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Michela, Balconi, ed. Neuropsychology and cognition of emotional face comprehension, 2006. Trivandrum, India: Research Signpost, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Flying lessons: 102 strategies for equipping your child to face life with confidence and competence. Nashville, Tenn: Thomas Nelson, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Magdalen, Margaret. The hidden face of Jesus: Reflections on the emotional life of Christ. London: Darton, Longman & Todd, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Trust, Doreen Savage. Overcoming disfigurement: Defeating the problems - physical, social, emotional. Wellingborough: Thorsons, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Duncan, Tara N. Using chimeric photography to study the effects of handedness on positive and negative emotional expressions of the face. Sudbury, Ont: Laurentian University, Department of Psychology, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Voice of the nurse: An exploration of the physical, mental, emotional, social, and spiritual stressors nurses face in today's healthcare environment. San Bernardino, California: CreateSpace Independent Publishing Platform, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Emotions revealed: Recognizing faces and feelings to improve communication and emotional life. 2nd ed. New York: Henry Holt, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Emotional face"

1

Gray, Benjamin. "The Clinical Contexts of Emotional Labor." In Face to Face with Emotions in Health and Social Care, 73–95. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-3402-3_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Clark, Adele, and Jacqui Blades. "Who is in my face?" In Practical Ideas for Emotional Intelligence, 55–56. London: Routledge, 2021. http://dx.doi.org/10.4324/9781315169224-30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kaiser, Susanne. "Facial Expressions as Indicators of “Functional” and “Dysfunctional” Emotional Processes." In The Human Face, 235–53. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-1063-5_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Anisetti, Marco, and Valerio Bellandi. "Emotional State Inference Using Face Related Features." In New Directions in Intelligent Interactive Multimedia Systems and Services - 2, 401–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02937-0_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Maurage, Pierre, Scott Love, and Fabien D’Hondt. "Crossmodal Integration of Emotional Stimuli in Alcohol Dependence." In Integrating Face and Voice in Person Perception, 271–98. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-3585-3_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Nader-Grosbois, Nathalie, and James M. Day. "Emotional Cognition: Theory of Mind and Face Recognition." In International Handbook of Autism and Pervasive Developmental Disorders, 127–57. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-8065-6_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kreifelts, Benjamin, Dirk Wildgruber, and Thomas Ethofer. "Audiovisual Integration of Emotional Information from Voice and Face." In Integrating Face and Voice in Person Perception, 225–51. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-3585-3_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Flower, Lisa. "Emotional labour, cooling the client out and lawyer face." In Emotional Labour in Criminal Justice and Criminology, 173–84. Abingdon, Oxon; New York, NY: Routledge, 2020. http://dx.doi.org/10.4324/9780429055669-14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Dong, and Pei-Luen Patrick Rau. "Impacts of Emotional Ambient Sounds on Face Detection Sensitivity." In HCI International 2019 – Late Breaking Papers, 497–506. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30033-3_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Xiangfeng, Ling Yu, Yini Zhang, Zeyuan Shao, Linyu Huang, Chaoyang Zhang, and Xiaoyang He. "Emotional Feedback Lighting Control System Based on Face Recognition." In Green Energy and Networking, 258–66. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21730-3_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Emotional face"

1

Sinha, Sanjana, Sandika Biswas, Ravindra Yadav, and Brojeshwar Bhowmick. "Emotion-Controllable Generalized Talking Face Generation." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/184.

Full text
Abstract:
Despite the significant progress in recent years, very few of the AI-based talking face generation methods attempt to render natural emotions. Moreover, the scope of the methods is majorly limited to the characteristics of the training dataset, hence they fail to generalize to arbitrary unseen faces. In this paper, we propose a one-shot facial geometry-aware emotional talking face generation method that can generalize to arbitrary faces. We propose a graph convolutional neural network that uses speech content feature, along with an independent emotion input to generate emotion and speech-induced motion on facial geometry-aware landmark representation. This representation is further used in our optical flow-guided texture generation network for producing the texture. We propose a two-branch texture generation network, with motion and texture branches designed to consider the motion and texture content independently. Compared to the previous emotion talking face methods, our method can adapt to arbitrary faces captured in-the-wild by fine-tuning with only a single image of the target identity in neutral emotion.
APA, Harvard, Vancouver, ISO, and other styles
2

Veltmeijer, Emmeke, Charlotte Gerritsen, and Koen Hindriks. "Automatic Recognition of Emotional Subgroups in Images." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/190.

Full text
Abstract:
Both social group detection and group emotion recognition in images are growing fields of interest, but never before have they been combined. In this work we aim to detect emotional subgroups in images, which can be of great importance for crowd surveillance or event analysis. To this end, human annotators are instructed to label a set of 171 images, and their recognition strategies are analysed. Three main strategies for labeling images are identified, with each strategy assigning either 1) more weight to emotions (emotion-based fusion), 2) more weight to spatial structures (group-based fusion), or 3) equal weight to both (summation strategy). Based on these strategies, algorithms are developed to automatically recognize emotional subgroups. In particular, K-means and hierarchical clustering are used with location and emotion features derived from a fine-tuned VGG network. Additionally, we experiment with face size and gaze direction as extra input features. The best performance comes from hierarchical clustering with emotion, location and gaze direction as input.
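A minimal sketch, under assumed feature names, of the clustering step this abstract describes: grouping detected people into emotional subgroups from their image location plus an emotion feature vector (random stand-ins below for the features a fine-tuned VGG network would supply).

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_people = 12
locations = rng.uniform(0, 1, size=(n_people, 2))    # normalized (x, y) face positions
emotions = rng.normal(size=(n_people, 8))             # stand-in emotion features

features = StandardScaler().fit_transform(np.hstack([locations, emotions]))

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
hier_labels = AgglomerativeClustering(n_clusters=3).fit_predict(features)
print(kmeans_labels)
print(hier_labels)
```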
APA, Harvard, Vancouver, ISO, and other styles
3

Pena, Alejandro, Julian Fierrez, Aythami Morales, and Agata Lapedriza. "Learning Emotional-Blinded Face Representations." In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. http://dx.doi.org/10.1109/icpr48806.2021.9412581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Zhaoyu, Shangfei Wang, Menghua He, Zhilei Liu, and Qiang Ji. "Emotional tagging of videos by exploring multiple emotions' coexistence." In 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013). IEEE, 2013. http://dx.doi.org/10.1109/fg.2013.6553771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ito, Teruaki. "Implicit Emotional Message Representation With 1/F-Fluctuation for Ambient Interface Application." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-12807.

Full text
Abstract:
In remote communication, it is very hard to convey an emotional message, or the atmosphere of a conversation, to people located in a geographically distant place. In email messages, emoticons or face-mark symbols are often used to express emotion, which works well to show emotion in an explicit way. Even though the other person's face is shown on the screen in remote video communication, emotional expression over the network is not comparable to a face-to-face meeting, where the atmosphere of the conversation is shared in an implicit manner as opposed to explicit explanation using words, signs, or pictures. This research proposes an idea for implicit representation of emotional messages using light illumination [Lin, T.I. 2004; Vandewalle, G. 2010] with 1/f-fluctuation. The 1/f-fluctuation is reported to induce a relaxed mental state; therefore, ambient lighting with 1/f-fluctuation could provide a comfortable (and/or uncomfortable) atmosphere. In addition, this study aims to present emotional messages in the illumination. To do so, four different 1/f-fluctuation patterns were created to represent four types of typical human emotion: joy, anger, pathos, and humor. The paper explains how the idea was implemented, using light illumination for the four types of emotion based on user experiments. In feasibility experiments, these four types of illumination were recognized as expressing each type of emotion. By applying emotional message representation with 1/f-fluctuation to the illumination of the video conference speaker, implicit emotional messages were presented during the conversation. The paper then discusses the feasibility of the idea for an ambient human interface.
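As a generic illustration of the 1/f-fluctuation idea (not the authors' implementation), the sketch below shapes white noise into an approximately 1/f-spectrum signal and maps it onto a hypothetical lamp-brightness range.

```python
import numpy as np

def pink_noise(n_samples: int, seed: int = 0) -> np.ndarray:
    """Approximate 1/f ("pink") fluctuation by spectrally shaping white noise."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.normal(size=n_samples))
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                      # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)               # power spectral density ~ 1/f
    signal = np.fft.irfft(spectrum, n=n_samples)
    return (signal - signal.mean()) / signal.std()

fluct = pink_noise(600)                             # e.g. 600 frames of modulation
brightness = 70 + 15 * np.clip(fluct, -2.0, 2.0)    # hypothetical brightness in percent
print(brightness[:5])
```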
APA, Harvard, Vancouver, ISO, and other styles
6

Wei, Shuyi, Xiaoying Li, and Xiuxia Zhang. "Research on Face Emotional Recognition Algorithm." In CONF-CDS 2021: The 2nd International Conference on Computing and Data Science. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3448734.3450797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Talavera, Marta, Olga Mayoral, and Elena Zalve. "EMOTIONAL COMPETENCE: MAIN HEALTH PROBLEMS TO FACE." In International Technology, Education and Development Conference. IATED, 2017. http://dx.doi.org/10.21125/inted.2017.0160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rathod, Kanchan Yadav, and Tanuja Pattanshetti. "YouTube Music Recommendation System Based on Face Expression." In International Research Conference on IOT, Cloud and Data Science. Switzerland: Trans Tech Publications Ltd, 2023. http://dx.doi.org/10.4028/p-r8573m.

Full text
Abstract:
Nowadays, face recognition systems are widely used across computer vision applications such as face lock in smartphones, surveillance, smart attendance systems, and driverless car technology, and the demand for face recognition systems in research is increasing day by day. The aim of this project is to develop a system that recommends music based on facial expressions. The face recognition system consists of detecting faces and identifying facial features from input images, and it can be made more accurate with the use of convolutional neural networks. Convolutional neural network layers are used for expression detection and are optimized with Adam to reduce overall loss and improve accuracy. YouTube song playlist recommendation is an application of a face recognition system based on a neural network. We use streamlit-webrtc to design the web frame for the song recommendation system. For face detection, we used the Kaggle FER2013 dataset, in which images are classified into seven natural emotions. The system captures the emotional state of a person in real time and generates a playlist of YouTube songs based on that emotion.
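A minimal, hypothetical sketch of the kind of model this abstract outlines: a small Keras CNN for 48×48 grayscale FER2013 images with seven emotion classes, compiled with the Adam optimizer. The layer sizes are illustrative choices, not the paper's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),                  # FER2013 images are 48x48 grayscale
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),            # seven emotion classes
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```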
APA, Harvard, Vancouver, ISO, and other styles
9

Karapetyan, Larisa V. "Characteristics of Emotional and Personal Wellbeing of Prisoners." In Wellbeing and Security in the Face of Social Transformations. Liberal Arts University – University for Humanities, 2019. http://dx.doi.org/10.35853/lau.ws.2019.sp17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cadayona, Alexandra M., Nicole Meredith S. Cerilla, Darla Monica M. Jurilla, Ariel Kelly D. Balan, and Joel C. de Goma. "Emotional State Classification: An Additional Step in Emotion Classification through Face Detection." In 2019 IEEE 6th International Conference on Industrial Engineering and Applications (ICIEA). IEEE, 2019. http://dx.doi.org/10.1109/iea.2019.8715171.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Emotional face"

1

Bayley, Stephen, Darge Wole, Louise Yorke, Paul Ramchandani, and Pauline Rose. Researching Socio-Emotional Learning, Mental Health and Wellbeing: Methodological Issues in Low-Income Contexts. Research on Improving Systems of Education (RISE), April 2021. http://dx.doi.org/10.35489/bsg-rise-wp_2021/068.

Full text
Abstract:
This paper explores methodological issues relating to research on children’s socio-emotional learning (SEL), mental health and wellbeing in low- and lower-middle-income countries. In particular, it examines the key considerations and challenges that researchers may face and provides practical guidance for generating reliable and valid data on SEL, mental health and wellbeing in diverse settings and different cultural contexts. In so doing, the paper draws on the experience of recent research undertaken in Ethiopia to illustrate some of the issues and how they were addressed. The present study extends earlier 2018-2019 RISE Ethiopia research, expanding its scope to consider further aspects of SEL, mental health and wellbeing in the particular context of COVID-19. In particular, the research highlights that the pandemic has brought to the fore the importance of assessing learning, and learning loss, beyond academic learning alone.
APA, Harvard, Vancouver, ISO, and other styles
2

Clarke, Alison, Sherry Hutchinson, and Ellen Weiss. Psychosocial support for children. Population Council, 2005. http://dx.doi.org/10.31899/hiv14.1003.

Full text
Abstract:
Masiye Camp in Matopos National Park, and Kids’ Clubs in downtown Bulawayo, Zimbabwe, are examples of a growing number of programs in Africa and elsewhere that focus on the psychological and social needs of AIDS-affected children. Given the traumatic effects of grief, loss, and other hardships faced by these children, there is increasing recognition of the importance of programs to help them strengthen their social and emotional support systems. This Horizons Report describes findings from operations research in Zimbabwe and Rwanda that examines the psychosocial well-being of orphans and vulnerable children and ways to increase their ability to adapt and cope in the face of adversity. In these studies, a person’s psychosocial well-being refers to his/her emotional and mental state and his/her network of human relationships and connections. A total of 1,258 youth were interviewed. All were deemed vulnerable by their communities because they had been affected by HIV/AIDS and/or other factors such as severe poverty.
APA, Harvard, Vancouver, ISO, and other styles
3

Zimmerman, Emily, and Jana Smith. Behavioral tactics to support providers in offering quality care: Insights from provider behavior change research and practice. Population Council, 2022. http://dx.doi.org/10.31899/sbsr2022.1043.

Full text
Abstract:
This document offers a synthesis of insights from recent research and design activities conducted by ideas42 through Breakthrough RESEARCH, Breakthrough ACTION, and other projects across nine different low- and middle-income settings about the behavioral roots of challenges health care providers face in providing high-quality services. We discuss how the physical and social environment in which they work and live sends signals to providers about what is important, how they can navigate difficulties, and how well they are performing. We discuss how experiences outside the health facility impact how providers approach their professional duties. We also discuss how pervasive time and resource constraints create a cognitive and emotional burden that gets in the way of what they can do, even within these constraints. For each challenge, we also share lessons emerging from this research about how global health practitioners can address these challenges through program design and implementation.
APA, Harvard, Vancouver, ISO, and other styles
4

Punjabi, Maitri, Julianne Norman, Lauren Edwards, and Peter Muyingo. Using ACASI to Measure Gender-Based Violence in Ugandan Primary Schools. RTI Press, March 2021. http://dx.doi.org/10.3768/rtipress.2021.rb.0025.2104.

Full text
Abstract:
School-related gender-based violence (SRGBV) remains difficult to measure because of high sensitivity and response bias. However, most SRGBV measurement relies on face-to-face (FTF) survey administration, which is susceptible to increased social desirability bias. Widely used in research on sensitive topics, Audio Computer-Assisted Self-Interview (ACASI) allows subjects to respond to pre-recorded questions on a computerized device, providing respondents with privacy and confidentiality. This brief contains the findings from a large-scale study conducted in Uganda in 2019 where primary grade 3 students were randomly selected to complete surveys using either ACASI or FTF administration. The surveys covered school climate, gender attitudes, social-emotional learning, and experiences of SRGBV. Through this study, we find that although most survey responses were comparable between ACASI and FTF groups, the reporting of experiences of sexual violence differed drastically: 43% of students in the FTF group versus 77% of students in the ACASI group reported experiencing sexual violence in the past school term. We also find that factor structures are similar for data collected with ACASI compared with data collected FTF, though there is weaker evidence for construct validity for both administration modes. We conclude that ACASI is a valuable tool in measuring sensitive sub-topics of SRGBV and should be utilized over FTF administration, although further psychometric testing of these surveys is recommended.
APA, Harvard, Vancouver, ISO, and other styles
5

Abufhele, Alejandra, David Bravo, Florencia Lopez-Boo, and Pamela Soto-Ramirez. Developmental losses in young children from pre-primary program closures during the COVID-19 pandemic. Inter-American Development Bank, January 2022. http://dx.doi.org/10.18235/0003920.

Full text
Abstract:
The learning and developmental losses from pre-primary program closures due to COVID-19 may be unprecedented. These disruptions early in life, when the brain is more sensitive to environmental changes, can be long-lasting. Although there is evidence about the effects of school closures on older children, there is currently no evidence on such losses for children in their early years. This paper is among the first to quantify the actual impact of pandemic-related closures on child development, in this case for a sample of young children in Chile, where school and childcare closures lasted for about a year. We use a unique dataset collected face-to-face in December 2020, which includes child development indicators for general development, language development, social-emotional development, and executive function. We are able to use a first-difference strategy because Chile has a history of collecting longitudinal data on children as part of its national social policy monitoring strategy. This allows us to construct a valid comparison group from the 2017 longitudinal data. We find adverse impacts on children in 2020 compared to children interviewed in 2017 in most development areas. In particular, nine months after the start of the pandemic, we find a loss in language development of 0.25 SDs. This is equivalent to the impact on a child's language development of having a mother with approximately five years less education. Timely policies are needed to mitigate these enormous losses.
APA, Harvard, Vancouver, ISO, and other styles
6

Dabrowski, Anna, and Pru Mitchell. Effects of remote learning on mental health and socialisation. Literature Review. Australian Council for Educational Research, November 2022. http://dx.doi.org/10.37517/978-1-74286-682-6.

Full text
Abstract:
This literature review focuses on the effects of remote learning on mental health, including acute mental health issues and possible ongoing implications for student wellbeing and socialisation. It provides an overview of some of the challenges that can impact on the mental health and relationships of young people, many of which have accelerated or become more complex during the COVID-19 pandemic. In the light of concern about rising antisocial behaviour and extremism, there is a focus on socialisation and self-regulation on return to school post-pandemic. In the face of limited Australian research on these topics, the review takes a global focus and includes experiences from other countries as evidenced in the emerging research literature. Based on these findings, the review offers advice to school leaders regarding the self-regulatory behaviours of students on return to school after periods of remote learning, and addresses social and emotional considerations as students transition back to school. It also considers ways in which schools can promote wellbeing and respond to mental health concerns as a way to address and prevent antisocial behaviours, recognise manifestations of extremism (including religious fundamentalism), and challenge a general rise in extremist views.
APA, Harvard, Vancouver, ISO, and other styles
7

Sowa, Patience, Rachel Jordan, Wendi Ralaingita, and Benjamin Piper. Higher Grounds: Practical Guidelines for Forging Learning Pathways in Upper Primary Education. RTI Press, May 2021. http://dx.doi.org/10.3768/rtipress.2021.op.0069.2105.

Full text
Abstract:
To address the chronically low primary school completion rates and the disconnect, identified in many low- and middle-income countries, between learners' skills at the end of primary school and the skills they need to thrive in secondary school, more investment is needed to improve the quality of teaching and learning in the upper primary grades. Accordingly, we provide guidelines for improving five components of upper primary education: (1) in-service teacher professional development and pre-service preparation to improve and enhance teacher quality; (2) a focus on mathematics, literacy, and core content-area subjects; (3) assessment for learning; (4) high-quality teaching and learning materials; and (5) positive school climates. We provide foundational guiding principles and recommendations for intervention design and implementation for each component. Additionally, we discuss and propose how to structure and design pre-service teacher preparation and in-service teacher training and ongoing support, fortified by materials design and assessment, to help teachers determine where learners are in developmental progressions, move learners towards mastery, and differentiate and support learners who have fallen behind. We provide additional suggestions for integrating a whole-school climate curriculum, social-emotional learning, and school-related gender-based violence prevention strategies to address the internal and societal changes learners often face as they enter upper primary.
APA, Harvard, Vancouver, ISO, and other styles
8

Yilmaz, Ihsan, and Kainat Shakil. Religious Populism and Vigilantism: The Case of the Tehreek-e-Labbaik Pakistan. European Center for Populism Studies (ECPS), January 2022. http://dx.doi.org/10.55271/pp0001.

Full text
Abstract:
Religious populism and radicalism are hardly new to Pakistan. Since its birth in 1947, the country has suffered through an ongoing identity crisis. Under turbulent political conditions, religion has served as a surrogate identity for Pakistan, masking the country’s evident plurality, and over the years has come to dominate politics. Tehreek-e-Labbaik Pakistan (TLP) is the latest face of religious extremism merged with populist politics. Nevertheless, its sporadic rise from a national movement defending Pakistan’s notorious blasphemy laws to a “pious” party is little understood. This paper draws on a collection of primary and secondary sources to piece together an account of the party’s evolution that sheds light on its appeal to “the people” and its marginalization and targeting of the “other.” The analysis reveals that the TLP has evolved from a proxy backed by the establishment against the mainstream parties to a full-fledged political force in its own right. Its ability to relate to voters via its pious narrative hinges on exploiting the emotional insecurities of the largely disenfranchised masses. With violence legitimized under the guise of religion, “the people” are afforded a new sense of empowerment. Moreover, the party’s rhetoric has given rise to a vigilante-style mob culture so much so that individuals inspired by this narrative have killed in plain sight without remorse. To make matters worse, the incumbent government of Imran Khan — itself a champion of Islamist rhetoric — has made repeated concessions and efforts to appease the TLP that have only emboldened the party. Today, the TLP poses serious challenges to Pakistan’s long-standing, if fragile, pluralistic social norms and risks tipping the country into an even deadlier cycle of political radicalization.
APA, Harvard, Vancouver, ISO, and other styles
9

Herbert, George. How Can Middle-income Countries Improve Their Skills Systems Post- COVID-19? Institute of Development Studies (IDS), February 2021. http://dx.doi.org/10.19088/k4d.2021.082.

Full text
Abstract:
Vocational training systems in middle-income countries are going to face multiple challenges in the post-COVID era, notably challenges related to (1) automation, (2) the transition to a green economy, and (3) demographic pressures. Of these, automation - linked to the burgeoning ‘fourth industrial revolution’ that is set to transform the global economy - represents the most serious challenge and is the only one of the three challenges discussed in any depth in this paper. Whilst estimates of the likely scale of automation in the coming years and decades vary widely, it appears likely that waves of automation will lead to a dramatic decline in many kinds of jobs that largely involve routine, repetitive tasks. These trends pre-date COVID-19, but the disruption caused by the pandemic provides an opportunity to prepare for these challenges by implementing vocational training system reforms as part of the Build Back Better agenda. Reforms to vocational training systems will be crucial to ensuring middle-income countries respond appropriately to accelerating labour market changes. However, they should only form a limited part of that response and need to be integrated with a wide range of other policy measures. Vocational training reform will need to occur in the context of major reforms to basic education in order to ensure that all workers are equipped with the cross-cutting cognitive and socio-emotional skills they will require to perform hard-to-automate tasks and to be able to learn and adapt rapidly in a changing economy. Middle-income countries will also likely need to progressively expand social protection schemes in order to provide a safety net for workers that struggle to adapt to changing labour market requirements.
APA, Harvard, Vancouver, ISO, and other styles
10

Tarasova, Olena Yuriivna, and Iryna Serhiivna Mintii. Web application for facial wrinkle recognition. Kryvyi Rih: KDPU, 2022. http://dx.doi.org/10.31812/123456789/7012.

Full text
Abstract:
Facial recognition technology has been named one of the main trends of recent years. It has a wide range of applications, such as access control, biometrics, video surveillance, and many other interactive human-machine systems. Facial landmarks can be described as key characteristics of the human face; commonly found landmarks are, for example, the eyes, the nose, or the corners of the mouth. Analyzing these key points is useful for a variety of computer vision use cases, including biometrics, face tracking, and emotion detection. Different methods produce different facial landmarks: some use only basic landmarks, while others bring out more detail. We use the 68-point facial markup, which is a common format across many datasets. Cloud computing creates the necessary conditions for the successful implementation of even the most complex tasks. We created a web application using the Django framework, the Python language, and the OpenCV and Dlib libraries to recognize faces in images. The purpose of our work is to create a software system that recognizes faces in photographs and identifies wrinkles on the face. The algorithm for determining the presence, location, and geometry of various types of wrinkles on the face is implemented.
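As an illustration of the kind of pipeline this abstract describes, the minimal Python sketch below detects a face and draws the 68 Dlib landmarks using OpenCV. It is not code from the cited work; the image path and the pretrained shape_predictor_68_face_landmarks.dat model file are assumptions, and the model must be downloaded separately.

import cv2
import dlib

# Illustrative paths; the pretrained 68-point model is assumed to be available locally.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"
IMAGE_PATH = "face.jpg"

detector = dlib.get_frontal_face_detector()       # HOG-based frontal face detector
predictor = dlib.shape_predictor(PREDICTOR_PATH)  # 68-point landmark predictor

image = cv2.imread(IMAGE_PATH)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):                    # detect faces; 1 = upsample once
    shape = predictor(gray, rect)                 # locate the 68 landmarks in this face
    for i in range(68):
        x, y = shape.part(i).x, shape.part(i).y
        cv2.circle(image, (x, y), 2, (0, 255, 0), -1)  # mark each landmark

cv2.imwrite("landmarks.jpg", image)

In a full application such as the one described, these landmark coordinates would feed a subsequent wrinkle-localization step; here they are simply drawn onto the output image.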
APA, Harvard, Vancouver, ISO, and other styles
