
Journal articles on the topic 'Face, attention, gaze, prosopagnosia'

Consult the top 50 journal articles for your research on the topic 'Face, attention, gaze, prosopagnosia.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Burra, Nicolas, Dirk Kerzel, and Meike Ramon. "Gaze-cueing requires intact face processing – Insights from acquired prosopagnosia." Brain and Cognition 113 (April 2017): 125–32. http://dx.doi.org/10.1016/j.bandc.2017.01.008.

2

Erni, Britt, Roland Maurer, Dirk Kerzel, and Nicolas Burra. "Perception of eye gaze direction in a case of acquired prosopagnosia." Neuropsychologie clinique et appliquée 3, Fall 2019 (2019): 105–19. http://dx.doi.org/10.46278/j.ncacn.201907284.

Abstract:
The ability to perceive the direction of eye gaze is critical in social settings. Brain lesions in the superior temporal sulcus (STS) impair this ability. We investigated the perception of gaze direction of PS, a patient suffering from acquired prosopagnosia (Rossion et al., 2003). Despite lesions in the face network, the STS was spared in PS. We assessed perception of gaze direction in PS with upright, inverted, and contrast-reversed faces. Compared to the performance of 11 healthy women matched for age and education, PS demonstrated abnormal discrimination of gaze direction with upright and contrast-reversed faces, but not with inverted faces. Our findings suggest that the inability of the patient to process faces holistically weakened her perception of gaze direction, especially in demanding tasks.
3

Van Belle, G., P. De Graef, K. Verfaillie, T. Busigny, and B. Rossion. "Gaze-contingent techniques reveal impairment of holistic face processing in acquired prosopagnosia." Journal of Vision 9, no. 8 (March 24, 2010): 541. http://dx.doi.org/10.1167/9.8.541.

4

Callejas, Alicia, Gordon L. Shulman, and Maurizio Corbetta. "Dorsal and Ventral Attention Systems Underlie Social and Symbolic Cueing." Journal of Cognitive Neuroscience 26, no. 1 (January 2014): 63–80. http://dx.doi.org/10.1162/jocn_a_00461.

Abstract:
Eye gaze is a powerful cue for orienting attention in space. Studies examining whether gaze and symbolic cues recruit the same neural mechanisms have found mixed results. We tested whether there is a specialized attentional mechanism for social cues. We separately measured BOLD activity during orienting and reorienting attention following predictive gaze and symbolic cues. Results showed that gaze and symbolic cues exerted their influence through the same neural networks but also produced some differential modulations. Dorsal frontoparietal regions in left intraparietal sulcus (IPS) and bilateral MT+/lateral occipital cortex only showed orienting effects for symbolic cues, whereas right posterior IPS showed larger validity effects following gaze cues. Both exceptions may reflect the greater automaticity of gaze cues: Symbolic orienting may require more effort, while disengaging attention during reorienting may be more difficult following gaze cues. Face-selective regions, identified with a face localizer, showed selective activations for gaze cues reflecting sensory processing but no attentional modulations. Therefore, no evidence was found linking face-selective regions to a hypothetical, specialized mechanism for orienting attention to gaze cues. However, a functional connectivity analysis showed greater connectivity between face-selective regions and right posterior IPS, posterior STS, and inferior frontal gyrus during gaze cueing, consistent with proposals that face-selective regions may send gaze signals to parts of the dorsal and ventral frontoparietal attention networks. Finally, although the default-mode network is thought to be involved in social cognition, this role does not extend to gaze orienting as these regions were more deactivated following gaze cues and showed less functional connectivity with face-selective regions during gaze cues.
5

Edwards, S. Gareth, Lisa J. Stephenson, Mario Dalmaso, and Andrew P. Bayliss. "Social orienting in gaze leading: a mechanism for shared attention." Proceedings of the Royal Society B: Biological Sciences 282, no. 1812 (August 7, 2015): 20151141. http://dx.doi.org/10.1098/rspb.2015.1141.

Abstract:
Here, we report a novel social orienting response that occurs after viewing averted gaze. We show, in three experiments, that when a person looks from one location to an object, attention then shifts towards the face of an individual who has subsequently followed the person's gaze to that same object. That is, contrary to ‘gaze following’, attention instead orients in the opposite direction to observed gaze and towards the gazing face. The magnitude of attentional orienting towards a face that ‘follows’ the participant's gaze is also associated with self-reported autism-like traits. We propose that this gaze leading phenomenon implies the existence of a mechanism in the human social cognitive system for detecting when one's gaze has been followed, in order to establish ‘shared attention’ and maintain the ongoing interaction.
6

Dalmaso, Mario, Giulia Pavan, Luigi Castelli, and Giovanni Galfano. "Social status gates social attention in humans." Biology Letters 8, no. 3 (November 16, 2011): 450–52. http://dx.doi.org/10.1098/rsbl.2011.0881.

Abstract:
Humans tend to shift attention in response to the averted gaze of a face they are fixating, a phenomenon known as gaze cuing. In the present paper, we aimed to address whether the social status of the cuing face modulates this phenomenon. Participants were asked to look at the faces of 16 individuals and read fictive curriculum vitae associated with each of them that could describe the person as having a high or low social status. The association between each specific face and either high or low social status was counterbalanced between participants. The same faces were then used as stimuli in a gaze-cuing task. The results showed a greater gaze-cuing effect for high-status faces than for low-status faces, independently of the specific identity of the face. These findings confirm previous evidence regarding the important role of social factors in shaping social attention and show that a modulation of gaze cuing can be observed even when knowledge about social status is acquired through episodic learning.
7

Potter, Douglas D., and Simon Webster. "Normal Gaze Cueing in Children with Autism Is Disrupted by Simultaneous Speech Utterances in “Live” Face-to-Face Interactions." Autism Research and Treatment 2011 (2011): 1–7. http://dx.doi.org/10.1155/2011/545964.

Abstract:
Gaze cueing was assessed in children with autism and in typically developing children, using a computer-controlled “live” face-to-face procedure. Sensitivity to gaze direction was assessed using a Posner cuing paradigm. Both static and dynamic directional gaze cues were used. Consistent with many previous studies, using photographic and cartoon faces, gaze cueing was present in children with autism and was not developmentally delayed. However, in the same children, gaze cueing was abolished when a mouth movement occurred at the same time as the gaze cue. In contrast, typical children were able to use gaze cues in all conditions. The findings indicate that gaze cueing develops successfully in some children with autism but that their attention is disrupted by speech utterances. Their ability to learn to read nonverbal emotional and intentional signals provided by the eyes may therefore be significantly impaired. This may indicate a problem with cross-modal attention control or an abnormal sensitivity to peripheral motion in general or the mouth region in particular.
8

Moradi, F., and S. Shimojo. "Face adaptation depends on gaze (overt attention) to the face." Journal of Vision 6, no. 6 (March 24, 2010): 875. http://dx.doi.org/10.1167/6.6.875.

9

Stephenson, Lisa J., S. Gareth Edwards, Natacha M. Luri, Louis Renoult, and Andrew P. Bayliss. "The N170 event-related potential differentiates congruent and incongruent gaze responses in gaze leading." Social Cognitive and Affective Neuroscience 15, no. 4 (April 2020): 479–86. http://dx.doi.org/10.1093/scan/nsaa054.

Abstract:
To facilitate social interactions, humans need to process the responses that other people make to their actions, including eye movements that could establish joint attention. Here, we investigated the neurophysiological correlates of the processing of observed gaze responses following the participants’ own eye movement. These observed gaze responses could either establish, or fail to establish, joint attention. We implemented a gaze leading paradigm in which participants made a saccade from an on-screen face to an object, followed by the on-screen face either making a congruent or incongruent gaze shift. An N170 event-related potential was elicited by the peripherally located gaze shift stimulus. Critically, the N170 was greater for joint attention than non-joint gaze both when task-irrelevant (Experiment 1) and task-relevant (Experiment 2). These data suggest for the first time that the neurocognitive system responsible for structural encoding of face stimuli is affected by the establishment of participant-initiated joint attention.
10

Rubies, Elena, Jordi Palacín, and Eduard Clotet. "Enhancing the Sense of Attention from an Assistance Mobile Robot by Improving Eye-Gaze Contact from Its Iconic Face Displayed on a Flat Screen." Sensors 22, no. 11 (June 4, 2022): 4282. http://dx.doi.org/10.3390/s22114282.

Abstract:
One direct way to express the sense of attention in a human interaction is through the gaze. This paper presents the enhancement of the sense of attention from the face of a human-sized mobile robot during an interaction. This mobile robot was designed as an assistance mobile robot and uses a flat screen at the top of the robot to display an iconic (simplified) face with big round eyes and a single line as a mouth. The implementation of eye-gaze contact from this iconic face is a problem because of the difficulty of simulating real 3D spherical eyes in a 2D image considering the perspective of the person interacting with the mobile robot. The perception of eye-gaze contact has been improved by manually calibrating the gaze of the robot relative to the location of the face of the person interacting with the robot. The sense of attention has been further enhanced by implementing cyclic face explorations with saccades in the gaze and by performing blinking and small movements of the mouth.
11

Dalmaso, Mario, Lara Petri, Elisabetta Patron, Andrea Spoto, and Michele Vicovaro. "Direct Gaze Holds Attention, but Not in Individuals with Obsessive-Compulsive Disorder." Brain Sciences 12, no. 2 (February 19, 2022): 288. http://dx.doi.org/10.3390/brainsci12020288.

Abstract:
The attentional response to eye-gaze stimuli is still largely unexplored in individuals with obsessive-compulsive disorder (OCD). Here, we focused on an attentional phenomenon according to which a direct-gaze face can hold attention in a perceiver. Individuals with OCD and a group of matched healthy controls were asked to discriminate, through a speeded manual response, a peripheral target. Meanwhile, a task-irrelevant face displaying either direct gaze (in the eye-contact condition) or averted gaze (in the no-eye-contact condition) was also presented at the centre of the screen. Overall, the latencies were slower for faces with direct gaze than for faces with averted gaze; however, this difference was reliable in the healthy control group but not in the OCD group. This suggests the presence of an unusual attentional response to direct gaze in this clinical population.
12

Dalmaso, Mario, Xinyuan Zhang, Giovanni Galfano, and Luigi Castelli. "Face Masks Do Not Alter Gaze Cueing of Attention: Evidence From the COVID-19 Pandemic." i-Perception 12, no. 6 (November 2021): 204166952110584. http://dx.doi.org/10.1177/20416695211058480.

Abstract:
Interacting with others wearing a face mask has become a regular worldwide practice since the beginning of the COVID-19 pandemic. However, the impact of face masks on cognitive mechanisms supporting social interaction is still largely unexplored. In the present work, we focused on gaze cueing of attention, a phenomenon tapping the essential ability which allows individuals to orient their attentional resources in response to eye gaze signals coming from others. Participants from both a European (i.e., Italy; Experiment 1) and an Asian (i.e., China; Experiment 2) country were involved, namely two countries in which the daily use of face masks before the COVID-19 pandemic was either extremely uncommon or frequently adopted, respectively. Both samples completed a task in which a peripheral target had to be discriminated while a task-irrelevant averted-gaze face, wearing a mask or not, acted as a central cueing stimulus. Overall, a reliable and comparable gaze-cueing effect emerged in both experiments, independent of the mask condition. These findings suggest that gaze cueing of attention is preserved even when the person perceived is wearing a face mask.
13

Pickron, Charisse B., Eswen Fava, and Lisa S. Scott. "Follow My Gaze: Face Race and Sex Influence Gaze-Cued Attention in Infancy." Infancy 22, no. 5 (February 22, 2017): 626–44. http://dx.doi.org/10.1111/infa.12180.

14

Freebody, Susannah, and Gustav Kuhn. "Own-age biases in adults’ and children’s joint attention: Biased face prioritization, but not gaze following!" Quarterly Journal of Experimental Psychology 71, no. 2 (January 1, 2018): 372–79. http://dx.doi.org/10.1080/17470218.2016.1247899.

Abstract:
Previous studies have reported own-age biases in younger and older adults in gaze following. We investigated own-age biases in social attentional processes between adults and children by focusing on two aspects of the joint attention process: the extent to which people attend towards an individual’s face, and the extent to which they fixate objects that are looked at by this person (i.e., gaze following). Participants viewed images that always contained a child and an adult who either looked towards each other or each looked at objects located to their side. Observers consistently and rapidly fixated the actors’ faces, though the children were faster to fixate the child’s face than the adult’s face, whilst the adults were faster to fixate the adult’s face than the child’s face. The children also spent significantly more time fixating the child’s face than the adult’s face, and the opposite pattern of results was found for the adults. Whilst both adults and children prioritized objects when they were looked at by the actor, both groups showed equivalent levels of gaze following, and there was no own-age bias for gaze following. Our results show an own-age bias for prioritizing faces, but not gaze following.
15

Freeth, Megan, and Patricia Bugembe. "Social partner gaze direction and conversational phase; factors affecting social attention during face-to-face conversations in autistic adults?" Autism 23, no. 2 (February 11, 2018): 503–13. http://dx.doi.org/10.1177/1362361318756786.

Abstract:
Social attention is atypical in autism. However, the majority of evidence for this claim comes from studies where the social partner is not physically present and the participants are children. Consequently, to ensure acquisition of a comprehensive overview of social attention in autism, systematic analysis of factors known to influence face-to-face social attention in neurotypicals is necessary and evidence from adulthood is required. This study assessed the influence of experimenter gaze direction (direct or averted) and conversational phase (speaking or listening) on social attention during a face-to-face conversation. Eye-tracking analyses indicated that when the experimenter looked directly at the participant, autistic adults looked at the experimenter’s face less than did neurotypical adults. However, this between-group difference was significantly reduced when the experimenter’s gaze was averted. Therefore, opportunities for reciprocal social gaze are missed by autistic adults when the social partner makes direct eye contact. A greater proportion of time was spent fixating the experimenter’s eye region when participants were speaking compared to listening in both neurotypical and autistic adults. Overall, this study provides a rich picture of the nature of social attention in face-to-face conversations adopted by autistic adults and demonstrates individual variation in social attention styles.
16

Lansing, Charissa R., and George W. McConkie. "Attention to Facial Regions in Segmental and Prosodic Visual Speech Perception Tasks." Journal of Speech, Language, and Hearing Research 42, no. 3 (June 1999): 526–39. http://dx.doi.org/10.1044/jslhr.4203.526.

Abstract:
Two experiments were conducted to test the hypothesis that visual information related to segmental versus prosodic aspects of speech is distributed differently on the face of the talker. In the first experiment, eye gaze was monitored for 12 observers with normal hearing. Participants made decisions about segmental and prosodic categories for utterances presented without sound. The first experiment found that observers spend more time looking at and direct more gazes toward the upper part of the talker's face in making decisions about intonation patterns than about the words being spoken. The second experiment tested the Gaze Direction Assumption underlying Experiment 1—that is, that people direct their gaze to the stimulus region containing information required for their task. In this experiment, 18 observers with normal hearing made decisions about segmental and prosodic categories under conditions in which face motion was restricted to selected areas of the face. The results indicate that information in the upper part of the talker's face is more critical for intonation pattern decisions than for decisions about word segments or primary sentence stress, thus supporting the Gaze Direction Assumption. Visual speech perception proficiency requires learning where to direct visual attention for cues related to different aspects of speech.
17

Hood, Bruce M., J. Douglas Willen, and Jon Driver. "Adult's Eyes Trigger Shifts of Visual Attention in Human Infants." Psychological Science 9, no. 2 (March 1998): 131–34. http://dx.doi.org/10.1111/1467-9280.00024.

Abstract:
Two experiments examined whether infants shift their visual attention in the direction toward which an adult's eyes turn. A computerized modification of previous joint-attention paradigms revealed that infants as young as 3 months attend in the same direction as the eyes of a digitized adult face. This attention shift was indicated by the latency and direction of their orienting to peripheral probes presented after the face was extinguished. A second experiment found a similar influence of direction of perceived gaze, but also that less peripheral orienting occurred if the central face remained visible during presentation of the probe. This may explain why attention shifts triggered by gaze perception have been difficult to observe in infants using previous naturalistic procedures. Our new method reveals both that direction of perceived gaze can be discriminated by young infants and that this perception triggers corresponding shifts of their own attention.
18

Edwards, S. Gareth, Nathalie Seibert, and Andrew P. Bayliss. "Joint attention facilitates observed gaze direction discrimination." Quarterly Journal of Experimental Psychology 73, no. 1 (August 18, 2019): 80–90. http://dx.doi.org/10.1177/1747021819867901.

Abstract:
Efficiently judging where someone else is looking is important for social interactions, allowing us a window into their mental state by establishing joint attention. Previous work has shown that judging the gaze direction of a non-foveally presented face is facilitated when the eyes of that face are directed towards the centre of the scene. This finding has been interpreted as an example of the human bias for misattributing observed ambiguous gaze signals as self-directed eye-contact. To test this interpretation against an alternative hypothesis that the facilitation is instead driven by the establishment of joint attention, we conducted two experiments in which we varied the participants’ fixation location. In both experiments, we replicated the previous finding of facilitated gaze discrimination when the participants fixated centrally. However, this facilitation was abolished when participants fixated peripheral fixation crosses (Experiment 1) and reversed when participants fixated peripheral images of real-world objects (Experiment 2). Based on these data, we propose that the facilitation effect is consistent with the interpretation that gaze discrimination is facilitated when joint attention is established. This finding therefore extends previous work showing that engaging in joint attention facilitates a range of social cognitive processes.
19

Van Belle, G., T. Busigny, A. Hosein, B. Jemel, P. Lefere, and B. Rossion. "Holistic face perception impairment in acquired prosopagnosia as evidenced by eye-gaze-contingency: Generalization to several cases." Journal of Vision 11, no. 11 (September 23, 2011): 569. http://dx.doi.org/10.1167/11.11.569.

20

Van Belle, Goedele, Thomas Busigny, Philippe Lefèvre, Sven Joubert, Olivier Felician, Francesco Gentile, and Bruno Rossion. "Impairment of holistic face perception following right occipito-temporal damage in prosopagnosia: Converging evidence from gaze-contingency." Neuropsychologia 49, no. 11 (September 2011): 3145–50. http://dx.doi.org/10.1016/j.neuropsychologia.2011.07.010.

21

Kingstone, Alan, Chris Kelland Friesen, and Michael S. Gazzaniga. "Reflexive Joint Attention Depends on Lateralized Cortical Connections." Psychological Science 11, no. 2 (March 2000): 159–66. http://dx.doi.org/10.1111/1467-9280.00232.

Abstract:
Joint attention, the tendency to spontaneously direct attention to where someone else is looking, has been thought to occur because eye direction provides a reliable cue to the presence of important events in the environment. We have discovered, however, that adults will shift their attention to where a schematic face is looking—even when gaze direction does not predict any events in the environment. Research with 2 split-brain patients revealed that this reflexive joint attention is lateralized to a single hemisphere. Moreover, although this phenomenon could be inhibited by inversion of a face, eyes alone produced reflexive shifts of attention. Consistent with recent functional neuroimaging studies, these results suggest that lateralized cortical connections between (a) temporal lobe subsystems specialized for processing upright faces and gaze and (b) the parietal area specialized for orienting spatial attention underlie human reflexive shifts of attention in response to gaze direction.
22

Lo, Ronda F., Andy H. Ng, Adam S. Cohen, and Joni Y. Sasaki. "Does self-construal shape automatic social attention?" PLOS ONE 16, no. 2 (February 10, 2021): e0246577. http://dx.doi.org/10.1371/journal.pone.0246577.

Abstract:
We examined whether activating independent or interdependent self-construal modulates attention shifting in response to group gaze cues. European Canadians (Study 1) and East Asian Canadians (Study 2) primed with independence vs. interdependence completed a multi-gaze cueing task with a central face gazing left or right, flanked by multiple background faces that either matched or mismatched the direction of the foreground gaze. Results showed that European Canadians (Study 1) mostly ignored background gaze cues and were uninfluenced by the self-construal primes. However, East Asian Canadians (Study 2), who have cultural backgrounds relevant to both independence and interdependence, showed different attention patterns by prime: those primed with interdependence were more distracted by mismatched (vs. matched) background gaze cues, whereas there was no change for those primed with independence. These findings suggest activating an interdependent self-construal modulates social attention mechanisms to attend broadly, but only for those who may find these representations meaningful.
23

Farroni, Teresa, Mark H. Johnson, and Gergely Csibra. "Mechanisms of Eye Gaze Perception during Infancy." Journal of Cognitive Neuroscience 16, no. 8 (October 2004): 1320–26. http://dx.doi.org/10.1162/0898929042304787.

Abstract:
Previous work has shown that infants are sensitive to the direction of gaze of another's face, and that gaze direction can cue attention. The present study replicates and extends results on the ERP correlates of gaze processing in 4-month-olds. In two experiments, we recorded ERPs while 4-month-olds viewed direct and averted gaze within the context of averted and inverted heads. Our results support the previous finding that cortical processing of faces in infants is enhanced when accompanied by direct gaze. However, this effect is only found when eyes are presented within the context of an upright face.
24

Reddy, Vasudevi. "The strategic use of gaze to face in joint attention." Infant Behavior and Development 21 (April 1998): 640. http://dx.doi.org/10.1016/s0163-6383(98)91853-7.

25

Gullberg, Marianne, and Kenneth Holmqvist. "What speakers do and what addressees look at." Pragmatics and Cognition 14, no. 1 (August 22, 2006): 53–82. http://dx.doi.org/10.1075/pc.14.1.05gul.

Abstract:
This study investigates whether addressees visually attend to speakers’ gestures in interaction and whether attention is modulated by changes in social setting and display size. We compare a live face-to-face setting to two video conditions. In all conditions, the face dominates as a fixation target and only a minority of gestures draw fixations. The social and size parameters affect gaze mainly when combined, and in the opposite direction from that predicted, with fewer gestures fixated on video than live. Gestural holds and speakers’ gaze at their own gestures reliably attract addressees’ fixations in all conditions. The attraction force of holds is unaffected by changes in social and size parameters, suggesting a bottom-up response, whereas speaker-fixated gestures draw significantly less attention in both video conditions, suggesting a social effect for overt gaze-following and visual joint attention. The study provides and validates a video-based paradigm enabling further experimental but ecologically valid explorations of cross-modal information processing.
26

Langton, Stephen R. H. "The Mutual Influence of Gaze and Head Orientation in the Analysis of Social Attention Direction." Quarterly Journal of Experimental Psychology Section A 53, no. 3 (August 2000): 825–45. http://dx.doi.org/10.1080/713755908.

Abstract:
Three experiments are reported that investigate the hypothesis that head orientation and gaze direction interact in the processing of another individual's direction of social attention. A Stroop-type interference paradigm was adopted, in which gaze and head cues were placed into conflict. In separate blocks of trials, participants were asked to make speeded keypress responses contingent on either the direction of gaze, or the orientation of the head displayed in a digitized photograph of a male face. In Experiments 1 and 2, head and gaze cues showed symmetrical interference effects. Compared with congruent arrangements, incongruent head cues slowed responses to gaze cues, and incongruent gaze cues slowed responses to head cues, suggesting that head and gaze are mutually influential in the analysis of social attention direction. This mutuality was also evident in a cross-modal version of the task (Experiment 3) where participants responded to spoken directional words whilst ignoring the head/gaze images. It is argued that these interference effects arise from the independent influences of gaze and head orientation on decisions concerning social attention direction.
27

Cheng, Yihua, Shiyao Huang, Fei Wang, Chen Qian, and Feng Lu. "A Coarse-to-Fine Adaptive Network for Appearance-Based Gaze Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10623–30. http://dx.doi.org/10.1609/aaai.v34i07.6636.

Abstract:
Human gaze is essential for various appealing applications. Aiming at more accurate gaze estimation, a series of recent works propose to utilize face and eye images simultaneously. Nevertheless, face and eye images only serve as independent or parallel feature sources in those works; the intrinsic correlation between their features is overlooked. In this paper we make the following contributions: 1) We propose a coarse-to-fine strategy which estimates a basic gaze direction from the face image and refines it with a corresponding residual predicted from the eye images. 2) Guided by the proposed strategy, we design a framework which introduces a bi-gram model to bridge the gaze residual and the basic gaze direction, and an attention component to adaptively acquire suitable fine-grained features. 3) Integrating the above innovations, we construct a coarse-to-fine adaptive network named CA-Net and achieve state-of-the-art performance on MPIIGaze and EyeDiap.
28

Rigato, Silvia, Enrica Menon, Valentina Di Gangi, Nathalie George, and Teresa Farroni. "The role of facial expressions in attention-orienting in adults and infants." International Journal of Behavioral Development 37, no. 2 (February 12, 2013): 154–59. http://dx.doi.org/10.1177/0165025412472410.

Abstract:
Faces convey many signals (i.e., gaze or expressions) essential for interpersonal interaction. We have previously shown that facial expressions of emotion and gaze direction are processed and integrated in specific combinations early in life. These findings open a number of developmental questions and specifically in this paper we address whether such emotional signals may modulate the behavior in a following gaze context. A classic spatial cueing paradigm was used to assess whether different facial expressions may cause differential orienting response times and modulate the visual response to a peripheral target in adults and in 4-month-old infants. Results showed that both adults and infants oriented towards a peripheral target when a central face was gazing in the direction of the target location. However, in adults this effect occurred regardless of the facial expression displayed by the face. In contrast, in infants, the emotional facial expressions used, at least in the current study, did not facilitate the attention shift but tended to hold infants’ attention.
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Mao, Yoichiro Maeda, and Yasutake Takahashi. "Visual Attention Region Prediction Based on Eye Tracking Using Fuzzy Inference." Journal of Advanced Computational Intelligence and Intelligent Informatics 18, no. 4 (July 20, 2014): 499–510. http://dx.doi.org/10.20965/jaciii.2014.p0499.

Full text
Abstract:
Visual attention region prediction has attracted the attention of intelligent systems researchers because it makes the interaction between human beings and intelligent nonhuman agents more intelligent. Visual attention region prediction uses multiple input factors such as gestures, face images, and eye gaze position. Physically disabled persons may find it difficult to move in some ways. In this paper, we propose using gaze position estimation, achieved by extracting image features, as input to a prediction system. Our approach is divided into two parts: user gaze estimation and visual attention region inference. In user gaze estimation, a neural network is used as the decision-making unit, and the user's gaze position on the computer screen is then estimated. In visual attention region inference, the visual attention region is inferred by fuzzy inference after image feature maps and saliency maps have been extracted and computed. User experiments were conducted to evaluate the prediction accuracy of the proposed method. The results indicated that the proposed method performs well at predicting the position of attention regions, with performance depending on the image.
APA, Harvard, Vancouver, ISO, and other styles
30

Haworth, Joshua L., Klaus Libertus, and Rebecca J. Landa. "Social Intervention Impacts Action Anticipation, Goal Extraction, and Social Interest in Children With Autism Spectrum Disorder." Journal of Educational and Developmental Psychology 8, no. 2 (August 16, 2018): 95. http://dx.doi.org/10.5539/jedp.v8n2p95.

Full text
Abstract:
Anticipatory looking in the context of goal-directed actions emerges during the first year of life. However, children with autism spectrum disorder (ASD) often show diminished social gaze and anticipation while observing goal-directed actions. The current study examined a therapist-mediated social intervention targeting action-anticipation, goal-extraction, and social gaze in 18 children with ASD diagnosis. Before and after the intervention period, children viewed a video displaying a toddler repeatedly placing blocks into a bowl using a cross-body motion. Gaze to the actor’s face and anticipatory gaze to the goal location were analyzed. Results revealed that young children with ASD understand repeated actions and demonstrate goal-extraction even before exposure to the intervention. Further, targeted social intervention experience led to a redistribution of attention in favor of the actor’s face, while retaining action intention comprehension of the block transfer activity. Attention to social aspects during action observation by children with ASD could have favorable cascading effects on social reciprocity, social contingency, and theory of mind development.
APA, Harvard, Vancouver, ISO, and other styles
31

Takao, Saki, Aiko Murata, and Katsumi Watanabe. "Gaze-Cueing With Crossed Eyes: Asymmetry Between Nasal and Temporal Shifts." Perception 47, no. 2 (November 9, 2017): 158–70. http://dx.doi.org/10.1177/0301006617738719.

Full text
Abstract:
A person’s direction of gaze (and visual attention) can be inferred from the direction of the parallel shift of the eyes. However, the direction of gaze is ambiguous when there is a misalignment between the eyes. The use of schematic drawings of faces in a previous study demonstrated that gaze-cueing was equally effective even when one eye looked straight and the other eye was averted. In the current study, we used more realistic computer-generated face models to re-examine whether the diverging direction of the eyes affected gaze-cueing. The condition where one eye was averted nasally while the other looked straight produced a significantly smaller gaze-cueing effect in comparison with when both eyes were averted in parallel or one eye was averted temporally. The difference in the gaze-cueing effect disappeared when the position of one eye was occluded with a rectangular surface or an eye-patch. These results highlight the possibility that the gaze-cueing effect might be weakened when a direct gaze exists between the cueing eye (i.e., the nasally oriented eye) and the target, and that the effect magnitude might depend on which type of face stimulus is used as a cue.
APA, Harvard, Vancouver, ISO, and other styles
32

Aktar, Evin, Cristina Colonnesi, Wieke de Vente, Mirjana Majdandžić, and Susan M. Bögels. "How do parents' depression and anxiety, and infants' negative temperament relate to parent–infant face-to-face interactions?" Development and Psychopathology 29, no. 3 (June 17, 2016): 697–710. http://dx.doi.org/10.1017/s0954579416000390.

Full text
Abstract:
The present study investigated the associations of mothers' and fathers' lifetime depression and anxiety symptoms, and of infants' negative temperament, with parents' and infants' gaze, facial expressions of emotion, and synchrony. We observed infants' (age between 3.5 and 5.5 months, N = 101) and parents' gaze and facial expressions during 4-min naturalistic face-to-face interactions. Parents' lifetime symptoms of depression and anxiety were assessed with clinical interviews, and infants' negative temperament was measured with standardized observations. Parents with more depressive symptoms and their infants expressed less positive and more neutral affect. Parents' lifetime anxiety symptoms were not significantly related to parents' expressions of affect, while they were linked to longer durations of gaze to parent, and to more positive and negative affect in infants. Parents' lifetime depression or anxiety was not related to synchrony. Infants' temperament did not predict infants' or parents' interactive behavior. The study reveals that more depression symptoms in parents are linked to more neutral affect from parents and from infants during face-to-face interactions, while parents' anxiety symptoms are related to more attention to parent and less neutral affect from infants (but not from parents).
APA, Harvard, Vancouver, ISO, and other styles
33

Graham, Reiko, and Kevin S. LaBar. "Neurocognitive mechanisms of gaze-expression interactions in face processing and social attention." Neuropsychologia 50, no. 5 (April 2012): 553–66. http://dx.doi.org/10.1016/j.neuropsychologia.2012.01.019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

TENENBAUM, ELENA J., DAVID M. SOBEL, STEPHEN J. SHEINKOPF, BERTRAM F. MALLE, and JAMES L. MORGAN. "Attention to the mouth and gaze following in infancy predict language development." Journal of Child Language 42, no. 6 (November 18, 2014): 1173–90. http://dx.doi.org/10.1017/s0305000914000725.

Full text
Abstract:
We investigated longitudinal relations among gaze following and face scanning in infancy and later language development. At 12 months, infants watched videos of a woman describing an object while their passive viewing was measured with an eye-tracker. We examined the relation between infants' face scanning behavior and their tendency to follow the speaker's attentional shift to the object she was describing. We also collected language outcome measures on the same infants at 18 and 24 months. Attention to the mouth and gaze following at 12 months both predicted later productive vocabulary. The results are discussed in terms of social engagement, which may account for both attentional distribution and language onset. We argue that an infant's inherent interest in engaging with others (in addition to creating more opportunities for communication) leads infants to attend to the most relevant information in a social scene and that this information facilitates language learning.
APA, Harvard, Vancouver, ISO, and other styles
35

Patil, Vaibhavi, Sakshi Patil, Krishna Ganjegi, and Pallavi Chandratre. "Face and Eye Detection for Interpreting Malpractices in Examination Hall." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 1119–23. http://dx.doi.org/10.22214/ijraset.2022.41456.

Full text
Abstract:
One of the most difficult problems in computer vision is detecting faces and eyes. The purpose of this work is to provide a review of the available literature on face and eye detection, as well as gaze estimation. With the growing popularity of systems based on face and eye detection in a range of disciplines in recent years, academia and industry have paid close attention to this topic. Face and eye identification has been the subject of numerous investigations. Face and eye detection systems have made significant progress despite numerous challenges such as varying illumination conditions, wearing glasses, having facial hair or a moustache on the face, and varying orientation poses or occlusion of the face. In this paper, we categorize face detection models and look at basic face detection methods. Then we go through eye detection and gaze estimation techniques. Keywords: Image Processing, Face Detection, Eye Detection, Gaze Estimation
APA, Harvard, Vancouver, ISO, and other styles
36

Anderson, James R., and Martin J. Doherty. "Preschoolers' Perception of other People's Looking: Photographs and Drawings." Perception 26, no. 3 (March 1997): 333–43. http://dx.doi.org/10.1068/p260333.

Full text
Abstract:
Children aged 3–4 years were tested for their ability to decide which of two photographs or drawings of a face depicted the act of fixating on a target object; in each control photograph or drawing the same face and object were present without fixation. Performance was above chance on both stimulus types, but low enough to call into question conclusions from previous research. The same children were also tested on their ability to discriminate between photographs/drawings depicting two faces fixating the same object (joint visual attention) and the same two faces fixating different objects. While discrimination of joint visual attention depicted in drawings was as good as discrimination of fixation in the single-face tasks, the ability to reliably choose between a photograph of two people attending to a common object and a control photograph was significantly poorer. The results suggest that, while young infants and children may be highly sensitive to face-on gaze, even well into the fourth year of life children are unable consistently to interpret (1) direction of non-self-directed gaze in static faces and (2) joint visual attention by others.
APA, Harvard, Vancouver, ISO, and other styles
37

Mattavelli, Giulia, Daniele Romano, Andrew W. Young, and Paola Ricciardelli. "The interplay between gaze cueing and facial trait impressions." Quarterly Journal of Experimental Psychology 74, no. 9 (April 5, 2021): 1642–55. http://dx.doi.org/10.1177/17470218211007791.

Full text
Abstract:
The gaze cueing effect involves the rapid orientation of attention to follow the gaze direction of another person. Previous studies reported reciprocal influences between social variables and the gaze cueing effect, with modulation of gaze cueing by social features of face stimuli and modulation of the observer’s social judgements from the validity of the gaze cues themselves. However, it remains unclear which social dimensions can affect—and be affected by—gaze cues. We used computer-averaged prototype face-like images with high and low levels of perceived trustworthiness and dominance to investigate the impact of these two fundamental social impression dimensions on the gaze cueing effect. Moreover, by varying the proportions of valid and invalid gaze cues across three experiments, we assessed whether gaze cueing influences observers’ impressions of dominance and trustworthiness through incidental learning. Bayesian statistical analyses provided clear evidence that the gaze cueing effect was not modulated by facial social trait impressions (Experiments 1–3). However, there was uncertain evidence of incidental learning of social evaluations following the gaze cueing task. A decrease in perceived trustworthiness for non-cooperative low dominance faces (Experiment 2) and an increase in dominance ratings for faces whose gaze behaviour contradicted expectations (Experiment 3) appeared, but further research is needed to clarify these effects. Thus, this study confirms that attentional shifts triggered by gaze direction involve a robust and relatively automatic process, which could nonetheless influence social impressions depending on perceived traits and the gaze behaviour of faces providing the cues.
APA, Harvard, Vancouver, ISO, and other styles
38

Nomkin, Lilach Graff, and Ilanit Gordon. "The relationship between maternal smartphone use, physiological responses, and gaze patterns during breastfeeding and face-to-face interactions with infant." PLOS ONE 16, no. 10 (October 8, 2021): e0257956. http://dx.doi.org/10.1371/journal.pone.0257956.

Full text
Abstract:
Smartphone use during parent-child interactions is highly prevalent, however, there is a lack of scientific knowledge on how smartphone use during breastfeeding or face-to-face interactions may modulate mothers’ attentive responsiveness towards the infant as well as maternal physiological arousal. In the present study, we provide the first evidence for the influence of the smartphone on maternal physiological responses and her attention towards the infant during breastfeeding and face-to-face interactions. Twenty breastfeeding mothers and their infants participated in this lab study during which electrodermal activity, cardiograph impedance, and gaze patterns were monitored in breastfeeding and face-to-face interactions with three conditions manipulating the level of maternal smartphone involvement. We report that mothers’ gaze toward their infants decreased when breastfeeding while using the smartphone compared to face-to-face interaction. Further, we show that greater maternal electrodermal activity and cardiac output were related to longer maternal gaze fixation toward the smartphone during breastfeeding. Finally, results indicate that mothers’ smartphone addiction levels were negatively correlated with electrodermal activity during breastfeeding. This study provides an initial basis for much required further research that will explore the influence of smartphone use on maternal biobehavioral responses in this digital age and the consequences for infant cognitive, emotional, and social development.
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Qiuzhen, Lan Ma, Liqiang Huang, and Lei Wang. "Effect of the model eye gaze direction on consumer information processing: a consideration of gender differences." Online Information Review 44, no. 7 (October 15, 2020): 1403–20. http://dx.doi.org/10.1108/oir-01-2020-0025.

Full text
Abstract:
Purpose: This paper aims to investigate the effect of a model's eye gaze direction on the information processing behavior of consumers of different genders. Design/methodology/approach: An eye-tracking experiment and a memory test are conducted to test the research hypotheses. Findings: Compared to an averted gaze, a model with a direct gaze attracts more attention to the model's face among male consumers, leading to deeper processing. However, the findings show that when a model displays a direct gaze rather than an averted gaze, female consumers pay more attention to the brand name, thus leading to deeper processing. Originality/value: This study contributes not only to the existing eye gaze direction literature, by integrating the facilitative effect of direct gaze and considering the moderating role of consumer gender in consumer information processing, but also to the literature concerning the selectivity hypothesis, by providing evidence of gender differences in information processing. Moreover, this study offers practical insights to practitioners regarding how to design appealing webpages to satisfy consumers of different genders. Peer review: The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-01-2020-0025
APA, Harvard, Vancouver, ISO, and other styles
40

Willemse, Cesco, and Agnieszka Wykowska. "In natural interaction with embodied robots, we prefer it when they follow our gaze: a gaze-contingent mobile eyetracking study." Philosophical Transactions of the Royal Society B: Biological Sciences 374, no. 1771 (March 11, 2019): 20180036. http://dx.doi.org/10.1098/rstb.2018.0036.

Full text
Abstract:
Initiating joint attention by leading someone's gaze is a rewarding experience which facilitates social interaction. Here, we investigate this experience of leading an agent's gaze while applying a more realistic paradigm than traditional screen-based experiments. We used an embodied robot as our main stimulus and recorded participants' eye movements. Participants sat opposite a robot that had either of two ‘identities’—‘Jimmy’ or ‘Dylan’. Participants were asked to look at either of two objects presented on screens to the left and the right of the robot. Jimmy then looked at the same object in 80% of the trials and at the other object in the remaining 20%. For Dylan, this proportion was reversed. Upon fixating on the object of choice, participants were asked to look back at the robot's face. We found that return-to-face saccades were conducted earlier towards Jimmy when he followed the gaze compared with when he did not. For Dylan, there was no such effect. Additional measures indicated that our participants also preferred Jimmy and liked him better. This study demonstrates (a) the potential of technological advances to examine joint attention where ecological validity meets experimental control, and (b) that social reorienting is enhanced when we initiate joint attention. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
APA, Harvard, Vancouver, ISO, and other styles
41

Shah, Sayyed Mudassar, Zhaoyun Sun, Khalid Zaman, Altaf Hussain, Muhammad Shoaib, and Lili Pei. "A Driver Gaze Estimation Method Based on Deep Learning." Sensors 22, no. 10 (May 23, 2022): 3959. http://dx.doi.org/10.3390/s22103959.

Full text
Abstract:
Car crashes are among the top ten leading causes of death; they can mainly be attributed to distracted drivers. An advanced driver-assistance technique (ADAT) is a procedure that can notify the driver about a dangerous scenario, reduce traffic crashes, and improve road safety. The main contribution of this work involved utilizing the driver’s attention to build an efficient ADAT. To obtain this “attention value”, a gaze tracking method is proposed. The gaze direction of the driver is critical for discerning fatal distractions and for determining when it is necessary to notify the driver about risks on the road. A real-time gaze tracking system is proposed in this paper for the development of an ADAT that obtains and communicates the gaze information of the driver. The developed ADAT system detects various head poses of the driver and estimates eye gaze directions, which play important roles in assisting the driver and avoiding any unwanted circumstances. The first (and most significant) task in this research work involved the development of a benchmark image dataset consisting of head poses and horizontal and vertical direction gazes of the driver’s eyes. To detect the driver’s face accurately and efficiently, the You Only Look Once (YOLO-V4) face detector was used, modified with the Inception-v3 CNN model for robust feature learning and improved face detection. Finally, transfer learning was performed on the InceptionResNet-v2 CNN model, where the CNN was used as a classification model for head pose detection and eye gaze angle estimation; a regression layer was added to the InceptionResNet-v2 CNN in place of the SoftMax and classification output layers. The proposed model detects and estimates head pose directions and eye directions with higher accuracy. The average accuracy achieved by the head pose detection system was 91%; the model achieved an RMSE of 2.68 for vertical and 3.61 for horizontal eye gaze estimation.
APA, Harvard, Vancouver, ISO, and other styles
42

Deaner, Robert O., Stephen V. Shepherd, and Michael L. Platt. "Familiarity accentuates gaze cuing in women but not men." Biology Letters 3, no. 1 (November 28, 2006): 65–68. http://dx.doi.org/10.1098/rsbl.2006.0564.

Full text
Abstract:
Gaze cuing, the tendency to shift attention in the direction other individuals are looking, is hypothesized to depend on a distinct neural module. One expectation of such a module is that information processing should be encapsulated within it. Here, we tested whether familiarity, a type of social knowledge, penetrates the neural circuits governing gaze cuing. Male and female subjects viewed the face of an adult male looking left or right and then pressed a keypad to indicate the location of a target appearing randomly left or right. Responses were faster for targets congruent with gaze direction. Moreover, gaze cuing was stronger in females than males. Contrary to the modularity hypothesis, familiarity enhanced gaze cuing, but only in females. Sex differences in the effects of familiarity on gaze cuing may reflect greater adaptive significance of social information for females than males.
APA, Harvard, Vancouver, ISO, and other styles
43

Yang, Zetian, and Winrich A. Freiwald. "Joint encoding of facial identity, orientation, gaze, and expression in the middle dorsal face area." Proceedings of the National Academy of Sciences 118, no. 33 (August 12, 2021): e2108283118. http://dx.doi.org/10.1073/pnas.2108283118.

Full text
Abstract:
The last two decades have established that a network of face-selective areas in the temporal lobe of macaque monkeys supports the visual processing of faces. Each area within the network contains a large fraction of face-selective cells. And each area encodes facial identity and head orientation differently. A recent brain-imaging study discovered an area outside of this network selective for naturalistic facial motion, the middle dorsal (MD) face area. This finding offers the opportunity to determine whether coding principles revealed inside the core network would generalize to face areas outside the core network. We investigated the encoding of static faces and objects, facial identity, and head orientation, dimensions which had been studied in multiple areas of the core face-processing network before, as well as facial expressions and gaze. We found that MD populations form a face-selective cluster with a degree of selectivity comparable to that of areas in the core face-processing network. MD encodes facial identity robustly across changes in head orientation and expression, it encodes head orientation robustly against changes in identity and expression, and it encodes expression robustly across changes in identity and head orientation. These three dimensions are encoded in a separable manner. Furthermore, MD also encodes the direction of gaze in addition to head orientation. Thus, MD encodes both structural properties (identity) and changeable ones (expression and gaze) and thus provides information about another animal’s direction of attention (head orientation and gaze). MD contains a heterogeneous population of cells that establish a multidimensional code for faces.
APA, Harvard, Vancouver, ISO, and other styles
44

Bukach, Cindy M., Daniel N. Bub, Isabel Gauthier, and Michael J. Tarr. "Perceptual Expertise Effects Are Not All or None: Spatially Limited Perceptual Expertise for Faces in a Case of Prosopagnosia." Journal of Cognitive Neuroscience 18, no. 1 (January 1, 2006): 48–63. http://dx.doi.org/10.1162/089892906775250094.

Full text
Abstract:
We document a seemingly unique case of severe prosopagnosia, L. R., who suffered damage to his anterior and inferior right temporal lobe as a result of a motor vehicle accident. We systematically investigated each of three factors associated with expert face recognition: fine-level discrimination, holistic processing, and configural processing (Experiments 1-3). Surprisingly, L. R. shows preservation of all three of these processes; that is, his performance in these experiments is comparable to that of normal controls. However, L. R. is only able to apply these processes over a limited spatial extent to the fine-level detail within faces. Thus, when the location of a given change is unpredictable (Experiment 3), L. R. exhibits normal detection of features and spatial configurations only for the lower half of each face. Similarly, when required to divide his attention over multiple face features, L. R. is able to determine the identity of only a single feature (Experiment 4). We discuss these results in the context of forming a better understanding of prosopagnosia and the mechanisms used in face recognition and visual expertise. We conclude that these mechanisms are not “all-or-none”, but rather can be impaired incrementally, such that they may remain functional over a restricted spatial area. This conclusion is consistent with previous research suggesting that perceptual expertise is acquired in a spatially incremental manner [Gauthier, I., & Tarr, M. J. Unraveling mechanisms for expert object recognition: Bridging brain activity and behavior. Journal of Experimental Psychology: Human Perception & Performance, 28, 431-446, 2002].
APA, Harvard, Vancouver, ISO, and other styles
45

Mathews, Andrew, Elaine Fox, Jenny Yiend, and Andy Calder. "The face of fear: Effects of eye gaze and emotion on visual attention." Visual Cognition 10, no. 7 (October 2003): 823–35. http://dx.doi.org/10.1080/13506280344000095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Bulut, Merve, and Burak Erdeniz. "The Other-Race and Other-Species Effect during a Sex Categorization Task: An Eye Tracker Study." Behavioral Sciences 10, no. 1 (January 1, 2020): 24. http://dx.doi.org/10.3390/bs10010024.

Full text
Abstract:
Sex categorization from faces is a crucial ability for humans and non-human primates for various social and cognitive processes. In the current study, we performed two eye tracking experiments to examine the gaze behavior of participants during a sex categorization task in which participants categorized face pictures from their own race (Caucasian), another race (Asian), and another species (chimpanzee). In experiment 1, we presented the faces in an upright position to 16 participants, and found a strong other-race and other-species effect. In experiment 2, the same faces were shown to 24 naïve participants in an upside-down (inverted) position, which showed that, although the other-species effect was intact, the other-race effect disappeared. Moreover, eye-tracking analysis revealed that in the upright position, the eye region was the first and most widely viewed area for all face categories. However, during upside-down viewing, participants’ attention was directed more towards the eye region of own-race and own-species faces, whereas the nose received more attention in other-race and other-species faces. Overall, the results suggest that other-race faces were processed less holistically than own-race faces, and this could affect both participants’ behavioral performance and gaze behavior during sex categorization. Finally, the gaze data suggest that participants’ gaze shifts from the eye to the nose region with decreased racial and species-based familiarity.
APA, Harvard, Vancouver, ISO, and other styles
47

Hubble, Kelly, Katie Daughters, Antony S. R. Manstead, Aled Rees, Anita Thapar, and Stephanie H. M. van Goozen. "Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected." Journal of the International Neuropsychological Society 23, no. 1 (November 21, 2016): 23–33. http://dx.doi.org/10.1017/s1355617716000886.

Full text
Abstract:
Objectives: Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium intensity facial expressions. Results: Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant (p<.06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces, and this was not related to recognition accuracy or face processing time. Conclusions: These findings suggest that OXT-induced enhancement of facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest that the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23–33)
APA, Harvard, Vancouver, ISO, and other styles
48

Ho, Pik Ki, and Fiona N. Newell. "Turning Heads: The Effects of Face View and Eye Gaze Direction on the Perceived Attractiveness of Expressive Faces." Perception 49, no. 3 (February 17, 2020): 330–56. http://dx.doi.org/10.1177/0301006620905216.

Full text
Abstract:
We investigated whether the perceived attractiveness of expressive faces was influenced by head turn and eye gaze towards or away from the observer. In all experiments, happy faces were consistently rated as more attractive than angry faces. A head turn towards the observer, whereby a full-face view was shown, was associated with relatively higher attractiveness ratings when gaze direction was aligned with face view (Experiment 1). However, preference for full-face views of happy faces was not affected by gaze shifts towards or away from the observer (Experiment 2a). In Experiment 3, the relative duration of each face view (front-facing or averted at 15°) during a head turn away or towards the observer was manipulated. There was benefit on attractiveness ratings for happy faces shown for a longer duration from the front view, regardless of the direction of head turn. Our findings support previous studies indicating a preference for positive expressions on attractiveness judgements, which is further enhanced by the front views of faces, whether presented during a head turn or shown statically. In sum, our findings imply a complex interaction between cues of social attention, indicated by the view of the face shown, and reward on attractiveness judgements of unfamiliar faces.
APA, Harvard, Vancouver, ISO, and other styles
49

Costanzo, Valeria, Antonio Narzisi, Sonia Cerullo, Giulia Crifaci, Maria Boncoddo, Marco Turi, Fabio Apicella, et al. "High-Risk Siblings without Autism: Insights from a Clinical and Eye-Tracking Study." Journal of Personalized Medicine 12, no. 11 (October 29, 2022): 1789. http://dx.doi.org/10.3390/jpm12111789.

Full text
Abstract:
Joint attention (JA)—the human ability to coordinate our attention with that of other people—is impaired in the early stage of Autism Spectrum Disorder (ASD). However, little is known about the JA skills in the younger siblings of children with ASD who do not develop ASD at 36 months of age [high-risk (HR)-noASD]. In order to advance our understanding of this topic, a prospective multicenter observational study was conducted with three groups of toddlers (age range: 18–33 months): 17 with ASD, 19 with HR-noASD and 16 with typical development (TD). All subjects underwent a comprehensive clinical assessment and an eye-tracking experiment with pre-recorded stimuli in which the visual patterns during two tasks eliciting initiating joint attention (IJA) were measured. Specifically, fixations, transitions and alternating gaze were analyzed. Clinical evaluation revealed that HR-noASD subjects had lower non-verbal cognitive skills than TD children, while similar levels of restricted and repetitive behaviors and better social communication skills were detected in comparison with ASD children. Eye-tracking paradigms indicated that HR-noASD toddlers had visual patterns resembling TD in terms of target-object-to-face gaze alternations, while their looking behaviors were similar to ASD toddlers regarding not-target-object-to-face gaze alternations. This study indicated that high-risk, unaffected siblings displayed a shared profile of IJA-eye-tracking measures with both ASD patients and TD controls, providing new insights into the characterization of social attention in this group of toddlers.
APA, Harvard, Vancouver, ISO, and other styles
50

Oh, Jaekwang, Youngkeun Lee, Jisang Yoo, and Soonchul Kwon. "Improved Feature-Based Gaze Estimation Using Self-Attention Module and Synthetic Eye Images." Sensors 22, no. 11 (May 26, 2022): 4026. http://dx.doi.org/10.3390/s22114026.

Full text
Abstract:
Gaze is an excellent indicator and has utility in that it can express interest, intention, and the condition of an object. Recent deep-learning methods are mainly appearance-based methods that estimate gaze based on a simple regression from entire face and eye images. However, this method sometimes does not give satisfactory results for gaze estimation in low-resolution and noisy images obtained in unconstrained real-world settings (e.g., places with severe lighting changes). In this study, we propose a method that estimates gaze by detecting eye region landmarks from a single eye image, an approach that is competitive with recent appearance-based methods. Our approach acquires rich information by extracting more landmarks and including iris and eye edges, similar to existing feature-based methods. To acquire strong features even at low resolutions, we used the HRNet backbone network to learn representations of images at various resolutions. Furthermore, we used the self-attention module CBAM to obtain a refined feature map with better spatial information, which enhanced the robustness to noisy inputs, thereby yielding a 3.18% landmark localization error, a 4% improvement over the existing error. A large number of landmarks were acquired and used as inputs for a lightweight neural network to estimate the gaze. We conducted a within-dataset evaluation on MPIIGaze, which was obtained in a natural environment, and achieved a state-of-the-art performance of 4.32 degrees, a 6% improvement over the existing performance.
APA, Harvard, Vancouver, ISO, and other styles