Academic literature on the topic 'Face, attention, gaze, prosopagnosia'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Face, attention, gaze, prosopagnosia.'

Journal articles on the topic "Face, attention, gaze, prosopagnosia"

1

Burra, Nicolas, Dirk Kerzel, and Meike Ramon. "Gaze-cueing requires intact face processing – Insights from acquired prosopagnosia." Brain and Cognition 113 (April 2017): 125–32. http://dx.doi.org/10.1016/j.bandc.2017.01.008.

2

Erni, Britt, Roland Maurer, Dirk Kerzel, and Nicolas Burra. "Perception of eye gaze direction in a case of acquired prosopagnosia." Neuropsychologie clinique et appliquée 3 (Fall 2019): 105–19. http://dx.doi.org/10.46278/j.ncacn.201907284.

Abstract:
The ability to perceive the direction of eye gaze is critical in social settings. Brain lesions in the superior temporal sulcus (STS) impair this ability. We investigated the perception of gaze direction of PS, a patient suffering from acquired prosopagnosia (Rossion et al., 2003). Despite lesions in the face network, the STS was spared in PS. We assessed perception of gaze direction in PS with upright, inverted, and contrast-reversed faces. Compared to the performance of 11 healthy women matched for age and education, PS demonstrated abnormal discrimination of gaze direction with upright and contrast-reversed faces, but not with inverted faces. Our findings suggest that the inability of the patient to process faces holistically weakened her perception of gaze direction, especially in demanding tasks.
3

Van Belle, G., P. De Graef, K. Verfaillie, T. Busigny, and B. Rossion. "Gaze-contingent techniques reveal impairment of holistic face processing in acquired prosopagnosia." Journal of Vision 9, no. 8 (March 24, 2010): 541. http://dx.doi.org/10.1167/9.8.541.

4

Callejas, Alicia, Gordon L. Shulman, and Maurizio Corbetta. "Dorsal and Ventral Attention Systems Underlie Social and Symbolic Cueing." Journal of Cognitive Neuroscience 26, no. 1 (January 2014): 63–80. http://dx.doi.org/10.1162/jocn_a_00461.

Abstract:
Eye gaze is a powerful cue for orienting attention in space. Studies examining whether gaze and symbolic cues recruit the same neural mechanisms have found mixed results. We tested whether there is a specialized attentional mechanism for social cues. We separately measured BOLD activity during orienting and reorienting attention following predictive gaze and symbolic cues. Results showed that gaze and symbolic cues exerted their influence through the same neural networks but also produced some differential modulations. Dorsal frontoparietal regions in left intraparietal sulcus (IPS) and bilateral MT+/lateral occipital cortex only showed orienting effects for symbolic cues, whereas right posterior IPS showed larger validity effects following gaze cues. Both exceptions may reflect the greater automaticity of gaze cues: Symbolic orienting may require more effort, while disengaging attention during reorienting may be more difficult following gaze cues. Face-selective regions, identified with a face localizer, showed selective activations for gaze cues reflecting sensory processing but no attentional modulations. Therefore, no evidence was found linking face-selective regions to a hypothetical, specialized mechanism for orienting attention to gaze cues. However, a functional connectivity analysis showed greater connectivity between face-selective regions and right posterior IPS, posterior STS, and inferior frontal gyrus during gaze cueing, consistent with proposals that face-selective regions may send gaze signals to parts of the dorsal and ventral frontoparietal attention networks. Finally, although the default-mode network is thought to be involved in social cognition, this role does not extend to gaze orienting as these regions were more deactivated following gaze cues and showed less functional connectivity with face-selective regions during gaze cues.
5

Edwards, S. Gareth, Lisa J. Stephenson, Mario Dalmaso, and Andrew P. Bayliss. "Social orienting in gaze leading: a mechanism for shared attention." Proceedings of the Royal Society B: Biological Sciences 282, no. 1812 (August 7, 2015): 20151141. http://dx.doi.org/10.1098/rspb.2015.1141.

Abstract:
Here, we report a novel social orienting response that occurs after viewing averted gaze. We show, in three experiments, that when a person looks from one location to an object, attention then shifts towards the face of an individual who has subsequently followed the person's gaze to that same object. That is, contrary to ‘gaze following’, attention instead orients in the opposite direction to observed gaze and towards the gazing face. The magnitude of attentional orienting towards a face that ‘follows’ the participant's gaze is also associated with self-reported autism-like traits. We propose that this gaze leading phenomenon implies the existence of a mechanism in the human social cognitive system for detecting when one's gaze has been followed, in order to establish ‘shared attention’ and maintain the ongoing interaction.
6

Dalmaso, Mario, Giulia Pavan, Luigi Castelli, and Giovanni Galfano. "Social status gates social attention in humans." Biology Letters 8, no. 3 (November 16, 2011): 450–52. http://dx.doi.org/10.1098/rsbl.2011.0881.

Abstract:
Humans tend to shift attention in response to the averted gaze of a face they are fixating, a phenomenon known as gaze cuing. In the present paper, we aimed to address whether the social status of the cuing face modulates this phenomenon. Participants were asked to look at the faces of 16 individuals and read fictive curriculum vitae associated with each of them that could describe the person as having a high or low social status. The association between each specific face and either high or low social status was counterbalanced between participants. The same faces were then used as stimuli in a gaze-cuing task. The results showed a greater gaze-cuing effect for high-status faces than for low-status faces, independently of the specific identity of the face. These findings confirm previous evidence regarding the important role of social factors in shaping social attention and show that a modulation of gaze cuing can be observed even when knowledge about social status is acquired through episodic learning.
7

Potter, Douglas D., and Simon Webster. "Normal Gaze Cueing in Children with Autism Is Disrupted by Simultaneous Speech Utterances in “Live” Face-to-Face Interactions." Autism Research and Treatment 2011 (2011): 1–7. http://dx.doi.org/10.1155/2011/545964.

Abstract:
Gaze cueing was assessed in children with autism and in typically developing children, using a computer-controlled “live” face-to-face procedure. Sensitivity to gaze direction was assessed using a Posner cuing paradigm. Both static and dynamic directional gaze cues were used. Consistent with many previous studies, using photographic and cartoon faces, gaze cueing was present in children with autism and was not developmentally delayed. However, in the same children, gaze cueing was abolished when a mouth movement occurred at the same time as the gaze cue. In contrast, typical children were able to use gaze cues in all conditions. The findings indicate that gaze cueing develops successfully in some children with autism but that their attention is disrupted by speech utterances. Their ability to learn to read nonverbal emotional and intentional signals provided by the eyes may therefore be significantly impaired. This may indicate a problem with cross-modal attention control or an abnormal sensitivity to peripheral motion in general or the mouth region in particular.
8

Moradi, F., and S. Shimojo. "Face adaptation depends on gaze (overt attention) to the face." Journal of Vision 6, no. 6 (March 24, 2010): 875. http://dx.doi.org/10.1167/6.6.875.

9

Stephenson, Lisa J., S. Gareth Edwards, Natacha M. Luri, Louis Renoult, and Andrew P. Bayliss. "The N170 event-related potential differentiates congruent and incongruent gaze responses in gaze leading." Social Cognitive and Affective Neuroscience 15, no. 4 (April 2020): 479–86. http://dx.doi.org/10.1093/scan/nsaa054.

Abstract:
To facilitate social interactions, humans need to process the responses that other people make to their actions, including eye movements that could establish joint attention. Here, we investigated the neurophysiological correlates of the processing of observed gaze responses following the participants' own eye movement. These observed gaze responses could either establish, or fail to establish, joint attention. We implemented a gaze leading paradigm in which participants made a saccade from an on-screen face to an object, followed by the on-screen face either making a congruent or incongruent gaze shift. An N170 event-related potential was elicited by the peripherally located gaze shift stimulus. Critically, the N170 was greater for joint attention than non-joint gaze both when task-irrelevant (Experiment 1) and task-relevant (Experiment 2). These data suggest for the first time that the neurocognitive system responsible for structural encoding of face stimuli is affected by the establishment of participant-initiated joint attention.
10

Rubies, Elena, Jordi Palacín, and Eduard Clotet. "Enhancing the Sense of Attention from an Assistance Mobile Robot by Improving Eye-Gaze Contact from Its Iconic Face Displayed on a Flat Screen." Sensors 22, no. 11 (June 4, 2022): 4282. http://dx.doi.org/10.3390/s22114282.

Abstract:
One direct way to express the sense of attention in a human interaction is through the gaze. This paper presents the enhancement of the sense of attention from the face of a human-sized mobile robot during an interaction. This mobile robot was designed as an assistance mobile robot and uses a flat screen at the top of the robot to display an iconic (simplified) face with big round eyes and a single line as a mouth. The implementation of eye-gaze contact from this iconic face is a problem because of the difficulty of simulating real 3D spherical eyes in a 2D image considering the perspective of the person interacting with the mobile robot. The perception of eye-gaze contact has been improved by manually calibrating the gaze of the robot relative to the location of the face of the person interacting with the robot. The sense of attention has been further enhanced by implementing cyclic face explorations with saccades in the gaze and by performing blinking and small movements of the mouth.

Dissertations / Theses on the topic "Face, attention, gaze, prosopagnosia"

1

Comparetti, Chiara Maddalena. "Looking at a face. Relavant aspects of face perception in social cognition." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/28331.

Abstract:
An important issue in human cognition concerns face processing. Faces are incontestably one of the most important biological stimuli for humans. They convey crucial social cues, such as age, sex, emotion and identity, and are the basis of verbal and non-verbal communication. Face processing and recognition have been studied extensively over the past years with different methodologies, including neuroimaging and electrophysiology, mostly aimed at testing the extent to which faces can be considered a special class of visual stimuli (e.g. Farah, Wilson, Drain, & Tanaka, 1998; but see also Gauthier, Behrmann, & Tarr, 2004). Although there is no complete agreement on this debated issue, all authors concur that at least two facts make faces special: first, face recognition exhibits functional characteristics not found in the recognition of other visual stimuli, and second, the neural substrate that mediates face recognition is anatomically separate from those mediating general object recognition (e.g. Farah, Rabinowitz, Quinn, & Liu, 2000). The majority of the literature on face processing has aimed primarily at investigating the ability to discriminate between faces and non-face objects (e.g. Gauthier, Behrmann, & Tarr, 1999), at defining which kind of processing is involved (configural vs. featural) (e.g. Maurer, Grand, & Mondloch, 2002), and at the ability to perceive the uniqueness of individual faces (e.g. Bruce & Young, 1986), thus focusing primarily on identity-related aspects of recognition. It has been claimed that the recognition of facial identity is based on invariant facial features, such as the eyes, nose and mouth, and their reciprocal configural relations. Besides these invariant aspects, faces have another essential component: their changeable aspects, which carry a variety of socially important cues essential to social interaction.
Indeed, since birth most face viewing occurs in the context of social interactions, and faces provide a wealth of information beyond identity that facilitates social communication. Facial features can move, changing their reciprocal relations and generating, for example, facial expressions or lip and eye movements. While these changeable aspects do not modify the identity of a particular face, they constitute different visual stimuli that convey different social signals. The ability to process such socially relevant information may represent a more highly developed visual perceptual skill than the recognition of identity. Only recently, however, have these aspects begun to be investigated. Among the different neuroanatomical-functional models proposed in the literature, that of Haxby and colleagues (2000) takes into account both important components: the invariant features and the changeable aspects of a face. The network includes visual ("core") regions, which process invariant facial features, as well as limbic and prefrontal ("extended") regions, which process changeable aspects of faces (Haxby, Hoffman, & Gobbini, 2000; Ishai, 2008). Starting from the Haxby model, the present work focuses on the role of the changeable aspects of a face within social interaction. More specifically, the aim of the current series of studies was to investigate how observers process, use and react to different social signals (i.e. gaze direction, head orientation, facial expressions). In Experiment 1 we explored the perception of different gaze directions and investigated the role of conflicting information in gaze-following behaviour using ERPs. In Experiment 2 we used fMRI to examine the effect of the combination of gaze direction and head position on the allocation of attentional resources, and thus on the processing of a subsequent target.
In Experiment 3 we studied how non-emotional facial expressions can support the recognition of identity in a clinical population (i.e. congenital prosopagnosia). It is well known that others' gaze direction and body position attract our attention (Ricciardelli, Baylis, & Driver, 2000), and the existence of an automatic tendency to follow the gaze of others, leading to joint attention, has also been demonstrated (Ricciardelli, Bricolo, Aglioti, & Chelazzi, 2002). We can use these signals to modulate our attention, but the nature and time course of the control processes involved in this modulation remain unclear. In the first part of the present study we investigated this issue using different methodologies: electrophysiology, to trace the time course of gaze-following behaviour (the fact that observers ultimately look and attend where another person is looking), and neuroimaging, to explore which neural system is activated when the temporal allocation of resources is required and influenced by seeing actors with different gaze directions and head orientations. In Experiment 1 we aimed to trace the time course of the processes involved in a gaze-cueing task in which the effect was investigated in an overt paradigm. By combining eye-movement and ERP recordings, we investigated the involvement of conflict-monitoring processes in various contexts and at different times with respect to the distracter's eye movement. We used ERPs because they provide a measure of the timing of the processing of the observed gaze and of the consequent planning of a saccadic response. Participants were instructed to saccade towards one of two lateral targets in a Posner-like paradigm. Seventy-five milliseconds before, or after, the instruction onset, a distracting face gazed towards a target (goal-directed), congruent or incongruent with the instructed direction, or towards an empty spatial location (non-goal-directed).
We analyzed the N2 and error-related negativity (ERN) measures, known to be involved in conflict-monitoring processes (pre-response conflict and error detection, respectively). Interestingly, the results showed that a certain degree of control over the gaze-following response is possible, suggesting that the tendency to follow the gaze of others is more flexible than previously believed, as it seems to depend not only on an early visuo-motor priming (Crostella, Carducci, & Aglioti, 2009), but also on the circumstances (i.e. the context) associated with the seen gaze shift. In Experiment 2 we explored activations in the face neural system in order to verify whether social cues indicating mutual contact enhance or reduce attention to subsequent events. More specifically, we investigated how the processing of gaze direction (averted, direct) and head position (deviated, frontal) diminishes the attentional blink (AB) for subsequent visual events. We used fMRI to measure the hemodynamic response (change in blood flow) related to neural activity in the attentional and face-processing systems when the temporal allocation of resources is linked to the processing of gaze direction and head position. Results showed that when the eyes and the head were oriented in the same direction (i.e., congruent conditions), they attracted attention and increased the processing of subsequent visual events more than when they were oriented in opposite directions (i.e., incongruent conditions). Indeed, the analysis showed that congruent gaze direction and head orientation increased activity within the bilateral temporoparietal junction, an area strongly associated with mentalizing and understanding the intentions of others (Redcay et al., 2010), as well as in regions of the face-perception network, such as the occipital face area, the superior temporal sulcus and the anterior insula (Ishai, 2008); these responses, however, were drastically diminished during the AB.
Moreover, activity in the bilateral intraparietal sulcus, a region involved in gaze perception (Calder et al., 2007) and attention (Marois, Chun, & Gore, 2000), decreased during the AB in parallel with the decrease in recognition performance, that is, when head and gaze were averted. These results show that head and gaze direction are powerful social cues that can modulate the AB effect and, more generally, influence the observer's attention in reacting to subsequent visual stimuli. Together with the results of Experiment 1, these findings support the view that humans have a neural system for processing others' gaze direction, and that this system is intimately linked with attentional networks, both to allocate resources and to share attention with someone else. Another important feature connected with social signals in face perception is facial expression, which was investigated in the second part of the present work. The idea that facial identity and facial expressions are processed by separate visual routes is well established in face research. The model proposed by Haxby and colleagues (2000) contains a separate route for facial identity, but it is unknown whether a single system supports the processing of both emotional and non-emotional facial expressions, where non-emotional facial expressions are expressions that are not supported by an affective state. A previous study (Comparetti, Ricciardelli, & Daini, 2011) on normal subjects suggests that non-emotional facial expressions may be processed in a specific way, dissociable from emotions and from other facial features. In perceiving emotional expressions, congenital prosopagnosic individuals (people who are unable to recognize faces and retain this disability lifelong in the absence of any obvious brain damage) are indistinguishable from controls, but it is still unknown whether they can process non-emotional facial expressions.
This hypothesis was tested in Experiment 3 by investigating whether and how CP participants could process facial expressions that do not convey an affective state (O'Toole, Roark, & Abdi, 2002). Using the face-inversion paradigm, as in Comparetti et al. (2011), we tested whether non-emotional facial expressions could be processed by a system differentiated from the identity-recognition and emotion-processing systems in CP subjects with pathological scores on standard face-recognition tasks. We carried out a behavioural study in which we compared performance in a recognition task and in a same/different judgement task, using upright and inverted faces. The experiment manipulated, respectively, internal features, emotional expressions and non-emotional facial expressions. Results demonstrated that these subjects do process non-emotional facial expressions, which facilitated the judgement in the upright orientation, whereas the emotion and feature manipulations did not. Overall, the present thesis has investigated issues in the domain of face perception and the social information essential for adaptive behaviour in a complex social environment. It provides further evidence that social signals are important and are processed even when they are not relevant to the task. For example, gaze cueing is observed even when participants are motivated to orient away from the gaze direction because the target will be in an uncued location (Experiment 1), or when gaze is not relevant to the task (Experiment 2), and facial expressions are processed even when only the identity of the face is required (Experiment 3). More specifically, it has been investigated how people react to social signals and can plan their behaviour in response to the social information given by a face.
Indeed, Experiment 1 demonstrated that another's gaze is a strong trigger for allocating our attention to an important location in space, and that another's gaze matters most when the two actors have something in common (i.e. the same peripheral targets); in fact, under certain conditions, gaze-following behaviour can be controlled, specifically when the context is not shared. Moreover, Experiment 2 showed how people can temporarily allocate their attention in response to gaze direction and head orientation, demonstrating that when the different signals are congruent it is possible to reallocate attentional resources to process a subsequent event. Finally, Experiment 3 demonstrated that facial expressions that do not convey a universal affective state can be processed by congenital prosopagnosic individuals, and that these expressions can be used as a cue to arrive at identity.
2

Gillespie-Smith, Karri Y. "Eye-tracking explorations of attention to faces for communicative cues in Autism Spectrum Disorders." Thesis, University of Stirling, 2011. http://hdl.handle.net/1893/6499.

Abstract:
Background: Individuals with Autism Spectrum Disorder (ASD) have been reported to show socio-communicative impairments that are associated with impaired face perception and atypical gaze behaviour. Attending to faces and interpreting the important socio-communicative cues they present allows us to understand others' cognitive states, emotions, wants and desires. This information enables successful social encounters and interactions to take place. A failure by children with ASD to attend to these important social cues on the face may underlie some of the socio-communicative impairments observed within this population. Examining how children with ASD attend to faces will enhance our understanding of their communicative impairments. Aim: The present thesis therefore aimed to use eye-tracking methodology to examine attention allocation to faces for communicative cues in children with ASD. Method: The first line of enquiry examined how children with ASD (n = 21; age = 13y7m) attended to faces presented within their picture communication systems, compared to typically developing children matched on chronological age, verbal ability age and visuo-spatial ability age. The next investigation was conducted on the same group of children and examined how children with ASD attended to faces of different familiarity: familiar faces, unfamiliar faces and the child's own face. These faces were also presented with direct or averted gaze to investigate how this would affect the children's allocation of attention. The final exploration examined how children with ASD (n = 20; age = 12y3m) attended to socially salient information (faces) and non-socially salient information (objects) presented within social scenes of varying complexity, compared to typically developing controls. Again, groups were matched on chronological age, verbal ability age and visuo-spatial ability age.
Results: Children with ASD allocated attention to faces presented within their picture communication symbols similarly to their typically developing counterparts. All children fixated significantly longer on the face images than on the object images. The children with ASD fixated for similar amounts of time on the eye and mouth regions, regardless of familiarity and gaze direction, compared to their matched controls. All groups looked significantly longer at the eye areas than at the mouth areas of the faces across all familiarity types. The children also fixated longer on the eye and mouth regions of faces with direct gaze than on the same regions of faces with averted gaze. The children with ASD fixated on the faces and objects presented within social scenes similarly to their typically developing counterparts across all complexity conditions. The children fixated significantly longer on the objects than on the faces. Conclusions: Children with ASD showed typical allocation of attention to faces. This suggests that faces are not aversive to them and that they are able to attend to the relevant areas, such as the eye and mouth regions. This result may have been influenced by the inclusion of high-functioning children with ASD. However, these results may also suggest that attention allocation and gaze behaviour are not the only factors that contribute to the socio-communicative impairments observed in ASD.
3

Cooper, Robbie Mathew. "The effects of eye gaze and emotional facial expression on the allocation of visual attention." Thesis, University of Stirling, 2006. http://hdl.handle.net/1893/128.

Abstract:
This thesis examines the way in which meaningful facial signals (i.e., eye gaze and emotional facial expressions) influence the allocation of visual attention. These signals convey information about the likely imminent behaviour of the sender and are, in turn, potentially relevant to the behaviour of the viewer. It is already well established that different signals influence the allocation of attention in different ways that are consistent with their meaning. For example, direct gaze (i.e., gaze directed at the viewer) is considered both to draw attention to its location and hold attention when it arrives, whereas observing averted gaze is known to create corresponding shifts in the observer’s attention. However, the circumstances under which these effects occur are not yet understood fully. The first two sets of experiments in this thesis tested directly whether direct gaze is particularly difficult to ignore when the task is to ignore it, and whether averted gaze will shift attention when it is not relevant to the task. Results suggest that direct gaze is no more difficult to ignore than closed eyes, and the shifts in attention associated with viewing averted gaze are not evident when the gaze cues are task-irrelevant. This challenges the existing understanding of these effects. The remaining set of experiments investigated the role of gaze direction in the allocation of attention to emotional facial expressions. Without exception, previous work looking at this issue has measured the allocation of attention to such expressions when gaze is directed at the viewer. Results suggest that while the type of emotional expression (i.e., angry or happy) does influence the allocation of attention, the associated gaze direction does not, even when the participants are divided in terms of anxiety level (a variable known to influence the allocation of attention to emotional expressions). 
These findings are discussed in terms of how the social meaning of the stimulus can influence preattentive processing. This work also highlights the need for general theories of visual attention to incorporate such data; not to do so risks fundamentally misrepresenting the nature of attention as it operates outwith the laboratory setting.
4

Adil, Safaa. "Influence de la présence d’un personnage, d’un visage et de la direction du regard en communication publicitaire." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1G025.

Abstract:
This research examines the influence of the presence of a character, a face, or a gaze direction (gaze directed toward the product versus gaze directed toward the observer) in print advertisements on the attention paid to the advertisement, as well as on the memorization and evaluation of its content. To test our hypotheses, four experiments were conducted. In one of them, an eye-tracking system was used to better capture attentional processes. To approximate ordinary conditions of exposure to advertising, the print advertisements were inserted in a fictitious magazine (folder-test procedure). The results show that the presence of a character or a face (versus their absence), as well as a gaze directed toward the product (versus a gaze directed toward the observer), improves attention toward the advertisement and the memorization and evaluation of its content.
APA, Harvard, Vancouver, ISO, and other styles
5

Pickron, Charisse. "Not All Gaze Cues Are the Same: Face Biases Influence Object Attention in Infancy." 2015. https://scholarworks.umass.edu/masters_theses_2/220.

Full text
Abstract:
In their first year, infants’ ability to follow eye gaze to allocate attention shifts from being a response to low-level perceptual cues, to a deeper understanding of social intent. By 4 months infants look longer to uncued versus cued targets following a gaze cuing event, suggesting that infants better encode targets cued by shifts in eye gaze compared to targets not cued by eye gaze. From 6 to 9 months of age infants develop biases in face processing such that they show increased differentiation of faces within highly familiar groups (e.g., own-race) and a decreased differentiation of faces within unfamiliar or infrequently experienced groups (e.g., other-race). Although the development of cued object learning and face biases are both important social processes, they have primarily been studied independently. The current study examined whether early face processing biases for familiar compared to unfamiliar groups influences object encoding within the context of a gaze-cuing paradigm. Five- and 10-month-old infants viewed videos of adults, who varied by race and sex, shift their eye gaze towards one of two objects. The two objects were then presented side-by-side and fixation duration for the cued and uncued object was measured. Results revealed 5-month-old infants look significantly longer to uncued versus cued objects when the cuing face was a female. Additionally, 10-month-old infants displayed significantly longer looking to the uncued relative to the cued object when the cuing face was a female and from the infant’s own-race group. These findings are the first to demonstrate that perceptual narrowing based on sex and race shape infants’ use of social cues for allocating visual attention to objects in their environment.
APA, Harvard, Vancouver, ISO, and other styles
6

Rigby, Sarah Nugent. "Selective attention to face cues in adults with and without autism spectrum disorders." 2015. http://hdl.handle.net/1993/30710.

Full text
Abstract:
Individuals with autism spectrum disorders (ASD) use atypical approaches when processing facial stimuli. The first purpose of this research was to investigate face processing abilities in adults with ASD using several tasks, to compare patterns of interference between static identity and expression processing in adults with ASD and typical adults, and to investigate whether the introduction of dynamic cues caused members of one or both groups to shift from a global to a more local processing strategy. The second purpose was to compare the gaze behaviour of groups of participants as they viewed static and dynamic single- and multiple-character scenes. I tested 16 adults with ASD and 16 sex-, age-, and IQ-matched typical controls. In Study 1, participants completed a task designed to assess processing speed, another to measure visual processing bias, and two tasks involving static and dynamic face stimuli -- an identity-matching task and a Garner selective attention task. Adults with ASD were less sensitive to facial identity, and, unlike typical controls, showed negligible interference between identity and expression processing when judging both static and moving faces. In Study 2, participants viewed scenes while their gaze behaviour was recorded. Overall, participants with ASD showed fewer and shorter fixations on faces compared to their peers. Additionally, whereas the introduction of motion and increased social complexity of the scenes affected the gaze behaviour of typical adults, only the latter manipulation affected adults with ASD. My findings emphasize the importance of using dynamic displays when studying typical and atypical face processing mechanisms.
October 2015
APA, Harvard, Vancouver, ISO, and other styles
7

Dawel, Amy. "Face Emotion Processing and attention to eye-gaze : typical development and associations with psychopathic and callous unemotional (CU) traits." Thesis, 2015. http://hdl.handle.net/1885/109354.

Full text
Abstract:
This thesis investigates three aspects of the social processing of faces - recognition of others' facial emotions, arousal to those facial emotions, and shifting of attention to follow others' eye-gaze - that are potentially impaired in individuals with high levels of psychopathic and callous unemotional (CU) traits. The starting motivation for the thesis was to address three competing theories of the etiology of psychopathic and CU traits: a deficit in processing others' emotions that is specific to emotions signaling distress (e.g., fear); a deficit in attending to the eye region of faces; or general differences in attention that produce an abnormally-enhanced ability to tune out unwanted information. For this aim, results showed the following. A meta-analysis of 29 published experiments found psychopathy was associated with impaired recognition of several expressions (i.e., not just fear and sadness), including the positive emotion of happiness. A new empirical study found that higher psychopathic and CU traits in undergraduates were associated with decreased arousal to happy expressions (when stimuli showed emotions that were genuinely-felt by the person displaying the expression). And, another new empirical study of attentional cueing found that high CU traits were associated with decreased shifting of attention to follow left-right directional cues, but that the decrease was of equal magnitude for eye-gaze cues in fearful faces, eye-gaze cues in happy faces, and non-social arrow cues. Together, these results are inconsistent with a specific deficit in processing others' distress emotions. Nor did they suggest that attentional abnormalities are specific to the eyes. Instead, results supported general attention abnormalities in high psychopathic and CU traits. The new empirical studies also produced a number of important results with broader implications for the facial expression literature. 
In establishing the eye-gaze cueing paradigm, I investigated typical development of social attention in 8-12 year olds compared to adults. Of interest was whether facial expression and context (what the face was looking at) interact to drive social attention and threat bias in children. Results showed interactions emerged from 8 years of age, but were not fully mature even in the oldest children tested. Finally, prior to testing arousal to genuinely-felt happy emotions, I investigated the extent to which currently-available stimulus sets were perceived by observers as showing genuine or faked emotion. Results showed that commonly-used stimuli, including Ekman and Friesen's (1976) Pictures of Facial Affect, were perceived as showing clearly faked emotion for many expressions. I then found that using genuine compared to faked expressions can result in starkly different theoretical conclusions: in the case of psychopathic and CU traits, higher levels of these traits were associated with decreased arousal to genuinely-felt happy expressions, but there was no such relationship for posed happy expressions. Thus, had only posed emotion stimuli been used, results would have appeared to support the distress-specific theory. These results highlight the importance of using ecologically valid stimuli that show faces as they would occur in real world interactions - with contextual information and often signaling genuine emotion.
APA, Harvard, Vancouver, ISO, and other styles
8

Rondeau, Émélie. "Étude de l'attention spatiale en condition d'interférence émotionnelle chez les enfants avec un trouble autistique." Thèse, 2011. http://hdl.handle.net/1866/8626.

Full text
Abstract:
Autism is characterized by a social deficit, including difficulties in using and responding to facial expressions and gaze. Previous studies showed that fearful faces elicit a rapid involuntary orienting of spatial attention towards their location in typically developing (TD) individuals. In addition, target faces with direct gaze are detected faster and more efficiently than those with averted gaze in TD individuals. The aim of the current study is to explore the effect of fear and gaze direction (direct vs averted) on spatial attention in children with autistic disorder (AD). Six children with AD performed a covert spatial orienting task. Each trial consisted of a pair of faces (fearful/neutral with direct/averted gaze) briefly presented, followed by a target presented at the location of one of the faces. Participants had to judge the location of the target (right or left visual field). The target unpredictably appeared on the side of the emotional face (fear, direct) (valid condition) or on the opposite side (neutral, averted) (invalid condition). Our results show that fearful faces have an interference effect on the performance of children with AD and divert attention from their location.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Face, attention, gaze, prosopagnosia"

1

Schubert, A., and E. D. Dickmanns. "Real-time Gaze Observation for Tracking Human Control of Attention." In Face Recognition, 617–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-642-72201-1_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Stukenbrock, Anja, and Anh Nhi Dao. "Joint Attention in Passing: What Dual Mobile Eye Tracking Reveals About Gaze in Coordinating Embodied Activities at a Market." In Embodied Activities in Face-to-face and Mediated Settings, 177–213. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97325-8_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kim, Bumhwi, Rammohan Mallipeddi, and Minho Lee. "Embedded System for Human Augmented Cognition Based on Face Selective Attention Using Eye Gaze Tracking." In Neural Information Processing, 729–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-42042-9_90.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Khellat-Kihel, Souad, Zhenan Sun, and Massimo Tistarelli. "An Hybrid Attention-Based System for the Prediction of Facial Attributes." In Lecture Notes in Computer Science, 116–27. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82427-3_9.

Full text
Abstract:
Recent research on face analysis has demonstrated the richness of information embedded in feature vectors extracted from a deep convolutional neural network. Even though deep learning has achieved very high performance on several challenging visual tasks, such as determining identity, age, gender and race, it still lacks a well-grounded theory that allows one to properly understand the processes taking place inside the network layers. Therefore, most of the underlying processes are unknown and not easy to control. On the other hand, the human visual system follows a well understood process in analyzing a scene or an object, such as a face. The direction of the eye gaze is repeatedly directed, through purposively planned saccadic movements, towards salient regions to capture several details. In this paper we propose to capitalize on the knowledge of the saccadic human visual processes to design a system to predict facial attributes embedding a biologically-inspired network architecture, the HMAX. The architecture is tailored to predict attributes with different textural information and conveying different semantic meaning, such as attributes related and unrelated to the subject's identity. Salient points on the face are extracted from the outputs of the S2 layer of the HMAX architecture and fed to a local texture characterization module based on LBP (Local Binary Pattern). The resulting feature vector is used to perform a binary classification on a set of pre-defined visual attributes. The devised system distills a very informative, yet robust, representation of the imaged faces, achieving high performance with a much simpler architecture as compared to a deep convolutional neural network. Several experiments performed on publicly available, challenging, large datasets demonstrate the validity of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
5

"Gaze and attention." In Face Perception, 217–60. Psychology Press, 2013. http://dx.doi.org/10.4324/9780203721254-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bavelas, Janet Beavin. "The Social Life of Facial Gestures." In Face-to-Face Dialogue, 125–44. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780190913366.003.0008.

Full text
Abstract:
Co-speech facial gestures are distinct from facial expressions of emotion in several ways. They can involve the entire face, head, or gaze, not just a specific set of facial muscles. Facial gestures are precisely timed with the words they refer to, not to an underlying emotion. Also, the variety of meanings of facial gestures is much broader than emotions; for example, smiling to mark ironic humor, making a facial shrug or a thinking face, emphasizing a word with a quick eyebrow movement, or portraying another person’s appearance or expression. Coordination of gaze and mutual gaze are particularly important in facilitating turn exchanges and eliciting back channels. Speakers may also point with their gaze (e.g., drawing attention to their hand gestures). In contrast to emotional expressions, facial gestures serve predominantly track 2 interactive functions rather than conveying topical content.
APA, Harvard, Vancouver, ISO, and other styles
7

Pantic, Maja. "Face for Interface." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 560–67. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch075.

Full text
Abstract:
The human face is involved in an impressive variety of different activities. It houses the majority of our sensory apparatus: eyes, ears, mouth, and nose, allowing the bearer to see, hear, taste, and smell. Apart from these biological functions, the human face provides a number of signals essential for interpersonal communication in our social life. The face houses the speech production apparatus and is used to identify other members of the species, to regulate the conversation by gazing or nodding, and to interpret what has been said by lip reading. It is our direct and naturally preeminent means of communicating and understanding somebody’s affective state and intentions on the basis of the shown facial expression (Lewis & Haviland-Jones, 2000). Personality, attractiveness, age, and gender can also be seen from someone’s face. Thus the face is a multisignal sender/receiver capable of tremendous flexibility and specificity. In general, the face conveys information via four kinds of signals listed in Table 1. Automating the analysis of facial signals, especially rapid facial signals, would be highly beneficial for fields as diverse as security, behavioral science, medicine, communication, and education. In security contexts, facial expressions play a crucial role in establishing or detracting from credibility. In medicine, facial expressions are the direct means to identify when specific mental processes are occurring. In education, pupils’ facial expressions inform the teacher of the need to adjust the instructional message. As far as natural user interfaces between humans and computers (PCs/robots/machines) are concerned, facial expressions provide a way to communicate basic information about needs and demands to the machine. 
In fact, automatic analysis of rapid facial signals seems to have a natural place in various vision subsystems and vision-based interfaces (face-for-interface tools), including automated tools for gaze and focus of attention tracking, lip reading, bimodal speech processing, face/visual speech synthesis, face-based command issuing, and facial affect processing. Where the user is looking (i.e., gaze tracking) can be effectively used to free computer users from the classic keyboard and mouse. Also, certain facial signals (e.g., a wink) can be associated with certain commands (e.g., a mouse click), offering an alternative to traditional keyboard and mouse commands. The human capability to “hear” in noisy environments by means of lip reading is the basis for bimodal (audiovisual) speech processing that can lead to the realization of robust speech-driven interfaces. To make a believable “talking head” (avatar) representing a real person, tracking the person’s facial signals and making the avatar mimic those using synthesized speech and facial expressions is compulsory. The human ability to read emotions from someone’s facial expressions is the basis of facial affect processing that can lead to expanding user interfaces with emotional communication and, in turn, to obtaining more flexible, adaptable, and natural affective interfaces between humans and machines. More specifically, the information about when the existing interaction/processing should be adapted, the importance of such an adaptation, and how the interaction/reasoning should be adapted involves information about how the user feels (e.g., confused, irritated, tired, interested). Examples of affect-sensitive user interfaces are still rare, unfortunately, and include the systems of Lisetti and Nasoz (2002), Maat and Pantic (2006), and Kapoor, Burleson, and Picard (2007).
It is this wide range of principle driving applications that has lent a special impetus to the research problem of automatic facial expression analysis and produced a surge of interest in this research topic.
APA, Harvard, Vancouver, ISO, and other styles
8

Huk, Romana. "Lyres, Liars, Repetition, and Prophecy." In Tony Harrison and the Classics, 182–201. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198861072.003.0009.

Full text
Abstract:
This chapter looks at the future-oriented nature of Harrison’s drive, which in this reading is always towards ‘coming’ life. Harrison looks for ‘signs’ of it in times of its ending (as at the end of ‘The Mother of the Muses’); he imagines its extinction. More than witnessing, he tells us of ‘the worst thing that our imagination can, and [he’s] afraid, must conceive’ so that we ‘face up to the Muses’ as did Hecuba, and still say ‘life’ at the last, as she does in Euripides’ play, affirming it rather than averting her gaze from it, or going silent before horror. Classical sources not only provide precedents for staging the unspeakable to be spoken about, and faced up to, but also much else besides. Although Harrison is not a backward-looking artist, culture’s receding memory of founding conflicts and violence affords his deep double-vision a new kind of believability when he hears the furies coming for us out of that past in new/old forms of self-destruction. The chapter considers the roles played by classical references in his forward-looking poetry written in response to war, violence, and escalating destruction in the age of mechanical reproduction. Fine lines between prophecy and pretence, revelation and deadening repetition as witness, make the venture one of risk-taking toward ‘righting,’ as one early Harrison poem put it. Tony Harrison’s poetics of seeing by paying attention to a number of poems are explored, especially those of his collection, The Gaze of the Gorgon.
APA, Harvard, Vancouver, ISO, and other styles
9

Kommu, John Vijay Sagar, and Sowmyashree Mayur Kaku. "Functional MRI in Pediatric Neurodevelopmental and Behavioral Disorders." In Functional MRI, edited by S. Kathleen Bandt and Dennis D. Spencer, 140–57. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780190297763.003.0008.

Full text
Abstract:
This chapter addresses functional magnetic resonance imaging (fMRI) of brain in children with neurodevelopmental and behavioral disorders. Common challenges of pediatric fMRI studies are related to acquisition and processing. In children with disruptive behavior disorders, deficits in affective response, empathy, and decision-making have been reported. Resting-state fMRI studies in attention-deficit hyperactivity disorder (ADHD) have shown altered activity in default mode and cognitive control networks. Task-based fMRI studies in ADHD have implicated frontoparietal cognitive and attentional networks. The role of stimulants in restoring the altered brain function has been examined using fMRI studies. In children with autism spectrum disorder, fMRI studies using face-processing tasks, theory-of-mind tasks, imitation, and language processing (e.g., sentence comprehension), as well as studies of gaze aversion, interest in social faces, and faces with emotions have implicated cerebellum, amygdala, hippocampus, insula, fusiform gyrus, superior temporal sulcus, planum temporale, inferior frontal gyrus, basal ganglia, thalamus, cingulate cortex, corpus callosum, and brainstem. In addition, fMRI has been a valuable research tool for understanding neurobiological substrates in children with psychiatric disorders (e.g., psychosis, posttraumatic stress disorder, and anxiety disorders).
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Face, attention, gaze, prosopagnosia"

1

Zhuang, Jiayi, and Changyuan Wang. "Attention Mechanism Based Full-face Gaze Estimation for Human-computer Interaction." In 2022 International Conference on Computer Network, Electronic and Automation (ICCNEA). IEEE, 2022. http://dx.doi.org/10.1109/iccnea57056.2022.00013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Martinez, Francis, Andrea Carbone, and Edwige Pissaloux. "Combining first-person and third-person gaze for attention recognition." In 2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013). IEEE, 2013. http://dx.doi.org/10.1109/fg.2013.6553735.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Song, Xiang-Dong Zhou, Xingyu Jiang, Haiyng Wu, and Yu Shi. "Face Shows Your Intention: Visual Search Based on Full-face Gaze Estimation with Channel-spatial Attention." In ICIAI 2021: 2021 the 5th International Conference on Innovation in Artificial Intelligence. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461353.3461362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Songjiang, Wen Cui, Jinshi Cui, Li Wang, Ming Li, and Hongbin Zha. "Improving Children's Gaze Prediction via Separate Facial Areas and Attention Shift Cue." In 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017). IEEE, 2017. http://dx.doi.org/10.1109/fg.2017.92.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Guan, Xin, Qi Wang, Ji Zhang, and Xue-min Zhang. "Notice of Retraction: Effect of Gaze Direction and Face Expression on Visual Reflexive Attention." In 2009 International Conference on Education Technology and Training (ETT 2009). IEEE, 2009. http://dx.doi.org/10.1109/ett.2009.83.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Song, Xinyuan, Shaoxiang Guo, Zhenfu Yu, and Junyu Dong. "An Encoder-Decoder Network with Residual and Attention Blocks for Full-Face 3D Gaze Estimation." In 2022 7th International Conference on Image, Vision and Computing (ICIVC). IEEE, 2022. http://dx.doi.org/10.1109/icivc55077.2022.9886734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Oh, Jun O., Hyung Jin Chang, and Sang-Il Choi. "Self-Attention with Convolution and Deconvolution for Efficient Eye Gaze Estimation from a Full Face Image." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2022. http://dx.doi.org/10.1109/cvprw56347.2022.00547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Soro, Alessandro, and Andry Rakotonirainy. "Automatic Inference of Driving Task Demand from Visual Cues of Emotion and Attention." In Applied Human Factors and Ergonomics Conference. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe100740.

Full text
Abstract:
Sensing the mental, physical and emotional demand of a driving task is of primary importance in road safety research and for effectively designing in-vehicle information systems (IVIS). In particular, the need for cars capable of sensing and reacting to the emotional state of the driver has been repeatedly advocated in the literature. Algorithms and sensors to identify patterns of human behavior, such as gestures, speech, eye gaze and facial expression, are becoming available using low cost hardware. This paper presents a new system which uses surrogate measures such as facial expression (emotion) and head pose and movements (intention) to infer task difficulty in a driving situation. Eleven drivers were recruited and observed in a simulated driving task that involved several pre-programmed events aimed at eliciting emotive reactions, such as being stuck behind slower vehicles, intersections and roundabouts, and potentially dangerous situations. The resulting system, combining facial expression and head pose classification, is capable of recognizing dangerous events (such as crashes and near misses) and stressful situations (e.g. intersections and way giving) that occur during the simulated drive.
APA, Harvard, Vancouver, ISO, and other styles
9

Ponomarev, Andrey, Oleg Serebrennikov, Gleb Guglya, Oleg Korotkevich, and Andrey Proletarskiy. "THE USE OF ARTIFICIAL INTELLIGENCE TECHNOLOGIES TO IMPLEMENT A MULTI-FACTOR ANALYSIS OF A STUDENT'S ACTIONS DURING AN EXAM." In eLSE 2020. University Publishing House, 2020. http://dx.doi.org/10.12753/2066-026x-20-143.

Full text
Abstract:
Conducting an exam after the course is always a very complicated process, the main problem of which is the credibility of the student's knowledge evaluation. In this article, we discuss the problem that affects the assessment most critically - cheating. Eliminating this factor will help to increase the quality of knowledge evaluation, and later the quality of the curriculum itself. It is difficult for one teacher to maintain the examination procedure during special verification activities. The more students taking the exam at the same time, the more difficult it is to control the process. In this article we propose the idea of automating the process of monitoring students during the exam using personal computers equipped with a camera and a microphone. This idea implies an increase in the quality of objective evaluation of students' actual knowledge using clear criteria inherent in this technology. In addition, such an approach eliminates disciplinary deviations from the training programs, which appear in the form of a subjective attitude of the teaching staff toward the exam and toward the students themselves. Monitoring is carried out with the help of a complex software system using the following technologies: an artificial intelligence module for recognizing the user's face and the direction of their attention based on eye gaze estimation technology; verification of identity against the student's personal record, with the help of a database system and a computing sub-program; and voice recognition with subsequent transformation of the voice into text, with accompanying analysis. Recognition of gaze direction lets us designate "safe" and "dangerous" areas of attention around the student.
Also, the article highlights the main criteria by which students can be suspected of cheating, such as the presence of other people in the frame, the presence of unauthorized voices, distraction from the testing window, and other "suspicious" behavior. In addition, the article presents the primary requirements for the technical equipment of classrooms and personal workplaces for the exam.
APA, Harvard, Vancouver, ISO, and other styles
10

Serebrennikov, Oleg, Andrey Ponomarev, Gleb Guglya, and Andrey Proletarskiy. "USAGE OF ARTIFICIAL INTELLIGENCE IN QUALITY IMPROVEMENT OF EDUCATIONAL PROGRAMS." In eLSE 2020. University Publishing House, 2020. http://dx.doi.org/10.12753/2066-026x-20-148.

Full text
Abstract:
In the field of education, we are faced with wide variation in how well educational materials are absorbed across various educational programs and courses. We assume that part of this problem is the style in which information is delivered in the associated training materials. The described software product is designed to assess the complexity of the material based on the analysis of various criteria recorded from the subject during study of the material. Taking into account research in psychology, we can conclude that criteria such as the direction of gaze and the length of time attention dwells on certain elements of the exposition (text, images, interface elements) tell us about the importance of these expositions and their elements, as well as about characteristics specific to the subject (student). In particular, such matters as interest in new information, the need for additional information related to the main theme of the exposition, and difficulty in understanding or absorbing complex data (schemes, graphs, interfaces) can be considered. To study parameters such as the direction and duration of gaze, a firmware and software complex is used, which is based on the following technologies: digital video processing, computer vision for processing video information in the required formats, neural networks for identifying areas where the subject's face and eyes are located, and a software subsystem for performing the necessary calculations. By binding the exposition contexts to data on the current parameters of the subject's gaze, this firmware and software complex is able to add special metrics to the exposition and its elements. For example, to an exposition consisting of text and graphs, we also add metrics of interest, importance of this information, and complexity of understanding. For general exposition research, average values of metrics unique to each subject are used.
This allows us to evaluate not the subjective attitude toward the exposition, but the parameters of the quality of the exposition itself (digestibility of the material, complexity, thematic comprehensiveness). In the field of education, based on the obtained metrics, training courses, teaching methods, and testing methods can be analyzed. With this analysis, we can be more objective about the quality of the material and training programs, as well as find and identify their most problematic areas for further improvement.
APA, Harvard, Vancouver, ISO, and other styles