Journal articles on the topic 'Gaze model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Gaze model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lance, Brent J., and Stacy C. Marsella. "The Expressive Gaze Model: Using Gaze to Express Emotion." IEEE Computer Graphics and Applications 30, no. 4 (July 2010): 62–73. http://dx.doi.org/10.1109/mcg.2010.43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vasiljevas, Mindaugas, Robertas Damaševičius, and Rytis Maskeliūnas. "A Human-Adaptive Model for User Performance and Fatigue Evaluation during Gaze-Tracking Tasks." Electronics 12, no. 5 (February 25, 2023): 1130. http://dx.doi.org/10.3390/electronics12051130.

Full text
Abstract:
Eye gaze interfaces are an emerging technology that allows users to control graphical user interfaces (GUIs) simply by looking at them. However, using gaze-controlled GUIs can be a demanding task, resulting in high cognitive and physical load and fatigue. To address these challenges, we propose the concept and model of an adaptive human-assistive human–computer interface (HA-HCI) based on biofeedback. This model enables effective and sustainable use of computer GUIs controlled by physiological signals such as gaze data. The proposed model allows for analytical human performance monitoring and evaluation during human–computer interaction processes based on the damped harmonic oscillator (DHO) model. To test the validity of this model, the authors acquired gaze-tracking data from 12 healthy volunteers playing a gaze-controlled computer game and analyzed it using odd–even statistical analysis. The experimental findings show that the proposed model effectively describes and explains gaze-tracking performance dynamics, including subject variability in performance of GUI control tasks, long-term fatigue, and training effects, as well as short-term recovery of user performance during gaze-tracking-based control tasks. We also analyze the existing HCI and human performance models and develop an extension to the existing physiological models that allows for the development of adaptive user-performance-aware interfaces. The proposed HA-HCI model describes the interaction between a human and a physiological computing system (PCS) from the user performance perspective, incorporating a performance evaluation procedure that interacts with the standard UI components of the PCS and describes how the system should react to loss of productivity (performance). We further demonstrate the applicability of the HA-HCI model by designing an eye-controlled game. We also develop an analytical user performance model based on damped harmonic oscillation that is suitable for describing variability in performance of a PC game based on gaze tracking. The model’s validity is tested using odd–even analysis, which demonstrates strong positive correlation. Individual characteristics of users established by the damped oscillation model can be used for categorization of players under their playing skills and abilities. The experimental findings suggest that players can be categorized as learners, whose damping factor is negative, and fatiguers, whose damping factor is positive. We find a strong positive correlation between amplitude and damping factor, indicating that good starters usually have higher fatigue rates, but slow starters have less fatigue and may even improve their performance during play. The proposed HA-HCI model and analytical user performance models provide a framework for developing an adaptive human-oriented HCI that enables monitoring, analysis, and increased performance of users working with physiological-computing-based user interfaces. The proposed models have potential applications in improving the usability of future human-assistive gaze-controlled interface systems.
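
To illustrate the damped-harmonic-oscillator performance model described in this abstract, the sketch below fits a DHO curve to a per-trial gaze-control performance series. It is a minimal illustration, not the authors' implementation; the parameterization, variable names, and synthetic data are assumptions.

```python
# Hypothetical sketch: fit a damped-harmonic-oscillator (DHO) curve to a
# per-trial gaze-control performance series, in the spirit of the abstract.
import numpy as np
from scipy.optimize import curve_fit

def dho(t, baseline, amplitude, damping, omega, phase):
    """Performance over time: offset plus a damped oscillation.
    A positive damping factor suggests a 'fatiguer', a negative one a 'learner'."""
    return baseline + amplitude * np.exp(-damping * t) * np.cos(omega * t + phase)

# Synthetic example data: one performance score per game trial.
t = np.arange(40, dtype=float)
rng = np.random.default_rng(0)
y = dho(t, 0.7, 0.2, 0.05, 0.6, 0.0) + rng.normal(0, 0.02, t.size)

params, _ = curve_fit(dho, t, y, p0=[0.5, 0.1, 0.01, 0.5, 0.0], maxfev=10000)
baseline, amplitude, damping, omega, phase = params
print("damping factor:", damping, "->", "fatiguer" if damping > 0 else "learner")
```
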
APA, Harvard, Vancouver, ISO, and other styles
3

Jogeshwar, Anjali K., Gabriel J. Diaz, Susan P. Farnand, and Jeff B. Pelz. "The Cone Model: Recognizing gaze uncertainty in virtual environments." Electronic Imaging 2020, no. 9 (January 26, 2020): 288–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.9.iqsp-288.

Full text
Abstract:
Eye tracking is used by psychologists, neurologists, vision researchers, and many others to understand the nuances of the human visual system, and to provide insight into a person’s allocation of attention across the visual environment. When tracking the gaze behavior of an observer immersed in a virtual environment displayed on a head-mounted display, estimated gaze direction is encoded as a three-dimensional vector extending from the estimated location of the eyes into the 3D virtual environment. Additional computation is required to detect the target object at which gaze was directed. These methods must be robust to calibration error or eye tracker noise, which may cause the gaze vector to miss the target object and hit an incorrect object at a different distance. Thus, the straightforward solution involving a single vector-to-object collision could be inaccurate in indicating object gaze. More involved metrics that rely upon an estimation of the angular distance from the ray to the center of the object must account for an object’s angular size based on distance, or irregularly shaped edges - information that is not made readily available by popular game engines (e.g., Unity©/Unreal©) or rendering pipelines (OpenGL). The approach presented here avoids this limitation by projecting many rays distributed across an angular space that is centered upon the estimated gaze direction.
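
The ray-distribution idea in this abstract can be sketched numerically: sample many ray directions inside a cone of fixed angular radius around the estimated gaze vector and let each ray vote for the object it hits. This is an illustrative sketch only; the function names and the cast_fn hook (standing in for an engine ray cast) are assumptions.

```python
# Illustrative sketch (assumed names/scene): distribute rays in a cone around
# the estimated gaze direction and vote for the gazed-at object.
import numpy as np

def cone_rays(gaze_dir, half_angle_deg, n_rays=500, seed=0):
    """Sample unit vectors uniformly within a cone centred on gaze_dir."""
    rng = np.random.default_rng(seed)
    g = np.asarray(gaze_dir, float)
    g /= np.linalg.norm(g)
    cos_max = np.cos(np.radians(half_angle_deg))
    z = rng.uniform(cos_max, 1.0, n_rays)          # cos(theta), uniform on the cap
    phi = rng.uniform(0.0, 2 * np.pi, n_rays)
    r = np.sqrt(1.0 - z**2)
    local = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    # Build an orthonormal basis (u, v, g) and rotate local samples into it.
    a = np.array([1.0, 0.0, 0.0]) if abs(g[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(g, a); u /= np.linalg.norm(u)
    v = np.cross(g, u)
    return local @ np.stack([u, v, g])

def gazed_object(origin, gaze_dir, cast_fn, half_angle_deg=3.0):
    """cast_fn(origin, direction) -> object id or None (e.g. an engine ray cast)."""
    votes = {}
    for d in cone_rays(gaze_dir, half_angle_deg):
        hit = cast_fn(origin, d)
        if hit is not None:
            votes[hit] = votes.get(hit, 0) + 1
    return max(votes, key=votes.get) if votes else None
```
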
APA, Harvard, Vancouver, ISO, and other styles
4

Kaur, Harsimran, Swati Jindal, and Roberto Manduchi. "Rethinking Model-Based Gaze Estimation." Proceedings of the ACM on Computer Graphics and Interactive Techniques 5, no. 2 (May 17, 2022): 1–17. http://dx.doi.org/10.1145/3530797.

Full text
Abstract:
Over the past several years, a number of data-driven gaze tracking algorithms have been proposed, which have been shown to outperform classic model-based methods in terms of gaze direction accuracy. These algorithms leverage the recent development of sophisticated CNN architectures, as well as the availability of large gaze datasets captured under various conditions. One shortcoming of black-box, end-to-end methods, though, is that any unexpected behaviors are difficult to explain. In addition, there is always the risk that a system trained with a certain dataset may not perform well when tested on data from a different source (the "domain gap" problem). In this work, we propose a novel method to embed eye geometry information in an end-to-end gaze estimation network by means of a "geometric layer". Our experimental results show that our system outperforms other state-of-the-art methods in cross-dataset evaluation, while producing competitive performance on within-dataset tests. In addition, the proposed system is able to extrapolate gaze angles outside the range of those considered in the training data.
APA, Harvard, Vancouver, ISO, and other styles
5

Le, Thao, Ronal Singh, and Tim Miller. "Goal Recognition for Deceptive Human Agents through Planning and Gaze." Journal of Artificial Intelligence Research 71 (August 3, 2021): 697–732. http://dx.doi.org/10.1613/jair.1.12518.

Full text
Abstract:
Eye gaze has the potential to provide insight into the minds of individuals, and this idea has been used in prior research to improve human goal recognition by combining a person's actions and gaze. However, most existing research assumes that people are rational and honest. In adversarial scenarios, people may deliberately alter their actions and gaze, which presents a challenge to goal recognition systems. In this paper, we present new models for goal recognition under deception using a combination of gaze behaviour and observed movements of the agent. These models aim to detect when a person is being deceptive by analysing their gaze patterns, and they use this information to adjust the goal recognition. We evaluated our models in two human-subject studies: (1) using data collected from 30 individuals playing a navigation game inspired by an existing deception study and (2) using data collected from 40 individuals playing a competitive game (Ticket To Ride). We found that one of our models (Modulated Deception Gaze+Ontic) offers promising results compared to the previous state-of-the-art model in both studies. Our work complements existing adversarial goal recognition systems by equipping these systems with the ability to tackle ambiguous gaze behaviours.
APA, Harvard, Vancouver, ISO, and other styles
6

Balkenius, Christian, and Birger Johansson. "Anticipatory models in gaze control: a developmental model." Cognitive Processing 8, no. 3 (April 18, 2007): 167–74. http://dx.doi.org/10.1007/s10339-007-0169-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Stiefelhagen, Rainer, Jie Yang, and Alex Waibel. "A Model-Based Gaze Tracking System." International Journal on Artificial Intelligence Tools 06, no. 02 (June 1997): 193–209. http://dx.doi.org/10.1142/s0218213097000116.

Full text
Abstract:
In this paper we present a non-intrusive model-based gaze tracking system. The system estimates the 3-D pose of a user's head by tracking as few as six facial feature points. The system locates a human face using a statistical color model and then finds and tracks the facial features, such as eyes, nostrils and lip corners. A full perspective model is employed to map these feature points onto the 3-D pose. Several techniques have been developed to track the feature points and recover from failure. We currently achieve a frame rate of 15+ frames per second using an HP 9000 workstation with a framegrabber and a Canon VC-C1 camera. The application of the system has been demonstrated by a gaze-driven panorama image viewer. The potential applications of the system include multimodal interfaces, virtual reality and video-teleconferencing.
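
The general technique of recovering 3-D head pose from a few tracked facial landmarks with a full perspective model can be sketched with a perspective-n-point solve. This is not the authors' 1997 system; the landmark coordinates and camera intrinsics below are illustrative placeholders.

```python
# Sketch (assumed values): estimate 3-D head pose from six tracked facial
# feature points with a full perspective model, in the spirit of the abstract.
import numpy as np
import cv2

# Approximate 3-D positions (mm) of six facial landmarks in a head-fixed frame:
# eye corners, nostrils, lip corners. These numbers are illustrative only.
MODEL_POINTS = np.array([
    [-45.0,  35.0, -25.0],   # right eye outer corner
    [ 45.0,  35.0, -25.0],   # left eye outer corner
    [-12.0,   0.0,  -5.0],   # right nostril
    [ 12.0,   0.0,  -5.0],   # left nostril
    [-30.0, -35.0, -20.0],   # right lip corner
    [ 30.0, -35.0, -20.0],   # left lip corner
], dtype=np.float64)

def head_pose(image_points, frame_size):
    """image_points: 6x2 array of tracked pixel locations of the landmarks."""
    h, w = frame_size
    focal = w  # crude focal-length guess; a calibrated camera matrix is better
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros(4)  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS,
                                  np.asarray(image_points, np.float64),
                                  camera_matrix, dist_coeffs)
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 head rotation matrix
    return ok, rotation, tvec
```
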
APA, Harvard, Vancouver, ISO, and other styles
8

White, Robert L., and Lawrence H. Snyder. "A Neural Network Model of Flexible Spatial Updating." Journal of Neurophysiology 91, no. 4 (April 2004): 1608–19. http://dx.doi.org/10.1152/jn.00277.2003.

Full text
Abstract:
Neurons in many cortical areas involved in visuospatial processing represent remembered spatial information in retinotopic coordinates. During a gaze shift, the retinotopic representation of a target location that is fixed in the world (world-fixed reference frame) must be updated, whereas the representation of a target fixed relative to the center of gaze (gaze-fixed) must remain constant. To investigate how such computations might be performed, we trained a 3-layer recurrent neural network to store and update a spatial location based on a gaze perturbation signal, and to do so flexibly based on a contextual cue. The network produced an accurate readout of target position when cued to either reference frame, but was less precise when updating was performed. This output mimics the pattern of behavior seen in animals performing a similar task. We tested whether updating would preferentially use gaze position or gaze velocity signals, and found that the network strongly preferred velocity for updating world-fixed targets. Furthermore, we found that gaze position gain fields were not present when velocity signals were available for updating. These results have implications for how updating is performed in the brain.
APA, Harvard, Vancouver, ISO, and other styles
9

Glaholt, M. G., and E. M. Reingold. "Stimulus exposure and gaze bias: A further test of the gaze cascade model." Attention, Perception & Psychophysics 71, no. 3 (April 1, 2009): 445–50. http://dx.doi.org/10.3758/app.71.3.445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ko, Eun-Ji, and Myoung-Jun Kim. "User-Calibration Free Gaze Tracking System Model." Journal of the Korea Institute of Information and Communication Engineering 18, no. 5 (May 31, 2014): 1096–102. http://dx.doi.org/10.6109/jkiice.2014.18.5.1096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Murphy, Hunter A., Andrew T. Duchowski, and Richard A. Tyrrell. "Hybrid image/model-based gaze-contingent rendering." ACM Transactions on Applied Perception 5, no. 4 (January 2009): 1–21. http://dx.doi.org/10.1145/1462048.1462053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Molter, Felix, Armin W. Thomas, Scott A. Huettel, Hauke R. Heekeren, and Peter N. C. Mohr. "Gaze-dependent evidence accumulation predicts multi-alternative risky choice behaviour." PLOS Computational Biology 18, no. 7 (July 6, 2022): e1010283. http://dx.doi.org/10.1371/journal.pcbi.1010283.

Full text
Abstract:
Choices are influenced by gaze allocation during deliberation, so that fixating an alternative longer leads to increased probability of choosing it. Gaze-dependent evidence accumulation provides a parsimonious account of choices, response times and gaze-behaviour in many simple decision scenarios. Here, we test whether this framework can also predict more complex context-dependent patterns of choice in a three-alternative risky choice task, where choices and eye movements were subject to attraction and compromise effects. Choices were best described by a gaze-dependent evidence accumulation model, where subjective values of alternatives are discounted while not fixated. Finally, we performed a systematic search over a large model space, allowing us to evaluate the relative contribution of different forms of gaze-dependence and additional mechanisms previously not considered by gaze-dependent accumulation models. Gaze-dependence remained the most important mechanism, but participants with strong attraction effects employed an additional similarity-dependent inhibition mechanism found in other models of multi-alternative multi-attribute choice.
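
The core mechanism, discounting the subjective value of alternatives while they are not fixated as evidence races to a common boundary, can be sketched as a small simulation. Parameter values, names, and the noise model are assumptions; the paper's actual model space is considerably richer.

```python
# Minimal sketch (assumed parameters) of gaze-dependent evidence accumulation:
# unfixated alternatives are discounted by a factor theta while evidence races
# to a decision boundary.
import numpy as np

def simulate_choice(values, gaze_sequence, theta=0.5, drift=0.002,
                    noise_sd=0.02, boundary=1.0, seed=None):
    """values: subjective value of each alternative.
    gaze_sequence: fixated alternative index at each time step."""
    rng = np.random.default_rng(seed)
    gaze_sequence = list(gaze_sequence)
    values = np.asarray(values, float)
    evidence = np.zeros(len(values))
    for step, fixated in enumerate(gaze_sequence, start=1):
        gain = np.full(len(values), theta)   # unfixated items are discounted
        gain[fixated] = 1.0                  # fixated item gets full weight
        evidence += drift * gain * values + rng.normal(0.0, noise_sd, len(values))
        if evidence.max() >= boundary:
            return int(np.argmax(evidence)), step       # choice, RT in steps
    return int(np.argmax(evidence)), len(gaze_sequence)  # forced choice at timeout

# Example: three risky alternatives with gaze falling mostly on option 0.
choice, rt = simulate_choice([0.8, 0.7, 0.6], [0, 0, 1, 0, 2, 0] * 500, seed=1)
```
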
APA, Harvard, Vancouver, ISO, and other styles
13

Tello Gamarra, Daniel Fernando. "Utilizing Gaze Behavior for Inferring Task Transitions Using Abstract Hidden Markov Models." Inteligencia Artificial 19, no. 58 (December 18, 2016): 1. http://dx.doi.org/10.4114/intartif.vol19iss58pp1-16.

Full text
Abstract:
We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful in inferring hand movement intent during goal directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM). We show that the predicted hand movement transitions occur consistently earlier in AHMM models with gaze than those models that do not include gaze observations.
APA, Harvard, Vancouver, ISO, and other styles
14

ZHANG, JING, LI ZHUO, and YINGDI ZHAO. "REGION OF INTEREST DETECTION BASED ON VISUAL PERCEPTION MODEL." International Journal of Pattern Recognition and Artificial Intelligence 26, no. 02 (March 2012): 1255005. http://dx.doi.org/10.1142/s0218001412550051.

Full text
Abstract:
According to human vision theory, an image is conveyed from the human visual system to the brain when people look at a scene. Different from previous work, the study reported in this paper attempts to simulate a more real and complex method for region of interest (ROI) detection and quantitatively analyze the correlation between users' visual perception and ROI. In this paper, a visual perception model-based ROI detection is proposed, which can be realized with an ordinary web camera. The visual perception model employs a combination of a visual attention model and gaze tracking data to objectively detect ROIs. The work includes pre-ROI estimation using the visual attention model, gaze data collection and ROI detection. Pre-ROIs are segmented by the visual attention model. Since eye feature extraction is critical to the accuracy and performance of gaze tracking, an adaptive eye template and a neural network are employed to predict gaze points. By computing the density of the gaze points, ROIs are ranked. Experimental results show that the accuracy of our ROI detection method can be raised as high as 97%, and it is also demonstrated that our model can efficiently adapt to users' interests and match the objective ROI.
APA, Harvard, Vancouver, ISO, and other styles
15

D’Egidio, Angela. "The tourist gaze in English, Italian and German travel articles about Puglia: A corpus-based study." ICAME Journal 38, no. 1 (April 28, 2014): 57–72. http://dx.doi.org/10.2478/icame-2014-0003.

Full text
Abstract:
This paper shows how online travel articles may provide important insights into how a tourist destination is perceived and to what extent what is known as the ‘tourist gaze’ may be used to recontextualise tourist material in order to produce more effective tourist texts, which meet receivers’ expectations. For this purpose, three comparable corpora of online travel articles in English, Italian and German were assembled and analysed in order to understand the way ordinary travellers perceive and experience a tourist destination in Italy (Puglia) by taking language as a point of reference. The first fifteen words of the frequency lists in the three corpora highlighted what landmarks and elements of attraction English, Italian and German travel writers gaze at while on holiday in Puglia. The analysis demonstrated that the Italian tourist gaze is different from the English and German tourist gazes, since not all of them focus on the same landscapes, and even when they gaze at the same sights, their perception and representation are often different. The similarities and differences between the ways the tourists behave suggest a distinction between a model of ‘global gaze’ embodied by English and German travellers, seen as ‘outsiders’, and a model of ‘local gaze’ embodied by Italian tourists, seen as ‘insiders’.
APA, Harvard, Vancouver, ISO, and other styles
16

Prsa, Mario, and Henrietta L. Galiana. "Visual-Vestibular Interaction Hypothesis for the Control of Orienting Gaze Shifts by Brain Stem Omnipause Neurons." Journal of Neurophysiology 97, no. 2 (February 2007): 1149–62. http://dx.doi.org/10.1152/jn.00856.2006.

Full text
Abstract:
Models of combined eye-head gaze shifts all aim to realistically simulate behaviorally observed movement dynamics. One of the most problematic features of such models is their inability to determine when a saccadic gaze shift should be initiated and when it should be ended. This is commonly referred to as the switching mechanism mediated by omni-directional pause neurons (OPNs) in the brain stem. Proposed switching strategies implemented in existing gaze control models all rely on a sensory error between instantaneous gaze position and the spatial target. Accordingly, gaze saccades are initiated after presentation of an eccentric visual target and subsequently terminated when an internal estimate of gaze position becomes nearly equal to that of the target. Based on behavioral observations, we demonstrate that such a switching mechanism is insufficient and is unable to explain certain types of movements. We propose an improved hypothesis for how the OPNs control gaze shifts based on a visual-vestibular interaction of signals known to be carried on anatomical projections to the OPN area. The approach is justified by the analysis of recorded gaze shifts interrupted by a head brake in animal subjects and is demonstrated by implementing the switching mechanism in an anatomically based gaze control model. Simulated performance reveals that a weighted sum of three signals: gaze motor error, head velocity, and eye velocity, hypothesized as inputs to OPNs, successfully reproduces diverse behaviorally observed eye-head movements that no other existing model can account for.
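
The hypothesized switching rule, a weighted sum of gaze motor error, head velocity, and eye velocity compared against a threshold, can be written as a simple gating function. The weights and threshold below are placeholders, not values from the paper.

```python
# Hedged sketch of the hypothesized OPN gating rule: saccadic gaze shifts stay
# enabled while a weighted sum of drive signals exceeds a threshold.
# The weights and threshold are illustrative placeholders only.
def opn_pause(gaze_motor_error, head_velocity, eye_velocity,
              w_err=1.0, w_head=0.3, w_eye=0.2, threshold=2.0):
    """Return True if omnipause neurons should resume firing (end the gaze shift)."""
    drive = (w_err * abs(gaze_motor_error)
             + w_head * abs(head_velocity)
             + w_eye * abs(eye_velocity))
    return drive < threshold
```
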
APA, Harvard, Vancouver, ISO, and other styles
17

Guterstam, Arvid, Andrew I. Wilterson, Davis Wachtell, and Michael S. A. Graziano. "Other people’s gaze encoded as implied motion in the human brain." Proceedings of the National Academy of Sciences 117, no. 23 (May 26, 2020): 13162–67. http://dx.doi.org/10.1073/pnas.2003110117.

Full text
Abstract:
Keeping track of other people’s gaze is an essential task in social cognition and key for successfully reading other people’s intentions and beliefs (theory of mind). Recent behavioral evidence suggests that we construct an implicit model of other people’s gaze, which may incorporate physically incoherent attributes such as a construct of force-carrying beams that emanate from the eyes. Here, we used functional magnetic resonance imaging and multivoxel pattern analysis to test the prediction that the brain encodes gaze as implied motion streaming from an agent toward a gazed-upon object. We found that a classifier, trained to discriminate the direction of visual motion, significantly decoded the gaze direction in static images depicting a sighted face, but not a blindfolded one, from brain activity patterns in the human motion-sensitive middle temporal complex (MT+) and temporo-parietal junction (TPJ). Our results demonstrate a link between the visual motion system and social brain mechanisms, in which the TPJ, a key node in theory of mind, works in concert with MT+ to encode gaze as implied motion. This model may be a fundamental aspect of social cognition that allows us to efficiently connect agents with the objects of their attention. It is as if the brain draws a quick visual sketch with moving arrows to help keep track of who is attending to what. This implicit, fluid-flow model of other people’s gaze may help explain culturally universal myths about the mind as an energy-like, flowing essence.
APA, Harvard, Vancouver, ISO, and other styles
18

Shah, Sayyed Mudassar, Zhaoyun Sun, Khalid Zaman, Altaf Hussain, Muhammad Shoaib, and Lili Pei. "A Driver Gaze Estimation Method Based on Deep Learning." Sensors 22, no. 10 (May 23, 2022): 3959. http://dx.doi.org/10.3390/s22103959.

Full text
Abstract:
Car crashes are among the top ten leading causes of death; they could mainly be attributed to distracted drivers. An advanced driver-assistance technique (ADAT) is a procedure that can notify the driver about a dangerous scenario, reduce traffic crashes, and improve road safety. The main contribution of this work involved utilizing the driver’s attention to build an efficient ADAT. To obtain this “attention value”, the gaze tracking method is proposed. The gaze direction of the driver is critical toward understanding/discerning fatal distractions, pertaining to when it is obligatory to notify the driver about the risks on the road. A real-time gaze tracking system is proposed in this paper for the development of an ADAT that obtains and communicates the gaze information of the driver. The developed ADAT system detects various head poses of the driver and estimates eye gaze directions, which play important roles in assisting the driver and avoiding any unwanted circumstances. The first (and more significant) task in this research work involved the development of a benchmark image dataset consisting of head poses and horizontal and vertical direction gazes of the driver’s eyes. To detect the driver’s face accurately and efficiently, the You Only Look Once (YOLO-V4) face detector was used by modifying it with the Inception-v3 CNN model for robust feature learning and improved face detection. Finally, transfer learning in the InceptionResNet-v2 CNN model was performed, where the CNN was used as a classification model for head pose detection and eye gaze angle estimation; a regression layer was added to the InceptionResNet-v2 CNN in place of the SoftMax and classification output layers. The proposed model detects and estimates head pose directions and eye directions with higher accuracy. The average accuracy achieved by the head pose detection system was 91%; the model achieved an RMSE of 2.68 for vertical and 3.61 for horizontal eye gaze estimations.
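
The transfer-learning step, reusing InceptionResNet-v2 as a feature extractor with a regression layer in place of the classification output, can be sketched in Keras. Input size, layer widths, and training settings are assumptions, not the authors' exact network.

```python
# Sketch (assumed shapes/hyper-parameters): InceptionResNet-v2 backbone with a
# regression head predicting two gaze angles (vertical, horizontal), in the
# spirit of the transfer-learning setup described in the abstract.
import tensorflow as tf

def build_gaze_regressor(input_shape=(299, 299, 3)):
    base = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False  # freeze the backbone; fine-tune later if desired
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    outputs = tf.keras.layers.Dense(2, activation="linear", name="gaze_angles")(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model
```
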
APA, Harvard, Vancouver, ISO, and other styles
19

CHUMERIN, NIKOLAY, AGOSTINO GIBALDI, SILVIO P. SABATINI, and MARC M. VAN HULLE. "LEARNING EYE VERGENCE CONTROL FROM A DISTRIBUTED DISPARITY REPRESENTATION." International Journal of Neural Systems 20, no. 04 (August 2010): 267–78. http://dx.doi.org/10.1142/s0129065710002425.

Full text
Abstract:
We present two neural models for vergence angle control of a robotic head, a simplified and a more complex one. Both models work in a closed-loop manner and do not rely on explicitly computed disparity, but extract the desired vergence angle from the post-processed response of a population of disparity tuned complex cells, the actual gaze direction and the actual vergence angle. The first model assumes that the gaze direction of the robotic head is orthogonal to its baseline and the stimulus is a frontoparallel plane orthogonal to the gaze direction. The second model goes beyond these assumptions, and operates reliably in the general case where all restrictions on the orientation of the gaze, as well as the stimulus position, type and orientation, are dropped.
APA, Harvard, Vancouver, ISO, and other styles
20

VILLANUEVA, ARANTXA, RAFAEL CABEZA, and SONIA PORTA. "GAZE TRACKING SYSTEM MODEL BASED ON PHYSICAL PARAMETERS." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 05 (August 2007): 855–77. http://dx.doi.org/10.1142/s0218001407005697.

Full text
Abstract:
In the past years, research in eye tracking development and applications has attracted much attention and the possibility of interacting with a computer employing just gaze information is becoming more and more feasible. Efforts in eye tracking cover a broad spectrum of fields, system mathematical modeling being an important aspect in this research. Expressions relating the various elements and variables of the gaze tracker lead to establishing geometric relations and finding symmetrical behaviors of the human eye when looking at a screen. To this end, a deep knowledge of projective geometry as well as eye physiology and kinematics is essential. This paper presents a model for a bright-pupil technique tracker fully based on realistic parameters describing the system elements. The system so modeled is superior to one described by generic linear or quadratic expressions. Moreover, model symmetry knowledge leads to more effective and simpler calibration strategies, so that just two calibration points are needed to fit the optical axis and only three points to adjust the visual axis. This considerably reduces the calibration time required by systems that employ more calibration points and renders the model more attractive.
APA, Harvard, Vancouver, ISO, and other styles
21

Kanda, Daigo, Shin Kawai, and Hajime Nobuhara. "Visualization Method Corresponding to Regression Problems and Its Application to Deep Learning-Based Gaze Estimation Model." Journal of Advanced Computational Intelligence and Intelligent Informatics 24, no. 5 (September 20, 2020): 676–84. http://dx.doi.org/10.20965/jaciii.2020.p0676.

Full text
Abstract:
The human gaze contains substantial personal information and can be extensively employed in several applications if its relevant factors can be accurately measured. Further, several fields could be substantially innovated if the gaze could be analyzed using popular and familiar smart devices. Deep learning-based methods are robust, making them crucial for gaze estimation on smart devices. However, because internal functions in deep learning are black boxes, deep learning systems often make estimations for unclear reasons. In this paper, we propose a visualization method corresponding to a regression problem to solve the black box problem of the deep learning-based gaze estimation model. The proposed visualization method can clarify which region of an image contributes to deep learning-based gaze estimation. We visualized the gaze estimation model proposed by a research group at the Massachusetts Institute of Technology. The accuracy of the estimation was low, even when the facial features important for gaze estimation were recognized correctly. The effectiveness of the proposed method was further determined through quantitative evaluation using the area over the MoRF perturbation curve (AOPC).
APA, Harvard, Vancouver, ISO, and other styles
22

Denil, Misha, Loris Bazzani, Hugo Larochelle, and Nando de Freitas. "Learning Where to Attend with Deep Architectures for Image Tracking." Neural Computation 24, no. 8 (August 2012): 2151–84. http://dx.doi.org/10.1162/neco_a_00312.

Full text
Abstract:
We discuss an attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of perception, the model consists of two interacting pathways, identity and control, intended to mirror the what and where pathways in neuroscience models. The identity pathway models object appearance and performs classification using deep (factored)-restricted Boltzmann machines. At each point in time, the observations consist of foveated images, with decaying resolution toward the periphery of the gaze. The control pathway models the location, orientation, scale, and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the control pathway, we encounter an attentional mechanism that learns to select gazes so as to minimize tracking uncertainty. Unlike in our previous work, we introduce gaze selection strategies that operate in the presence of partial information and on a continuous action space. We show that a straightforward extension of the existing approach to the partial information setting results in poor performance, and we propose an alternative method based on modeling the reward surface as a gaussian process. This approach gives good performance in the presence of partial information and allows us to expand the action space from a small, discrete set of fixation points to a continuous domain.
APA, Harvard, Vancouver, ISO, and other styles
23

Martinez-Trujillo, Julio C., Eliana M. Klier, Hongying Wang, and J. Douglas Crawford. "Contribution of Head Movement to Gaze Command Coding in Monkey Frontal Cortex and Superior Colliculus." Journal of Neurophysiology 90, no. 4 (October 2003): 2770–76. http://dx.doi.org/10.1152/jn.00330.2003.

Full text
Abstract:
Most of what we know about the neural control of gaze comes from experiments in head-fixed animals, but several “head-free” studies have suggested that fixing the head dramatically alters the apparent gaze command. We directly investigated this issue by quantitatively comparing head-fixed and head-free gaze trajectories evoked by electrically stimulating 52 sites in the superior colliculus (SC) of two monkeys and 23 sites in the supplementary eye fields (SEF) of two other monkeys. We found that head movements made a significant contribution to gaze shifts evoked from both neural structures. In the majority of the stimulated sites, average gaze amplitude was significantly larger and individual gaze trajectories were significantly less convergent in space with the head free to move. Our results are consistent with the hypothesis that head-fixed stimulation only reveals the oculomotor component of the gaze shift, not the true, planned goal of the movement. One implication of this finding is that when comparing stimulation data against popular gaze control models, freeing the head shifts the apparent coding of gaze away from a “spatial code” toward a simpler visual model in the SC and toward an eye-centered or fixed-vector model representation in the SEF.
APA, Harvard, Vancouver, ISO, and other styles
24

Yu, Fang Ming, and Tsai Cheng Li. "Gaze Recognition and Application." Advanced Materials Research 650 (January 2013): 553–58. http://dx.doi.org/10.4028/www.scientific.net/amr.650.553.

Full text
Abstract:
This paper presents a method of controlling the cursor based on Fisher linear discriminant (FLD) analysis, which recognizes six classifications of human face movements: face movements upwards, downwards, leftwards and rightwards, blinking of the right eye, and blinking of the left eye. These classifications represent cursor movements upward, downward, leftward and rightward, and the right or left button on the mouse, respectively. The method can be separated into two areas: face detection and gaze recognition. Face detection converts the RGB color space into YCbCr in order to segment skin-tone areas, where the human face area is located by using the connected-component labeling method. Gaze recognition is accomplished by building gaze recognition training model parameters through the FLD algorithm. Then, using the Euclidean distance as the decision rule, the detected facial image is matched to the parameters of this model to find the shortest Euclidean distance and its corresponding classification to control cursor movements.
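
The recognition scheme, projecting face images with Fisher linear discriminant analysis and then matching a new sample to the nearest class in the discriminant space by Euclidean distance, can be sketched with scikit-learn. The data layout and class names are assumptions.

```python
# Sketch (assumed data layout): Fisher LDA projection followed by a
# nearest-class-mean decision using Euclidean distance, mirroring the
# recognition scheme described in the abstract.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

CLASSES = ["up", "down", "left", "right", "blink_right", "blink_left"]

def train(features, labels):
    """features: (n_samples, n_pixels) flattened face images; labels: class indices."""
    labels = np.asarray(labels)
    lda = LinearDiscriminantAnalysis()
    projected = lda.fit_transform(features, labels)
    centroids = np.array([projected[labels == c].mean(axis=0)
                          for c in range(len(CLASSES))])
    return lda, centroids

def classify(lda, centroids, feature):
    z = lda.transform(np.asarray(feature).reshape(1, -1))[0]
    distances = np.linalg.norm(centroids - z, axis=1)  # Euclidean decision rule
    return CLASSES[int(np.argmin(distances))]
```
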
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Qiuzhen, Lan Ma, Liqiang Huang, and Lei Wang. "Effect of the model eye gaze direction on consumer information processing: a consideration of gender differences." Online Information Review 44, no. 7 (October 15, 2020): 1403–20. http://dx.doi.org/10.1108/oir-01-2020-0025.

Full text
Abstract:
Purpose: This paper aims to investigate the effect of a model's eye gaze direction on the information processing behavior of consumers varying based on their gender. Design/methodology/approach: An eye-tracking experiment and a memory test are conducted to test the research hypotheses. Findings: Compared to an averted gaze, a model with a direct gaze attracts more attention to the model's face among male consumers, leading to deeper processing. However, the findings show that when a model displays a direct gaze rather than an averted gaze, female consumers pay more attention to the brand name, thus leading to deeper processing. Originality/value: This study contributes to not only the existing eye gaze direction literature by integrating the facilitative effect of direct gaze and considering the moderating role of consumer gender on consumer information processing but also the literature concerning the selectivity hypothesis by providing evidence of gender differences in information processing. Moreover, this study offers practical insights to practitioners regarding how to design appealing webpages to satisfy consumers of different genders. Peer review: The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-01-2020-0025
APA, Harvard, Vancouver, ISO, and other styles
26

Farshadmanesh, Farshad, Patrick Byrne, Gerald P. Keith, Hongying Wang, Brian D. Corneil, and J. Douglas Crawford. "Cross-validated models of the relationships between neck muscle electromyography and three-dimensional head kinematics during gaze behavior." Journal of Neurophysiology 107, no. 2 (January 15, 2012): 573–90. http://dx.doi.org/10.1152/jn.00315.2011.

Full text
Abstract:
The object of this study was to model the relationship between neck electromyography (EMG) and three-dimensional (3-D) head kinematics during gaze behavior. In two monkeys, we recorded 3-D gaze, head orientation, and bilateral EMG activity in the sternocleidomastoid, splenius capitis, complexus, biventer cervicis, rectus capitis posterior major, and occipital capitis inferior muscles. Head-unrestrained animals fixated and made gaze saccades between targets within a 60° × 60° grid. We performed a stepwise regression in which polynomial model terms were retained/rejected based on their tendency to increase/decrease a cross-validation-based measure of model generalizability. This revealed several results that could not have been predicted from knowledge of musculoskeletal anatomy. During head holding, EMG activity in most muscles was related to horizontal head orientation, whereas fewer muscles correlated to vertical head orientation and none to small random variations in head torsion. A fourth-order polynomial model, with horizontal head orientation as the only independent variable, generalized nearly as well as higher order models. For head movements, we added time-varying linear and nonlinear perturbations in velocity and acceleration to the previously derived static (head holding) models. The static models still explained most of the EMG variance, but the additional motion terms, which included horizontal, vertical, and torsional contributions, significantly improved the results. Several coordinate systems were used for both static and dynamic analyses, with Fick coordinates showing a marginal (nonsignificant) advantage. Thus, during gaze fixations, recruitment within the neck muscles from which we recorded contributed primarily to position-dependent horizontal orientation terms in our data set, with more complex multidimensional contributions emerging during the head movements that accompany gaze shifts. These are crucial components of the late neuromuscular transformations in a complete model of 3-D head-neck system and should help constrain the study of premotor signals for head control during gaze behaviors.
APA, Harvard, Vancouver, ISO, and other styles
27

Matessa, Michael, and Roger Remington. "Eye Movements in Human Performance Modeling of Space Shuttle Operations." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 12 (September 2005): 1114–18. http://dx.doi.org/10.1177/154193120504901203.

Full text
Abstract:
The goal of our research is to easily develop models that predict astronaut performance in space shuttle operations, but it is difficult to make extrapolations from astronaut training data. A solution is to decompose a complex task into a set of basic operators which are sequenced to create longer chains of behavior. In this modeling project, gaze durations and sequences are predicted and compared to the performance of novice (trained pilots) and expert (astronaut) space shuttle operators. The model makes generally good zero-parameter predictions of gaze durations, but there are notable discrepancies. The gaze sequence of the model is more similar to expert performance than novice performance, but there are differences from both. It appears that with more training, experts develop different gaze sequence strategies than novices due to familiarity with fault messages and procedures. Future modeling efforts should have their gaze sequence strategies based on expert performance.
APA, Harvard, Vancouver, ISO, and other styles
28

Zhang, Xuguang, Jianping Gao, Li Liao, and Guoxiong Wu. "Expressway Lane Change in Fog Environment by Dynamic Strategic Game." Journal of Advanced Transportation 2022 (August 22, 2022): 1–10. http://dx.doi.org/10.1155/2022/8612955.

Full text
Abstract:
To study the behavior of expressway driver’s lane change in a foggy environment, the driver’s vision is adopted as the index for assessing the lane change behavior. The normalization theory is introduced to analyze the driver’s intention of changing lanes. Using the statistical examination results of the driver’s gaze zones, this paper has analyzed the gaze and glance features of the driver during the change of lanes. Based on the driving condition of changing lanes in foggy environment, the game theory is adopted in the study to elaborate on the strategy equilibrium of the driver’s decision-making in the foggy environment. Moreover, a model is established to analyze the driver’s lane changes under the foggy environment. Through the calibration of parameters and verification of the model, the reliability of the model has been proved. The research findings indicate that the features of the driver’s gaze and glance under circumstances of the fine weather differ from those in the foggy environment, thus laying a theoretical foundation for the safety management of subsequent lane changes in the expressway in foggy environment.
APA, Harvard, Vancouver, ISO, and other styles
29

Corneil, Brian D., and James K. Elsley. "Countermanding Eye-Head Gaze Shifts in Humans: Marching Orders Are Delivered to the Head First." Journal of Neurophysiology 94, no. 1 (July 2005): 883–95. http://dx.doi.org/10.1152/jn.01171.2004.

Full text
Abstract:
The countermanding task requires subjects to cancel a planned movement on appearance of a stop signal, providing insights into response generation and suppression. Here, we studied human eye-head gaze shifts in a countermanding task with targets located beyond the horizontal oculomotor range. Consistent with head-restrained saccadic countermanding studies, the proportion of gaze shifts on stop trials increased the longer the stop signal was delayed after target presentation, and gaze shift stop-signal reaction times (SSRTs: a derived statistic measuring how long it takes to cancel a movement) averaged ∼120 ms across seven subjects. We also observed a marked proportion of trials (13% of all stop trials) during which gaze remained stable but the head moved toward the target. Such head movements were more common at intermediate stop signal delays. We never observed the converse sequence wherein gaze moved while the head remained stable. SSRTs for head movements averaged ∼190 ms or ∼70–75 ms longer than gaze SSRTs. Although our findings are inconsistent with a single race to threshold as proposed for controlling saccadic eye movements, movement parameters on stop trials attested to interactions consistent with a race model architecture. To explain our data, we tested two extensions to the saccadic race model. The first assumed that gaze shifts and head movements are controlled by parallel but independent races. The second model assumed that gaze shifts and head movements are controlled by a single race, preceded by terminal ballistic intervals not under inhibitory control, and that the head-movement branch is activated at a lower threshold. Although simulations of both models produced acceptable fits to the empirical data, we favor the second alternative as it is more parsimonious with recent findings in the oculomotor system. Using the second model, estimates for gaze and head ballistic intervals were ∼25 and 90 ms, respectively, consistent with the known physiology of the final motor paths. Further, the threshold of the head movement branch was estimated to be 85% of that required to activate gaze shifts. From these results, we conclude that a commitment to a head movement is made in advance of gaze shifts and that the comparative SSRT differences result primarily from biomechanical differences inherent to eye and head motion.
APA, Harvard, Vancouver, ISO, and other styles
30

Johnson, Leif, Brian Sullivan, Mary Hayhoe, and Dana Ballard. "Predicting human visuomotor behaviour in a driving task." Philosophical Transactions of the Royal Society B: Biological Sciences 369, no. 1636 (February 19, 2014): 20130044. http://dx.doi.org/10.1098/rstb.2013.0044.

Full text
Abstract:
The sequential deployment of gaze to regions of interest is an integral part of human visual function. Owing to its central importance, decades of research have focused on predicting gaze locations, but there has been relatively little formal attempt to predict the temporal aspects of gaze deployment in natural multi-tasking situations. We approach this problem by decomposing complex visual behaviour into individual task modules that require independent sources of visual information for control, in order to model human gaze deployment on different task-relevant objects. We introduce a softmax barrier model for gaze selection that uses two key elements: a priority parameter that represents task importance per module, and noise estimates that allow modules to represent uncertainty about the state of task-relevant visual information. Comparisons with human gaze data gathered in a virtual driving environment show that the model closely approximates human performance.
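
The softmax barrier idea, in which each task module carries a priority and an uncertainty estimate and gaze is allocated by a softmax draw over the modules, can be sketched as follows. The scoring form, priorities, and temperature are placeholder assumptions rather than the paper's fitted model.

```python
# Illustrative sketch (assumed form/values) of softmax-based gaze scheduling:
# modules with higher task priority and higher state uncertainty are more
# likely to receive the next fixation.
import numpy as np

def next_fixation(priorities, uncertainties, temperature=1.0, seed=None):
    """priorities, uncertainties: one value per task module (e.g. speed, lane, lead car)."""
    rng = np.random.default_rng(seed)
    score = np.asarray(priorities, float) * np.asarray(uncertainties, float)
    logits = score / temperature
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    return rng.choice(len(p), p=p)      # index of the module to look at next

# Example: three driving sub-tasks with growing uncertainty about the lead car.
module = next_fixation(priorities=[0.5, 0.2, 0.3],
                       uncertainties=[0.1, 0.4, 0.9], seed=2)
```
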
APA, Harvard, Vancouver, ISO, and other styles
31

Hwang, Bor-Jiunn, Hui-Hui Chen, Chaur-Heh Hsieh, and Deng-Yu Huang. "Gaze Tracking Based on Concatenating Spatial-Temporal Features." Sensors 22, no. 2 (January 11, 2022): 545. http://dx.doi.org/10.3390/s22020545.

Full text
Abstract:
Based on experimental observations, there is a correlation between time and consecutive gaze positions in visual behaviors. Previous studies on gaze point estimation usually use images as the input for model training without taking into account the sequence relationship between image data. In addition to the spatial features, the temporal features are considered to improve the accuracy in this paper by using videos instead of images as the input data. To be able to capture spatial and temporal features at the same time, the convolutional neural network (CNN) and long short-term memory (LSTM) network are introduced to build a training model. In this way, CNN is used to extract the spatial features, and LSTM correlates temporal features. This paper presents a CNN Concatenating LSTM network (CCLN) that concatenates spatial and temporal features to improve the performance of gaze estimation in the case of time-series videos as the input training data. In addition, the proposed model can be optimized by exploring the numbers of LSTM layers, the influence of batch normalization (BN) and global average pooling layer (GAP) on CCLN. It is generally believed that larger amounts of training data will lead to better models. To provide data for training and prediction, we propose a method for constructing datasets of video for gaze point estimation. Several issues are studied, including the effectiveness of different commonly used general models and the impact of transfer learning. Through exhaustive evaluation, it has been proved that the proposed method achieves a better prediction accuracy than the existing CNN-based methods. Finally, an accuracy of 93.1% is obtained for the best model and 92.6% for the general model MobileNet.
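
The central architecture, per-frame spatial features from a CNN that are then correlated over time by an LSTM, can be sketched in Keras. The input shape, backbone, and layer sizes below are assumptions, not the published CCLN configuration.

```python
# Sketch (assumed configuration): per-frame CNN features fed to an LSTM to
# regress gaze coordinates from a short video clip, in the spirit of CCLN.
import tensorflow as tf

def build_cnn_lstm(frames=16, height=64, width=64, channels=3):
    frame_encoder = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),   # GAP instead of Flatten
    ])
    inputs = tf.keras.Input(shape=(frames, height, width, channels))
    x = tf.keras.layers.TimeDistributed(frame_encoder)(inputs)  # spatial features
    x = tf.keras.layers.LSTM(128)(x)                            # temporal features
    outputs = tf.keras.layers.Dense(2, name="gaze_xy")(x)       # screen coordinates
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model
```
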
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Ting-Hao, Hiromasa Suzuki, and Yutaka Ohtake. "Visualization of user’s attention on objects in 3D environment using only eye tracking glasses." Journal of Computational Design and Engineering 7, no. 2 (March 30, 2020): 228–37. http://dx.doi.org/10.1093/jcde/qwaa019.

Full text
Abstract:
Eye tracking technology is widely applied to detect user’s attention in a 2D field, such as web page design, package design, and shooting games. However, because our surroundings primarily consist of 3D objects, applications will be expanded if there is an effective method to obtain and display user’s 3D gaze fixation. In this research, a methodology is proposed to demonstrate the user’s 3D gaze fixation on a digital model of a scene using only a pair of eye tracking glasses. The eye tracking glasses record user’s gaze data and scene video. Thus, using image-based 3D reconstruction, a 3D model of the scene can be reconstructed from the frame images; simultaneously, the transformation matrix of each frame image can be evaluated to find 3D gaze fixation on the 3D model. In addition, a method that demonstrates multiple users’ 3D gaze fixation on the same digital model is presented to analyze gaze distinction between different subjects. With this preliminary development, this approach shows potential to be applied to a larger environment and conduct a more reliable investigation.
APA, Harvard, Vancouver, ISO, and other styles
33

Kim, K. Han, Matthew P. Reed, and Bernard J. Martin. "A model of head movement contribution for gaze transitions." Ergonomics 53, no. 4 (March 22, 2010): 447–57. http://dx.doi.org/10.1080/00140130903483713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Pai, Dinesh. "Smooth pursuit and gaze stabilization: an integrated computational model." Journal of Vision 16, no. 12 (September 1, 2016): 1344. http://dx.doi.org/10.1167/16.12.1344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Hoffman, Matthew W., David B. Grimes, Aaron P. Shon, and Rajesh P. N. Rao. "A probabilistic model of gaze imitation and shared attention." Neural Networks 19, no. 3 (April 2006): 299–310. http://dx.doi.org/10.1016/j.neunet.2006.02.008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Jung, Do-Joon, and Jeong-Oh Yoon. "Human Activity Recognition using Model-based Gaze Direction Estimation." Journal of the Korea Industrial Information Systems Research 16, no. 4 (December 30, 2011): 9–18. http://dx.doi.org/10.9723/jksiis.2011.16.4.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Constantin, A. G., H. Wang, J. C. Martinez-Trujillo, and J. D. Crawford. "Frames of Reference for Gaze Saccades Evoked During Stimulation of Lateral Intraparietal Cortex." Journal of Neurophysiology 98, no. 2 (August 2007): 696–709. http://dx.doi.org/10.1152/jn.00206.2007.

Full text
Abstract:
Previous studies suggest that stimulation of lateral intraparietal cortex (LIP) evokes saccadic eye movements toward eye- or head-fixed goals, whereas most single-unit studies suggest that LIP uses an eye-fixed frame with eye-position modulations. The goal of our study was to determine the reference frame for gaze shifts evoked during LIP stimulation in head-unrestrained monkeys. Two macaques ( M1 and M2) were implanted with recording chambers over the right intraparietal sulcus and with search coils for recording three-dimensional eye and head movements. The LIP region was microstimulated using pulse trains of 300 Hz, 100–150 μA, and 200 ms. Eighty-five putative LIP sites in M1 and 194 putative sites in M2 were used in our quantitative analysis throughout this study. Average amplitude of the stimulation-evoked gaze shifts was 8.67° for M1 and 7.97° for M2 with very small head movements. When these gaze-shift trajectories were rotated into three coordinate frames (eye, head, and body), gaze endpoint distribution for all sites was most convergent to a common point when plotted in eye coordinates. Across all sites, the eye-centered model provided a significantly better fit compared with the head, body, or fixed-vector models (where the latter model signifies no modulation of the gaze trajectory as a function of initial gaze position). Moreover, the probability of evoking a gaze shift from any one particular position was modulated by the current gaze direction (independent of saccade direction). These results provide causal evidence that the motor commands from LIP encode gaze command in eye-fixed coordinates but are also subtly modulated by initial gaze position.
APA, Harvard, Vancouver, ISO, and other styles
38

Brousseau, Braiden, Jonathan Rose, and Moshe Eizenman. "Accurate Model-Based Point of Gaze Estimation on Mobile Devices." Vision 2, no. 3 (August 24, 2018): 35. http://dx.doi.org/10.3390/vision2030035.

Full text
Abstract:
The most accurate remote Point of Gaze (PoG) estimation methods that allow free head movements use infrared light sources and cameras together with gaze estimation models. Current gaze estimation models were developed for desktop eye-tracking systems and assume that the relative roll between the system and the subjects’ eyes (the 'R-Roll') is roughly constant during use. This assumption is not true for hand-held mobile-device-based eye-tracking systems. We present an analysis that shows the accuracy of estimating the PoG on screens of hand-held mobile devices depends on the magnitude of the R-Roll angle and the angular offset between the visual and optical axes of the individual viewer. We also describe a new method to determine the PoG which compensates for the effects of R-Roll on the accuracy of the PoG. Experimental results on a prototype infrared smartphone show that for an R-Roll angle of 90°, the new method achieves an accuracy of approximately 1°, while a gaze estimation method that assumes that the R-Roll angle remains constant achieves an accuracy of 3.5°. The manner in which the experimental PoG estimation errors increase with the increase in the R-Roll angle was consistent with the analysis. The method presented in this paper can significantly improve the performance of eye-tracking systems on hand-held mobile devices.
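
The geometric intuition behind the R-Roll compensation, rotating the per-user offset between visual and optical axes by the measured roll before applying it to the optical-axis point of gaze, can be illustrated with a 2-D rotation. This is a simplified sketch of the idea only, not the paper's full model-based method; all symbols are assumptions.

```python
# Simplified sketch (not the paper's full model): rotate the subject-specific
# angular offset between visual and optical axes by the measured R-Roll angle
# before applying it to the optical-axis point of gaze.
import numpy as np

def compensate_r_roll(pog_optical_xy, kappa_offset_xy, r_roll_deg):
    """pog_optical_xy: on-screen point of gaze from the optical axis (e.g. mm).
    kappa_offset_xy: per-user offset calibrated at roll = 0.
    r_roll_deg: relative roll between device and eyes."""
    a = np.radians(r_roll_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return np.asarray(pog_optical_xy, float) + rot @ np.asarray(kappa_offset_xy, float)
```
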
APA, Harvard, Vancouver, ISO, and other styles
39

Bai, Kemeng, Jianzhong Wang, Hongfeng Wang, and Xinlin Chen. "A Study of Eye-Tracking Gaze Point Classification and Application Based on Conditional Random Field." Applied Sciences 12, no. 13 (June 25, 2022): 6462. http://dx.doi.org/10.3390/app12136462.

Full text
Abstract:
Head-mounted eye-tracking technology is often used to manipulate the motion of a servo platform in remote tasks, so as to achieve visual aiming of the servo platform, which is a highly integrated human-computer interaction effect. However, it is difficult to achieve accurate manipulation because of the uncertain meanings of gaze points in eye-tracking. To solve this problem, a method of classifying gaze points based on a conditional random field is proposed. It first describes the features of gaze points and gaze images, according to the visual characteristics of the eye. An LSTM model is then introduced to merge these two features. Afterwards, the merged features are learned by a CRF model to obtain the classified gaze points. Finally, the meaning of each gaze point is classified with respect to the target, in order to accurately manipulate the servo platform. The experimental results show that the proposed method can classify target gaze points more accurately for 100 images, with average evaluation values of Precision = 86.81%, Recall = 86.79%, We = 86.79%, which are better than those of related methods. In addition, isolated gaze points can be eliminated, and the meanings of gaze points can be classified to achieve accurate visual aiming of the servo platform.
APA, Harvard, Vancouver, ISO, and other styles
40

Takahashi, Ryo, Hiromasa Suzuki, Jouh Yeong Chew, Yutaka Ohtake, Yukie Nagai, and Koichi Ohtomi. "A system for three-dimensional gaze fixation analysis using eye tracking glasses." Journal of Computational Design and Engineering 5, no. 4 (December 30, 2017): 449–57. http://dx.doi.org/10.1016/j.jcde.2017.12.007.

Full text
Abstract:
Eye tracking is a technology that has quickly become a commonplace tool for evaluating package and webpage design. In such design processes, static two-dimensional images are shown on a computer screen while a subject's gaze (where he or she looks) is measured via an eye tracking device. The collected gaze fixation data are then visualized and analyzed via gaze plots and heat maps. Such evaluations using two-dimensional images are often too limited to analyze gaze on three-dimensional physical objects such as products because users look at them not from a single point of view but rather from various angles. Therefore, in this study we propose methods for collecting gaze fixation data for a three-dimensional model of a given product and visualizing corresponding gaze plots and heat maps also in three dimensions. To achieve our goals, we used a wearable eye-tracking device, i.e., eye-tracking glasses. Further, we implemented a prototype system to demonstrate its advantages in comparison with two-dimensional gaze fixation methods. Highlights: (1) a method for collecting gaze fixation data for a three-dimensional model of a given product; (2) two visualization methods for three-dimensional gaze data, gaze plots and heat maps; (3) application of the proposed system to two practical examples, a hair dryer and a car interior.
APA, Harvard, Vancouver, ISO, and other styles
41

Han, J., and S. Lee. "RESIDENT'S SATISFACTION IN STREET LANDSCAPE USING THE IMMERSIVE VIRTUAL ENVIRONMENT-BASED EYE-TRACKING TECHNIQUE AND DEEP LEARNING MODEL." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-4/W4-2022 (October 14, 2022): 45–52. http://dx.doi.org/10.5194/isprs-archives-xlviii-4-w4-2022-45-2022.

Full text
Abstract:
Virtual reality technology provides a significant clue to understanding the human visual perception process by enabling the interaction between humans and computers. In addition, deep learning techniques in the visual field provide analysis methods for image classification, processing, and segmentation. This study reviewed the applicability of gaze movement and deep learning-based satisfaction evaluation on the landscape using an immersive virtual reality-based eye-tracking device. To this end, the following research procedures were established and analysed. First, the gaze movement of the test taker is measured using an immersive virtual environment-based eye tracker. The relationship between the gaze movement pattern of the test taker and the satisfaction evaluation result for the landscape image is analysed. Second, using the Convolutional Neural Networks (CNN)-based Class Activation Map (CAM) technique, a model for estimating the satisfaction evaluation result is constructed, and the gaze pattern of the test taker is derived. Third, we compare and analyse the similarity between the gaze heat map derived through the immersive virtual environment-based gaze tracker and the heat map generated by CAM. This study suggests the applicability of urban environment technology and deep learning methods to understand landscape planning factors that affect urban landscape satisfaction, resulting from the three-dimensional and immediate visual cognitive activity.
APA, Harvard, Vancouver, ISO, and other styles
42

Pierno, Andrea C., Cristina Becchio, Matthew B. Wall, Andrew T. Smith, Luca Turella, and Umberto Castiello. "When Gaze Turns into Grasp." Journal of Cognitive Neuroscience 18, no. 12 (December 2006): 2130–37. http://dx.doi.org/10.1162/jocn.2006.18.12.2130.

Full text
Abstract:
Previous research has provided evidence for a neural system underlying the observation of another person's hand actions. Is the neural system involved in this capacity also important in inferring another person's motor intentions toward an object from their eye gaze? In real-life situations, humans use eye movements to catch and direct the attention of others, often without any accompanying hand movements or speech. In an event-related functional magnetic resonance imaging study, subjects observed videos showing a human model either grasping a target object (grasping condition) or simply gazing (gaze condition) at the same object. These two conditions were contrasted with each other and against a control condition in which the human model was standing behind the object without performing any gazing or grasping action. The results revealed activations within the dorsal premotor cortex, the inferior frontal gyrus, the inferior parietal lobule, and the superior temporal sulcus in both “grasping” and “gaze” conditions. These findings suggest that signaling the presence of an object through gaze elicits in an observer a similar neural response to that elicited by the observation of a reach-to-grasp action performed on the same object.
APA, Harvard, Vancouver, ISO, and other styles
43

Yan, Qiunv, Weiwei Zhang, Wenhao Hu, Guohua Cui, Dan Wei, and Jiejie Xu. "Gaze dynamics with spatiotemporal guided feature descriptor for prediction of driver’s maneuver behavior." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 235, no. 12 (March 31, 2021): 3051–65. http://dx.doi.org/10.1177/09544070211007807.

Full text
Abstract:
At different levels of driving automation, the driver's gaze remains indispensable for semantic perception of the surroundings. In this work, we model gaze dynamics and clarify their relationship with the driver's maneuver behaviors from the perspective of personalized driving style. First, this paper proposes an Occlusion-immune Face Detector (OFD) for facial landmark detection, which adaptively handles facial occlusion introduced by the body and by glasses frames in real-world driving scenarios. Meanwhile, an Eye-head Coordination Model is introduced to bridge the error gap in gaze direction by determining a fused eye-pose and head-pose pattern. Then, a vectorized spatiotemporal guidance feature (STGF) descriptor combining gaze accumulation and gaze transition frequency is proposed to characterize gaze dynamics within a time window. Finally, we predict the driver's maneuver behavior from the STGF descriptor while accounting for different driving styles, clarifying the relationship between gaze dynamics and the driving maneuver task. Natural driving data were sampled on a dedicated instrumented vehicle testbed with 15 participating drivers covering three kinds of driving styles. Experimental results show that, compared with other approaches, the prediction model achieves the best performance, anticipating the driver's behavior an average of 1 s ahead of the actual maneuver with 83.6% accuracy when driving style is considered.
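The STGF descriptor combines gaze accumulation (dwell per gaze zone) with gaze transition frequency (zone-to-zone switches) inside a time window. The sketch below, with a hypothetical set of gaze zones and a fixed window length, shows one plausible way to vectorize these two components; it is not the authors' exact formulation.

import numpy as np

def stgf_descriptor(zone_sequence, n_zones):
    """Build a spatiotemporal guidance feature from a window of gaze-zone labels.

    zone_sequence : 1-D array of integer zone labels (one per frame in the window)
    n_zones       : number of predefined gaze zones (e.g. mirrors, road ahead, cluster)
    Returns the concatenation of normalized dwell times and transition frequencies.
    """
    zone_sequence = np.asarray(zone_sequence)
    dwell = np.bincount(zone_sequence, minlength=n_zones) / len(zone_sequence)
    transitions = np.zeros((n_zones, n_zones))
    for a, b in zip(zone_sequence[:-1], zone_sequence[1:]):
        if a != b:
            transitions[a, b] += 1                     # count zone-to-zone switches
    transitions /= max(1, len(zone_sequence) - 1)
    return np.concatenate([dwell, transitions.ravel()])

if __name__ == "__main__":
    window = [0, 0, 1, 1, 1, 2, 0, 0, 3, 3]            # toy gaze-zone labels within one window
    print(stgf_descriptor(window, n_zones=4).shape)    # (4 + 16,) feature vector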
APA, Harvard, Vancouver, ISO, and other styles
44

Zhu, Bo, Peng Yun Zhang, Jian Nan Chi, and Tian Xia Zhang. "Gaze Estimation Based on Single Camera." Advanced Materials Research 655-657 (January 2013): 1066–76. http://dx.doi.org/10.4028/www.scientific.net/amr.655-657.1066.

Full text
Abstract:
A new gaze tracking method for a single-camera gaze tracking system is proposed. The method comprises three stages: face and eye localization, eye feature detection with gaze parameter extraction, and ELM-based gaze point estimation. For face and eye localization, a detection method combining a skin-color model with AdaBoost is used for fast face detection. For feature extraction, image processing methods detect eye features such as the iris center and the inner eye corner, and the gaze parameter, i.e., the vector from the iris center to the eye corner, is obtained. Finally, an ELM-based method estimates the gaze point on the screen by learning the mapping between the gaze parameter and the gaze point. Experimental results show that the proposed method performs gaze estimation effectively in a single-camera gaze tracking system.
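The ELM mapping from the iris-center-to-eye-corner vector to a screen point can be sketched directly: random input weights, a sigmoid hidden layer, and output weights solved in closed form with a pseudo-inverse. The hidden-layer size, activation, and synthetic calibration data below are illustrative assumptions.

import numpy as np

class ELMRegressor:
    """Minimal Extreme Learning Machine: random hidden layer + least-squares output weights."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                  # random biases
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))              # sigmoid hidden layer
        self.beta = np.linalg.pinv(H) @ Y                             # closed-form output weights
        return self

    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
        return H @ self.beta

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    gaze_vec = rng.normal(size=(200, 2))                 # iris-center-to-eye-corner vectors
    screen_xy = gaze_vec @ np.array([[300.0, 40.0], [30.0, 200.0]]) + rng.normal(scale=5, size=(200, 2))
    model = ELMRegressor(n_hidden=40).fit(gaze_vec[:150], screen_xy[:150])
    err = np.abs(model.predict(gaze_vec[150:]) - screen_xy[150:]).mean()
    print(f"mean abs gaze-point error: {err:.1f} px")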
APA, Harvard, Vancouver, ISO, and other styles
45

Ideström, Jonas. "‘It is that Loving Gaze’." Ecclesial Practices 2, no. 1 (May 8, 2015): 108–19. http://dx.doi.org/10.1163/22144471-00201003.

Full text
Abstract:
The aim of this article is to study ecclesiological themes and patterns in material from a structured group conversation in a rural parish of the Church of Sweden. The conversation was conducted in relation to a Gospel narrative, and the content is analysed in search of expressive ecclesiologies. The concept of expressive ecclesiologies is defined in dialogue with Geir Afdal's model for research on social practices. The analysis shows how the expressive ecclesiologies in the group conversation give a nuanced and rather thick understanding of the purpose and identity of the church in its local context.
APA, Harvard, Vancouver, ISO, and other styles
46

Haji-Abolhassani, Iman, Daniel Guitton, and Henrietta L. Galiana. "Modeling eye-head gaze shifts in multiple contexts without motor planning." Journal of Neurophysiology 116, no. 4 (October 1, 2016): 1956–85. http://dx.doi.org/10.1152/jn.00605.2015.

Full text
Abstract:
During gaze shifts, the eyes and head collaborate to rapidly capture a target (saccade) and fixate it. Accordingly, models of gaze shift control should embed both saccadic and fixation modes and a mechanism for switching between them. We demonstrate a model in which the eye and head platforms are driven by a shared gaze error signal. To limit the number of free parameters, we implement a model reduction approach in which steady-state cerebellar effects at each of their projection sites are lumped with the parameter of that site. The model topology is consistent with anatomy and neurophysiology, and can replicate eye-head responses observed in multiple experimental contexts: 1) observed gaze characteristics across species and subjects can emerge from this structure with minor parametric changes; 2) gaze can move to a goal while in the fixation mode; 3) ocular compensation for head perturbations during saccades could rely on vestibular-only cells in the vestibular nuclei with postulated projections to burst neurons; 4) two nonlinearities suffice, i.e., the experimentally-determined mapping of tectoreticular cells onto brain stem targets and the increased recruitment of the head for larger target eccentricities; 5) the effects of initial conditions on eye/head trajectories are due to neural circuit dynamics, not planning; and 6) “compensatory” ocular slow phases exist even after semicircular canal plugging, because of interconnections linking eye-head circuits. Our model structure also simulates classical vestibulo-ocular reflex and pursuit nystagmus, and provides novel neural circuit and behavioral predictions, notably that both eye-head coordination and segmental limb coordination are possible without trajectory planning.
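The core idea of a shared gaze-error signal driving both platforms can be illustrated with a toy discrete-time loop: gaze error feeds eye and head velocity commands, the head is recruited only for large target eccentricities, and a VOR-like term subtracts head motion from the eye command once fixation begins. The gains, threshold, and switching rule below are illustrative only and do not reproduce the authors' circuit-level model.

import numpy as np

def simulate_gaze_shift(target=40.0, dt=0.001, t_end=0.4,
                        k_eye=60.0, k_head=8.0, head_threshold=15.0):
    """Toy eye-head gaze shift driven by a shared gaze error signal (all angles in degrees)."""
    n = int(t_end / dt)
    eye = np.zeros(n)
    head = np.zeros(n)
    saccading = True
    for i in range(1, n):
        error = target - (eye[i - 1] + head[i - 1])            # shared gaze error
        if abs(error) < 0.5:
            saccading = False                                   # switch to fixation mode
        head_drive = k_head * error if abs(target) > head_threshold else 0.0
        head[i] = head[i - 1] + head_drive * dt
        if saccading:
            eye_drive = k_eye * error                           # error drives the eye during the saccade
        else:
            eye_drive = -(head[i] - head[i - 1]) / dt           # VOR-like compensation during fixation
        eye[i] = eye[i - 1] + eye_drive * dt
    return eye, head

if __name__ == "__main__":
    eye, head = simulate_gaze_shift()
    print(f"final gaze: {eye[-1] + head[-1]:.1f} deg (eye {eye[-1]:.1f}, head {head[-1]:.1f})")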
APA, Harvard, Vancouver, ISO, and other styles
47

Kar, Anuradha. "MLGaze: Machine Learning-Based Analysis of Gaze Error Patterns in Consumer Eye Tracking Systems." Vision 4, no. 2 (May 7, 2020): 25. http://dx.doi.org/10.3390/vision4020025.

Full text
Abstract:
Analyzing the gaze accuracy characteristics of an eye tracker is a critical task, as its gaze data are frequently affected by non-ideal operating conditions in various consumer eye tracking applications. In previous research on pattern analysis of gaze data, efforts were made to model human visual behaviors and cognitive processes. What remains relatively unexplored are questions related to identifying gaze error sources as well as quantifying and modeling their impact on the data quality of eye trackers. In this study, gaze error patterns produced by a commercial eye tracking device were studied with the help of machine learning algorithms, such as classifiers and regression models. Gaze data were collected from a group of participants under multiple conditions that commonly affect eye trackers operating on desktop and handheld platforms. These conditions (referred to here as error sources) include user distance, head pose, and eye-tracker pose variations, and the collected gaze data were used to train the classifier and regression models. While the impact of the different error sources on gaze data characteristics was nearly impossible to distinguish by visual inspection or from data statistics, the machine learning models succeeded in identifying the impact of the different error sources and in predicting the variability in gaze error levels due to these conditions. The objective of this study was to investigate the efficacy of machine learning methods for the detection and prediction of gaze error patterns, which would enable an in-depth understanding of the data quality and reliability of eye trackers under unconstrained operating conditions. Coding resources for all the machine learning methods adopted in this study are included in an open repository named MLGaze to allow researchers to replicate the principles presented here using data from their own eye trackers.
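The classifier part of such a pipeline can be sketched with scikit-learn: features summarizing gaze error under each recording condition are used to train a model that identifies which error source produced them. The synthetic features and the choice of a random forest are purely illustrative and are not the MLGaze dataset or code.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
conditions = ["user_distance", "head_pose", "tracker_pose"]   # hypothetical error-source labels

# Synthetic per-trial gaze-error features: [mean error, std, max error] in degrees.
X, y = [], []
for label, shift in enumerate((0.5, 1.0, 1.5)):
    errs = np.abs(rng.normal(loc=shift, scale=0.3, size=(200, 3)))
    X.append(errs)
    y.extend([label] * 200)
X = np.vstack(X)
y = np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("error-source identification accuracy:",
      round(accuracy_score(y_te, clf.predict(X_te)), 2))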
APA, Harvard, Vancouver, ISO, and other styles
48

Cullen, Kathleen E., and Daniel Guitton. "Analysis of Primate IBN Spike Trains Using System Identification Techniques. II. Relationship to Gaze, Eye, and Head Movement Dynamics During Head-Free Gaze Shifts." Journal of Neurophysiology 78, no. 6 (December 1, 1997): 3283–306. http://dx.doi.org/10.1152/jn.1997.78.6.3283.

Full text
Abstract:
We have investigated the relationships among the firing frequency B(t) of inhibitory burst neurons (IBNs) and the metrics and dynamics of the eye, head, and gaze (eye + head) movements generated during voluntary combined eye-head gaze shifts in monkey. The same IBNs were characterized during head-fixed saccades in the first of these three companion papers. In head-free gaze shifts, the number of spikes (NOS) in a burst was, for 82% of the neurons, better correlated with gaze amplitude than with the amplitude of either the eye or head component of the gaze shift. A multiple regression analysis confirmed that NOS was well correlated with the sum of head and eye amplitudes during head-free gaze shifts. Furthermore, the mean slope of the relationship between NOS and gaze amplitude was significantly less for head-free gaze shifts than for head-fixed saccades. NOS, however, is a global parameter; to refine the analysis we used system identification techniques to evaluate a series of dynamic models in which IBN spike trains were related to gaze or eye movements. We found that gaze- and eye-based models predicted the discharges of IBNs equally well. However, the bias values required by gaze-based models were comparable to those required in our head-fixed models, whereas those required by eye-based models were significantly larger. The difference in biases between gaze- and eye-based models was very strongly correlated with the mean head velocity (Ḣ) during gaze shifts [R = −0.93 ± 0.15 (SD)]. This result suggested that the increased bias required by the eye-based models reflected an unmodeled Ḣ input onto these cells. To pursue this argument further we investigated a series of dynamic models that included both eye velocity (Ė) and Ḣ terms, which confirmed the importance of these two terms. As in the head-fixed analysis of companion paper I, the most valuable model formulation also included an eye saccade amplitude term (ΔE) and was given by B(t) = r0 + r1ΔE + b1Ė + g1Ḣ, where r0, r1, b1, and g1 are constants. The amplitude of the head velocity coefficient was significantly less than that of the eye velocity coefficient. Furthermore, in our population, long-lead IBNs tended to have smaller head velocity coefficients than short-lead IBNs. We conclude that during head-free gaze shifts, the head velocity signal carried to the abducens nucleus by primate excitatory burst neurons (EBNs; if EBNs and IBNs carry similar signals) must be offset by other premotor cells.
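The preferred formulation, B(t) = r0 + r1ΔE + b1Ė + g1Ḣ, is an ordinary linear regression once the firing rate and the movement signals are aligned in time, so its coefficients can be recovered by least squares. The synthetic signals below only illustrate that fitting step; latency alignment and the authors' full system-identification procedure are omitted.

import numpy as np

rng = np.random.default_rng(4)
n = 2000
delta_E = rng.uniform(5, 30, n)          # eye saccade amplitude ΔE (deg)
eye_vel = rng.uniform(50, 500, n)        # eye velocity Ė (deg/s)
head_vel = rng.uniform(0, 200, n)        # head velocity Ḣ (deg/s)

# Synthetic firing rate generated from assumed "true" coefficients plus noise.
true = dict(r0=40.0, r1=1.5, b1=0.6, g1=0.2)
B = (true["r0"] + true["r1"] * delta_E + true["b1"] * eye_vel
     + true["g1"] * head_vel + rng.normal(scale=10, size=n))

# Least-squares estimate of r0, r1, b1, g1 from B(t) = r0 + r1*ΔE + b1*Ė + g1*Ḣ.
X = np.column_stack([np.ones(n), delta_E, eye_vel, head_vel])
coef, *_ = np.linalg.lstsq(X, B, rcond=None)
print("estimated r0, r1, b1, g1:", np.round(coef, 2))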
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Yafei, Xueyan Ding, Guoliang Yuan, and Xianping Fu. "Dual-Cameras-Based Driver’s Eye Gaze Tracking System with Non-Linear Gaze Point Refinement." Sensors 22, no. 6 (March 17, 2022): 2326. http://dx.doi.org/10.3390/s22062326.

Full text
Abstract:
The human eye gaze plays a vital role in monitoring people’s attention, and various efforts have been made to improve in-vehicle driver gaze tracking systems. Most of them build a specific gaze estimation model offline by training on pre-annotated data. Such systems usually show poor generalization during online gaze prediction because of the estimation bias between the training domain and the deployment domain, which shifts the predicted gaze points away from their correct locations. To solve this problem, a novel driver’s eye gaze tracking method with non-linear gaze point refinement is proposed for a monitoring system using two cameras; it eliminates the estimation bias and implicitly fine-tunes the gaze points. Supported by a two-stage gaze point clustering algorithm, the non-linear gaze point refinement method gradually extracts the representative gaze points of the forward and mirror gaze zones and establishes the non-linear gaze point re-mapping relationship. In addition, an Unscented Kalman filter is utilized to track the driver’s continuous status features. Experimental results show that the non-linear gaze point refinement method outperforms several previous gaze calibration and gaze mapping methods and improves gaze estimation accuracy even in cross-subject evaluation. The system can be used to predict the driver’s attention.
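The refinement idea, clustering biased gaze predictions around known gaze zones and learning a non-linear remapping back to the correct locations, can be sketched as follows. The use of k-means for the clustering stage and a quadratic polynomial for the remapping are assumptions made for illustration; the paper's two-stage clustering and refinement are more elaborate.

import numpy as np
from sklearn.cluster import KMeans

def quadratic_features(p):
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones(len(p)), x, y, x * y, x**2, y**2])

rng = np.random.default_rng(5)
zone_centers = np.array([[0.2, 0.5], [0.5, 0.5], [0.8, 0.3]])   # forward and mirror zones (normalized)

# Biased, noisy gaze predictions scattered around each true zone center.
raw = np.vstack([c + [0.08, -0.05] + rng.normal(scale=0.02, size=(150, 2)) for c in zone_centers])

# Stage 1: cluster the raw predictions to find representative gaze points.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(raw)
reps = km.cluster_centers_

# Stage 2: fit a non-linear (quadratic) remapping from representative points to the zone centers.
order = np.argsort(reps[:, 0])
targets = zone_centers[np.argsort(zone_centers[:, 0])]
W, *_ = np.linalg.lstsq(quadratic_features(reps[order]), targets, rcond=None)

refined = quadratic_features(raw) @ W
truth = np.repeat(zone_centers, 150, axis=0)
print("mean error before refinement:", round(np.abs(raw - truth).mean(), 3))
print("mean error after refinement: ", round(np.abs(refined - truth).mean(), 3))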
APA, Harvard, Vancouver, ISO, and other styles
50

WU, Ling, Weihua ZHAO, Tong ZHU, and Haoxue LIU. "Drivers in Expressway Superlong Tunnels: The Change Patterns of Visual Features and the Discriminant Model of Driving Safety." Journal of Asian Research 3, no. 3 (August 19, 2019): p240. http://dx.doi.org/10.22158/jar.v3n3p240.

Full text
Abstract:
A real-vehicle experiment was carried out in a superlong highway tunnel to study the change patterns of drivers’ visual features, tracked by eye tracking devices, and to build a discriminant model of the driver’s safety status. On the basis of statistical analysis, a single-index discriminant model and a comprehensive-index discriminant model, both based on a C4.5 decision tree, were established. The results showed that, compared with the non-tunnel highway sections, the driver’s pupil size was larger and the gaze duration was longer in the tunnel section. The driver’s pupil size was larger in the mid-tunnel section than in the entrance and exit sections. Gazes at the exit section were mainly short gazes. Compared with the exit section, the driver’s pupil size changed more dramatically in the entrance section, and the gaze duration was longer. The single visual-parameter indicator could clearly discriminate the driver’s safety status in the mid-tunnel section and the non-tunnel sections, while the dual-index identification model could clearly discriminate the safety status in all highway sections. The study deepens research on the driver information perception model in superlong highway tunnels and provides a theoretical basis for establishing a visual-feature-based real-time safety status discriminant model.
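A discriminant of this kind, visual features in, safety status out, can be sketched with a decision tree. scikit-learn implements CART rather than C4.5, so an entropy-criterion CART tree is used here as a stand-in, and the pupil-size and gaze-duration features below are synthetic placeholders rather than the study's measurements.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)

# Synthetic dual-index samples: [relative pupil size, gaze duration in ms].
safe = np.column_stack([rng.normal(1.0, 0.1, 300), rng.normal(250, 40, 300)])
risky = np.column_stack([rng.normal(1.3, 0.1, 300), rng.normal(400, 60, 300)])
X = np.vstack([safe, risky])
y = np.array([0] * 300 + [1] * 300)      # 0 = normal status, 1 = at-risk status

tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
print("cross-validated accuracy:", cross_val_score(tree, X, y, cv=5).mean().round(2))
print(export_text(tree.fit(X, y), feature_names=["pupil_size", "gaze_duration_ms"]))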
APA, Harvard, Vancouver, ISO, and other styles
