Academic literature on the topic 'Gaze model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gaze model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Gaze model"

1

Lance, Brent J., and Stacy C. Marsella. "The Expressive Gaze Model: Using Gaze to Express Emotion." IEEE Computer Graphics and Applications 30, no. 4 (July 2010): 62–73. http://dx.doi.org/10.1109/mcg.2010.43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vasiljevas, Mindaugas, Robertas Damaševičius, and Rytis Maskeliūnas. "A Human-Adaptive Model for User Performance and Fatigue Evaluation during Gaze-Tracking Tasks." Electronics 12, no. 5 (February 25, 2023): 1130. http://dx.doi.org/10.3390/electronics12051130.

Full text
Abstract:
Eye gaze interfaces are an emerging technology that allows users to control graphical user interfaces (GUIs) simply by looking at them. However, using gaze-controlled GUIs can be a demanding task, resulting in high cognitive and physical load and fatigue. To address these challenges, we propose the concept and model of an adaptive human-assistive human–computer interface (HA-HCI) based on biofeedback. This model enables effective and sustainable use of computer GUIs controlled by physiological signals such as gaze data. The proposed model allows for analytical human performance monitoring and evaluation during human–computer interaction processes based on the damped harmonic oscillator (DHO) model. To test the validity of this model, the authors acquired gaze-tracking data from 12 healthy volunteers playing a gaze-controlled computer game and analyzed it using odd–even statistical analysis. The experimental findings show that the proposed model effectively describes and explains gaze-tracking performance dynamics, including subject variability in performance of GUI control tasks, long-term fatigue, and training effects, as well as short-term recovery of user performance during gaze-tracking-based control tasks. We also analyze the existing HCI and human performance models and develop an extension to the existing physiological models that allows for the development of adaptive user-performance-aware interfaces. The proposed HA-HCI model describes the interaction between a human and a physiological computing system (PCS) from the user performance perspective, incorporating a performance evaluation procedure that interacts with the standard UI components of the PCS and describes how the system should react to loss of productivity (performance). We further demonstrate the applicability of the HA-HCI model by designing an eye-controlled game. We also develop an analytical user performance model based on damped harmonic oscillation that is suitable for describing variability in performance of a PC game based on gaze tracking. The model’s validity is tested using odd–even analysis, which demonstrates strong positive correlation. Individual characteristics of users established by the damped oscillation model can be used for categorization of players under their playing skills and abilities. The experimental findings suggest that players can be categorized as learners, whose damping factor is negative, and fatiguers, whose damping factor is positive. We find a strong positive correlation between amplitude and damping factor, indicating that good starters usually have higher fatigue rates, but slow starters have less fatigue and may even improve their performance during play. The proposed HA-HCI model and analytical user performance models provide a framework for developing an adaptive human-oriented HCI that enables monitoring, analysis, and increased performance of users working with physiological-computing-based user interfaces. The proposed models have potential applications in improving the usability of future human-assistive gaze-controlled interface systems.
APA, Harvard, Vancouver, ISO, and other styles
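The damped-harmonic-oscillator (DHO) performance model described in the abstract above lends itself to a short curve-fitting illustration. The sketch below is only a minimal reading of that idea, not the authors' implementation: the functional form, parameter names, and synthetic data are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def dho_performance(t, amplitude, damping, omega, phase, baseline):
    """Damped-harmonic-oscillator-style performance curve (illustrative form only)."""
    return baseline + amplitude * np.exp(-damping * t) * np.cos(omega * t + phase)

# Synthetic per-trial performance scores standing in for gaze-tracking task data.
t = np.linspace(0, 30, 60)
rng = np.random.default_rng(0)
y = dho_performance(t, 1.0, 0.08, 0.9, 0.0, 5.0) + rng.normal(0, 0.1, t.size)

params, _ = curve_fit(dho_performance, t, y, p0=[1.0, 0.05, 1.0, 0.0, np.mean(y)])
amplitude, damping, *_ = params

# Per the abstract, the sign of the damping factor separates user types:
label = "fatiguer (performance decays)" if damping > 0 else "learner (performance grows)"
print(f"amplitude={amplitude:.2f}, damping={damping:.3f} -> {label}")
```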
3

Jogeshwar, Anjali K., Gabriel J. Diaz, Susan P. Farnand, and Jeff B. Pelz. "The Cone Model: Recognizing gaze uncertainty in virtual environments." Electronic Imaging 2020, no. 9 (January 26, 2020): 288–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.9.iqsp-288.

Full text
Abstract:
Eye tracking is used by psychologists, neurologists, vision researchers, and many others to understand the nuances of the human visual system, and to provide insight into a person’s allocation of attention across the visual environment. When tracking the gaze behavior of an observer immersed in a virtual environment displayed on a head-mounted display, estimated gaze direction is encoded as a three-dimensional vector extending from the estimated location of the eyes into the 3D virtual environment. Additional computation is required to detect the target object at which gaze was directed. These methods must be robust to calibration error or eye tracker noise, which may cause the gaze vector to miss the target object and hit an incorrect object at a different distance. Thus, the straightforward solution involving a single vector-to-object collision could be inaccurate in indicating object gaze. More involved metrics that rely upon an estimation of the angular distance from the ray to the center of the object must account for an object’s angular size based on distance, or irregularly shaped edges - information that is not made readily available by popular game engines (e.g. Unity©/Unreal©) or rendering pipelines (OpenGL). The approach presented here avoids this limitation by projecting many rays distributed across an angular space that is centered upon the estimated gaze direction.
APA, Harvard, Vancouver, ISO, and other styles
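The Cone Model described above replaces a single gaze ray with many rays spread over an angular region around the estimated gaze direction. The sketch below shows one plausible way to generate such a bundle of rays; the cone half-angle, sample count, and uniform spherical-cap sampling are assumptions rather than the authors' exact scheme.

```python
import numpy as np

def cone_rays(gaze_dir, half_angle_deg=3.0, n_rays=64, rng=None):
    """Sample ray directions uniformly within a cone centered on the gaze vector."""
    rng = rng if rng is not None else np.random.default_rng()
    g = np.asarray(gaze_dir, dtype=float)
    g /= np.linalg.norm(g)

    # Uniform sampling on the spherical cap of the given half-angle (local frame, +z axis).
    cos_max = np.cos(np.radians(half_angle_deg))
    cos_t = rng.uniform(cos_max, 1.0, n_rays)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_rays)
    local = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)

    # Orthonormal basis (u, v, g) so that the local +z axis maps onto the gaze direction.
    helper = np.array([1.0, 0.0, 0.0]) if abs(g[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(helper, g); u /= np.linalg.norm(u)
    v = np.cross(g, u)
    return local @ np.stack([u, v, g])  # (n_rays, 3) world-space directions

rays = cone_rays([0.1, -0.05, 1.0])
# Each ray is then cast into the scene; the object hit most often is a more
# noise-tolerant estimate of the gazed-at object than a single ray collision.
```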
4

Kaur, Harsimran, Swati Jindal, and Roberto Manduchi. "Rethinking Model-Based Gaze Estimation." Proceedings of the ACM on Computer Graphics and Interactive Techniques 5, no. 2 (May 17, 2022): 1–17. http://dx.doi.org/10.1145/3530797.

Full text
Abstract:
Over the past several years, a number of data-driven gaze tracking algorithms have been proposed, which have been shown to outperform classic model-based methods in terms of gaze direction accuracy. These algorithms leverage the recent development of sophisticated CNN architectures, as well as the availability of large gaze datasets captured under various conditions. One shortcoming of black-box, end-to-end methods, though, is that any unexpected behaviors are difficult to explain. In addition, there is always the risk that a system trained with a certain dataset may not perform well when tested on data from a different source (the "domain gap" problem.) In this work, we propose a novel method to embed eye geometry information in an end-to-end gaze estimation network by means of a "geometric layer". Our experimental results show that our system outperforms other state-of-the-art methods in cross-dataset evaluation, while producing competitive performance over within dataset tests. In addition, the proposed system is able to extrapolate gaze angles outside the range of those considered in the training data.
APA, Harvard, Vancouver, ISO, and other styles
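The paper's "geometric layer" embeds eye geometry inside an end-to-end network; its details are in the article. As a generic illustration of the kind of differentiable geometric mapping such a layer can rely on, the sketch below converts predicted pitch/yaw angles into unit 3D gaze vectors; the axis convention and the use of PyTorch are assumptions, not the authors' code.

```python
import torch

def angles_to_gaze_vector(pitch_yaw: torch.Tensor) -> torch.Tensor:
    """Map (pitch, yaw) angles in radians to unit 3D gaze vectors.

    A differentiable mapping like this can sit at the output of a CNN so that
    predictions are constrained to valid gaze directions on the unit sphere.
    """
    pitch, yaw = pitch_yaw[:, 0], pitch_yaw[:, 1]
    x = -torch.cos(pitch) * torch.sin(yaw)
    y = -torch.sin(pitch)
    z = -torch.cos(pitch) * torch.cos(yaw)
    return torch.stack([x, y, z], dim=1)

# Example: a batch of two (pitch, yaw) predictions.
g = angles_to_gaze_vector(torch.tensor([[0.0, 0.0], [0.1, -0.2]]))
print(torch.linalg.norm(g, dim=1))  # each row has unit length
```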
5

Le, Thao, Ronal Singh, and Tim Miller. "Goal Recognition for Deceptive Human Agents through Planning and Gaze." Journal of Artificial Intelligence Research 71 (August 3, 2021): 697–732. http://dx.doi.org/10.1613/jair.1.12518.

Full text
Abstract:
Eye gaze has the potential to provide insight into the minds of individuals, and this idea has been used in prior research to improve human goal recognition by combining a human's actions and gaze. However, most existing research assumes that people are rational and honest. In adversarial scenarios, people may deliberately alter their actions and gaze, which presents a challenge to goal recognition systems. In this paper, we present new models for goal recognition under deception using a combination of gaze behaviour and observed movements of the agent. These models aim to detect when a person is deceiving by analysing their gaze patterns, and use this information to adjust the goal recognition. We evaluated our models in two human-subject studies: (1) using data collected from 30 individuals playing a navigation game inspired by an existing deception study and (2) using data collected from 40 individuals playing a competitive game (Ticket To Ride). We found that one of our models (Modulated Deception Gaze+Ontic) offers promising results compared to the previous state-of-the-art model in both studies. Our work complements existing adversarial goal recognition systems by equipping these systems with the ability to tackle ambiguous gaze behaviours.
APA, Harvard, Vancouver, ISO, and other styles
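The exact formulation of the paper's deception-aware models is given in the article. Purely as a hypothetical illustration of fusing two observation channels in probabilistic goal recognition, the sketch below multiplies per-goal likelihoods from actions and gaze and down-weights the gaze channel when it is judged unreliable; every number and the tempering formula are made-up assumptions.

```python
import numpy as np

def recognize_goal(action_lik, gaze_lik, prior, gaze_trust=1.0):
    """Toy Bayesian fusion of action and gaze evidence into a posterior over goals.

    `gaze_trust` in [0, 1] tempers the gaze likelihood when gaze appears deceptive
    (e.g. fixations concentrated on goals the agent never pursues); this loosely
    echoes the 'modulated' idea in the abstract but is not the paper's model.
    """
    action_lik, gaze_lik, prior = map(np.asarray, (action_lik, gaze_lik, prior))
    post = prior * action_lik * gaze_lik ** gaze_trust
    return post / post.sum()

# Hypothetical likelihoods for three candidate goals.
print(recognize_goal(action_lik=[0.2, 0.5, 0.3],
                     gaze_lik=[0.6, 0.1, 0.3],
                     prior=[1 / 3, 1 / 3, 1 / 3],
                     gaze_trust=0.4))
```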
6

Balkenius, Christian, and Birger Johansson. "Anticipatory models in gaze control: a developmental model." Cognitive Processing 8, no. 3 (April 18, 2007): 167–74. http://dx.doi.org/10.1007/s10339-007-0169-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Stiefelhagen, Rainer, Jie Yang, and Alex Waibel. "A Model-Based Gaze Tracking System." International Journal on Artificial Intelligence Tools 06, no. 02 (June 1997): 193–209. http://dx.doi.org/10.1142/s0218213097000116.

Full text
Abstract:
In this paper we present a non-intrusive model-based gaze tracking system. The system estimates the 3-D pose of a user's head by tracking as few as six facial feature points. The system locates a human face using a statistical color model and then finds and tracks the facial features, such as eyes, nostrils and lip corners. A full perspective model is employed to map these feature points onto the 3-D pose. Several techniques have been developed to track the feature points and recover from failure. We currently achieve a frame rate of 15+ frames per second using an HP 9000 workstation with a framegrabber and a Canon VC-C1 camera. The application of the system has been demonstrated by a gaze-driven panorama image viewer. The potential applications of the system include multimodal interfaces, virtual reality and video-teleconferencing.
APA, Harvard, Vancouver, ISO, and other styles
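The 1997 system maps six tracked 2D facial feature points to a 3D head pose through a full perspective model. A common way to express that step today is a perspective-n-point solve; the sketch below uses OpenCV's solvePnP with a rough, placeholder 3D face template and camera intrinsics, which are assumptions rather than the original implementation.

```python
import numpy as np
import cv2

# Placeholder 3D template of six facial landmarks in a head-fixed frame (mm).
MODEL_POINTS = np.array([
    [-45.0,  35.0, -20.0],   # right eye outer corner
    [ 45.0,  35.0, -20.0],   # left eye outer corner
    [-15.0, -10.0,  -5.0],   # right nostril
    [ 15.0, -10.0,  -5.0],   # left nostril
    [-30.0, -45.0, -15.0],   # right lip corner
    [ 30.0, -45.0, -15.0],   # left lip corner
], dtype=np.float64)

def head_pose(image_points, frame_size):
    """Estimate head rotation and translation from six tracked 2D feature points."""
    h, w = frame_size
    focal = float(w)  # crude focal-length guess in pixels
    camera_matrix = np.array([[focal, 0.0, w / 2.0],
                              [0.0, focal, h / 2.0],
                              [0.0, 0.0, 1.0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS,
                                  np.asarray(image_points, dtype=np.float64),
                                  camera_matrix,
                                  np.zeros((4, 1)))  # assume no lens distortion
    return rvec, tvec  # head rotation (Rodrigues vector) and translation
```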
8

White, Robert L., and Lawrence H. Snyder. "A Neural Network Model of Flexible Spatial Updating." Journal of Neurophysiology 91, no. 4 (April 2004): 1608–19. http://dx.doi.org/10.1152/jn.00277.2003.

Full text
Abstract:
Neurons in many cortical areas involved in visuospatial processing represent remembered spatial information in retinotopic coordinates. During a gaze shift, the retinotopic representation of a target location that is fixed in the world (world-fixed reference frame) must be updated, whereas the representation of a target fixed relative to the center of gaze (gaze-fixed) must remain constant. To investigate how such computations might be performed, we trained a 3-layer recurrent neural network to store and update a spatial location based on a gaze perturbation signal, and to do so flexibly based on a contextual cue. The network produced an accurate readout of target position when cued to either reference frame, but was less precise when updating was performed. This output mimics the pattern of behavior seen in animals performing a similar task. We tested whether updating would preferentially use gaze position or gaze velocity signals, and found that the network strongly preferred velocity for updating world-fixed targets. Furthermore, we found that gaze position gain fields were not present when velocity signals were available for updating. These results have implications for how updating is performed in the brain.
APA, Harvard, Vancouver, ISO, and other styles
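The updating rule the network has to learn can be summarized in a few lines of arithmetic: a world-fixed target's remembered retinotopic location must shift opposite to the gaze displacement (here integrated from a velocity signal, which the abstract reports the network prefers), while a gaze-fixed target stays put. A minimal sketch with made-up numbers:

```python
import numpy as np

def update_retinotopic(target_retino, gaze_velocity, dt, world_fixed=True):
    """Update a remembered target location in eye-centered (retinotopic) coordinates."""
    target_retino = np.asarray(target_retino, dtype=float)
    if not world_fixed:
        return target_retino                      # gaze-fixed: no update needed
    displacement = np.asarray(gaze_velocity, dtype=float) * dt
    return target_retino - displacement           # world-fixed: shift opposite to gaze

# A target 5 deg right of fixation; gaze moves 200 deg/s rightward for 50 ms.
print(update_retinotopic([5.0, 0.0], gaze_velocity=[200.0, 0.0], dt=0.05))  # -> [-5.  0.]
```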
9

Glaholt, M. G., and E. M. Reingold. "Stimulus exposure and gaze bias: A further test of the gaze cascade model." Attention, Perception & Psychophysics 71, no. 3 (April 1, 2009): 445–50. http://dx.doi.org/10.3758/app.71.3.445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ko, Eun-Ji, and Myoung-Jun Kim. "User-Calibration Free Gaze Tracking System Model." Journal of the Korea Institute of Information and Communication Engineering 18, no. 5 (May 31, 2014): 1096–102. http://dx.doi.org/10.6109/jkiice.2014.18.5.1096.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Gaze model"

1

Jasso, Hector. "A reinforcement learning model of gaze following." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2007. http://wwwlib.umi.com/cr/ucsd/fullcit?p3259369.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2007.
Title from first page of PDF file (viewed June 22, 2007). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 104-116).
APA, Harvard, Vancouver, ISO, and other styles
2

Murphy, Hunter A. "Hybrid image/model based gaze-contingent rendering." Connect to this title online, 2007. http://etd.lib.clemson.edu/documents/1202498841/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Matsoukas, Christos. "Model Distillation for Deep-Learning-Based Gaze Estimation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-261412.

Full text
Abstract:
With the recent advances in deep learning, gaze estimation models have reached new levels of predictive accuracy that could not be achieved with older techniques. Nevertheless, deep learning consists of computationally and memory-expensive algorithms that do not allow their integration into embedded systems. This work aims to tackle this problem by boosting the predictive power of small networks using a model compression method called "distillation". Under the concept of distillation, we introduce an additional term to the compressed model’s total loss, which is a bounding term between the compressed model (the student) and a powerful one (the teacher). We show that the distillation method introduces to the compressed model something more than noise: the teacher’s inductive bias, which helps the student to reach a better optimum due to the adaptive error deduction. Furthermore, we show that the MobileNet family exhibits unstable training phases, and we report that the distilled MobileNet25 slightly outperformed the MobileNet50. Moreover, we try newly proposed training schemes to increase the predictive power of small and thin networks and infer that extremely thin architectures are hard to train. Finally, we propose a new training scheme based on the hint-learning method and show that this technique helps the thin MobileNets to gain stability and predictive power.
Recent advances in deep learning have helped improve the accuracy of gaze estimation models to levels that were not previously possible. However, deep learning methods usually require large amounts of computation and memory, which limits their use in embedded systems with small memory and computational resources. This work aims to circumvent this problem by increasing the predictive power of small networks that can be used in embedded systems, with the help of a model compression method called "distillation". Under the concept of distillation, we introduce an additional term to the compressed model's total optimization function, which is a bounding term between a compressed model and a powerful model. We show that the distillation method introduces more than just noise into the compressed model, namely the teacher's inductive bias, which helps the student reach a better optimum thanks to adaptive error deduction. In addition, we show that the MobileNet family exhibits unstable training phases, and we report that the distilled MobileNet25 slightly outperformed its teacher MobileNet50. Furthermore, we investigate recently proposed training methods to improve the prediction of small and thin networks and conclude that extremely thin architectures are hard to train. Finally, we propose a new training method based on hint-learning and show that this technique helps the thin MobileNets to stabilize during training and increases their predictive effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
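The thesis's central idea, adding a bounding term between the student's and the teacher's outputs to the student's loss, can be sketched in a few lines. For a regression task such as gaze estimation the combined loss might look like the code below; the specific loss functions and the weighting are illustrative assumptions, not the thesis's exact configuration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, target, alpha=0.5):
    """Total loss = ground-truth term + bounding term toward the teacher's predictions.

    `student_out` and `teacher_out` are predicted gaze angles of shape (batch, 2);
    `alpha` trades off the two terms and is an arbitrary illustrative choice.
    """
    ground_truth_term = F.l1_loss(student_out, target)
    teacher_term = F.mse_loss(student_out, teacher_out.detach())  # teacher is frozen
    return (1.0 - alpha) * ground_truth_term + alpha * teacher_term

# Usage inside a training step (teacher in eval mode, student trainable):
# loss = distillation_loss(student(images), teacher(images), gaze_labels)
# loss.backward()
```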
4

Hnatow, Justin Michael. "A theoretical eye model for uncalibrated real-time eye gaze estimation /." Online version of thesis, 2006. http://hdl.handle.net/1850/2606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shimonishi, Kei. "Modeling and Estimation of Selection Interests through Gaze Behavior." Kyoto University, 2017. http://hdl.handle.net/2433/227658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Reis, Ben Y. (Ben Yitzchak). "Implementation of a line attractor-based model of the gaze holding integrator using nonlinear spiking neuron models." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/39389.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (leaves 30-31).
by Ben Y. Reis.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
7

Taba, Isabella Bahareh. "Improving eye-gaze tracking accuracy through personalized calibration of a user's aspherical corneal model." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/40247.

Full text
Abstract:
The eyes present us with a window through which we view the world and gather information. Eye-gaze tracking systems are the means by which a user's point of gaze (POG) can be measured and recorded. Despite the active research in gaze tracking systems and major advances in this field, calibration remains one of the primary challenges in the development of eye tracking systems. In order to facilitate gaze measurement and tracking, eye-gaze trackers utilize simplifications in modeling the human eye. These simplifications include using a spherical corneal model and using population averages for eye parameters in place of individual measurements, but use of these simplifications in modeling contributes to system errors and imposes inaccuracies on the process of point of gaze estimation. This research introduces a new one-time per-user calibration method for gaze estimation systems. The purpose of the calibration method developed in this thesis is to calculate and estimate different individual eye parameters based on an aspherical corneal model. Replacing average measurements with individual measurements promises to improve the accuracy and reliability of the system. The approach presented in this thesis involves estimating eye parameters by statistical modeling through least squares curve fitting. Compared to a current approach referred to here as the Hennessey calibration method, this approach offers significant advantages, including improved, individual calibration. Through analysis and comparison of this new calibration method with the Hennessey calibration method, the research data presented in this thesis shows an improvement in gaze estimation accuracy of approximately 27%. Research has shown that the average accuracy for the Hennessey calibration method is about 1.5 cm on an LCD screen at a distance of 60 cm, while the new system, as tested on eight different subjects, achieved an average accuracy of 1.1 cm. A statistical analysis (t-test) of the comparative accuracy of the new calibration method versus the Hennessey calibration method has demonstrated that the new system represents a statistically significant improvement.
APA, Harvard, Vancouver, ISO, and other styles
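The calibration described above estimates per-user eye parameters by least-squares curve fitting against known calibration targets. The sketch below shows only the general fitting pattern with scipy; the parameter set (corneal radius, asphericity, angular offsets) and the forward model are placeholders, not the thesis's actual ray-traced corneal equations.

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_pog(params, features):
    """Placeholder forward model mapping eye-tracker features to a point of gaze (POG)."""
    radius, asphericity, kx, ky = params
    # Purely illustrative: the real model ray-traces corneal reflections on an
    # aspherical corneal surface to predict the POG on the screen.
    return radius * features + asphericity * features ** 2 + np.array([kx, ky])

def calibrate(features, measured_pog, initial=(7.8, 0.0, 0.0, 0.0)):
    """One-time per-user calibration: fit eye parameters to calibration fixations."""
    def residuals(params):
        return (predicted_pog(params, features) - measured_pog).ravel()
    return least_squares(residuals, x0=np.asarray(initial)).x

# `features` and `measured_pog` would come from the user fixating known targets;
# the fitted parameters then replace population averages in the gaze estimator.
```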
8

D'Amelio, Alessandro. "A Stochastic Foraging Model of Attentive Eye Guidance on Dynamic Stimuli." Doctoral thesis, Università degli Studi di Milano, 2021. http://hdl.handle.net/2434/816678.

Full text
Abstract:
Understanding human behavioural signals is one of the key ingredients of effective human-human and human-computer interaction (HCI). In this respect, non-verbal communication plays a key role and is composed of a variety of modalities acting jointly to convey a common message. In particular, cues like gesture, facial expression and prosody have the same importance as spoken words. Gaze behaviour is no exception, being one of the most common, yet unobtrusive, ways of communicating. To this aim, many computational models of visual attention allocation have been proposed; although such models were primarily conceived in the psychological field, in the last couple of decades the problem of predicting attention allocation on a visual stimulus has started to catch the interest of the computer vision and pattern recognition community, pushed by the fast-growing number of possible applications (e.g. autonomous driving, image/video compression, robotics). In this renaissance of attention modelling, some of the key features characterizing eye movements were at best overlooked; in particular, the explicit unrolling in time of eye movements (i.e. their dynamics) has seldom been taken into account. Moreover, the vast majority of the proposed models are only able to deal with static stimuli (images), with few notable exceptions. The main contribution of this work is a novel computational model of attentive eye guidance which derives gaze dynamics in a principled way, by reformulating attention deployment as a stochastic foraging problem. We show how treating a virtual observer attending to a video as a stochastic composite forager searching for valuable patches in a multi-modal landscape leads to simulated gaze trajectories that are not statistically distinguishable from those produced by humans while free-viewing the same scene. Model simulation and experiments are carried out on a publicly available dataset of eye-tracked subjects viewing videos of conversations and social interactions between humans.
APA, Harvard, Vancouver, ISO, and other styles
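The thesis models the observer as a stochastic forager over a multi-modal landscape. As a toy illustration only (the actual model is considerably richer), the sketch below samples a gaze trajectory with a Langevin-style walk that drifts up the gradient of a saliency map plus noise; the dynamics and constants are assumptions.

```python
import numpy as np

def simulate_gaze(value_map, n_steps=200, step=3.0, noise=2.0, rng=None):
    """Toy stochastic 'forager': gaze climbs the local value gradient plus random noise."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = value_map.shape
    gy, gx = np.gradient(value_map)                  # local 'payoff' gradient
    pos = np.array([h / 2.0, w / 2.0])
    trajectory = [pos.copy()]
    for _ in range(n_steps):
        r, c = int(np.clip(pos[0], 0, h - 1)), int(np.clip(pos[1], 0, w - 1))
        drift = np.array([gy[r, c], gx[r, c]])       # pull toward valuable patches
        pos = np.clip(pos + step * drift + rng.normal(0.0, noise, 2),
                      [0, 0], [h - 1, w - 1])
        trajectory.append(pos.copy())
    return np.array(trajectory)                      # (n_steps + 1, 2) gaze samples

# Example with a synthetic saliency map containing one bright patch.
yy, xx = np.mgrid[0:120, 0:160]
saliency = np.exp(-(((yy - 40) ** 2 + (xx - 100) ** 2) / (2 * 15.0 ** 2)))
scanpath = simulate_gaze(saliency)
```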
9

Wang, Zhong. "A Unifying Hypothesis for the Multiple Waveforms of Infantile Nystagmus and Their Idiosyncratic Variation with Gaze Angle and Therapy." Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1210605209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dodds, Christopher. "Avatars and the Invisible Omniscience: The panoptical model within virtual worlds." RMIT University. Creative Media, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080424.100301.

Full text
Abstract:
This Exegesis and accompanying artworks are the culmination of research conducted into the existence of surveillance in virtual worlds. A panoptical model has been used, and its premise tested through the extension into these communal spaces. Issues such as data security, personal and corporate privacy have been investigated, as has the use of art as a propositional mode. This Exegesis contains existing and new theoretical arguments and observations that have aided the development of research outcomes; a discussion of action research as a methodology; and questionnaire outcomes assisting in understanding player perceptions and concerns. A series of artworks were completed during the research to aid in understanding the nature of virtual surveillance; as a method to examine outcomes; and as an experiential interface for viewers of the research. The artworks investigate a series of surveillance perspectives including parental gaze, machine surveillance and self-surveillance. The outcomes include considerations into the influence surveillance has on player behaviour, security issues pertaining to the extension of corporations into virtual worlds, the acceptance of surveillance by virtual communities, and the merits of applying artworks as proposition.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Gaze model"

1

The artist, his model, her image, his gaze: Picasso's pursuit of the model. Chicago: University of Chicago Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schmidt-Eichstaedt, Gerd, Germany. Bundesministerium für Raumordnung, Bauwesen und Städtebau, Germany. Federal Ministry for Regional Planning, Building and Urban Development, and Deutsches Institut für Urbanistik, eds. Model-Urban-Ecology planning game = Planspiel Modell-Stadt-Ökologie. Berlin: Deutsches Institut für Urbanistik, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bargain, Olivier. Cooperative models in action: Simulation of a Nash-bargaining model of household labor supply with taxation. Bonn, Germany: IZA, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Selten, Reinhard, ed. Game equilibrium models. Berlin: Springer Verlag, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Selten, Reinhard, ed. Game Equilibrium Models I. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-662-02674-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Selten, Reinhard, ed. Game Equilibrium Models II. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-662-07365-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Selten, Reinhard, ed. Game Equilibrium Models III. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-662-07367-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Selten, Reinhard, ed. Game Equilibrium Models IV. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-662-07369-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Geronimus, I͡U. V. Igra, modelʹ, ėkonomika. Moskva: Izd-vo "Znanie", 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Models of strategic rationality. Dordrecht: Kluwer Academic Publishers, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Gaze model"

1

Yu, Yu, Gang Liu, and Jean-Marc Odobez. "Deep Multitask Gaze Estimation with a Constrained Landmark-Gaze Model." In Lecture Notes in Computer Science, 456–74. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11012-3_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Toivanen, Miika, and Kristian Lukander. "Improving Model-Based Mobile Gaze Tracking." In Intelligent Decision Technologies, 611–25. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-19857-6_52.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

van den Brink, Bram, Christyowidiasmoro, and Zerrin Yumak. "Social Gaze Model for an Interactive Virtual Character." In Intelligent Virtual Agents, 451–54. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67401-8_56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wu, Jiaxin, Sheng-hua Zhong, Zheng Ma, Stephen J. Heinen, and Jianmin Jiang. "Gaze Aware Deep Learning Model for Video Summarization." In Advances in Multimedia Information Processing – PCM 2018, 285–95. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00767-6_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Peters, Christopher, Catherine Pelachaud, Elisabetta Bevacqua, Maurizio Mancini, and Isabella Poggi. "A Model of Attention and Interest Using Gaze Behavior." In Intelligent Virtual Agents, 229–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11550617_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Berra, Riccardo, Francesco Setti, and Marco Cristani. "Gaze-Based Human-Robot Interaction by the Brunswick Model." In Lecture Notes in Computer Science, 511–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30645-8_47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wood, Erroll, Tadas Baltrušaitis, Louis-Philippe Morency, Peter Robinson, and Andreas Bulling. "A 3D Morphable Eye Region Model for Gaze Estimation." In Computer Vision – ECCV 2016, 297–313. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46448-0_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Krishnan, Sajitha, J. Amudha, and Sushma Tejwani. "Gaze Fusion-Deep Neural Network Model for Glaucoma Detection." In Communications in Computer and Information Science, 42–53. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-0419-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Beuget, Maël, Sylvain Castagnos, Christophe Luxembourger, and Anne Boyer. "Eye Gaze Sequence Analysis to Model Memory in E-education." In Lecture Notes in Computer Science, 24–29. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23207-8_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Agustianto, Khafidurrohman, Hendra Yufit Riskiawan, Dwi Putro Sarwo Setyohadi, I. Gede Wiryawan, Andi Besse Firdausiah Mansur, and Ahmad Hoirul Basori. "Eye Gaze Based Model for Anxiety Detection of Engineering Students." In Communications in Computer and Information Science, 195–205. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97255-4_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Gaze model"

1

Yeo, Alvin W., and Po-Chan Chiu. "Gaze estimation model for eye drawing." In CHI '06 extended abstracts. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1125451.1125736.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rodriguez, Ari, Everardo Barcenas, and Guillermo Molero-Castillo. "Model Checking for Gaze Pattern Recognition." In 2019 International Conference on Electronics, Communications and Computers (CONIELECOMP). IEEE, 2019. http://dx.doi.org/10.1109/conielecomp.2019.8673208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chan, Chiu Po, and Alvin W. Yeo. "Eye drawing with gaze estimation model." In the 2006 symposium. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1117309.1117342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Congcong, Yuying Chen, Lei Tai, Haoyang Ye, Ming Liu, and Bertram E. Shi. "A gaze model improves autonomous driving." In ETRA '19: 2019 Symposium on Eye Tracking Research and Applications. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3314111.3319846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Veale, Richard, Chih-yang Chen, and Tadashi Isa. "Marmoset Monkeys Model Human Infant Gaze?" In 2021 IEEE International Conference on Development and Learning (ICDL). IEEE, 2021. http://dx.doi.org/10.1109/icdl49984.2021.9515602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sharma, Kshitij, Hamed S. Alavi, Patrick Jermann, and Pierre Dillenbourg. "A gaze-based learning analytics model." In the Sixth International Conference. New York, New York, USA: ACM Press, 2016. http://dx.doi.org/10.1145/2883851.2883902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Satogata, Riki, Mitsuhiko Kimoto, Shun Yoshioka, Masahiko Osawa, Kazuhiko Shinozawa, and Michita Imai. "Emergence of Agent Gaze Behavior using Interactive Kinetics-Based Gaze Direction Model." In HRI '20: ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3371382.3378248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Xiuli, Aditya Acharya, Antti Oulasvirta, and Andrew Howes. "An Adaptive Model of Gaze-based Selection." In CHI '21: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3411764.3445177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Murphy, Hunter, and Andrew T. Duchowski. "Hybrid image-/model-based gaze-contingent rendering." In the 4th symposium. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1272582.1272604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yonetani, Ryo, Hiroaki Kawashima, and Takashi Matsuyama. "Multi-mode saliency dynamics model for analyzing gaze and attention." In the Symposium. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2168556.2168574.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Gaze model"

1

de Alfaro, Luca. Game Models for Open Systems. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada457326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Soper, Braden C. A Game Theoretic Model of Thermonuclear Cyberwar. Office of Scientific and Technical Information (OSTI), August 2017. http://dx.doi.org/10.2172/1404836.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cooney, Greg, James Littlefield, Joe Marriott, Matt Jamieson, Robert E. James III PhD, and Timothy J. Skone. Gate-to-Gate Life Cycle Inventory and Model of CO2-Enhanced Oil Recovery (Presentation). Office of Scientific and Technical Information (OSTI), September 2013. http://dx.doi.org/10.2172/1526697.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Schramm, Harrison, David L. Alderson, W. M. Carlyle, and Nedialko B. Dimitrov. A Game Theoretic Model Of Strategic Conflict In Cyberspace. Fort Belvoir, VA: Defense Technical Information Center, January 2012. http://dx.doi.org/10.21236/ada555943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Skone, Timothy J., Robert E. James III, Greg Cooney, Matt Jamieson, James Littlefield, and Joe Marriott. Gate-to-Gate Life Cycle Inventory and Model of CO2-Enhanced Oil Recovery. Office of Scientific and Technical Information (OSTI), September 2013. http://dx.doi.org/10.2172/1515261.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sharp, Jeremy A., Duncan B. Bryant, and Gaurav Savant. Low-Sill Control Structure Gate Load Study. U.S. Army Engineer Research and Development Center, May 2022. http://dx.doi.org/10.21079/11681/44340.

Full text
Abstract:
The effort performed here describes the process to determine the gate lifting loads at the Low-Sill Control Structure. To measure the gate loads, a 1:55 Froude-scaled model of the Low-Sill Control Structure was tested. Load cells were placed on 3 of the 11 gates. Tests evaluated the gate loads for various hydraulic heads across the structure. A total of 109 tests were conducted for 14 flows, with each flow having two gate settings provided by the United States Army Corps of Engineers, New Orleans District. The load data illustrated the potential for higher gate lifting loads (GLL) to occur at the mid-range gate opening (Go) for Gates 3 and 6, while for Gate 10 the highest GLL (452 kips, the maximum load in testing) occurred at Go = 4.2 ft. Conversely, for the low-flow bays, the highest load occurred at Go = 24.86 ft.
APA, Harvard, Vancouver, ISO, and other styles
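The tests above were run on a 1:55 Froude-scaled physical model. Under standard Froude similarity with the same fluid in model and prototype, lengths scale by the length ratio, velocities and times by its square root, discharges by its 2.5 power, and forces by its cube. The short sketch below is general Froude-scaling arithmetic, not a calculation taken from the report.

```python
# Standard Froude-similarity scale ratios for a geometric scale of 1:55.
LAMBDA = 55.0  # prototype length / model length

SCALE = {
    "length": LAMBDA,
    "velocity": LAMBDA ** 0.5,
    "time": LAMBDA ** 0.5,
    "discharge": LAMBDA ** 2.5,
    "force": LAMBDA ** 3,  # assumes the same fluid density in model and prototype
}

def model_force_to_prototype(model_force_newtons: float) -> float:
    """Convert a load-cell reading on the 1:55 model to prototype scale."""
    return model_force_newtons * SCALE["force"]

# Example: a 12 N model load corresponds to roughly 2.0 MN at prototype scale.
print(model_force_to_prototype(12.0) / 1e6)
```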
7

Peters, T. J., and W. R. Park. Performance evaluation of the Enraf-Nonius Model 872 radar gage. Office of Scientific and Technical Information (OSTI), December 1992. http://dx.doi.org/10.2172/6942711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Peters, T. J., and W. R. Park. Performance evaluation of the Enraf-Nonius Model 872 radar gage. Office of Scientific and Technical Information (OSTI), December 1992. http://dx.doi.org/10.2172/10117546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Acemoglu, Daron, and Asuman Ozdaglar. Game-Theoretic Models of Conflict and Social Interactions. Fort Belvoir, VA: Defense Technical Information Center, May 2014. http://dx.doi.org/10.21236/ada610850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Musa, Padde, Zita Ekeocha, Stephen Robert Byrn, and Kari L. Clase. Knowledge Sharing in Organisations: Finding a Best-fit Model for a Regulatory Authority in East Africa. Purdue University, November 2021. http://dx.doi.org/10.5703/1288284317432.

Full text
Abstract:
Knowledge is an essential organisational asset that contributes to organisational effectiveness when carefully managed. Knowledge sharing (KS) is a vital component of knowledge management that allows individuals to engage in new knowledge creation. Until it’s shared, knowledge is considered useless since it resides within the human brain. Public organisations specifically, are more involved in providing and developing knowledge and hence can be classified as knowledge-intensive organisations. Scholarly research conducted on KS has proposed a number of models to help understand the KS process between individuals but none of these models is specifically for a public organisation. Moreover, to really reap the benefits that KS brings to an organization, it’s imperative to apply a model that is attributable to the unique characteristics of that organisation. This study reviews literature from electronic databases that discuss models of KS between individuals. Factors that influence KS under each model were isolated and the extent of each of their influence on KS in a public organization context, were critically analysed. The result of this analysis gave rise to factors that were thought to be most critical in understanding KS process in a public sector setting. These factors were then used to develop a KS model by categorizing them into themes including organisational culture, motivation to share and opportunity to share. From these themes, a KS model was developed and proposed for KS in a medicines regulatory authority in East Africa. The project recommends that an empirical study be conducted to validate the applicability of the proposed KS model at a medicines regulatory authority in East Africa.
APA, Harvard, Vancouver, ISO, and other styles