Dissertations / Theses on the topic 'Gaze'

Consult the top 50 dissertations / theses for your research on the topic 'Gaze.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Ilic, Sasa <1987>. "The Duel of the Gazes: Male Gaze on Women vs Female Self-Gaze in Carver and Altman." Master's Degree Thesis, Università Ca' Foscari Venezia, 2013. http://hdl.handle.net/10579/3744.

Full text
Abstract:
Visual quality, in terms of audiovisual devices and style, has been pointed out as a trademark of Raymond Carver's minimal realism. On the other hand, the theme of visuality, and especially the visual depiction of the female body through the 'male objectifying gaze', is a feature of Robert Altman's Short Cuts, the adaptation of several of this writer's stories, that has been widely criticized. Yet the twofold issue of how female protagonists are visualized by their male counterparts and how they visualize themselves is a specific focus that has not yet been employed in the analysis of either of these American artists. An approach that both embraces and challenges features of established theories of the gaze, feminism and cinema theory, as well as theories of adaptation, is employed in order to analyse the theme of the male gaze and the female self-gaze first in Carver's and then in Altman's productions. Arguably, Carver's female characters emerge from such an analysis both as objectified by the male gaze and as subversive self-gazing subjects. Moreover, an intertextual perspective on the relationship between Carver's stories and Altman's adaptation and works highlights not dissimilar complexities in Altman's empowered women 'performers' on the one hand and women characters confined to eros and thanatos dichotomies on the other. More generally, the 'intricacies of gazing' and the power a/symmetries underlying them are a fascinating, ever-evolving issue worth engaging with, one that helps expand the discussion of each of these authors and of their relationship.
APA, Harvard, Vancouver, ISO, and other styles
2

Edwards, Stephen Gareth. "Social orienting in gaze-based interactions : consequences of joint gaze." Thesis, University of East Anglia, 2015. https://ueaeprints.uea.ac.uk/59591/.

Full text
Abstract:
Jointly attending to a shared referent with other people is a social attention behaviour that occurs often and has many developmental and ongoing social impacts. This thesis focused on examining the online, as well as later emerging, impacts of being the gaze leader of joint attention, which has until recently been under-researched. A novel social orienting response that occurs after viewing averted gaze is reported, showing that a gaze leader will rapidly orient their attention towards a face that follows their gaze: the gaze leading effect. In developing the paradigm necessary for this illustration a number of boundary conditions were also outlined, which suggest the social context of the interaction is paramount to the observability of the gaze leading effect. For example, it appears that the gaze leading effect works in direct opposition to other social orienting phenomena (e.g. gaze cueing), may be specific to eye-gaze stimuli, and is associated with self-reported autism-like traits. This orienting response is suggested as evidence that humans may have an attention mechanism that promotes the more elaborate social attention state of shared attention. This thesis also assessed the longer term impacts of prior joint gaze interactions, finding that gaze perception can be influenced by prior interactions with gaze leaders, but not with followers, and further there is evidence presented that suggests a gaze leader's attention will respond differently, later, to those who have or have not previously followed their gaze. Again, this latter finding is associated with autism-like traits. Thus, the current work opens up a number of interesting research avenues concerning how attention orienting during gaze leading may facilitate social learning and how this response may be disrupted in atypically developing populations.
APA, Harvard, Vancouver, ISO, and other styles
3

Bergeron, André 1967. "Multiple-step gaze shifts reveal gaze position error in brainstem." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82831.

Full text
Abstract:
The superior colliculus (SC) is a structure that is implicated in the control of visual orienting behaviors. The SC contains a motor map which encodes saccade vectors topographically: small saccades are encoded rostrally, larger ones caudally. More recently, it was shown that the rostral part of the SC contains a specialized "fixation zone". A group of cells located in the rostral SC have been called SC fixation neurons (SCFNs). SCFNs, via projections to "omni-pause" neurons (OPNs), seem to play an important role during fixation behavior by holding gaze on target for a certain period of time through inhibition of the gaze saccade generator. By comparison, the caudal SC generates saccadic commands via its direct connections to the gaze saccade generator. When the head is unrestrained, large gaze shifts are generally made with the contribution of eye and head (gaze = eye-in-head + head-in-space). For a gaze shift executed in one step, gaze, eye, and head trajectories are very stereotyped; each part of the trajectory is correlated in time to another. Consequently, it is difficult to relate brainstem cell activity to a specific trajectory. A differentiation between the trajectories can be obtained by the use of multiple-step gaze shifts, which cats often use naturally. Multiple-step gaze shifts are gaze displacements that are composed of a variable number of gaze saccades separated by periods of steady fixation. The main goal of this thesis was to relate cell activity of both SC cells and OPNs with specific features of multiple-step gaze shifts. The study of SC cells during multiple-step gaze shifts revealed that neither SCFNs nor cells on the SC motor map encode the complex motor sequence of steps and plateaus in multiple-step gaze shifts. They are concerned with keeping track of the difference between current gaze position and the final intended gaze destination independently of how the gaze displacement is achieved. This finding challenges
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Anying M. Eng Massachusetts Institute of Technology. "Learning driver gaze." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/119533.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 65-69).
Driving is a singularly complex task that humans manage to perform successfully day in and day out, guided only by what their eyes can see. Given how prevalent, complex, and not to mention dangerous driving is, it's surprising that we don't really understand how drivers actually use vision to drive. The release of a large scale driving dataset with eye tracking data, DrEyeVe [1], makes analyzing the role of vision feasible. In this thesis, we 1) study the impact of various external features on driver attention, and 2) present a two-path deep-learning model that exploits both static and dynamic information for modeling driver gaze. Our model shows promising results against state-of-the-art saliency models, especially on sequences when the driver is not just looking straight ahead on the road. This model enables us to estimate important regions that the driver should be aware of, and potentially allows an automatic driving assistant to alert drivers of hazards on the road they haven't seen yet.
by Anying Li.
M. Eng.
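As a rough illustration of the kind of two-path architecture this abstract describes, the sketch below combines a static-frame branch and a motion branch and decodes the fused features into a coarse driver-attention saliency map. The layer sizes, channel counts, and input shapes are hypothetical placeholders, not the thesis's actual model.

```python
# Illustrative sketch (not the thesis's actual architecture): a two-path model
# where one branch encodes a static RGB frame and the other encodes motion
# (e.g. a stack of frame differences or optical flow), and the fused features
# are decoded into a low-resolution driver-attention saliency map.
import torch
import torch.nn as nn

class TwoPathGazeModel(nn.Module):
    def __init__(self):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            )
        self.static_path = branch(3)    # current RGB frame
        self.dynamic_path = branch(2)   # assumed 2-channel motion input
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),             # 1-channel saliency logits
        )

    def forward(self, frame, motion):
        fused = torch.cat([self.static_path(frame), self.dynamic_path(motion)], dim=1)
        return self.decoder(fused)      # coarse attention map

model = TwoPathGazeModel()
frame = torch.randn(1, 3, 112, 112)
motion = torch.randn(1, 2, 112, 112)
saliency = model(frame, motion)         # shape: (1, 1, 28, 28)
```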
APA, Harvard, Vancouver, ISO, and other styles
5

Wood, Erroll William. "Gaze estimation with graphics." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/267905.

Full text
Abstract:
Gaze estimation systems determine where someone is looking. Gaze is used for a wide range of applications including market research, usability studies, and gaze-based interfaces. Traditional equipment uses special hardware. To bring gaze estimation mainstream, researchers are exploring approaches that use commodity hardware alone. My work addresses two outstanding problems in this field: 1) it is hard to collect good ground truth eye images for machine learning, and 2) gaze estimation systems do not generalize well -- once they are trained with images from one scenario, they do not work in another scenario. In this dissertation I address these problems in two different ways: learning-by-synthesis and analysis-by-synthesis. Learning-by-synthesis is the process of training a machine learning system with synthetic data, i.e. data that has been rendered with graphics rather than collected by hand. Analysis-by-synthesis is a computer vision strategy that couples a generative model of image formation (synthesis) with a perceptive model of scene comparison (analysis). The goal is to synthesize an image that best matches an observed image. In this dissertation I present three main contributions. First, I present a new method for training gaze estimation systems that use machine learning: learning-by-synthesis using 3D head scans and photorealistic rendering. Second, I present a new morphable model of the eye region. I show how this model can be used to generate large amounts of varied data for learning-by-synthesis. Third, I present a new method for gaze estimation: analysis-by-synthesis. I demonstrate how analysis-by-synthesis can generalize to different scenarios, estimating gaze in a device- and person-independent manner.
APA, Harvard, Vancouver, ISO, and other styles
6

Won, Cassandra L. "(Un)Focusing the Gaze." Scholarship @ Claremont, 2014. http://scholarship.claremont.edu/scripps_theses/343.

Full text
Abstract:
This is a piece that engages with Laura Mulvey's idea of the 'male gaze.' It is meant to exaggerate, magnify, and therefore critique the mechanisms that the camera uses to objectify and dominate women's bodies.
APA, Harvard, Vancouver, ISO, and other styles
7

Gafny, Tal. "Pools / Dreams / Parental Gaze." VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3482.

Full text
Abstract:
This thesis is a testimony, in its current form, of thoughts and ideas that have been circulating in my studio for the past few years. It is also an experiment in writing an autobiographical piece of prose. It was written parallel to, and after, making the film Double Take with Perrin Turner. The film is an exploration of a number of relationships, related to and sometimes haunted by one another. I wish for this text to operate not only as an after-the-fact recollection of thoughts, but also in relation to what will follow it – similarly to the way a trailer operates in relation to a movie. This is an extract and a prologue rather than a conclusion or resolution.
APA, Harvard, Vancouver, ISO, and other styles
8

Dubrovsky, Alexander Sasha. "Gaze, eye, and head movement dynamics during closed- and open-loop gaze pursuit." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=31222.

Full text
Abstract:
Horizontal step-ramp stimuli were used to examine gaze, eye, and head movement dynamics during head-unrestrained pursuit with and without imposed retinal velocity errors (RVE; i.e. open- and closed-loop, respectively) in two rhesus monkeys. In the closed-loop experiment, pursuit was elicited by step-ramp stimuli with a constant velocity of 20--80 deg/s. Each monkey used a combination of eye and head motion to initially fixate and then pursue the target. Additionally, we found that initial eye and head acceleration increased as a function of target velocity. In the open-loop experiment, step-ramp stimuli (40 deg/s) were presented and ~125 ms after pursuit onset, a constant RVE was imposed for a duration of 300 ms. In each monkey, when RVE = 0 deg/s, gaze, eye, and head velocity trajectories were maintained at their current or at a damped velocity. Moreover, the head as well as the eyes mediated the observed increase and decrease in gaze velocity when RVE was +10 and -10 deg/s, respectively. Based on our findings we conclude that the pursuit system uses visual and non-visual signals to drive coordinated eye-head pursuit.
APA, Harvard, Vancouver, ISO, and other styles
9

Ide, Ichiro, Kenji Yamashiro, Daisuke Deguchi, Tomokazu Takahashi, Hiroshi Murase, Kazunori Higuchi, and Takashi Naito. "Automatic calibration of an in-vehicle gaze tracking system using driver's typical gaze behavior." IEEE, 2009. http://hdl.handle.net/2237/13967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Beckmann, Jeffery Linn. "Single camera 3D gaze determination." [College Station, Tex.: Texas A&M University], 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Holm, Linus. "Gaze control in episodic memory." Licentiate thesis, Umeå University, Department of Psychology, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-14733.

Full text
Abstract:

The role of gaze control in episodic recognition was investigated in two studies. In Study 1, participants encoded human faces inverted or upright, with or without eye movements (Experiment 1) and under sorting or rating tasks (Experiment 2) respectively. At test, participants indicated their recollective experience with R(emember) responses (explicit recollection) or K(now) responses (familiarity based recognition). Experiment 1 showed that face inversion and occlusion of eye movements reduced levels of explicit recollection as measured by R responses. In Experiment 2, the relation between recollective experience and perceptual reinstatement was examined. Whereas the study instructions produced no differences in terms of eye movements, R responses were associated with a higher proportion of refixations than K responses. In Study 2, perceptual consistency was investigated in two experiments. In Experiment 1, participants studied scenes under different concurrent tasks. Subsequently, their recognition memory was examined in a R / K test. Executive load produced parallel effects on eye movements and R responses. Furthermore, R responses were associated with a higher proportion of refixations than K responses. However, number of fixations was correlated with refixations. Experiment 2 corroborated these results and controlled for number of fixations. Together, these studies suggest that visual episodic representations are supported by perceptual detail, and that explicit recollection is a function of encoding and retrieving those details. To this end, active gaze control is an important factor in visual recognition.

APA, Harvard, Vancouver, ISO, and other styles
12

Linn, Andreas. "Gaze Teleportation in Virtual Reality." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-216585.

Full text
Abstract:
This paper reports preliminary investigations into gaze teleportation, a locomotion interaction inside virtual reality in which the user can push a button and teleport to the point at which they are looking. The results could help future application creators design intuitive locomotion interfaces to allow users to more easily scale virtual worlds larger than their play space. In a study consisting of 12 participants, gaze teleportation was compared to the conventional hand-tracked controller. Participants played a portion of Valve’s The Lab with an HTC Vive and a Tobii Eyetracker; half of the participants completed the set tasks with gaze teleportation, and the other half used hand-tracking. Using Likert questions, they then rated their experiences in terms of enjoyment, frustration, effort, distance, occlusion, immersion, and motion sickness. After answering the questions, the participants got to try both methods and were interviewed on their preferences and opinions. Our results suggest that gaze teleportation is an enjoyable, fast, intuitive, and natural locomotion method that performs similarly to hand-tracked teleportation but is preferred by users when they are given a choice. We conclude that gaze teleportation is a good fit for applications in which users are expected to locomote in their direction of focus without too many distractions.
APA, Harvard, Vancouver, ISO, and other styles
13

Anderson, Nicola Christine Cole. "Motion cues enhance gaze processing." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42922.

Full text
Abstract:
In four experiments, we investigated the role of motion in gaze perception. In Experiment 1, we developed and evaluated a comprehensive stimulus set of small eye movements at three different gaze angles (1, 2 and 3 degrees of visual angle) and demonstrated that observers were able to detect and discriminate these small eye movements with a high degree of fidelity. In Experiments 2 and 3, we evaluated discrimination accuracy and confidence for dynamic and static gaze stimuli. We demonstrated that the high sensitivity to gaze in Experiment 1 was due predominantly to the presence of the motion signal in the video stimuli. Accuracy for dynamic gaze was significantly higher than accuracy for static gaze. In addition, the size of the gaze angle (i.e. signal strength) increased accuracy for static gaze despite the fact that confidence for these stimuli was consistently moderate. This latter result suggests that the dynamic gaze signal is qualitatively different from the static gaze signal. In Experiment 4, we tested this possibility by reversing the contrast polarity of half of the gaze stimuli. This manipulation has been shown to disrupt normal gaze processing. We reasoned that if the perception of static and dynamic gaze are fundamentally different, contrast reversal will differentially affect these two types of gaze stimuli. Indeed, contrast reversal impaired the perception of static, but not dynamic, gaze, confirming that the perception of dynamic and static gaze are qualitatively different.
APA, Harvard, Vancouver, ISO, and other styles
14

Lenz, Alexander. "Cerebellum inspired robotic gaze control." Thesis, University of the West of England, Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557412.

Full text
Abstract:
The primary aims of this research were to gain insight into control architectures of the mammalian brain and to explore how such architectures can be transferred into real-world robotic systems. Specifically, the work presented in this thesis focuses on the cerebellum, a part of the brain implicated in motor learning. Based on biologically grounded assumptions of uniformity of the cerebellar structure, one specific (but representative) example of cerebellar motor control was investigated: the mammalian vestibulo-ocular reflex (VOR). During movement, animals are faced with disturbances with respect to their vision system. The VOR compensates for head motion by driving the eyes in the opposite direction of the head and thereby stabilising the image on the retina. Due to severe delays in the visual feedback signal, the VOR is required to operate as an open-loop controller, which uses proprioceptive information about head motion to instigate eye movements. As a feed-forward control system, it requires calibration to gradually learn the required motor commands. This is achieved by the cerebellum through the utilisation of the delayed visual information encoding image slip. In order to explore the suitability of a recurrent cerebellar model to achieve similar performance in a robotic context, engineering equivalents of the biological sub-systems were developed and integrated as a distributed embedded computing infrastructure. These included systems for rotation sensing, vision, actuation, stimulation and monitoring. Real-time implementations of cerebellar models were developed and then tested on two custom designed robotic eyes: one actuated with electrical motors and the other operated by pneumatic artificial muscles. It is argued that the successful transfer of cerebellar models into robotic systems implicitly validates these models by providing an existence proof in terms of structure, robust learning under noisy real-world conditions, and the functional role of the cerebellum. In addition, the insights gained from this research may be exploitable in terms of control of novel actuators in the emerging field of soft robotics. Finally, the presented architectures, including hardware and software infrastructures, provide a platform with which to explore other advanced models of brain mediated sensory-motor control interfaces.
APA, Harvard, Vancouver, ISO, and other styles
15

Fjellström, Jonatan. "Gaze Interaction in Modern Trucks." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-114284.

Full text
Abstract:
In this master thesis project, carried out at Scania's interaction design department in Södertälje, the technology of gaze interaction was evaluated. The aim was to see whether the technology was suitable for implementation in a truck environment and what potential it had. The work started with a context analysis to gain a deeper knowledge of the research done within areas related to the subject. Following the context analysis, a comprehensive need-finding process was carried out. In this process, data from interviews, observations, ride-alongs with truck drivers, benchmarking and more was analysed. This analysis was used to identify the user needs. Based on the user needs, the concept development phase was conducted. The whole development phase was done in different stages and started with an idea generation process. The work flow was made in small iterations with the idea to continuously improve the concepts. All concepts were evaluated in a concept scoring chart to see which of the concepts best fulfilled the concept specifications. The concepts that could best highlight the technique's strengths and weaknesses were chosen: Head Up Display Interaction and Gaze Support System. These concepts focused on the interaction part of the technique rather than a specific function. Tests of the two concepts were conducted in a simulator to gather data and see how they performed compared to today's Scania trucks. The overall result was good and the test subjects were impressed with the systems. However, there was no significant difference in most of the driving conditions, except for some conditions where the concepts proved to be better than the systems used today. Gaze interaction is a technology that is suitable for a truck driving environment given that a few slight improvements are made. Implementation of the concepts has good potential to reduce road accidents caused by human error.
APA, Harvard, Vancouver, ISO, and other styles
16

Mollenbach, Emilie. "Selection strategies in gaze interaction." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/8101.

Full text
Abstract:
This thesis deals with selection strategies in gaze interaction, specifically for a context where gaze is the sole input modality for users with severe motor impairments. The goal has been to contribute to the subfield of assistive technology where gaze interaction is necessary for the user to achieve autonomous communication and environmental control. From a theoretical point of view, research has been done on the physiology of the gaze and on eye tracking technology, and a taxonomy of existing selection strategies has been developed. Empirically, two overall approaches have been taken. Firstly, end-user research has been conducted through interviews and observation. The capabilities, requirements, and wants of the end-user have been explored. Secondly, several applications have been developed to explore the selection strategy of single stroke gaze gestures (SSGG) and aspects of complex gaze gestures. The main finding is that single stroke gaze gestures can successfully be used as a selection strategy. Some of the features of SSGG are: that horizontal single stroke gaze gestures are faster than vertical single stroke gaze gestures; that there is a significant difference in completion time depending on gesture length; that single stroke gaze gestures can be completed without visual feedback; that gaze tracking equipment has a significant effect on the completion times and error rates of single stroke gaze gestures; and that there is not a significantly greater chance of making selection errors with single stroke gaze gestures compared with dwell selection. The overall conclusion is that the future of gaze interaction should focus on developing multi-modal interactions for mono-modal input.
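For intuition only, here is a toy sketch of how a single stroke gaze gesture might be detected from a stream of gaze samples and classified as horizontal or vertical. The sample format and thresholds are assumptions for illustration, not the software developed in the thesis.

```python
# Minimal sketch (assumed thresholds and data format): detect a "single stroke
# gaze gesture" as one rapid displacement of the gaze point that exceeds a
# minimum length within a short time window, and classify its direction.
import math

def detect_ssgg(samples, min_length_px=300, max_duration_s=0.25):
    """samples: list of (t_seconds, x_px, y_px) gaze points in temporal order."""
    for i, (t0, x0, y0) in enumerate(samples):
        for t1, x1, y1 in samples[i + 1:]:
            if t1 - t0 > max_duration_s:
                break
            dx, dy = x1 - x0, y1 - y0
            if math.hypot(dx, dy) >= min_length_px:
                direction = "horizontal" if abs(dx) >= abs(dy) else "vertical"
                return {"start": (x0, y0), "end": (x1, y1), "direction": direction}
    return None  # no qualifying stroke found

# Example: a fast left-to-right stroke of ~400 px in 80 ms
stream = [(0.00, 100, 300), (0.04, 300, 310), (0.08, 500, 305)]
print(detect_ssgg(stream))  # -> horizontal stroke from (100, 300) to (500, 305)
```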
APA, Harvard, Vancouver, ISO, and other styles
17

Ricciardelli, Paola. "Gaze perception and social attention." Thesis, University College London (University of London), 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Cai, Haibin. "Gaze estimation in unconstrained environments." Thesis, University of Portsmouth, 2018. https://researchportal.port.ac.uk/portal/en/theses/gaze-estimation-in-unconstrained-environments(5c391e0b-4026-4415-a1e1-8995b622d246).html.

Full text
Abstract:
Gaze estimation in unconstrained environments, where the subjects are free to move without wearing any device, faces a great challenge due to varied eye appearance, occlusion by the eyelids, large head movements, different viewing angles and illumination conditions. The main contribution of this thesis lies in the development of several algorithms for eye center localization and gaze estimation. Firstly, a novel convolution based integro-differential operator (CIDO) is proposed to detect the eye center quickly by designing different kinds of kernels to convolve with the eye images. The low computational cost and accurate localization performance enable CIDO to be easily integrated into real-time gaze related applications. Based on the theory of CIDO, a radial integro-differential method (RIDM) is proposed to further improve the eye center localization accuracy. Experimental results on three publicly available datasets have demonstrated that RIDM outperforms state-of-the-art methods. Secondly, a normalized iris center eye corner vector (NICEC) based gaze estimation method is proposed, which improves on traditional PCCR based methods by removing the requirement for additional IR light sources. To overcome the influence of various head movements, this thesis further proposes a simplified eye model based gaze estimation method which outperforms many state-of-the-art methods and achieves an average estimation error of 1.99° under free head movements. Thirdly, based on the proposed eye center localization and gaze estimation methods, a real-time multi-sensory fusion framework is proposed to estimate gaze in an unconstrained environment. The proposed system facilitates the efficiency and effectiveness of multi-sensory fusion and addresses significant challenges in acquiring, fusing, and interpreting multi-modal data. Experimental results have shown that not only does the system have the capability of dealing with large head movements, but it can also be applied to analyse the gaze behavior of children with autism spectrum disorder (ASD).
APA, Harvard, Vancouver, ISO, and other styles
19

Hipiny, Irwandi. "Egocentric activity recognition using gaze." Thesis, University of Bristol, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.682564.

Full text
Abstract:
When coupled with an egocentric camera, a gaze tracker provides the image point where the person is fixating. While performing a familiar task, we tend to fixate on activity-relevant objects at the points in time required by the task at hand. The resulting sequence of gaze regions is therefore very useful for inferring the subject's activity and action class. This thesis addresses the problem of visual recognition of human activity and action from an egocentric point of view. The higher level task of activity recognition is based on processing the entire sequence of gaze regions as users perform tasks such as cooking or assembling objects, while the mid-level task of action recognition, such as pouring into a cup, is addressed via the automatic segmentation of mutually exclusive sequences prior to recognition. Temporal segmentation is performed by tracking two motion based features inside the successive gaze regions. These features model the underlying structure of image motion data at natural temporal cuts. This segmentation is further improved by the incorporation of a 2D color histogram based detection of human hands inside gaze regions. The proposed method learns activity and action models from the sequence of gaze regions. Activities are learned as a bag of visual words; however, we introduce a multi-voting scheme to reduce the effect of noisy matching. Actions are, in addition, modeled as a string of visual words, which enforces the structural constraint of an action. We introduce contextual information in the form of location based priors. Furthermore, this thesis addresses the problem of measuring task performance from gaze region modeling. The hypothesis is that subjects with greater task performance scores demonstrate specific gaze patterns as they conduct the task, which is expected to indicate the presence of domain knowledge. This may be reflected, for example, in requiring minimal visual feedback during the completion of a task. This consistent and strategic use of gaze produces nearly identical activity models among those that score higher, whilst a greater variation is observed between models learned from subjects that have performed less well in the given task. Results are shown on datasets captured using an egocentric gaze tracker with two cameras: a frontal facing camera that captures the scene, and an inward facing camera that tracks the movement of the pupil to estimate the subject's gaze fixation. Our activity and action recognition results are comparable to current literature in egocentric activity recognition, and to the best of our knowledge, the results from the task performance evaluation are the first steps towards automatically modeling user performance from gaze patterns.
APA, Harvard, Vancouver, ISO, and other styles
20

Farid, Mohsen Mohamed. "Eye-gaze : modelling and applications." Thesis, Queen's University Belfast, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.673850.

Full text
Abstract:
This thesis focuses on understanding the nature of fixational eye gaze in humans. It models eye gaze with a variety of methods: descriptive statistics, wavelet analysis, and self-similarity calculations which show that fixational eye gaze indeed possesses stochastic long-range dependence, for which the Hurst parameter is then estimated. The thesis also addresses the practical side of eye gaze. It develops a number of novel two- and three-dimensional applications to show the usefulness of eye gaze as a pointing device, in particular as applied to patients with locked-in syndrome. Among the applications is the Daisy application, a full-featured word-processing program that shows the usefulness and ease of use of eye tracking as a pointing device. Furthermore, the thesis shows how eye tracking could be used to assist surgeons during surgery by providing 3D navigation of a human organ.
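The Hurst parameter mentioned in this abstract can be estimated in several ways; the sketch below uses standard rescaled-range (R/S) analysis on a one-dimensional gaze signal. This is a common estimator offered for orientation only, not necessarily the method used in the thesis, and the window sizes are arbitrary choices.

```python
# Illustrative rescaled-range (R/S) estimate of the Hurst exponent for a 1-D
# gaze signal (e.g. horizontal fixation position over time).
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())          # cumulative deviation from the window mean
            r = dev.max() - dev.min()              # range of the cumulative deviations
            s = w.std()
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    # Hurst exponent = slope of log(R/S) against log(window size)
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(2048)))  # roughly 0.5 for uncorrelated noise
```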
APA, Harvard, Vancouver, ISO, and other styles
21

Kumar, Manu. "Gaze-enhanced user interface design." May be available electronically, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Hall, Courtney D. "Efficacy of Gaze Stability Exercises." Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etsu-works/582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Huterer, Marko. "Characterization of vestibulo-ocular reflex dynamics : responses to head perturbations during gaze stabilization versus gaze redirection." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33004.

Full text
Abstract:
In this study, we compare VOR response dynamics induced by high-frequency (>5 Hz) passive head-on-body perturbations, applied during gaze stabilization versus gaze redirection. During gaze stabilization, we first rotated the heads of three Rhesus monkeys over the frequency range 5--25 Hz. The VOR was compensatory across all frequencies tested; mean gains (eye velocity/head velocity) were near unity, with phase lag increasing only slightly with frequency. Transient perturbations ≥80 Hz were then used to further probe VOR dynamics. During periods of gaze stabilization, VOR response latency to the transient perturbations was 5--6 ms, and mean gain induced was near unity for two of the three animals tested. The same perturbations were then applied at different intervals before, during and following 15°, 40° and 60° gaze shifts. The VOR elicited was generally attenuated compared to that evoked during gaze stabilization, and the level of suppression tended to decrease with time from gaze shift onset. We conclude that models for the control of gaze should be modified to account for the time course of VOR gain changes during gaze shifts.
APA, Harvard, Vancouver, ISO, and other styles
24

Lööf, Jenny. "An Inquisitive Gaze: Exploring the Male Gaze and the Portrayal of Gender in Dragon Age: Inquisition." Thesis, Stockholms universitet, Engelska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-117976.

Full text
Abstract:
This paper provides an account of how a normative male gaze is produced and upheld even in a video game famed for its inclusive nature, Dragon Age: Inquisition. The analysis originates in content studies concerning the portrayal of gender in video games in relation to in-game physical gender portrayal. It is followed by a contextualization of specific video sequences and certain game mechanics in relation to Laura Mulvey's feminist film theory about the Male Gaze. Mulvey's film theory approach, while useful as an intellectual tool, was not developed to be applied to video games, and thus it is also necessary to consider any implications related to the interactivity of the game. As characters are subjected to a gendered male gaze in relation to both their physical appearance and their attributes, they are made to uphold the normative status quo. The Gaze is evident in how characters are portrayed, in how the main character becomes a default male character regardless of actual gender, and in the construction of women as something other. Most importantly, it is evident in the actual game mechanics, through which all characters become objects for the player to use either in combat or to own in the guise of offering romance to the narrative.
APA, Harvard, Vancouver, ISO, and other styles
25

Tong, Irene Go. "Eye gaze tracking in surgical robotics." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62845.

Full text
Abstract:
Robot-assisted surgery allows surgeons to have improved control and visualization in minimally invasive procedures. Eye gaze tracking is a valuable tool for studying and improving the surgeon experience during robot-assisted surgery. Eye gaze information gives insight on how surgeons are interacting with surgical systems as well as their intentions during surgical tasks. This thesis describes the development of an eye gaze tracker for the da Vinci Surgical System. The eye gaze tracker is designed to track both the 2D and 3D eye gaze of a surgeon. It interfaces with the da Vinci Surgical System through the da Vinci Research Kit (dVRK) and Robot Operating System (ROS) frameworks. The use of the eye gaze tracker is demonstrated in two applications. Firstly, a motor control framework is designed to aid surgeons in moving surgical tools towards their point of gaze. A haptic force is applied to the da Vinci master manipulators to pull the surgeon's hands towards where they are looking. This framework is demonstrated on a full da Vinci Surgical System on dry lab tasks. Secondly, eye gaze information is collected from 7 surgeons performing realistic clinical tasks with the da Vinci Surgical System. A prediction model using a random forest classifier is built based on the eye gaze information and tool kinematic information in order to predict how and when surgeons move their camera. This behavioural model has applications in both surgeon training and endoscope automation.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
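As a hedged illustration of the kind of prediction model described in this abstract, the sketch below trains a random forest on per-window features to predict whether the surgeon will move the camera next. The feature names and data are placeholder assumptions, not the thesis's dataset or exact feature set.

```python
# Hedged sketch: a random forest over assumed per-window gaze and tool-kinematics
# features (gaze dispersion, gaze-to-tool distance, tool speed, time since the
# last camera move) predicting an upcoming camera move. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_windows = 500
X = rng.random((n_windows, 4))                                   # placeholder feature matrix
y = (X[:, 1] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(n_windows) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("feature importances:", clf.feature_importances_)
```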
APA, Harvard, Vancouver, ISO, and other styles
26

Ferguson, Sarah Alexandra. "Fracturing the gaze in Approaching Zanzibar." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ38560.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Okamoto-Barth, Sanae. "Gaze processing in chimpanzees and humans." [Maastricht] : Maastricht : UPM, Universitaire Pers Maastricht ; University Library, Maastricht University [Host], 2005. http://arno.unimaas.nl/show.cgi?fid=6377.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Reeves, Allison Hillary. "Disrupting the gaze : a film cooperative." Thesis, Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/23098.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Stellmach, Sophie [Verfasser]. "Gaze-supported Multimodal Interaction / Sophie Stellmach." München : Verlag Dr. Hut, 2013. http://d-nb.info/1045125547/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Desanghere, Loni. "Gaze strategies in perception and action." Experimental Brain Research, 2011. http://hdl.handle.net/1993/17898.

Full text
Abstract:
When you want to pick up an object, it is usually a simple matter to reach out to its location and accurately pick it up. Almost every action in such a sequence is guided and checked by vision, with eye movements usually preceding motor actions (Hayhoe & Ballard, 2005; Hayhoe, Shrivastava, Mruczek, & Pelz, 2003). However, most research in this area has been concerned with the sequence of movements in complex "everyday" tasks like making tea or tool use. Less emphasis has been placed on the object itself, on where on it the eye and hand movements land, and on how gaze behaviour differs when generating a perceptual response to that same object. In the studies that have addressed this, very basic geometric shapes have been used, such as rectangles, crosses and triangles. In everyday life, however, there is a range of problems that must be computed that go beyond such simple objects. Objects typically have complex contours, different textures or surface properties, and variations in their centre of mass. Accordingly, the primary goals in conducting this research were threefold: (1) To provide a deeper understanding of the function of gaze in perception and action when interacting with simple and complex objects (Experiments 1a, 1b, 1c); (2) To examine how gaze and grasp behaviours are influenced when important features of an object, such as the COM and the horizontal centre of the block, are dissociated (Experiments 2a, 2c); and (3) To explore whether perceptual biases influence grasp and gaze behaviours (Experiment 2b). The results from the current series of studies showed the influence of action (i.e., the potential to act) on perception in terms of where we look on an object, and vice versa, the influence of perceptual biases on action output (i.e. grasp locations). In addition, grasp locations were found to be less sensitive to COM changes than previously suggested (for example see Kleinholdermann, Brenner, Franz, & Smeets, 2007), whereas fixation locations were drawn towards the 'visual' COM of objects, as shown in other perceptual studies (for example see He & Kowler, 1991; Kowler & Blaser, 1995; McGowan, Kowler, Sharma, & Chubb, 1998; Melcher & Kowler, 1999; Vishwanath & Kowler, 2003, 2004; Vishwanath, Kowler, & Feldman, 2000), even when a motor response was required. The implications of these results in terms of vision for Perception and vision for Action are discussed.
APA, Harvard, Vancouver, ISO, and other styles
31

Cohanim, Samira. "A Glance at the Male Gaze." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/scripps_theses/513.

Full text
Abstract:
The purpose of this paper is to understand and criticize the representation of women in advertisements. I examine the opposing yet similar ways that women are portrayed in Dove and Axe advertisements, two brands of Unilever. This paper analyzes the way in which brands market their products in such a way to appeal to a gendered audience. I also explore the history of how women have been depicted in art movements such as Surrealism, detournement and culture jamming, corresponding with my project of digital mixed media advertisements. I examine the way in which the prevalence of the male gaze in the media hinders progression to a less dependent, inferior, and sexualized view of women in advertising.
APA, Harvard, Vancouver, ISO, and other styles
32

Wilmut, Kate. "Gaze, attention and coordination in children." Thesis, University of Reading, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Kaymak, Sertan. "Real-time appearance-based gaze tracking." Thesis, Queen Mary, University of London, 2015. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8949.

Full text
Abstract:
Gaze tracking technology is widely used in Human Computer Interaction applications such as interfaces for assisting people with disabilities and driver attention monitoring. However, commercially available gaze trackers are expensive and their performance deteriorates if the user is not positioned in front of the camera and facing it. Also, head motion or being far from the device degrades their accuracy. This thesis focuses on the development of real-time appearance-based gaze tracking algorithms using low cost devices, such as a webcam or Kinect. The proposed algorithms are developed by considering accuracy, robustness to head pose variation and the ability to generalise to different persons. In order to deal with head pose variation, we propose to estimate the head pose and then compensate for the appearance change and the bias it introduces to a gaze estimator. Head pose is estimated by a novel method that utilizes tensor-based regressors at the leaf nodes of a random forest. For a baseline gaze estimator we use an SVM-based appearance-based regressor. For compensating the appearance variation introduced by the head pose, we use a geometric model, and for compensating for the bias we use a regression function that has been trained on a training set. Our methods are evaluated on publicly available datasets.
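In the spirit of the SVM-based appearance-based baseline mentioned in this abstract, the following sketch shows the basic training and prediction flow of such a regressor. The patch size and the random training data are placeholder assumptions; a real system would use normalized eye images with ground-truth gaze labels.

```python
# Minimal sketch of an appearance-based gaze regressor: one support vector
# regressor per gaze angle (yaw, pitch), trained on flattened grey-scale eye
# patches. Data below is random placeholder, only to show the pipeline.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
n_samples, patch_h, patch_w = 200, 36, 60
eye_patches = rng.random((n_samples, patch_h * patch_w))   # flattened eye images in [0, 1]
gaze_angles = rng.uniform(-20, 20, size=(n_samples, 2))    # (yaw, pitch) in degrees

model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0, epsilon=0.5))
model.fit(eye_patches, gaze_angles)

new_patch = rng.random((1, patch_h * patch_w))
print("predicted (yaw, pitch):", model.predict(new_patch))
```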
APA, Harvard, Vancouver, ISO, and other styles
34

Nunez-Varela, Jose Ignacio. "Gaze control for visually guided manipulation." Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/4444/.

Full text
Abstract:
Human studies have shown that gaze shifts are mostly driven by the task. One explanation is that fixations gather information about task relevant properties, where task relevance is signalled by reward. This thesis pursues primarily an engineering science goal to determine what mechanisms a rational decision maker could employ to select a gaze location optimally, or near optimally, given limited information and limited computation time. To do so we formulate and characterise three computational models of gaze shifting (implemented on a simulated humanoid robot), which use lookahead to imagine the informational effects of possible gaze fixations. Our first model selects the gaze that most reduces uncertainty in the scene (Unc), the second maximises expected rewards by reducing uncertainty (Rew+Unc), and the third maximises the expected gain in cumulative reward by reducing uncertainty (Rew+Unc+Gain). We also present an integrated account of a visual search process into the Rew+Unc+Gain gaze scheme. Our secondary goal is concerned with the way in which humans might select the next gaze location. We compare the hand-eye coordination timings of our models to previously published human data, and we provide evidence that only the models that incorporate both uncertainty and reward (Rew+Unc and Rew+Unc+Gain) match human data.
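To make the uncertainty-reduction (Unc) idea concrete, here is a toy sketch under simplifying assumptions: a grid of independent Bernoulli beliefs about object locations and a fixation that fully resolves the cells within its field of view. The grid, belief model and field-of-view parameter are assumptions for illustration, not the thesis's implementation.

```python
# Toy sketch of uncertainty-driven gaze selection: choose the fixation cell
# whose neighbourhood carries the most entropy, i.e. the look that would
# remove the most uncertainty if it resolved every cell it covers.
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def best_fixation(beliefs, fov=1):
    """beliefs: 2-D array of P(object present) per cell; fov: fixation radius in cells."""
    h, w = beliefs.shape
    best_cell, best_gain = None, -1.0
    for r in range(h):
        for c in range(w):
            r0, r1 = max(0, r - fov), min(h, r + fov + 1)
            c0, c1 = max(0, c - fov), min(w, c + fov + 1)
            gain = entropy(beliefs[r0:r1, c0:c1]).sum()  # entropy removed by looking here
            if gain > best_gain:
                best_cell, best_gain = (r, c), gain
    return best_cell, best_gain

rng = np.random.default_rng(1)
beliefs = rng.random((6, 8))          # probabilities near 0.5 are the most uncertain
print(best_fixation(beliefs, fov=1))  # cell whose neighbourhood is most uncertain
```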
APA, Harvard, Vancouver, ISO, and other styles
35

Al-Sader, Mohamed. "Gaze-driven interaction in video games." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-156718.

Full text
Abstract:
The introduction of input devices with natural user interfaces in gaming hardware has changed the way we interact with games. Hardware with motion-sensing and gesture-recognizing capabilities removes the constraint of interacting with games through typical traditional devices like mouse-keyboard and gamepads. This changes the way we approach games and how the game communicates back to us as players, opening new levels of interactivity. This thesis covers how eye tracker technology can be used to affect rendering effects in games.
APA, Harvard, Vancouver, ISO, and other styles
36

Williams, Sara. "The maternal gaze in the Gothic." Thesis, University of Hull, 2011. http://hydra.hull.ac.uk/resources/hull:6756.

Full text
Abstract:
This transdisciplinary thesis excavates the critically-neglected Gothic convention of the maternal tyrant through the theoretical framework of the maternal gaze, recently conceptualised by Alina Luna in Visual Perversity: A Re-articulation of the Maternal Instinct (2004). As a counter-response to the critical heritage of feminist and film scholarship which privileges the presence of an objectifying and fetishising male gaze, Luna argues that the maternal gaze issued from the womb is the most powerful and fatal because it is concerned with nothing apart from devouring the child and reinstalling it in the mother’s body, and punishing the paternal order which had taken it away. Examining how the Gothic articulates intra-uterine symbols and structures, I consider a spectrum of written and visual texts to argue that an omnipotent maternal gaze is pathologically narrativised by the genre. The thesis is structured in three parts of two chapters each which plot the evolution of the maternal gaze in the Gothic. In Part One, ‘The Gothic Heritage’, I discuss maternal symbolism and structures in folkloric and Victorian Gothic texts to show how the infanticidal maternal gaze has existed in the genre since its inception, while Part Two, ‘Gothic Practices’, reveals how the maternal gaze in the late-nineteenth and early-twentieth centuries used the intersecting technological and religious practices of photography, Spiritualism and Marian iconography to Gothicise the domestic space of the maternal practitioner. Part Three comes home to ‘The Gothic Domestic’, which examines how narratives of child abuse, incest and trauma are perpetuated in the domestic space for the maternal gaze through modes of serialisation, and I conclude by showing how the internet has become the modern Gothic web in which the maternal gaze weaves hypertextual narratives through which mothers meditate on and reproduce the image of the abused and traumatised child. This thesis provides new directions for genre criticism and gaze theory, and drawing on feminist, film and psychoanalytic scholarship I use the maternal gaze to write a place for the maternal tyrant in the Gothic, one which she has previously been denied by the critical and cultural blindness to the capabilities of maternal desire.
APA, Harvard, Vancouver, ISO, and other styles
37

Kurauchi, Andrew Toshiaki Nakayama. "EyeSwipe: text entry using gaze paths." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-03072018-151733/.

Full text
Abstract:
People with severe motor disabilities may communicate using their eye movements aided by a virtual keyboard and an eye tracker. Text entry by gaze may also benefit users immersed in virtual or augmented realities, when they do not have access to a physical keyboard or touchscreen. Thus, both users with and without disabilities may take advantage of the ability to enter text by gaze. However, methods for text entry by gaze are typically slow and uncomfortable. In this thesis we propose EyeSwipe as a step further towards fast and comfortable text entry by gaze. EyeSwipe maps gaze paths into words, similarly to how finger traces are used in swipe-based methods for touchscreen devices. A gaze path differs from a finger trace in that it does not have clear start and end positions. To segment the gaze path from the user's continuous gaze data stream, EyeSwipe requires the user to explicitly indicate its beginning and end. The user can quickly glance at the vicinity of the other characters that compose the word. Candidate words are sorted based on the gaze path and presented to the user. We discuss two versions of EyeSwipe. EyeSwipe 1 uses a deterministic gaze gesture called Reverse Crossing to select both the first and last letters of the word. Considering the lessons learned during the development and test of EyeSwipe 1, we proposed EyeSwipe 2. The user emits commands to the interface by switching the focus between regions. In a text entry experiment comparing EyeSwipe 2 to EyeSwipe 1, 11 participants achieved an average text entry rate of 12.58 words per minute (wpm) with EyeSwipe 1 and 14.59 wpm with EyeSwipe 2 after using each method for 75 minutes. The maximum entry rates achieved with EyeSwipe 1 and EyeSwipe 2 were, respectively, 21.27 wpm and 32.96 wpm. Participants considered EyeSwipe 2 to be more comfortable and faster, while less accurate than EyeSwipe 1. Additionally, with EyeSwipe 2 we proposed the use of gaze path data to dynamically adjust the gaze estimation. Using data from the experiment we show that gaze paths can be used to dynamically improve gaze estimation during the interaction.
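As a simplified illustration of mapping a gaze path to candidate words, the sketch below resamples the gaze path and each word's ideal path through its key centres and ranks candidates by mean point-to-point distance. The toy keyboard layout and the scoring rule are assumptions for illustration, not EyeSwipe's actual algorithm.

```python
# Simplified path-to-word matching: resample both the gaze path and each
# candidate word's ideal path (through its key centres) to the same number of
# points, then rank candidates by mean point-to-point distance (smaller = better).
import numpy as np

KEY_CENTERS = {ch: (float(i % 10), float(i // 10))           # toy 10-column key grid
               for i, ch in enumerate("qwertyuiopasdfghjklzxcvbnm")}

def resample(points, n=32):
    points = np.asarray(points, dtype=float)
    t = np.linspace(0, 1, len(points))
    ti = np.linspace(0, 1, n)
    return np.column_stack([np.interp(ti, t, points[:, 0]),
                            np.interp(ti, t, points[:, 1])])

def word_path(word):
    return [KEY_CENTERS[ch] for ch in word]

def rank_candidates(gaze_path, candidates):
    g = resample(gaze_path)
    scores = {w: float(np.linalg.norm(g - resample(word_path(w)), axis=1).mean())
              for w in candidates}
    return sorted(scores, key=scores.get)   # best match first

# A gaze path roughly passing over the g, a, z, e key centres of the toy grid
path = [(4.1, 1.0), (0.2, 1.1), (8.9, 1.0), (2.1, 0.1)]
print(rank_candidates(path, ["gaze", "gale", "case"]))  # 'gaze' should rank first
```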
APA, Harvard, Vancouver, ISO, and other styles
38

MacDonald, R. G. "Gaze cues and language in communication." Thesis, University of Dundee, 2014. https://discovery.dundee.ac.uk/en/studentTheses/476122c4-9264-44aa-8f08-c70f6dbb14d8.

Full text
Abstract:
During collaboration, people communicate using verbal and non-verbal cues, including gaze cues. Spoken language is usually the primary medium of communication in these interactions, yet despite this co-occurrence of speech and gaze cueing, most experiments have used paradigms without language. Furthermore, previous research has shown that myriad social factors influence behaviour during interactions, yet most studies investigating responses to gaze have been conducted in a lab, far removed from any natural interaction. It was the aim of this thesis to investigate the relationship between language and gaze cue utilisation in natural collaborations. For this reason, the initial study was largely observational, allowing for spontaneous natural language and gaze. Participants were found to rarely look at their partners, but to do so strategically, with listeners looking more at speakers when the latter were of higher social status. Eye movement behaviour also varied with the type of language used in instructions, so in a second study, a more controlled (but still real-world) paradigm was used to investigate the effect of language type on gaze utilisation. Participants used gaze cues flexibly, by seeking and following gaze more when the cues were accompanied by distinct featural verbal information compared to overlapping spatial verbal information. The remaining three studies built on these findings to investigate the relationship between language and gaze using a much more controlled paradigm. Gaze and language cues were reduced to equivalent artificial stimuli and the reliability of each cue was manipulated. Even in this artificial paradigm, language was preferred when cues were equally reliable, supporting the idea that gaze cues are supportive to language. Typical gaze cueing effects were still found, however the size of these effects was modulated by gaze cue reliability. Combined, the studies in this thesis show that although gaze cues may automatically and quickly affect attention, their use in natural communication is mediated by the form and content of concurrent spoken language.
APA, Harvard, Vancouver, ISO, and other styles
39

Pfeuffer, Ken. "Extending touch with eye gaze input." Thesis, Lancaster University, 2017. http://eprints.lancs.ac.uk/89076/.

Full text
Abstract:
Direct touch manipulation with displays has become one of the primary means by which people interact with computers. Exploration of new interaction methods that work in unity with the standard direct manipulation paradigm will be of benefit for the many users of such an input paradigm. In many instances of direct interaction, both the eyes and hands play an integral role in accomplishing the user's interaction goals. The eyes visually select objects, and the hands physically manipulate them. In principle this process includes a two-step selection of the same object: users first look at the target, and then move their hand to it for the actual selection. This thesis explores human-computer interactions where the principle of direct touch input is fundamentally changed through the use of eye-tracking technology. The change we investigate is a general reduction to a one-step selection process. The need to select using the hands can be eliminated by utilising eye-tracking to enable users to select an object of interest using their eyes only, by simply looking at it. Users then employ their hands for manipulation of the selected object; however, they can manipulate it from anywhere, as the selection is rendered independent of the hands. When a spatial offset exists between the hands and the object, the user's manual input is indirect. This allows users to manipulate any object they see from any manual input position. This fundamental change can have a substantial effect on the many human-computer interactions that involve user input through direct manipulation, such as temporary touchscreen interactions. However, it is unclear if, when, and how it can become beneficial to users of such an interaction method. To approach these questions, our research on this topic is guided by the following two propositions. The first proposition is that gaze input can transform a direct input modality such as touch into an indirect modality, and with it provide new and powerful interaction capabilities. We develop this proposition in the context of our investigation of integrated gaze interactions within direct manipulation user interfaces. We first regard eye gaze for generic multi-touch displays, introducing Gaze-Touch as a technique based on the division of labour: gaze selects and touch manipulates. We investigate this technique with a design space analysis, prototyping of application examples, and an informal user evaluation. The proposition is further developed by an exploration of hybrid eye and hand inputs with a stylus, for precise and cursor-based indirect control; with bimanual input, to rapidly issue input from two hands to gaze-selected objects; with tablets, where Gaze-Touch enables one-handed interaction across the whole screen with the same hand that holds the device; and with free-hand gestures in virtual reality, to interact with any viewed object located at a distance in the virtual scene. Overall, we demonstrate that using eye gaze to enable indirect input yields many interaction benefits, such as whole-screen reachability, occlusion-free manipulation, high-precision cursor input, and low physical effort. Integration of eye gaze with manual input raises new questions about how it can complement, instead of replace, the direct interactions users are familiar with. This is important to allow users the choice between direct and indirect inputs, as each affords distinct pros and cons for the usability of human-computer interfaces.
These two input forms are normally considered separately from each other, but here we investigate interactions that combine them within the same interface. In this context, the second proposition is that gaze and touch input enable new and seamless ways of combining direct and indirect forms of interaction. We develop this proposition by considering multiple interaction tasks that a user usually performs in sequence or simultaneously. First, we introduce a method that enables users to switch between both input forms by implicitly exploiting visual attention during manual input. Direct input is active when looking at the input, and otherwise users will manipulate the object they look at indirectly. A design application for typical drawing and vector-graphics tasks has been prototyped to illustrate and explore this principle. The application contributes many example use cases, where direct drawing activities are complemented with indirect menu actions, precise cursor inputs, and seamless context switching at a glance. We further develop the proposition by investigating simultaneous direct and indirect input through bimanual input, where each input is assigned to one hand. We present an empirical study with an in-depth analysis of indirect navigation with one hand and direct pen drawing with the other. We extend this input constellation to tablet devices, by designing compound techniques for use in a more naturalistic setting when one hand holds the device. The interactions show that many typical tablet scenarios, such as browsing, map navigation, homescreen selections, or image gallery viewing, can be enhanced by exploiting eye gaze.
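The division of labour at the heart of Gaze-Touch ("gaze selects, touch manipulates") can be illustrated with a small sketch. The widget model, hit-testing, and drag handling below are hypothetical scaffolding for the example, not code or an interface design from the thesis.

    from dataclasses import dataclass

    @dataclass
    class Widget:
        name: str
        x: float
        y: float
        w: float
        h: float

        def contains(self, px, py):
            return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

        def move_by(self, dx, dy):
            self.x += dx
            self.y += dy

    def pick(widgets, px, py):
        """Topmost widget under a point, or None."""
        for widget in reversed(widgets):
            if widget.contains(px, py):
                return widget
        return None

    def on_touch_drag(widgets, gaze_xy, touch_start, touch_end):
        """Gaze selects the target; the touch drag supplies the manipulation."""
        target = pick(widgets, *gaze_xy)      # selection comes from the eyes
        if target is None:
            return None
        dx = touch_end[0] - touch_start[0]    # manipulation comes from the hand,
        dy = touch_end[1] - touch_start[1]    # issued from wherever it rests
        target.move_by(dx, dy)
        return target

    if __name__ == "__main__":
        scene = [Widget("photo", 0, 0, 100, 80), Widget("note", 300, 200, 60, 40)]
        # The user looks at the note while dragging on an empty part of the screen.
        moved = on_touch_drag(scene, gaze_xy=(320, 210),
                              touch_start=(50, 400), touch_end=(90, 380))
        print(moved)  # the note, translated by the drag vector

Because the manipulated object is chosen by gaze rather than by the touch-down position, the drag can be issued from any comfortable spot on the screen, which is the indirection the abstract describes.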
APA, Harvard, Vancouver, ISO, and other styles
40

Alanenpää, Madelene. "Gaze detection in human-robot interaction." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.

Full text
Abstract:
The aim of this thesis is to track gaze direction in a human-robot interaction scenario. The human-robot interaction consisted of a participant playing a geographic game with three important objects on which participants could focus: a tablet, a shared touchscreen, and a robot (called Furhat). During the game, the participant was equipped with eye-tracking glasses. These collected a first-person view video as well as annotations consisting of the participant's center of gaze. In this thesis, I aim to use this data to detect the three important objects described above from the first-person video stream and discriminate whether the gaze of the person fell on one of the objects of importance and for how long. To achieve this, I trained an accurate and fast state-of-the-art object detector called YOLOv4. To ascertain that this was the correct object detector for this thesis, I compared YOLOv4 with its previous version, YOLOv3, in terms of accuracy and run time. YOLOv4 was trained with a data set of 337 images consisting of various pictures of tablets, television screens and the Furhat robot. The trained program was used to extract the relevant objects for each frame of the eye-tracking video, and a parser was used to discriminate whether the gaze of the participant fell on the relevant objects and for how long. The result is a system that could determine, with an accuracy of 90.03%, what object the participant is looking at and for how long the participant is looking at that object.
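To make the gaze-to-object step concrete, here is a minimal sketch of how per-frame detections and an eye tracker's gaze point might be combined into per-object looking times. The data layout, frame rate, and labels are illustrative assumptions; the thesis's actual parser and YOLOv4 pipeline are not reproduced here.

    from collections import defaultdict

    FPS = 25.0  # assumed frame rate of the first-person video

    def gazed_object(detections, gaze_xy):
        """Label of the detected box containing the gaze point, or None."""
        gx, gy = gaze_xy
        for label, (x1, y1, x2, y2) in detections:
            if x1 <= gx <= x2 and y1 <= gy <= y2:
                return label
        return None

    def accumulate_gaze_time(frames):
        """frames: iterable of (detections, gaze_xy) pairs, one per video frame."""
        seconds = defaultdict(float)
        for detections, gaze_xy in frames:
            label = gazed_object(detections, gaze_xy)
            if label is not None:
                seconds[label] += 1.0 / FPS
        return dict(seconds)

    if __name__ == "__main__":
        frame = ([("tablet", (100, 300, 400, 600)),
                  ("furhat", (500, 100, 700, 350))], (550, 200))
        print(accumulate_gaze_time([frame] * 50))  # roughly {'furhat': 2.0}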
APA, Harvard, Vancouver, ISO, and other styles
41

Safavi, Safoura. "The Inner Gaze In Artistic Practice." Thesis, Stockholms konstnärliga högskola, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uniarts:diva-934.

Full text
Abstract:
”A finger pointing at the moon is not the moon...” (Buddhist quote) ”...but it can point you in the right direction.” (Charles Tart, American psychologist) In this Master’s Thesis I will be presenting the idea of an Inner Gaze as an inherent witnessing system used in artistic practice. I will be mirroring my own practice as a Musician/Artist/Sound-Designer in the teachings of Hypnosis and The Science of Consciousness. Further, I will share and analyze data gathered from interviews with artists from different artistic fields, in order to gain a better understanding of how they experience their creative and performing minds. Is there any coherence in how we experience creativity? How common are the sensations of altered states of consciousness among artists? Can other artists relate to the idea of an inner gaze? Is this something we long to explore and develop further, and would such a concept be beneficial for the artist and their work?
APA, Harvard, Vancouver, ISO, and other styles
42

Maclean, Coinneach. "The 'Tourist Gaze' on Gaelic Scotland." Thesis, University of Glasgow, 2014. http://theses.gla.ac.uk/5178/.

Full text
Abstract:
The Scottish Gael is objectified in an un-modified ‘Tourist Gaze’, a condition that is best understood from a post-colonial perspective. John Urry showed that cultures are objectified by the gaze of a global tourist industry. The unequal power relations in that gaze can be mediated through resistance and the production of staged touristic events. The process leads to commoditisation and in-authenticity, and this is the current discourse on Scottish tourism icons. An ethnographic study of tour guiding shows a pattern of (re)-presentation of a silenced and near invisible Gaeldom. By building upon Foucauldian theories of power, Said’s critique of Orientalism’s discourse and Spivak on agency, this unmodified gaze can be explained from a postcolonial perspective. Six related aspects of Gaeldom’s (re)-presentation are revealed: the discourse of the Victorian invention of Scottish cultural icons, and, by metonymic extension, Gaelic culture; the commoditisation of Gaelic culture in the image of the Highland Warrior; the re-naming of landscape and invention of new place narratives; historical presence by invitation; elision with Irish culture; and the mute Gael. Combined, the elements of (re)-presentation result in the distancing and the rendering opaque of Gaelic culture. The absence of informed mediators, either tourist authorities or individuals, the lack of an oppositional narrative and the pervasive discourse of invention reduce the Gael to a silenced subaltern ‘other’. Thus the unmediated tourist ‘gaze’ continues. This exceptionally singular condition of Scottish Gaeldom is comprehensible through analysis of Scottish tourism from a postcolonial perspective.
APA, Harvard, Vancouver, ISO, and other styles
43

Wakefield, Steve. "Carpentier's baroque fiction : returning Meduza's gaze /." Woodbridge : Tamesis, 2004. http://catalogue.bnf.fr/ark:/12148/cb39927077v.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Fitzgerald, Aimee. "Photography as Gaze, Painting as Caress." Thesis, The University of Sydney, 2016. http://hdl.handle.net/2123/15952.

Full text
Abstract:
Photography and painting have been posited as antagonistic mediums since the former’s inception in the 19th century, and despite generations of artists working across the mediums and disavowing the divide, it still looms large in our cultural imagination. This paper discusses the history of photography and painting as critical adversaries and practical allies, from both a historical and conceptual perspective, with a final segue to the history of gendered looking in art, and the rarity of a ‘female gaze’. My master’s work comprises a series of paintings that are in conversation with photographs I have taken of my partner sleeping. Using intimate portraits in a combination of painting and photography I attempt to draw out dissonant attitudes to authenticity, artistic value and the ‘reality content’ of images. The work interrogates our disparate reactions between photographs and paintings that recreate photographs, using the unique strengths of each to challenge and complicate our instinctive ‘reading’ of the image. They are diverse in style and presentation, while constant in subject. The examination/exhibition will take place in June at the SCA Galleries, and will consist of at least ten paintings on various media (including but not limited to paper, glass, aluminium, and wood).
APA, Harvard, Vancouver, ISO, and other styles
45

Ali, Asad. "Biometric liveness detection using gaze information." Thesis, University of Kent, 2015. https://kar.kent.ac.uk/50524/.

Full text
Abstract:
This thesis is concerned with liveness detection for biometric systems and in particular for face recognition systems. Biometric systems are well studied and have the potential to provide satisfactory solutions for a variety of applications. However, presentation attacks (spoofing), where an attempt is made at subverting the system by making a deliberate presentation at the sensor, are a serious challenge to their use in unattended applications. Liveness detection techniques can help with protecting biometric systems from attacks made through the presentation of artefacts and recordings at the sensor. In this work novel techniques for liveness detection are presented using gaze information. The notion of natural gaze stability is introduced and used to develop a number of novel features that rely on directing the gaze of the user and establishing its behaviour. These features are then used to develop systems for detecting spoofing attempts. The attack scenarios considered in this work include the use of hand-held photos and photo masks as well as video replay to subvert the system. The proposed features and systems based on them were evaluated extensively using data captured from genuine and fake attempts. The results of the evaluations indicate that gaze-based features can be used to discriminate between genuine and imposter attempts. Combining features through feature selection and score fusion substantially improved the performance of the proposed features.
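One simple way to operationalise a gaze challenge of this kind is to test whether the recorded gaze actually follows a moving on-screen target. The correlation feature, threshold, and synthetic data in the sketch below are assumptions made for illustration, not the gaze stability features proposed in the thesis.

    import math

    def pearson(xs, ys):
        """Pearson correlation between two equally long sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy) if sx and sy else 0.0

    def looks_live(stimulus_x, gaze_x, threshold=0.8):
        """Accept as live only if horizontal gaze closely tracks the moving target."""
        return pearson(stimulus_x, gaze_x) >= threshold

    if __name__ == "__main__":
        stimulus = [math.sin(t / 5.0) for t in range(60)]               # target sweep
        live_gaze = [s + 0.05 * math.cos(t) for t, s in enumerate(stimulus)]
        photo_gaze = [0.0] * 60                                         # static photo
        print(looks_live(stimulus, live_gaze), looks_live(stimulus, photo_gaze))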
APA, Harvard, Vancouver, ISO, and other styles
46

WANG, HAOCHEN. ""Gaze-Based Biometrics: some Case Studies"." Doctoral thesis, Università degli studi di Pavia, 2018. http://hdl.handle.net/11571/1280026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

WANG, HAOCHEN. ""Gaze-Based Biometrics: some Case Studies"." Doctoral thesis, Università degli studi di Pavia, 2018. http://hdl.handle.net/11571/1280066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

WANG, HAOCHEN. ""Gaze-Based Biometrics: some Case Studies"." Doctoral thesis, Università degli studi di Pavia, 2018. http://hdl.handle.net/11571/1280086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Wiklund, Alexis. "The Male Gaze som retorisktverktyg : En utredande litteraturstudie över hur the Male Gaze kan användas inom retorikvetenskapen." Thesis, Södertörns högskola, Institutionen för kultur och lärande, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-22981.

Full text
Abstract:
The thought behind this bachelor essay regarding gender and rhetorical feminist criticism developed years ago out of the fact that I read just a little too many dirty chick lit-pockets as a teen. Those chick lit-pockets were my first introduction to sex, to gender roles, to how men and women react and are supposed to react to each other and, in some sense, even to feminism. Those chick lit-pockets, later turning into hardcore Harlequin-books, became my benchmark when I started to contemplate the fact that this is a man’s world, produced and reproduced by a male gaze which influences everything from sex, porn, advertising, gender roles, jobs and payment to dirty pockets for teenage girls. The essay aims to show how the male gaze phenomenon could be of use to the rhetoric discipline, when combined with other rhetorical theories, as a way to analyze and understand gender and objectification. The main question asked: how to put the male gaze on a rhetorical leash? This bachelor essay consists of a qualitative literature study focusing on four articles and one book in which five different male gaze perspectives appear. Every article presented will be followed by an explicative chapter in which the article's specific male gaze perspective will be combined with relevant rhetorical theories and applied to pop-cultural example cases in order to demonstrate its academic potential. The conclusion of this essay establishes that the rhetorical discipline could indeed make great use of the male gaze perspective when analyzing different kinds of artifacts, if combined with different kinds of methods. It confirms that it is possible to put the male gaze on a rhetorical leash and explains five specific ways to do so.
APA, Harvard, Vancouver, ISO, and other styles
50

Weaver, Joseph S. "High working memory capacity predicts negative gaze but high self-esteem predicts positive gaze following ego threat." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1307144564.

Full text
APA, Harvard, Vancouver, ISO, and other styles