
Dissertations / Theses on the topic 'Eye movements'


Consult the top 50 dissertations / theses for your research on the topic 'Eye movements.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Riddell, Patricia Mary. "Vergence eye movements and dyslexia." Thesis, University of Oxford, 1987. http://ora.ox.ac.uk/objects/uuid:fc695d53-073a-467d-bc8d-8d47c0b9321e.

2

Santoro, Loredana. "Perception during eye movements." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418156.

3

Christophers, R. A. "Vergence eye movements and stereopsis." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364069.

4

Scholz, Agnes. "Eye Movements, Memory, and Thinking." Doctoral thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-167967.

Abstract:
This thesis investigates the relationship between eye movements, memory and thinking in five studies based on eye tracking experiments. The studies draw on the human ability to spatially index multimodal events as demonstrated by people’s gaze reverting back to emptied spatial locations when retrieving information that was associated with this location during a preceding encoding phase – the so-called “looking-at-nothing” phenomenon. The first part of this thesis aimed at gaining a better understanding of the relationship between eye movements and memory in relation to verbal information. The second part of this thesis investigated what could be learned about the memory processes involved in reasoning and decision-making by studying eye movements to blank spaces. The first study presented in this thesis clarified the role of eye movements for the retrieval of verbal information from memory. More precisely, it asked whether eye movements to nothing are functionally related to memory retrieval for verbal information, i.e. auditorily presented linguistic information. Eye movements were analyzed following correct and incorrect retrievals of previously presented auditory statements concerning artificial places that were probed during a subsequent retrieval phase. Additionally, eye movements were manipulated as the independent variable with the aid of a spatial cue that either guided the eyes towards or away from associated spatial locations. Using verbal materials elicited eye movements to associated but emptied spatial locations, thereby replicating previous findings on eye movements to nothing. This behaviour was more pronounced for correct than for incorrect retrievals. Retrieval performance was higher when the eyes were guided towards, rather than away from, the associated spatial locations. In sum, eye movements play a functional role in the retrieval of verbal materials. The second study tested whether the looking-at-nothing behaviour diminishes once people gain enough practice in a retrieval task. The same paradigm was employed as in the first study. Participants listened to four different sentences. Each sentence was associated with one of four areas on the screen and was presented 12 times. After every presentation, participants heard a statement probing one sentence, while the computer screen remained blank. More fixations were found to be located in areas associated with the probed sentence than in other locations. Moreover, the more trials participants completed, the less frequently they exhibited the looking-at-nothing behaviour. Looking-at-nothing behaviour thus indeed diminishes when knowledge becomes strongly represented in memory. In the third and fourth studies, eye movements were utilized as a tool to investigate memory search during rule- versus similarity-based decision-making. In both studies participants first memorized multiple pieces of information relating to job candidates (exemplars). In subsequent test trials they judged the suitability of new candidates that varied in their similarity to the previously learned exemplars. Results showed that when using similarity, but not when using a rule, participants fixated longer on the previous location of exemplars that were similar to the new candidates than on the location of dissimilar exemplars. This suggests that people using similarity retrieve previously learned exemplars, whereas people using a rule do not.
Eye movements were used yet again as a tool in the fifth study. On this occasion, eye movements were investigated during memory-based diagnostic reasoning. The study tested the effects of symptom order and diversity with symptom sequences that supported two or three contending hypotheses, and which were ambiguous throughout the symptom sequence. Participants first learned information about causes and symptoms presented in spatial frames. Gaze allocation on emptied spatial frames during symptom processing and during the diagnostic response reflected the subjective status of hypotheses held in memory and the preferred interpretation of ambiguous symptoms. Gaze data showed how the diagnostic decision develops and revealed instances of hypothesis change and biases in symptom processing. The results of this thesis demonstrate, in very different scenarios, the tight interplay between eye movements, memory and thinking. They show that eye movements are not automatically directed to spatial locations. Instead, they reflect the dynamic updating of internal, multimodal memory representations. Eye movements can be used as a direct behavioural correlate of memory processes involved in similarity- versus rule-based decision-making, and they reveal rich time-course information about the process of diagnostic reasoning. The results of this thesis are discussed in light of the current theoretical debates on cognitive processes that guide eye movements, memory and thinking. This thesis concludes by outlining a list of recommendations for using eye movements to investigate thinking processes, an outlook for future research, and possible applications of the research findings.
5

Veltri, Leandro A. (Leandro Almeida). "Modeling eye movements in driving." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36979.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (leaves 87-88).
by Leandro A. Veltri.
M.Eng.
6

Tannfelt, Wu Jennifer. "Robot mimicking human eye movements to test eye tracking devices." Thesis, KTH, Mekatronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-245066.

Abstract:
Testing of eye tracking devices is done by humans looking at well-defined stimuli. This way of testing eye trackers is not accurate enough because of human errors. The goal of this thesis is to design and construct reliable robotic eyes that can mimic the behaviour of human eyes. After a pre-study where human eyes, eye tracking and previous robotic eyes were studied, system requirements and specifications were formulated. Based on the requirements, important design decisions were taken, such as the use of RC servo motors, push rods, microcontrollers and a Raspberry Pi. Later, the inverse kinematics of the movements and a saccade’s path planning were modelled. Additional mechanical design features are rotation of the head and adjustment of the interpupillary distance. The robot is controlled using two types of application programming interfaces (APIs). The first API is used to control the motors, and the second API builds on top of the first API but is used to design paths of different eye movements between fixation points. All eye movement calculations are computed on the Raspberry Pi before the movements are communicated in real time to the microcontroller, which directly executes the control signal. The robot was tested using the integrated lasers in the eyes and a video camera with slow-motion capabilities to capture the projected laser dot on a wall. The properties tested are saccades, smooth pursuit, head rotation and eye tracking device compatibility. The results show high precision but not enough accuracy. The robot needs a few mechanical improvements, such as removing the backlash in the rotating joints of the eyes, decreasing the flexibility of some of the 3D-printed parts and assuring symmetry in the design. The robot is a powerful testing platform capable of performing all eye movement types, with high-resolution control of both eyes independently through an API.
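The two-layer control architecture described above (a motor-level API plus a path-planning API, with trajectories computed on the Raspberry Pi and streamed to the microcontroller) can be pictured with a short sketch. Everything below is a hypothetical illustration under stated assumptions: the class names, the linear angle-to-pulse-width mapping and the cosine velocity profile are not taken from the thesis.

# Hypothetical sketch of a two-layer eye-robot API (all names and numbers are assumptions).
import math
from dataclasses import dataclass

@dataclass
class GazeTarget:
    yaw_deg: float    # horizontal gaze angle of one eye
    pitch_deg: float  # vertical gaze angle

class MotorAPI:
    """Lower layer: turns gaze angles into servo commands sent to the microcontroller."""
    def __init__(self, link):
        self.link = link  # any object with a send(str) method, e.g. a serial-port wrapper

    def set_eye_angles(self, yaw_deg: float, pitch_deg: float) -> None:
        # Toy 'inverse kinematics': assume pulse width varies linearly with gaze angle.
        yaw_us = 1500 + 10.0 * yaw_deg
        pitch_us = 1500 + 10.0 * pitch_deg
        self.link.send(f"Y{yaw_us:.0f} P{pitch_us:.0f}\n")

class MovementAPI:
    """Upper layer: plans paths of eye movements between fixation points."""
    def __init__(self, motors: MotorAPI, rate_hz: float = 200.0):
        self.motors = motors
        self.dt = 1.0 / rate_hz

    def saccade(self, start: GazeTarget, end: GazeTarget, duration_s: float = 0.05) -> None:
        """Drive the eye from start to end along a smooth ease-in/ease-out profile."""
        steps = max(1, round(duration_s / self.dt))
        for i in range(1, steps + 1):
            s = 0.5 * (1.0 - math.cos(math.pi * i / steps))  # goes 0 -> 1, smooth at both ends
            self.motors.set_eye_angles(
                start.yaw_deg + s * (end.yaw_deg - start.yaw_deg),
                start.pitch_deg + s * (end.pitch_deg - start.pitch_deg),
            )

A caller would then write something like MovementAPI(MotorAPI(serial_link)).saccade(GazeTarget(0, 0), GazeTarget(10, 0)); a real implementation would add timing control, separate channels per eye, and a calibrated, non-linear angle-to-pulse mapping.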
7

Mergenthaler, Konstantin K. "The control of fixational eye movements." Phd thesis, Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2009/2939/.

Abstract:
In normal everyday viewing, we perform large eye movements (saccades) and miniature or fixational eye movements. Most of our visual perception occurs while we are fixating. However, our eyes are perpetually in motion. Properties of these fixational eye movements, which are partly controlled by the brainstem, change depending on the task and the visual conditions. Currently, fixational eye movements are poorly understood because they serve the two contradictory functions of gaze stabilization and counteraction of retinal fatigue. In this dissertation, we investigate the spatial and temporal properties of time series of eye position acquired from participants staring at a tiny fixation dot or at a completely dark screen (with the instruction to fixate a remembered stimulus); these time series were acquired with high spatial and temporal resolution. First, we suggest an advanced algorithm to separate the slow phases (named drift) and fast phases (named microsaccades) of these movements, which are considered to play different roles in perception. On the basis of this identification, we investigate and compare the temporal scaling properties of the complete time series and those time series where the microsaccades are removed. For the time series obtained during fixations on a stimulus, we were able to show that they deviate from Brownian motion. On short time scales, eye movements are governed by persistent behavior and, on longer time scales, by anti-persistent behavior. The crossover point between these two regimes remains unchanged by the removal of microsaccades but is different in the horizontal and the vertical components of the eyes. Other analyses target the properties of the microsaccades, e.g., the rate and amplitude distributions, and we investigate whether microsaccades are triggered dynamically, as a result of earlier events in the drift, or completely randomly. The results obtained from using a simple box-count measure contradict the hypothesis of a purely random generation of microsaccades (Poisson process). Second, we set up a model for the slow part of the fixational eye movements. The model is based on a delayed random walk approach within the velocity-related equation, which allows us to use the data to determine control loop durations; these durations appear to be different for the vertical and horizontal components of the eye movements. The model is also motivated by the known physiological representation of saccade generation; the difference between horizontal and vertical components concurs with the spatially separated representation of saccade generating regions. Furthermore, the control loop durations in the model suggest an external feedback loop for the horizontal but not for the vertical component, which is consistent with the fact that an internal feedback loop in the neurophysiology has only been identified for the vertical component. Finally, we confirmed the scaling properties of the model by semi-analytical calculations. In conclusion, we were able to identify several properties of the different parts of fixational eye movements and propose a model approach that is in accordance with the described neurophysiology and limitations of fixational eye movement control.
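The scaling analysis sketched in the abstract can be illustrated with a toy computation. The snippet below is an assumption-laden sketch rather than the thesis's actual method (which relies on more refined detrended analyses): it estimates a lag-dependent scaling exponent from the mean squared displacement, MSD(lag) proportional to lag^(2H), where H > 0.5 indicates persistent and H < 0.5 anti-persistent behaviour, and a change of slope marks the crossover between regimes.

# Minimal scaling sketch (illustrative only): lag-dependent Hurst-type exponent from the MSD.
import numpy as np

def msd(position: np.ndarray, lags: np.ndarray) -> np.ndarray:
    """Mean squared displacement of a 1-D position trace at the given positive lags."""
    return np.array([np.mean((position[lag:] - position[:-lag]) ** 2) for lag in lags])

def local_hurst(position: np.ndarray, lags: np.ndarray) -> np.ndarray:
    """Local slope of log MSD versus log lag, divided by two (a lag-dependent H estimate)."""
    return 0.5 * np.gradient(np.log(msd(position, lags)), np.log(lags))

# Surrogate check: Brownian motion should give H close to 0.5 at every lag, whereas
# fixational drift is reported to cross from H > 0.5 (short lags) to H < 0.5 (long lags).
rng = np.random.default_rng(0)
brownian = np.cumsum(rng.standard_normal(20000))
lags = np.unique(np.logspace(0, 3, 25).astype(int))
print(np.round(local_hurst(brownian, lags), 2))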
8

Drew, Anthony Scott. "The brain, attention, and eye movements /." view abstract or download file of text, 2006. http://proquest.umi.com/pqdweb?did=1188872491&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Abstract:
Thesis (Ph. D.)--University of Oregon, 2006.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 72-80). Also available for download via the World Wide Web; free to University of Oregon users.
9

Boonstra, Frouke Nienke. "Fusional vergence eye movements in microstrabismus." [Groningen] : Rijksuniversiteit Groningen ; [University Library Groningen] [Host], 1997. http://irs.ub.rug.nl/ppn/157856534.

10

Harvard, Catriona. "Eye movements strategies during face matching." Thesis, University of Glasgow, 2007. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502694.

11

Voraphani, Natthapongs. "Color vision screening using eye movements." [Ames, Iowa : Iowa State University], 2007.

12

Liston, Dorion. "Target selection for voluntary eye movements /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3170236.

13

Satgunam, PremNandhini. "Dynamics of vergence eye movements in pre-vergence adaptation and post-vergence adaptation conditions." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1196173244.

14

Hardwick, David R. "Factors Associated with Saccade Latency." Griffith University. School of Psychology, 2008. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20100705.111516.

Abstract:
Part of the aim of this thesis was to explore a model for producing very fast saccade latencies in the 80 to 120ms range. Its primary motivation was to explore a possible interaction by uniquely combining three independent saccade factors: the gap effect, target-feature-discrimination, and saccadic inhibition of return (IOR). Its secondary motivation was to replicate (in a more conservative and tightly controlled design) the surprising findings of Trottier and Pratt (2005), who found that requiring a high resolution task at the saccade target location speeded saccades, apparently by disinhibition. Trottier and Pratt’s finding was so surprising it raised the question: Could the oculomotor braking effect of saccadic IOR to previously viewed locations be reduced or removed by requiring a high resolution task at the target location? Twenty naïve untrained undergraduate students participated in exchange for course credit. Multiple randomised temporal and spatial target parameters were introduced in order to increase probability of exogenous responses. The primary measured variable was saccade latency in milliseconds, with the expectation of higher probability of very fast saccades (i.e. 80-120ms). Previous research suggested that these very fast saccades could be elicited in special testing circumstances with naïve participants, such as during the gap task, or in highly trained observers in non-gap tasks (Fischer & Weber, 1993). Trottier and Pratt (2005) found that adding a task demand that required naïve untrained participants to obtain a feature of the target stimulus (and to then make a discriminatory decision) also produced a higher probability of very fast saccade latencies. They stated that these saccades were not the same as saccade latencies previously referred to as express saccades produced in the gap paradigm, and proposed that such very fast saccades were normal. Carpenter (2001) found that in trained participants the probability of finding very fast saccades during the gap task increased when the horizontal direction of the current saccade continued in the same direction as the previous saccade (as opposed to reversing direction) – giving a distinct bimodality in the distribution of latencies in five out of seven participants, and likened his findings to the well known IOR effect. The IOR effect has previously been found in both manual key-press RT and saccadic latency paradigms. Hunt and Kingstone (2003) stated that there were both cortical top-down and oculomotor hard-wired aspects to IOR. An experiment was designed that included obtain-target-feature and oculomotor-prior-direction, crossed with two gap level offsets (0ms & 200ms-gap). Target-feature discrimination accuracy was high (97%). Under-additive main effects were found for each factor, with a three-way interaction effect for gap by obtain-feature by oculomotor-prior-direction. Another new three-way interaction was also found for anticipatory saccade type. Anticipatory saccades became significantly more likely under obtain-target-feature for the continuing oculomotor direction. This appears to be a similar effect to the increased anticipatory direction-error rate in the antisaccade task. These findings add to the saccadic latency knowledge base and in agreement with both Carpenter and Trottier and Pratt, laboratory testing paradigms can affect saccadic latency distributions. 
That is, salient (meaningful) targets that follow more natural oculomotor trajectories produce higher probability of very fast latencies in the 80-120ms range. In agreement with Hunt and Kingstone, there appears to be an oculomotor component to IOR. Specifically, saccadic target-prior-location interacts differently for obtain-target-feature under 200-ms gap than under 0ms-gap, and is most likely due predominantly to a predictive disinhibitory oculomotor momentum effect, rather than being due to the attentional inhibitory effect proposed for key-press IOR. A new interpretation for the paradigm previously referred to as IOR is offered that includes a link to the smooth pursuit system. Additional studies are planned to explore saccadic interactions in more detail.
15

Hardwick, David R. "Factors Associated with Saccade Latency." Thesis, Griffith University, 2008. http://hdl.handle.net/10072/365963.

Abstract:
Thesis (Masters)
Master of Philosophy (MPhil)
School of Psychology
Griffith Health
16

Schwarzbach, Jens. "Priming of eye movements by masked stimuli." [S.l. : s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=960669604.

17

Gelow, Stefan. "Ratings and eye movements of emotion regulation." Thesis, Stockholm University, Department of Psychology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-30205.

Abstract:

People have different strategies to regulate and control their own emotions. For short-term emotion regulation of visual stimuli, cognitive reappraisal and attentional deployment are of relevance. The present study used self-ratings and eye-tracking data to replicate previous findings that eye movements are effective in emotion regulation. 25 participants (6 males) watched positive and negative pictures in an attend condition and a decrease emotion condition. They rated their emotional experience and their eye movements were followed with an eye-tracker. Ratings showed that they perceived pictures as less emotional in the decrease condition as compared to the attend condition both for positive and negative pictures. This decrease in ratings of emotional response was larger for positive than for negative pictures. Eye-tracking data showed no significant effect of emotion regulation condition. Further research is proposed to include self-ratings in studies of physiological changes due to emotion regulation, to differentiate between strategies of emotion regulation potentially used by participants.

18

Williams, Diane E. "Patterns of eye movements during visual search." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ27751.pdf.

19

Khayat, Paul. "Attention and eye movements during contour grouping." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2004. http://dare.uva.nl/document/72169.

20

Murdock, Michele N. "Eye Movements of Highly Identified Sport Fans." TopSCHOLAR®, 2015. http://digitalcommons.wku.edu/theses/1447.

Abstract:
Individuals who are highly identified with a sport team have a strong psychological connection with the team (Wann et al., 2001). Sport team identification can be beneficial to communities and individuals. It provides entertainment, helps form group affiliation, and improves self-esteem. Because team identification is important to people, they notice environmental cues related to the team. Individuals are more likely to attend to a stimulus that is liked or one that is familiar. When an individual has accessible attitudes toward an object, he or she is more likely to attend to and notice the object (Roskos-Ewoldsen & Fazio, 1992). The current study examined the relationship between sport team identification and attention. Participants (n = 31) were presented with 64 displays of college team logos, which were shown in sequential order. While viewing the displays, participants’ eye movements were monitored by the SR Research Eyelink II, an eye-tracking recording system. The participants then completed a questionnaire designed to determine their level of team identification with an indicated team. Higher scores on the questionnaire indicated a higher level of identification. The first hypothesis under study states that highly identified UK fans detect the UK logo faster than the UT logo when each logo appears without the other, whereas low identified UK fans detect both the UK and UT logos equally quickly when each logo appears without the other. A mixed-model ANOVA was conducted to examine the impact of set type on total time to identify the target. The ANOVA yielded no main effects or interactions. The second hypothesis under study states that highly identified UK fans detect the UT logo more slowly when the UK logo is present than the low identified UK fans. A mixed-model ANOVA was conducted to examine how distracting the UK logo was when detecting the UT logo. The ANOVA yielded no main effects or interactions.
21

Harwood, Mark Richard. "The Fourier analysis of saccadic eye movements." Thesis, University College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407929.

22

Corrales, Paucar L. A. "Generation of ASCII characters by eye movements." Thesis, University of Strathclyde, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382352.

23

Notice, Keisha Joy. "Visual working memory and saccadic eye movements." Thesis, Anglia Ruskin University, 2013. http://arro.anglia.ac.uk/332975/.

Abstract:
Saccadic eye movements, produced by the oculomotor system, are used to bring salient information in line with the high resolution fovea. It has been suggested that visual working memory, the cognitive system that temporarily stores and manipulates visual information (Baddeley & Hitch, 1974), is utilised by the oculomotor system in order to maintain saccade programmes across temporal delays (Belopolsky & Theeuwes, 2011). Saccadic eye movements have been found to deviate away from information stored in visual working memory (Theeuwes and colleagues, 2005, 2006). Saccadic deviation away from presented visual stimuli has been associated with top-down suppression (McSorley, Haggard, & Walker, 2006). This thesis examines the extent to which saccade trajectories are influenced by information held in visual working memory. Through a series of experiments behavioural memory data and saccade trajectory data were explored and evidence for visual working memory-oculomotor interaction was found. Other findings included specific interactions with the oculomotor system for the dorsal and ventral pathways as well as evidence for both bottom-up and top-down processing. Evidence of further oculomotor interaction with manual cognitive mechanisms was also illustrated, suggesting that visual working memory does not uniquely interact with the oculomotor system to preserve saccade programmes. The clinical and theoretical implications of this thesis are explored. It is proposed that the oculomotor system may interact with a variety of sensory systems to inform accurate and efficient visual processing.
24

Dungan, Brittany. "Visual Working Memory Representations Across Eye Movements." Thesis, University of Oregon, 2015. http://hdl.handle.net/1794/19251.

Abstract:
We live in a rich visual world that we experience as a seamless and detailed stream of continuous information. However, we can only attend to and remember a small portion of our visual environment. The visual system is tasked with stitching together snapshots of the world through near constant eye movements, with around three saccades per second. The situation is further complicated with the visual system being contralaterally organized. Each eye movement can bring items in our environment into a different visual hemifield. Despite the many challenges and limitations of attention and the visual system, how does the brain stitch together our experience of our visual environment? One potential mechanism that could contribute to our conscious perception of a continuous visual experience could be visual working memory (VWM) working to maintain representations of items across saccades. Electrophysiological activity using event-related potentials has revealed the contralateral delay activity (CDA), which is a sustained negativity contralateral to the side of the visual field where subjects are attending. However, how does this work if we are constantly moving our eyes? How do we form a stable representation of items across eye movements? Does the representation transfer over to the other side of the brain, constantly shuffling the items between the hemispheres? Or does it stay in the hemisphere contralateral to the visual field where the items were located when we originally created the representation? The consequences of eye movements need to be examined at multiple levels and time points throughout the process. The goal of my doctoral dissertation is to investigate VWM representations throughout the dynamic peri-saccadic window. In Experiment 1, I will first compare VWM representations across shifts of attention and eye position. With the focus on the effect of maintaining attention on items across eye movements, Experiment 2 will also explore eye movements both towards and away from attended visual hemifields. Finally, Experiment 3 is designed to substantiate our use of the CDA as a tool for examining VWM representations across eye movements by confirming that the CDA is indeed established in retinotopic coordinates.
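As background on how the contralateral delay activity (CDA) discussed above is usually quantified: it is the contralateral-minus-ipsilateral difference in the event-related potential over posterior electrodes during the memory delay. This is a standard textbook definition rather than anything specific to this dissertation, and the function below is only an illustrative sketch.

# Illustrative sketch of a standard CDA computation (not this dissertation's pipeline).
import numpy as np

def cda_amplitude(contra_erp: np.ndarray, ipsi_erp: np.ndarray, delay_window: slice) -> float:
    """Mean contralateral-minus-ipsilateral voltage within the delay-period window.

    contra_erp, ipsi_erp: trial-averaged voltage time courses from the same posterior
    electrode pair, sampled identically; delay_window: sample indices of the delay period.
    """
    difference = contra_erp - ipsi_erp
    return float(np.mean(difference[delay_window]))

For example, cda_amplitude(contra, ipsi, slice(300, 900)) would average the difference wave over a delay window expressed in samples.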
25

Wu, Chao-Yen. "Long-range predictors for saccadic eye movements." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184465.

Abstract:
Predicting the final eye position in the middle of a saccadic eye movement requires long-range prediction. This dissertation investigated techniques for doing this. Many important results about saccadic eye movements and current prediction techniques were reviewed. New prediction techniques were developed and tested on real saccadic data by computer. Three block processing predictors, the two-point linear predictor (TPLP), the five-point quadratic predictor (FPQP), and the nine-point cubic predictor (NPCP), were derived based on the matrix approach. A different approach to deriving the TPLP, FPQP, and NPCP based on the difference equation was also developed. The difference equation approach is better than the matrix approach because it is not necessary to compute the matrix inversion. Two polynomial predictors were also derived: the polynomial-filter predictor 1 (PFP1), which is a linear combination of a TPLP and an FPQP, and the polynomial-filter predictor 2 (PFP2), which is a linear combination of a TPLP, an FPQP, and an NPCP. Two recursive predictors were derived: the recursive-least-square (RLS) predictor and the least-mean-square (LMS) predictor. Results show that the RLS and LMS predictors perform better than the TPLP, FPQP, NPCP, PFP1, and PFP2 in the prediction of saccadic eye movements. A mathematical way of verifying the accuracy of the recursive-least-square predictor was developed. This technique also shows that the RLS predictor can be used to identify a signal. Results show that a sinusoidal signal can be described as a second-order difference equation with coefficients 2cosω and -1. In the same way, a cubic signal can be realized as a fourth-order difference equation with coefficients 4, -6, 4, and -1. A parabolic signal can be written as a third-order difference equation with coefficients 3, -3, and 1. And a triangular signal can be described as a second-order difference equation with coefficients 2 and -1. In this dissertation, all predictors were tested with various signals such as saccadic eye movements, ECG, sinusoidal, cubic, triangular, and parabolic signals. The FFTs of these signals were studied and analyzed. Computer programs were written in the systems language C and run on a VAX-11/750 minicomputer under UNIX. Results were discussed and compared to those of short-range prediction problems.
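The difference-equation characterisations quoted in the abstract can be written out explicitly. In the LaTeX block below, the first four recursions restate the coefficients given above; the one-step two-point linear predictor at the end is a standard extrapolation formula and an assumption about how the TPLP is realised, not a statement taken from the abstract.

\begin{align*}
\text{sinusoid:} \quad & x_n = 2\cos(\omega)\,x_{n-1} - x_{n-2}\\
\text{triangular (on its linear segments):} \quad & x_n = 2x_{n-1} - x_{n-2}\\
\text{parabolic:} \quad & x_n = 3x_{n-1} - 3x_{n-2} + x_{n-3}\\
\text{cubic:} \quad & x_n = 4x_{n-1} - 6x_{n-2} + 4x_{n-3} - x_{n-4}\\
\text{two-point linear prediction (assumed form):} \quad & \hat{x}_{n+1} = 2x_n - x_{n-1}
\end{align*}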
26

Thurtell, Matthew James. "Effect of eye position on the three-dimensional kinematics of saccadic and vestibular-evoked eye movements." Thesis, The University of Sydney, 2005. http://hdl.handle.net/2123/1665.

Abstract:
Saccadic and vestibular-evoked eye movements are similar in that their three-dimensional kinematic properties show eye position-dependence. When the line of sight is directed towards an eccentric target, the eye velocity axis tilts in a manner that depends on the instantaneous position of the eye in the head, with the magnitude of tilt also depending on whether the eye movement is saccadic or vestibular-evoked. The mechanism responsible for producing eye velocity axis tilting phenomena is not well understood. Some authorities have suggested that muscle pulleys in the orbit are critical for implementing eye velocity axis tilting, while others have suggested that the cerebellum plays an important role. In the current study, three-dimensional eye and head rotation data were acquired, using the magnetic search coil technique, to confirm the presence of eye position-dependent eye velocity axis tilting during saccadic eye movements. Both normal humans and humans with cerebellar atrophy were studied. While the humans with cerebellar atrophy were noted to have abnormalities in the two-dimensional metrics and consistency of their saccadic eye movements, the eye position-dependent eye velocity axis tilts were similar to those observed in the normal subjects. A mathematical model of the human saccadic and vestibular systems was utilized to investigate the means by which these eye position-dependent properties may arise for both types of eye movement. The predictions of the saccadic model were compared with the saccadic data obtained in the current study, while the predictions of the vestibular model were compared with vestibular-evoked eye movement data obtained in a previous study. The results from the model simulations suggest that the muscle pulleys are responsible for bringing about eye position-dependent eye velocity axis tilting for both saccadic and vestibular-evoked eye movements, and that these phenomena are not centrally programmed.
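For background on the tilt described here (standard oculomotor kinematics, not a claim drawn from this abstract): when the eye is eccentric by an angle θ from primary position, the angular velocity axis of a saccade obeying Listing's law tilts out of Listing's plane by roughly half that angle, whereas smaller tilts (often reported near a quarter of θ) are measured for vestibular-evoked movements, i.e.

\[ \phi_{\text{tilt}} \approx k\,\theta, \qquad k \approx \tfrac{1}{2} \ \text{(saccades)}, \qquad k < \tfrac{1}{2} \ \text{(vestibular-evoked, often near } \tfrac{1}{4}\text{)}. \]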
27

Thurtell, Matthew James. "Effect of eye position on the three-dimensional kinematics of saccadic and vestibular-evoked eye movements." Faculty of Medicine, 2005. http://hdl.handle.net/2123/1665.

Abstract:
Master of Science in Medicine
28

Lind-Hård, Viktor. "What meets the eye : Naturalistic observations of air traffic controllers eye-movements during arrivals using eye-tracking." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159934.

Abstract:
How do air traffic controllers (ATCos) distribute visual attention, and can it vary between controllers? This study explores these questions using primarily eye-tracking data and a couple of on-site interviews. Two ATCos with the most similar landings had their eye movements recorded with Tobii Pro Glasses 2 and analysed by categorizing every fixation into different areas of interest during four landings. Two more ATCos were interviewed briefly during an observational visit to the control tower. The results showed that the ATCos distributed their attention fairly equally between the outside and the inside of the control tower. When attention was outside, the runway was the focus; when attention was inside the control tower, the radar was usually the focus. The ATCos differed in their attention distribution, with the presumably more experienced ATCo directing attention outside the control tower more than the presumably less experienced ATCo. A large number of fixations were not categorized, which calls into question the method of dividing the ATCos' eye-tracking view into areas of interest.
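The fixation categorisation described above can be pictured with a minimal sketch; the rectangular areas of interest and the function name are illustrative assumptions rather than the study's actual tooling, and any fixation falling in no region is left uncategorised, as happened for many fixations in the study.

# Illustrative sketch: assign each fixation to a named rectangular area of interest (AOI).
from typing import Dict, Optional, Tuple

Rect = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in scene-video pixels

def classify_fixation(x: float, y: float, aois: Dict[str, Rect]) -> Optional[str]:
    """Return the name of the first AOI containing the point, or None if uncategorised."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Example with made-up regions: classify_fixation(200, 150, {"runway": (0, 0, 960, 400), "radar": (1100, 500, 1500, 900)})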
29

Krantz, John H. "Changes in detectability of direction and motion associated with saccadic eye movements." Gainesville, FL, 1988. http://www.archive.org/details/changesindetecta00kran.

30

Klier, Eliana Mira. "Three-dimensional visual-motor geometry of human saccades." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ27359.pdf.

31

Jonikaitis, Donatas. "Attentional dynamics before coordinated eye and hand movements." Diss., lmu, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-134808.

32

Tajik, Diana. "A developmental study of children's pursuit eye movements." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ56209.pdf.

33

Furneaux, Sophia-Louise Maria. "The role of eye movements during music reading." Thesis, University of Sussex, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360499.

34

Mennie, Neil Russell. "Eye movements and visual search in everyday tasks." Thesis, University of Sussex, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250181.

35

Shelhamer, Mark John. "Linear acceleration and horizontal eye movements in man." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/42213.

Abstract:
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1990.
Vita.
Includes bibliographical references (leaves 156-164).
by Mark John Shelhamer.
Sc.D.
36

Hutson, John. "Dissociating eye-movements and comprehension during film viewing." Thesis, Kansas State University, 2016. http://hdl.handle.net/2097/34136.

Abstract:
Master of Science
Department of Psychological Sciences
Lester Loschky
Film is a ubiquitous medium. However, the process by which we comprehend film narratives is not well understood. Reading research has shown a strong connection between eye-movements and comprehension. In four experiments we tested whether the eye-movement and comprehension relationship held for films. This was done by manipulating viewer comprehension by starting participants at different points in a film, and then tracking their eyes. Overall, the manipulation created large differences in comprehension, but only small differences in eye-movements were found. In a condition of the final experiment, a task manipulation was designed to prioritize different stimulus features. This task manipulation created large differences in eye-movements when compared to participants freely viewing the clip. These results indicate that with the implicit task of narrative comprehension, top-down comprehension processes have little effect on eye-movements. To allow for strong, volitional top-down control of eye-movements in film, task manipulations need to make features that are important to comprehension irrelevant to the task.
37

Kienzle, Wolf. "Learning an interest operator from human eye movements." Berlin Logos-Verl, 2008. http://d-nb.info/990541908/04.

38

Harris, Artistee Shayna Schnell Thomas. "A state machine representation of pilot eye movements." Iowa City : University of Iowa, 2009. http://ir.uiowa.edu/etd/297.

39

Veldre, Aaron. "Individual differences in eye movements during skilled reading." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/13486.

Abstract:
Despite a large and established literature, studies of eye movements during skilled reading generally assume uniformity at the participant level. However, there is growing evidence that individual differences in reading proficiency among skilled adult readers modulate the early stages of lexical processing. Measures of lexical knowledge have also been shown to be more predictive of differences in eye movement patterns than many word- or sentence-level variables that have traditionally been the focus of research in the field. In five experiments, large samples of skilled readers were assessed on measures of reading and spelling ability in order to test the hypothesis that individual differences in proficiency, and, particularly, the precision of a reader’s lexical representations, modulate eye movements during reading. Gaze-contingent display change paradigms were used to manipulate parafoveal information during sentence reading, tapping the early stages of word identification. The combination of high reading and spelling ability, i.e., lexical expertise, was found to consistently predict both the spatial extent and depth of parafoveal processing. Lexical experts made use of a wider perceptual span and were more likely to extract lexical information from upcoming words than readers with imprecise lexical knowledge. Lexical expertise was also associated with more effective integration of parafoveal and foveal information and more immediate comprehension. These results support the lexical quality hypothesis of reading skill and challenge the assumption that skilled adults all read in essentially the same way. The results also provide insight into the role of parafoveal processing during reading and provide opportunities for the refinement of computational models of eye movement control.
APA, Harvard, Vancouver, ISO, and other styles
40

Davitt, Lina I. "Eye movements and the visual perception of shape." Thesis, Bangor University, 2012. https://research.bangor.ac.uk/portal/en/theses/eye-movements-and-the-visual-perception-of-shape(e8c97b73-656b-4041-a54a-74726eeb409c).html.

Full text
Abstract:
This thesis reports the results of five novel studies that used eye movement patterns to elucidate the informational content of object shape representations in human visual perception. In Experiments 1 and 2, eye movements were recorded while observers either actively memorised or passively viewed different sets of novel objects, and during a subsequent recognition memory task. Fixation data were contrasted against different models of shape analysis based on surface curvature (bounding vs. internal contour) and low-level image saliency. The results showed a preference for fixation at regions of internal local features (concave and/or convex) during both active memorisation and passive viewing of object shape. This pattern changed during the recognition phase, where there was a fixation preference towards regions containing concave surface curvature minima. It is proposed that the preference for fixating regions of concavity reflects the operation of a depth-sensitive view interpolation process that is constrained by key points encoding regions of concave curvature minima. Experiments 3 and 4 examined the extent to which fixation-based local shape analysis patterns are influenced by the perceptual expertise of the observer and the level of stimulus classification required by the task. These studies were based on the paradigm developed by Wong, Palmeri & Gauthier (2009), in which observers are extensively trained to categorize sets of novel objects (Ziggerins) at either a basic or subordinate level of classification. The effects of training were measured by comparing performance between a pre- and post-test sequential shape matching task that required either basic- or subordinate-level judgements. In addition, fixation patterns were recorded during the pre- and post-tests.
APA, Harvard, Vancouver, ISO, and other styles
41

Kovalenko, Lyudmyla. "The temporal interplay of vision and eye movements." Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17507.

Full text
Abstract:
The visual system accomplishes an enormous amount of processing when we direct our eyes at an object. Several processes are active before our gaze reaches the new object. This thesis explores the spatial and temporal properties of three such processes: (1) attention-related enhancement of neural activity and saccadic suppression; (2) attention-based selection of the target stimulus in a visual search task; (3) the time course of detection accuracy under object-substitution masking. We investigated these processes with a combination of human electroencephalography (EEG), eye tracking, and psychophysical behavioral measurements. First, we examined how the neural representation of a stimulus is shaped by its temporal proximity to the saccade. We showed that stimuli appearing immediately before the saccade are affected most strongly by attention and suppression. In Study 2, stimulus visibility was reduced using object-substitution masking, and we analyzed the relationship between saccadic reaction times and their accuracy. In addition, we recorded neural markers of the allocation of attention to the target and a subjective rating of its visibility. We found that fast saccades escaped the masking and increased both accuracy and subjective visibility, showing that a conscious and correct percept of the stimulus can arise already at early stages of processing. We replicated this finding with manual responses in order to rule out that saccade-specific processes distorted the results. Beyond their theoretical significance, these studies make a methodological contribution to the field of combined EEG and eye-movement research: the removal of saccadic artifacts from the EEG and the construction of an artificial comparison (surrogate) data set. The thesis presents several approaches to studying the dynamics of visual perception, along with solutions for future studies.
The visual system achieves a tremendous amount of processing as soon as we set eyes on a new object. Numerous processes are active even before the eyes reach the object. This thesis explores the spatio-temporal properties of three such processes: attentional enhancement and saccadic suppression that accompany saccades to a target; attentional selection of the target in a visual search task; and the time course of target detection accuracy under object-substitution masking. We monitored these events using a combination of human electrophysiology (EEG), eye tracking and behavioral psychophysics. We first studied how the neural representation of a visual stimulus is affected by its temporal proximity to saccade onset. We show that stimuli immediately preceding a saccade show the strongest effects of attentional enhancement and saccadic suppression. Second, using object-substitution masking to reduce visibility, we analyzed the relationship between saccadic reaction times and response accuracy. We also collected subjective visibility ratings and observed neural markers of attentional selection, such as the negative, posterior-contralateral deflection at 200 ms (N2pc). We found that fast saccades escaped the effects of masking, resulting in higher response accuracy and higher awareness ratings. This indicates that early visual processing can trigger awareness and correct behavior. Finally, we replicated this finding with manual responses. Discovering a similar accuracy time course in a different modality ruled out saccade-specific mechanisms, such as saccadic suppression and retinal shift, as a potential confound. Beyond their theoretical impact, all studies make a methodological contribution to combined EEG and eye-movement research, such as the removal of large-scale saccadic artifacts from EEG data and the composition of matched surrogate data. In sum, this work uses multiple approaches to describe the dynamics of perisaccadic visual perception and offers solutions for future studies in this field.
APA, Harvard, Vancouver, ISO, and other styles
42

Harris, Artistee Shayna. "A state machine representation of pilot eye movements." Thesis, University of Iowa, 2009. https://ir.uiowa.edu/etd/297.

Full text
Abstract:
With the development of new interfaces such as the Next Generation Air Transportation System (NextGen), and the evolution of the United States National Airspace System (NAS) from a ground-based system of Air Traffic Control (ATC) to a satellite-based system of air traffic management (FAA, 2009), new evaluations of efficiency and safety are required. The tasks involved demand visual behaviors such as search, fixation, tracking, and grouping. Designing and implementing a virtual eye movement application that generates gaze and action visualizations could therefore provide detailed data on the allocation of visual attention across interface entities. The goal is to develop state-machine representations of straight-and-level flight, turns, climbs, and descents within the Pilot Eye Flight Deck Application to simulate pilots' eye movements.
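To make the state-machine idea concrete, here is a minimal sketch; it is my own illustration, not the thesis's Pilot Eye Flight Deck Application, and the states, events, gaze targets, and probabilities are all assumed for the example. Each flight phase is a state, transitions are driven by flight events, and each state emits simulated fixation targets drawn from its own gaze distribution.

import random

# Hypothetical flight-phase state machine that emits simulated gaze targets.
TRANSITIONS = {
    "climb": {"level_off": "straight_and_level"},
    "straight_and_level": {"turn_start": "turn", "descend": "descent"},
    "turn": {"turn_end": "straight_and_level"},
    "descent": {"level_off": "straight_and_level"},
}

# Per-state probabilities of fixating particular instruments or the outside view.
GAZE_MODEL = {
    "climb": {"attitude_indicator": 0.5, "airspeed": 0.3, "out_window": 0.2},
    "straight_and_level": {"out_window": 0.5, "altimeter": 0.3, "heading": 0.2},
    "turn": {"attitude_indicator": 0.4, "heading": 0.4, "out_window": 0.2},
    "descent": {"airspeed": 0.4, "altimeter": 0.4, "out_window": 0.2},
}

def step(state, event):
    """Advance the state machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get(state, {}).get(event, state)

def sample_gaze(state):
    """Draw one simulated fixation target according to the state's gaze model."""
    targets, weights = zip(*GAZE_MODEL[state].items())
    return random.choices(targets, weights=weights)[0]

state = "climb"
for event in ["level_off", "turn_start", "turn_end", "descend"]:
    state = step(state, event)
    print(state, "->", sample_gaze(state))

A real application would of course fit the per-state gaze distributions to recorded pilot data rather than assume them.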
APA, Harvard, Vancouver, ISO, and other styles
43

Gitchel, George Thomas Jr. "Development of an Accurate Differential Diagnostic Tool for Neurological Movement Disorders Utilizing Eye Movements." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/4109.

Full text
Abstract:
Parkinson’s disease and Essential tremor are the two most prevalent movement disorders in the world, but because of overlapping clinical symptoms, accurate differential diagnosis is difficult. As a result, approximately 60% of patients with movement disorder symptoms will have their diagnosis changed at least once before death. By their subjective nature, clinical exams are inherently imprecise, motivating the search for an objective, quantifiable test for movement disorders; such a test currently remains elusive. Eye movements have been studied for a century and are widely appreciated to be quantifiably affected in those with neurological disease. Through a collaborative effort between the VA hospital and VCU, over 1,000 movement disorder subjects had their eye movements recorded utilizing an SR Research Eyelink 2. Patients with Parkinson’s disease exhibited an ocular gaze tremor during fixation, normal reflexive saccades, and reduced blink rate. Subjects with Essential tremor exhibited slowed saccadic dynamics with increased latencies, in addition to a larger number of square wave jerk interruptions of otherwise stable fixation. After diagnostic features of each disorder were identified, prospective data collection could occur in a blinded fashion, and oculomotor features were used to predict clinical diagnoses. Measures of fixation stability were capable of almost perfectly differentiating subjects with PD, and a novel combined parameter achieved similar results in ET. At the group level these oculomotor signs do not appear to progress as the disease does, but subanalyses show that individual patients on constant pharmaceutical doses, tracked over time, do change and progress slightly. The near-perfect separation of disease states suggests that oculomotor recording could serve as a powerful biomarker for the differential diagnosis of movement disorders. This tool could potentially impact and improve the lives of millions of people the world over.
APA, Harvard, Vancouver, ISO, and other styles
44

Levin, Ehud, 1957-. "A Computer System for Studying Human Eye Tracking." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276533.

Full text
Abstract:
The improvement in tracking a moving target for an extended period of time was measured in seven human subjects. Each subject was presented with a moving target for a few consecutive runs. The mean square error (MSE) between the eye position and the target position was measured for each run, as was the MSE between the eye velocity and the target velocity. These MSEs were plotted versus time to obtain the learning curves. One subject did not show any improvement in MSEs. For four subjects the position MSE decreased with time. One of these four, the one who obtained the best results, also showed an improvement in his velocity MSE. Two subjects learned to adjust their eye velocity to the target's velocity, as well as to maintain small position mean square errors.
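For concreteness, the two error measures described above can be computed as follows; this is my own sketch with invented sample values (the original study's units and sampling rate are not given here), and velocity is approximated by differencing successive position samples.

# Illustrative computation of the position and velocity MSEs described above.
def mse(a, b):
    """Mean square error between two equally long traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def velocity(trace, dt):
    """Approximate velocity by differencing successive position samples."""
    return [(trace[i + 1] - trace[i]) / dt for i in range(len(trace) - 1)]

dt = 0.01                                  # hypothetical 100 Hz sampling
target_pos = [0.0, 0.5, 1.0, 1.5, 2.0]     # target position per sample (invented)
eye_pos    = [0.0, 0.3, 0.9, 1.6, 2.1]     # recorded eye position per sample (invented)

position_mse = mse(eye_pos, target_pos)
velocity_mse = mse(velocity(eye_pos, dt), velocity(target_pos, dt))
print(position_mse, velocity_mse)

Plotting such MSE values run by run, as the study did, yields the learning curves described in the abstract.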
APA, Harvard, Vancouver, ISO, and other styles
45

Weger, Ulrich Wolfgang. "Spatial and linguistic control of eye movements during reading." Diss., online access via UMI, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
46

Kliegl, Reinhold, and Ralf Engbert. "Conference Abstracts: 14th European Conference on Eye Movements ECEM2007." Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2011/5679/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Wagner, Ross. "Image processing applied to the tracking of eye movements." Thesis, McGill University, 1989. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=55657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Kaylor-Hughes, Catherine J. "The role of eye movements in learning to drive." Thesis, University of Sussex, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487915.

Full text
Abstract:
Learning to drive a car poses novel and demanding problems for the nervous system. In this task the brain must utilise sensory modalities in conjunction with implicit knowledge and learning mechanisms to produce several, almost instantaneous, motor commands. The process by which the brain achieves this has its basis in vision and perception, which are largely influenced by movements of the eyes. This thesis investigates the role of eye movements in learning to drive. Using a head-mounted camera, three learners were taken for their first three driving lessons and their eye and head movements were recorded. These were then analysed and compared to three experienced drivers who had driven the same rural-suburban circuit. Analysis revealed that learner drivers have a smaller area of visual search and longer mean fixation durations compared to the experienced drivers, as seen in previous studies of novice drivers (with some driving experience). In addition, when turning a corner learner drivers do not look into the bend as the experienced drivers do. Instead, learners use information about the road in front of the car to guide steering, which demonstrates a search strategy that is less anticipatory and denies them the opportunity to prepare for the road ahead. This work explores the use of eye movements in extracting useful visual information for learning to drive. In addition, it looks at the time scale of learning to elucidate any patterns of eye movements that may develop in this 'natural' task, as seen in other activities of daily life.
APA, Harvard, Vancouver, ISO, and other styles
49

Kreysa, Helene. "Coordinating speech-related eye movements between comprehension and production." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/5802.

Full text
Abstract:
Although language usually occurs in an interactive and world-situated context (Clark, 1996), most research on language use to date has studied comprehension and production in isolation. This thesis combines research on comprehension and production, and explores the links between them. Its main focus is on the coordination of visual attention between speakers and listeners, as well as the influence this has on the language they use and the ease with which they understand it. Experiment 1 compared participants’ eye movements during comprehension and production of similar sentences: in a syntactic priming task, they first heard a confederate describe an image using active or passive voice, and then described the same kind of picture themselves (cf. Branigan, Pickering, & Cleland, 2000). As expected, the primary influence on eye movements in both tasks was the unfolding sentence structure. In addition, eye movements during target production were affected by the structure of the prime sentence. Eye movements in comprehension were linked more loosely with speech, reflecting the ongoing integration of listeners’ interpretations with the visual context and other conceptual factors. Experiments 2-7 established a novel paradigm to explore how seeing where a speaker was looking during unscripted production would facilitate identification of the objects they were describing in a photographic scene. Visual coordination in these studies was created artificially through an on-screen cursor which reflected the speaker’s original eye movements (cf. Brennan, Chen, Dickinson, Neider, & Zelinsky, 2007). A series of spatial and temporal manipulations of the link between cursor and speech investigated the respective influences of linguistic and visual information at different points in the comprehension process. Implications and potential future applications are discussed, as well as the relevance of this kind of visual cueing to the processing of real gaze in face-to-face interaction.
APA, Harvard, Vancouver, ISO, and other styles
50

Richard, Alby-Réal. "The interaction of visual perception and saccadic eye movements." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123018.

Full text
Abstract:
Primates have evolved to make high-velocity, ballistic eye movements called saccades approximately three to five times per second in order to orient the high-resolution part of their retina, or fovea, towards objects of interest. While saccades are generally adaptive in most situations, they also present the brain with certain challenges in order to maintain a stable perception of the world. With every movement of the visual axis, involving the eyes alone or a combined eye-head gaze shift, the retina is presented with a rapidly changing view of the world. Most observers are not aware of the actual flow of incoming retinal information during a saccade, and instead perceive the world as being stable from one gaze movement to the next. How the brain accomplishes this stability has been referred to as the problem of 'trans-saccadic' perceptual stability. While this problem has been pondered for more than a century by philosophers, psychologists, and neuroscientists, there is still no consensus on the precise mechanism by which visual stability is achieved. One way to approach the problem of perceptual stability is to study the way in which visual perception changes around the time of saccades. It is well known that objects briefly presented around the time of saccadic eye movements are not perceived at their veridical location, a phenomenon called perisaccadic mislocalization. Most observers make errors of two types that are predictable and systematic: a translational shift in the direction of the saccade, and compression towards the target location. This latter effect, the compression of visual space towards the saccade target, is the primary phenomenon through which this thesis sought to understand the mechanisms responsible for visual stability across saccades. To this end, a series of psychophysical experiments was conducted to explore which signals may be involved in computing where an object was in space around the time of a saccade. In the first paper, we described a biological framework in which an oculomotor signal encoding the gaze command interacts with a visual signal encoding afferent information. The outcome of this interaction was related to the perceived position of the object presented around the time of the saccade, and this formulation was able to capture both our results and data from outside our laboratory. After successfully modelling the compression effect within a plausible biological framework, the next paper focused on elucidating the nature of the oculomotor signal. We accomplished this by testing observers in a variety of conditions aimed at disambiguating whether the signal was encoding the eye movement alone or the eye-head gaze shift, and found that compression was indeed linked to the eye-head gaze shift. Moreover, the experiments performed allowed us to further describe the parameters involved in modulating the compression effect. With our understanding of the compression effect and the likely biological signals involved, we then used this model to gain an enhanced understanding of how perisaccadic visual perception may be altered in patients with schizophrenia. The final paper examines the postulate that patients with schizophrenia may have an altered corollary discharge signal in the visual pathway for saccadic eye movements.
With this study we were able to show that these patients do in fact exhibit qualitative differences in mislocalization compared to controls, and that these are attributable to a noisy corollary discharge that encodes the eye's position in space. This thesis comprises a systematic overview of what signals are involved in maintaining perceptual stability across saccadic eye and head movements. We have been able to investigate these signals through a combination of psychophysical studies and computational modeling. Finally, we used these paradigms to understand how these signaling mechanisms are altered in patients with schizophrenia.
Over the course of evolution, primates developed rapid eye movements, or saccades. Although saccades are generally adaptive, they pose important challenges for a visual system that seeks to maintain a stable perception of the world. With each movement of the visual axis, whether of the eyes alone or of the head together with the eyes, the retina receives a new image of the world. Most observers are not aware of this large flow of discontinuous retinal information and instead perceive a stable world from one gaze to the next. This consolidation of the saccadic visual input into a stable, continuous percept of the world is called the problem of 'trans-saccadic perceptual stability'. This phenomenon can be studied with a rigorous scientific approach that examines how visual perception changes across eye movements. Notably, it has been shown that targets presented very briefly during a saccade are perceived at locations that differ from their true position, the phenomenon of perisaccadic mislocalization. These predictable and systematic errors are of two types: the first is a simple shift in the direction of the saccade; the second takes the form of compression toward the target object. This latter error, the compression of visual space toward the saccade target, is the main phenomenon this thesis used to study the mechanisms that produce visual stability across saccades. A series of psychophysical experiments was therefore carried out to explore the signals involved in judging the spatial position of a saccade target. In the first chapter, we laid out an experimental framework describing the interaction of an oculomotor signal encoding the eye movement with a visual signal encoding the position of the target. According to our formulation, the outcome of this interaction is directly related to the perceived position of a target presented around a saccade. This model reproduced not only the results from our laboratory but also those of an external collaborator from whom we received only raw data. Following this first success, in the second chapter we turned to the nature of the oculomotor signal itself. We did so using a variety of experimental conditions designed to determine whether the signal encoded the eye movement alone or in conjunction with the head movement, and our results clearly showed that the compression phenomenon is indeed linked to the combined eye-head gaze shift: compression was toward the goal of the gaze shift and not the saccade target as such. These experiments also allowed us to describe more precisely the parameters and conditions that affect compression. Armed with this understanding of the compression effect and its likely biological signals, in the final chapter we used our biological model to better understand how vision around saccades may be altered in patients with schizophrenia. More specifically, we examined the hypothesis that the corollary discharge (CD) in the visual pathways may be altered in patients with schizophrenia.
Our studies indeed showed that, during saccades, patients with schizophrenia exhibit qualitative differences in localization errors compared with control participants. This study showed that the CD in patients with schizophrenia differed from that of controls, and that this difference was sufficient to explain the differences observed in their visual perception around saccades.
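As a purely illustrative aside, the compression effect studied in this thesis can be caricatured with a toy model: the perceived position of a briefly flashed probe is pulled toward the saccade target, with the strength of the pull following a temporal window centered on saccade onset. This sketch is not the model developed in the thesis, and every parameter value below is invented.

import math

def perceived_position(probe_x, target_x, t_flash, sigma=25.0, max_pull=0.6):
    """Toy gaze-centered compression: probe_x and target_x in degrees of
    visual angle, t_flash in ms relative to saccade onset."""
    pull = max_pull * math.exp(-0.5 * (t_flash / sigma) ** 2)
    return probe_x + pull * (target_x - probe_x)

# A probe 10 deg to the left of a saccade target at 10 deg, flashed at various times:
for t in (-100, -50, 0, 50, 100):
    print(t, round(perceived_position(0.0, 10.0, t), 2))
# Mislocalization toward the target peaks for flashes near saccade onset (t = 0).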
APA, Harvard, Vancouver, ISO, and other styles
