
Journal articles on the topic 'Visual scanpaths'


Consult the top 50 journal articles for your research on the topic 'Visual scanpaths.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Król, Michał, and Magdalena Ewa Król. "A Novel Eye Movement Data Transformation Technique that Preserves Temporal Information: A Demonstration in a Face Processing Task." Sensors 19, no. 10 (May 23, 2019): 2377. http://dx.doi.org/10.3390/s19102377.

Abstract:
Existing research has shown that human eye-movement data conveys rich information about underlying mental processes, and that the latter may be inferred from the former. However, most related studies rely on spatial information about which different areas of visual stimuli were looked at, without considering the order in which this occurred. Although powerful algorithms for making pairwise comparisons between eye-movement sequences (scanpaths) exist, the problem is how to compare two groups of scanpaths, e.g., those registered with vs. without an experimental manipulation in place, rather than individual scanpaths. Here, we propose that the problem might be solved by projecting a scanpath similarity matrix, obtained via a pairwise comparison algorithm, to a lower-dimensional space (the comparison and dimensionality-reduction techniques we use are ScanMatch and t-SNE). The resulting distributions of low-dimensional vectors representing individual scanpaths can be statistically compared. To assess if the differences result from temporal scanpath features, we propose to statistically compare the cross-validated accuracies of two classifiers predicting group membership: (1) based exclusively on spatial metrics; (2) based additionally on the obtained scanpath representation vectors. To illustrate, we compare autistic vs. typically-developing individuals looking at human faces during a lab experiment and find significant differences in temporal scanpath features.
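The pipeline proposed in this abstract can be sketched in miniature with simple stand-ins: a normalized edit-distance similarity over AOI-letter scanpaths in place of ScanMatch, and a permutation test on within- versus between-group similarity in place of the t-SNE projection and classifier comparison. All data and names below are illustrative, not the authors' implementation.

```python
import random

def edit_similarity(a: str, b: str) -> float:
    """Normalized edit-distance similarity between two scanpaths coded as
    AOI-letter strings (a crude stand-in for ScanMatch alignment scores)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return 1 - d[m][n] / max(m, n, 1)

def within_minus_between(paths, labels):
    """Test statistic: mean within-group minus mean between-group similarity."""
    within, between = [], []
    for i in range(len(paths)):
        for j in range(i + 1, len(paths)):
            s = edit_similarity(paths[i], paths[j])
            (within if labels[i] == labels[j] else between).append(s)
    return sum(within) / len(within) - sum(between) / len(between)

def permutation_test(paths, labels, n_perm=500, seed=0):
    """P-value for the group difference, obtained by shuffling group labels."""
    rng = random.Random(seed)
    observed = within_minus_between(paths, labels)
    hits = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        if within_minus_between(paths, shuffled) >= observed:
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)

# Toy data: two hypothetical groups with disjoint viewing habits
paths = ["ABAB", "ABBA", "ABAB", "CDCD", "CDDC", "CDCD"]
labels = [0, 0, 0, 1, 1, 1]
obs, p = permutation_test(paths, labels)
```

With such tiny toy samples the permutation p-value bottoms out around 0.1; the point is the shape of the pipeline, not its power.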
2

Brandt, Stephan A., and Lawrence W. Stark. "Spontaneous Eye Movements During Visual Imagery Reflect the Content of the Visual Scene." Journal of Cognitive Neuroscience 9, no. 1 (January 1997): 27–38. http://dx.doi.org/10.1162/jocn.1997.9.1.27.

Abstract:
Eye movements were recorded in nine naïve subjects while they viewed and visualized four irregularly checkered diagrams. Scanpaths, defined as repetitive sequences of fixations and saccades, were found during both visual imagery and viewing. Positions of fixations were distributed according to the spatial arrangement of subfeatures in the diagrams. For a particular imagined diagrammatic picture, eye movements were closely correlated with the eye movements recorded while viewing the same picture. Thus eye movements during imagery are not random but reflect the content of the visualized scene. Whether scanpath eye movements play a significant functional role in the process of visual imagery is discussed.
3

McClung, Sarah N., and Ziho Kang. "Characterization of Visual Scanning Patterns in Air Traffic Control." Computational Intelligence and Neuroscience 2016 (2016): 1–17. http://dx.doi.org/10.1155/2016/8343842.

Abstract:
Characterization of air traffic controllers’ (ATCs’) visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs’ linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process.
4

Król, Magdalena Ewa, and Michał Król. "Scanpath similarity measure reveals not only a decreased social preference, but also an increased nonsocial preference in individuals with autism." Autism 24, no. 2 (July 27, 2019): 374–86. http://dx.doi.org/10.1177/1362361319865809.

Abstract:
We compared scanpath similarity in response to repeated presentations of social and nonsocial images representing natural scenes in a sample of 30 participants with autism spectrum disorder and 32 matched typically developing individuals. We used scanpath similarity (calculated using ScanMatch) as a novel measure of attentional bias or preference, which constrains eye-movement patterns by directing attention to specific visual or semantic features of the image. We found that, compared with the control group, scanpath similarity of participants with autism was significantly higher in response to nonsocial images, and significantly lower in response to social images. Moreover, scanpaths of participants with autism were more similar to scanpaths of other participants with autism in response to nonsocial images, and less similar in response to social images. Finally, we also found that in response to nonsocial images, scanpath similarity of participants with autism did not decline with stimulus repetition to the same extent as in the control group, which suggests more perseverative attention in the autism spectrum disorder group. These results show a preferential fixation on certain elements of social stimuli in typically developing individuals compared with individuals with autism, and on certain elements of nonsocial stimuli in the autism spectrum disorder group, compared with the typically developing group.
5

Gbadamosi, Joystone, and Wolfgang H. Zangemeister. "Visual Imagery in Hemianopic Patients." Journal of Cognitive Neuroscience 13, no. 7 (October 1, 2001): 855–66. http://dx.doi.org/10.1162/089892901753165782.

Abstract:
In this article we report some findings about visual imagery in patients with stable homonymous hemianopia compared to healthy control subjects. These findings were obtained by analyzing the gaze control through recording of eye movements in different phases of viewing and imagery. We used six different visual stimuli for the consecutive viewing and imagery phases. With infrared oculography, we recorded eye movements during this presentation phase and in three subsequent imagery phases in absence of the stimulus. Analyzing the basic parameters of the gaze sequences (known as “scanpaths”), we discovered distinct characteristics of the “viewing scanpaths” and the “imagery scanpaths” in both groups, which suggests a reduced extent of the image within the cognitive representation. We applied different similarity measures (string/vector string editing, Markov analysis). We found a “progressive consistency of imagery,” shown through raising similarity values for the comparison of the late imagery scanpaths. This result suggests a strong top-down component in picture exploration: In both groups, healthy subjects and hemianopic patients, a mental model of the viewed picture must evolve very soon and substantially determine the eye movements. As our hemianopic patients showed analogous results to the normal subjects, we conclude that these patients are well adjusted to their deficit and, despite their perceptual defect, have a preserved cognitive representation, which follows the same top-down vision strategies in the process of visual imagery.
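A minimal sketch of the Markov-style analysis mentioned above, assuming scanpaths are coded as strings of area-of-interest (AOI) labels; the function names and the log-likelihood scoring are illustrative, not the authors' exact procedure:

```python
from collections import Counter, defaultdict
import math

def transition_probs(seq):
    """First-order Markov transition probabilities between AOI labels,
    fitted from one scanpath (e.g., a viewing phase)."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def avg_log_likelihood(seq, probs, floor=1e-6):
    """Mean log-probability of seq's transitions under a fitted model.
    Higher values mean the scanpath (e.g., an imagery phase) is more
    consistent with the model; unseen transitions get a small floor."""
    terms = [math.log(probs.get(a, {}).get(b, floor))
             for a, b in zip(seq, seq[1:])]
    return sum(terms) / len(terms)
```

Fitting the model on a viewing scanpath and scoring successive imagery scanpaths against it would be one way to look for the rising similarity that the authors describe as "progressive consistency of imagery".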
6

Green, Melissa J., Leanne M. Williams, and Dean Davidson. "Visual scanpaths to threat-related faces in deluded schizophrenia." Psychiatry Research 119, no. 3 (August 2003): 271–85. http://dx.doi.org/10.1016/s0165-1781(03)00129-x.

7

Wollstadt, Patricia, Martina Hasenjäger, and Christiane B. Wiebel-Herboth. "Quantifying the Predictability of Visual Scanpaths Using Active Information Storage." Entropy 23, no. 2 (January 29, 2021): 167. http://dx.doi.org/10.3390/e23020167.

Abstract:
Entropy-based measures are an important tool for studying human gaze behavior under various conditions. In particular, gaze transition entropy (GTE) is a popular method to quantify the predictability of a visual scanpath as the entropy of transitions between fixations and has been shown to correlate with changes in task demand or changes in observer state. Measuring scanpath predictability is thus a promising approach to identifying viewers’ cognitive states in behavioral experiments or gaze-based applications. However, GTE does not account for temporal dependencies beyond two consecutive fixations and may thus underestimate the actual predictability of the current fixation given past gaze behavior. Instead, we propose to quantify scanpath predictability by estimating the active information storage (AIS), which can account for dependencies spanning multiple fixations. AIS is calculated as the mutual information between a process’s multivariate past state and its next value. It is thus able to measure how much information a sequence of past fixations provides about the next fixation, hence covering a longer temporal horizon. Applying the proposed approach, we were able to distinguish between induced observer states based on estimated AIS, providing first evidence that AIS may be used in the inference of user states to improve human–machine interaction.
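The contrast drawn here between GTE and AIS can be illustrated with naive plug-in estimators on a symbolic fixation sequence. This is a simplified sketch (count-based estimates, no bias correction), not the estimator used in the paper:

```python
from collections import Counter
import math

def entropy(counts):
    """Shannon entropy (bits) of a Counter of symbol frequencies."""
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def conditional_entropy(seq, k):
    """Plug-in estimate of H(X_t | X_{t-k..t-1}) for a fixation-label
    sequence; with k=1 this is the quantity behind gaze transition entropy."""
    joint = Counter(tuple(seq[i - k:i + 1]) for i in range(k, len(seq)))
    past = Counter(tuple(seq[i - k:i]) for i in range(k, len(seq)))
    total = sum(joint.values())
    h = 0.0
    for ctx_next, n in joint.items():
        h -= (n / total) * math.log2(n / past[ctx_next[:-1]])
    return h

def active_information_storage(seq, k):
    """AIS = I(past_k ; next) = H(next) - H(next | past_k), in bits."""
    return entropy(Counter(seq[k:])) - conditional_entropy(seq, k)
```

On a period-4 sequence such as AABBAABB…, a one-fixation past (as in GTE) predicts almost nothing, while a two-fixation past predicts the next fixation perfectly; this is exactly the kind of longer-range structure AIS is meant to capture.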
8

Drusch, Gautier, and J. M. Christian Bastien. "Analyzing Web pages visual scanpaths: between and within tasks variability." Work 41 (2012): 1559–66. http://dx.doi.org/10.3233/wor-2012-0353-1559.

9

Laeng, Bruno, and Dinu-Stefan Teodorescu. "Eye scanpaths during visual imagery reenact those of perception of the same visual scene." Cognitive Science 26, no. 2 (March 2002): 207–31. http://dx.doi.org/10.1207/s15516709cog2602_3.

10

Boccignone, Giuseppe, and Mario Ferraro. "Feed and fly control of visual scanpaths for foveation image processing." annals of telecommunications - annales des télécommunications 68, no. 3-4 (July 18, 2012): 201–17. http://dx.doi.org/10.1007/s12243-012-0316-9.

11

Krichmar, Jeffrey L., Kim T. Blackwell, Garth S. Barbour, Alexander B. Golovan, and Thomas P. Vogl. "A solution to the feature correspondence problem inspired by visual scanpaths." Neurocomputing 26-27 (June 1999): 769–78. http://dx.doi.org/10.1016/s0925-2312(98)00146-5.

12

Beedie, Sara A., David M. St.Clair, Dan P. Rujescu, and Philip J. Benson. "SMOOTH PURSUIT AND VISUAL SCANPATHS: RELATED OR INDEPENDENT DEFICITS IN SCHIZOPHRENIA?" Schizophrenia Research 117, no. 2-3 (April 2010): 247–48. http://dx.doi.org/10.1016/j.schres.2010.02.381.

13

Williams, Leanne M., Carmel M. Loughland, Evian Gordon, and Dean Davidson. "Visual scanpaths in schizophrenia: is there a deficit in face recognition?" Schizophrenia Research 40, no. 3 (December 1999): 189–99. http://dx.doi.org/10.1016/s0920-9964(99)00056-0.

14

Zangemeister, Wolfgang H., and Thomas Liman. "Foveal versus parafoveal scanpaths of visual imagery in virtual hemianopic subjects." Computers in Biology and Medicine 37, no. 7 (July 2007): 975–82. http://dx.doi.org/10.1016/j.compbiomed.2007.01.015.

15

Loughland, Carmel M., Leanne M. Williams, and Evian Gordon. "Visual scanpaths to positive and negative facial emotions in an outpatient schizophrenia sample." Schizophrenia Research 55, no. 1-2 (May 2002): 159–70. http://dx.doi.org/10.1016/s0920-9964(01)00186-4.

16

Beedie, Sara A., Philip J. Benson, Ina Giegling, Dan Rujescu, and David M. St. Clair. "Smooth pursuit and visual scanpaths: Independence of two candidate oculomotor risk markers for schizophrenia." World Journal of Biological Psychiatry 13, no. 3 (May 5, 2011): 200–210. http://dx.doi.org/10.3109/15622975.2011.566628.

17

Granholm, Eric, Robert F. Asarnow, and Stephen R. Marder. "Display visual angle and attentional scanpaths on the span of apprehension task in schizophrenia." Journal of Abnormal Psychology 105, no. 1 (February 1996): 17–24. http://dx.doi.org/10.1037/0021-843x.105.1.17.

18

Marsh, Pamela J., and Leanne M. Williams. "ADHD and schizophrenia phenomenology: Visual scanpaths to emotional faces as a potential psychophysiological marker?" Neuroscience & Biobehavioral Reviews 30, no. 5 (January 2006): 651–65. http://dx.doi.org/10.1016/j.neubiorev.2005.11.004.

19

Green, Melissa, Leanne Williams, and Dean Davidson. "Visual scanpaths and facial affect recognition in delusion-prone individuals: Increased sensitivity to threat?" Cognitive Neuropsychiatry 8, no. 1 (January 2003): 19–41. http://dx.doi.org/10.1080/713752236.

20

Spilka, Michael J., Daniel J. Pittman, Signe L. Bray, and Vina M. Goghari. "Manipulating visual scanpaths during facial emotion perception modulates functional brain activation in schizophrenia patients and controls." Journal of Abnormal Psychology 128, no. 8 (November 2019): 855–66. http://dx.doi.org/10.1037/abn0000468.

21

Dion-Marcoux, Y., C. Blais, D. Fiset, and H. Forget. "OVER-RELIANCE ON THE MOUTH AREA IN THE VISUAL SCANPATHS ARE ALSO OBSERVED WITH OLDER EMOTIONAL FACE." Innovation in Aging 1, suppl_1 (June 30, 2017): 499–500. http://dx.doi.org/10.1093/geroni/igx004.1774.

22

Liberati, Alessio, Roberta Fadda, Giuseppe Doneddu, Sara Congiu, Marco A. Javarone, Tricia Striano, and Alessandro Chessa. "A Statistical Physics Perspective to Understand Social Visual Attention in Autism Spectrum Disorder." Perception 46, no. 8 (January 6, 2017): 889–913. http://dx.doi.org/10.1177/0301006616685976.

Abstract:
This study investigated social visual attention in children with Autism Spectrum Disorder (ASD) and with typical development (TD) in the light of Brockmann and Geisel’s model of visual attention. The probability distribution of gaze movements and the clustering of gaze points, registered with eye-tracking technology, were studied during a free visual exploration of a gaze stimulus. A data-driven analysis of the distribution of eye movements was chosen, alongside a computational model to simulate group differences, to overcome possible methodological problems related to the experimenters’ subjective expectations about the informative contents of the image. Analysis of the eye-tracking data indicated that the scanpaths of children with TD and ASD were characterized by eye movements geometrically equivalent to Lévy flights. Children with ASD showed a higher frequency of long saccadic amplitudes compared with controls. A clustering analysis revealed a greater dispersion of eye movements for these children. Modeling of the results indicated higher values of the model parameter modulating the dispersion of eye movements for children with ASD. Together, the experimental results and the model point to a greater dispersion of gaze points in ASD.
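The Lévy-flight claim above comes down to a heavy, power-law tail in the saccade-amplitude distribution. Below is a minimal sketch of estimating the tail exponent with the standard maximum-likelihood (Hill-type) formula, on synthetic amplitudes rather than the study's data:

```python
import math
import random

def powerlaw_mle(amplitudes, xmin=1.0):
    """Maximum-likelihood exponent for p(x) ~ x^(-alpha), x >= xmin.
    Exponents in roughly 1 < alpha <= 3 are the Levy-flight regime."""
    tail = [a for a in amplitudes if a >= xmin]
    return 1.0 + len(tail) / sum(math.log(a / xmin) for a in tail)

# Synthetic saccade amplitudes with known tail exponent alpha = 2.5;
# random.paretovariate(c) draws from a density proportional to x^-(c + 1)
rng = random.Random(1)
samples = [rng.paretovariate(1.5) for _ in range(5000)]
alpha_hat = powerlaw_mle(samples)
```

Comparing fitted exponents (or the frequency of large amplitudes) between groups is one concrete way to quantify the "higher frequency of long saccadic amplitudes" reported for the ASD group.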
23

Palma Fraga, Ricardo, Ziho Kang, Jerry M. Crutchfield, and Saptarshi Mandal. "Visual Search and Conflict Mitigation Strategies Used by Expert en Route Air Traffic Controllers." Aerospace 8, no. 7 (June 23, 2021): 170. http://dx.doi.org/10.3390/aerospace8070170.

Abstract:
The role of the en route air traffic control specialist (ATCS) is vital to maintaining safety and efficiency within the National Airspace System (NAS). ATCSs must vigilantly scan the airspace under their control and adjacent airspaces using an En Route Automation Modernization (ERAM) radar display. The intent of this research is to provide an understanding of the expert controller visual search and aircraft conflict mitigation strategies that could be used as scaffolding methods during ATCS training. Interviews and experiments were conducted to elicit visual scanning and conflict mitigation strategies from the retired controllers who were employed as air traffic control instructors. The interview results were characterized and classified using various heuristics. In particular, representative visual scanpaths were identified, which accord with the interview results of the visual search strategies. The highlights of our findings include: (1) participants used systematic search patterns, such as circular, spiral, linear or quadrant-based, to extract operation-relevant information; (2) participants applied an information hierarchy when aircraft information was cognitively processed (altitude -> direction -> speed); (3) altitude or direction changes were generally preferred over speed changes when imminent potential conflicts were mitigated. Potential applications exist in the implementation of the findings into the training curriculum of candidates.
24

Marsh, P. J., I. Lazzaro, B. R. Manor, A. W. F. Harris, L. M. Williams, E. Gordon, and D. Davidson. "Facial expressions of emotion and visual scanpaths in attention-deficit hyperactivity disorder (ADHD) and first-episode psychosis (FEP)." Schizophrenia Research 41, no. 1 (January 2000): 288. http://dx.doi.org/10.1016/s0920-9964(00)91030-2.

25

Li, Xian-Bin, Wen-Long Jiang, Yu-Jie Wen, Chang-Ming Wang, Qing Tian, Yu Fan, Hai-Bo Yang, and Chuan-Yue Wang. "The attenuated visual scanpaths of patients with schizophrenia whilst recognizing emotional facial expressions are worsened in natural social scenes." Schizophrenia Research 220 (June 2020): 155–63. http://dx.doi.org/10.1016/j.schres.2020.03.040.

26

Mikula, Laura, Sergio Mejía-Romero, Romain Chaumillon, Amigale Patoine, Eduardo Lugo, Delphine Bernardin, and Jocelyn Faubert. "Eye-head coordination and dynamic visual scanning as indicators of visuo-cognitive demands in driving simulator." PLOS ONE 15, no. 12 (December 31, 2020): e0240201. http://dx.doi.org/10.1371/journal.pone.0240201.

Abstract:
Driving is an everyday task involving a complex interaction between visual and cognitive processes. As such, an increase in the cognitive and/or visual demands can lead to a mental overload which can be detrimental for driving safety. Compelling evidence suggests that eye and head movements are relevant indicators of visuo-cognitive demands and attention allocation. This study aims to investigate the effects of visual degradation on eye-head coordination as well as visual scanning behavior during a highly demanding task in a driving simulator. A total of 21 emmetropic participants (21 to 34 years old) performed dual-task driving in which they were asked to maintain a constant speed on a highway while completing a visual search and detection task on a navigation device. Participants did the experiment with optimal vision and with contact lenses that introduced a visual perturbation (myopic defocus). The results indicate modifications of eye-head coordination and the dynamics of visual scanning in response to the visual perturbation induced. More specifically, the head was more involved in horizontal gaze shifts when the visual needs were not met. Furthermore, the evaluation of visual scanning dynamics, based on time-based entropy which measures the complexity and randomness of scanpaths, revealed that eye and gaze movements became less explorative and more stereotyped when vision was not optimal. These results provide evidence for a reorganization of both eye and head movements in response to increasing visual-cognitive demands during a driving task. Altogether, these findings suggest that eye and head movements can provide relevant information about visuo-cognitive demands associated with complex tasks. Ultimately, eye-head coordination and visual scanning dynamics may be good candidates to estimate drivers’ workload and better characterize risky driving behavior.
27

Spilka, Michael, Daniel Pittman, Signe Bray, and Vina Goghari. "T212. Does Manipulating Visual Scanpaths During Facial Emotion Perception Modulate Brain Activation in Face-Processing Regions in Schizophrenia Patients and Controls?" Biological Psychiatry 83, no. 9 (May 2018): S210—S211. http://dx.doi.org/10.1016/j.biopsych.2018.02.549.

28

Buzzelli, Marco. "Recent Advances in Saliency Estimation for Omnidirectional Images, Image Groups, and Video Sequences." Applied Sciences 10, no. 15 (July 27, 2020): 5143. http://dx.doi.org/10.3390/app10155143.

Abstract:
We present a review of methods for automatic estimation of visual saliency: the perceptual property that makes specific elements in a scene stand out and grab the attention of the viewer. We focus on domains that are especially recent and relevant, as they make saliency estimation particularly useful and/or effective: omnidirectional images, image groups for co-saliency, and video sequences. For each domain, we perform a selection of recent methods, we highlight their commonalities and differences, and describe their unique approaches. We also report and analyze the datasets involved in the development of such methods, in order to reveal additional peculiarities of each domain, such as the representation used for the ground truth saliency information (scanpaths, saliency maps, or salient object regions). We define domain-specific evaluation measures, and provide quantitative comparisons on the basis of common datasets and evaluation criteria, highlighting the different impact of existing approaches on each domain. We conclude by synthesizing the emerging directions for research in the specialized literature, which include novel representations for omnidirectional images, inter- and intra- image saliency decomposition for co-saliency, and saliency shift for video saliency estimation.
29

Pavlovskaya, Marina, Itzhak Glass, Nachum Soroker, Baruch Blum, and Zeev Groswasser. "Coordinate Frame for Pattern Recognition in Unilateral Spatial Neglect." Journal of Cognitive Neuroscience 9, no. 6 (November 1997): 824–34. http://dx.doi.org/10.1162/jocn.1997.9.6.824.

Abstract:
The present research examines the effect of spatial (object-centered) attentional constraints on pattern recognition. Four normal subjects and two right-hemisphere-damaged patients with left visual neglect participated in the study. Small, letterlike, prelearned patterns served as stimuli. Short exposure time prevented overt scanpaths during stimulus presentation. Attention was attracted to a central (midsagittal) fixation point by precuing this location prior to each stimulus presentation. Minute (up to 1.5° of visual angle) rightward and leftward stimulus shifts caused attention to be allocated each time to a different location in the object space, while remaining in a fixed central position in viewer-centered coordinates. The task was to decide which of several prelearned patterns was presented in each trial. In the normal subjects, best performance was achieved when the luminance centroid (LC; derived from the analysis of low spatial frequencies in the object space) of each pattern coincided with the spatial position of the precue. In contrast, the patients with neglect showed optimal recognition performance when precuing attracted attention to locations within the object space to the left of the LC. The normal performance suggests that the LC may serve as a center of gravity for attention allocation during pattern recognition. This point seems to be the target location where focal attention is normally directed, following a primary global analysis based on the low spatial frequencies. Thus, the LC of a simple pattern may serve as the origin point for an object-centered coordinate frame (OCCF), dividing it into right and left. This, in turn, serves to create a prototype description of the pattern, in its own coordinates, in memory, to be addressed during subsequent recognition tasks. The best match of the percept with the stored description may explain the observed advantage of allocating attention to the LC. The performance of the brain-damaged patients can be explained in terms of neglect operating in the OCCF.
30

Nam, Beomwoo, Yeseul Kim, Soo Rim Noh, and Taehyun Kim. "M135. DEVELOPMENT OF VISUAL SCANPATH PATTERN ANALYSIS BASED ON FACIAL EMOTION PERCEPTION ENHANCEMENT TRAINING PROGRAM IN SCHIZOPHRENIA PATIENTS." Schizophrenia Bulletin 46, Supplement_1 (April 2020): S186—S187. http://dx.doi.org/10.1093/schbul/sbaa030.447.

Abstract:
Background: Facial expression is an important non-verbal way of expressing a person’s emotional state. If the process of perceiving facial features is impaired, the ability to recognize the emotional state of others is degraded, which may make it difficult to maintain interpersonal and social communication. Many studies have reported an association between deficits in facial emotion perception (FEP) and social functioning in schizophrenia. We therefore developed a visual scanpath pattern analysis based FEP enhancement training program for schizophrenia. Methods: We enrolled patients who visited or were admitted to Gongju National Hospital, as well as residents of shared housing facilities for rehabilitation, from September 2018 to May 2019. 128 patients attended the FEP training program in an open, blind, randomized-controlled cross-over design. Both the FEP training and mock programs were provided twice a week over a one-month treatment period, with a 4-week washout period between treatment periods. The primary outcome was heatmaps based on visual scanpath patterns. Results: Of the 128 patients, 121 completed the study and 7 dropped out. In the FEP training group, visual scanpath patterns were somewhat closer to normal than in the mock group. When making an effort to perceive the emotion of face pictures, the FEP training group tended to scan the pictures more broadly, including the eye rims, middle of the forehead and sides of the mouth in addition to the eyes, nose and mouth, whereas the mock group tended to gaze at the eyes, nose and mouth intensively. Discussion: This FEP training program may improve the ability to integrate facial expression cues through visual scanpath pattern changes in schizophrenia patients. Acknowledgement: This work was supported by the National Research Foundation of Korea (NRF) (Grant numbers NRF-2016R1E1A2A01953732 & 2018R1E1A2A02059043).
31

Wynn, Jordana, Michael Bone, Michelle Dragan, Kari Hoffman, Bradley Buchsbaum, and Jennifer Ryan. "Selective scanpath repetition supports memory-guided visual search." Journal of Vision 15, no. 12 (September 1, 2015): 789. http://dx.doi.org/10.1167/15.12.789.

32

Wynn, Jordana S., Michael B. Bone, Michelle C. Dragan, Kari L. Hoffman, Bradley R. Buchsbaum, and Jennifer D. Ryan. "Selective scanpath repetition during memory-guided visual search." Visual Cognition 24, no. 1 (January 2, 2016): 15–37. http://dx.doi.org/10.1080/13506285.2016.1175531.

33

Fujita, Toyomi, and Claudio M. Privitera. "Positional Features and Algorithmic Predictability of Visual Regions-of-Interest in Robot Hand Movement." Journal of Robotics and Mechatronics 21, no. 6 (December 20, 2009): 765–72. http://dx.doi.org/10.20965/jrm.2009.p0765.

Abstract:
Visual functions are important for robots that engage in cooperative work with other robots. In order to develop an effective visual function for robots, we investigate human visual scanpath features in a scene of robot hand movement. Human regions-of-interest (hROIs) are measured in psychophysical experiments and compared using a positional similarity index, Sp, on the basis of scanpath theory. Results show consistent hROI loci due to dominant top-down active looking in such a scene. This paper also discusses how bottom-up image processing algorithms (IPAs) are able to predict hROIs. We compare algorithmic regions-of-interest (aROIs) generated by IPAs with the hROIs obtained from robot hand movement images. Results suggest that bottom-up IPAs with support size almost equal to fovea size have a high ability to predict the hROIs.
34

Frame, Mary E., Rik Warren, and Anna M. Maresca. "Scanpath comparisons for complex visual search in a naturalistic environment." Behavior Research Methods 51, no. 3 (December 3, 2018): 1454–70. http://dx.doi.org/10.3758/s13428-018-1154-0.

35

Bate, Sarah, Catherine Haslam, and Timothy L. Hodgson. "Angry faces are special too: Evidence from the visual scanpath." Neuropsychology 23, no. 5 (2009): 658–67. http://dx.doi.org/10.1037/a0014518.

36

Braunagel, Christian, David Geisler, Wolfgang Rosenstiel, and Enkelejda Kasneci. "Online Recognition of Driver-Activity Based on Visual Scanpath Classification." IEEE Intelligent Transportation Systems Magazine 9, no. 4 (2017): 23–36. http://dx.doi.org/10.1109/mits.2017.2743171.

37

Augustyniak, P., and R. Tadeusiewicz. "Assessment of electrocardiogram visual interpretation strategy based on scanpath analysis." Physiological Measurement 27, no. 7 (April 27, 2006): 597–608. http://dx.doi.org/10.1088/0967-3334/27/7/004.

38

Pieters, Rik, Edward Rosbergen, and Michel Wedel. "Visual Attention to Repeated Print Advertising: A Test of Scanpath Theory." Journal of Marketing Research 36, no. 4 (November 1999): 424. http://dx.doi.org/10.2307/3151998.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Pieters, Rik, Edward Rosbergen, and Michel Wedel. "Visual Attention to Repeated Print Advertising: A Test of Scanpath Theory." Journal of Marketing Research 36, no. 4 (November 1999): 424–38. http://dx.doi.org/10.1177/002224379903600403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Green, Melissa J., Leanne M. Williams, and David R. Hemsley. "Cognitive Theories of Delusion Formation: The Contribution of Visual Scanpath Research." Cognitive Neuropsychiatry 5, no. 1 (February 2000): 63–74. http://dx.doi.org/10.1080/135468000395835.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Mondal, Kiran, Debojyoti Bhattacharyya, Deepti Majumdar, Roshani Meena, and Madhusudan Pal. "Visual Performance under Varying Illumination Conditions while using an Indigenously Developed Wrist Wearable Computer." Defence Life Science Journal 6, no. 3 (July 27, 2021): 242–50. http://dx.doi.org/10.14429/dlsj.6.17170.

Full text
Abstract:
Ambient illumination conditions have a significant impact on users’ visual performance during onscreen reading tasks on visual display units, especially those with smaller screens. The present study assessed visual performance under different ambient illumination levels during onscreen reading on a Wrist Wearable Computer (WWC) developed for command-control-communication between the control room and soldiers operating in remote locations. Ten (10) Indian Infantry soldiers performed two different types of loud reading tasks on the WWC display under three ambient illumination (mean ± SEM) conditions: indoor controlled (450.00 ± 10.00 lx), outdoor daylight (11818.7 ± 582.91 lx), and indoor dark (0.12 ± 0.03 lx) environments. While reading, participants wore eye tracking glasses that recorded their eye movement responses. Visualisation techniques were used to predict the association between the surrounding illumination level and the visual performance of the user. Subjective legibility ratings were also collected to understand participants’ preferences regarding the physical attributes of the onscreen information and the illumination level. Results indicated that illumination had a significant effect on eye movement parameters such as fixation frequency, fixation duration, and scanpath length while completing the tasks. Overall, participants performed best under the indoor controlled illumination condition in terms of fixation profile and scanpath length, and also gave it higher subjective legibility ratings than the other two conditions. Future research should address the optimum performance of the display across a wide range of ambient illumination conditions and establish how the display of this indigenously developed wearable computer compares with similar displays available worldwide.
APA, Harvard, Vancouver, ISO, and other styles
42

Yoo, Sangbong, Seongmin Jeong, and Yun Jang. "Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels." Sensors 21, no. 14 (July 8, 2021): 4686. http://dx.doi.org/10.3390/s21144686.

Full text
Abstract:
Many gaze data visualization techniques intuitively show eye movements together with the visual stimuli. Eye trackers record a large number of eye movements within a short period, so visualizing raw gaze data over the stimulus appears complicated and cluttered, making it difficult to gain insight from the visualization. To avoid this complication, fixation identification algorithms are often employed to produce more abstract visualizations. Researchers have typically used attention maps for gaze data abstraction and scanpath visualizations to analyze detailed gaze movement patterns. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing, yet it is difficult to determine how these algorithms affect gaze movement pattern visualizations. In addition, analysts often spend considerable time manually adjusting the parameters of fixation identification algorithms. In this paper, we propose a gaze behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. We also compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and the machine learning-based, behavior-based models on visualizations at each abstraction level: attention map, scanpath, and abstract gaze movement visualization.
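The abstract above contrasts several fixation identification algorithms (I-VT, I-DT, I-VDT). As a minimal sketch of the simplest of these, velocity-threshold identification (I-VT) classifies each gaze sample by its point-to-point velocity: runs of samples below the threshold are merged into fixations, reported here as centroids. The sample format and threshold value are illustrative assumptions, not the paper's implementation.

```python
import math

def ivt_fixations(samples, velocity_threshold=50.0):
    """Velocity-threshold identification (I-VT) sketch.
    samples: list of (t, x, y) gaze samples; velocity is measured in
    position units per time unit. Consecutive samples below the
    threshold are grouped into one fixation, reported as its centroid."""
    def centroid(group):
        xs = [x for _, x, _ in group]
        ys = [y for _, _, y in group]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    fixations, current = [], []
    for prev, cur in zip(samples, samples[1:]):
        t0, x0, y0 = prev
        t1, x1, y1 = cur
        velocity = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        if velocity < velocity_threshold:
            if not current:          # start a new fixation group
                current.append(prev)
            current.append(cur)
        elif current:                # saccade detected: close the group
            fixations.append(centroid(current))
            current = []
    if current:
        fixations.append(centroid(current))
    return fixations

# Two stable clusters separated by one fast (saccadic) jump.
samples = [(0, 100, 100), (1, 101, 100), (2, 102, 101),
           (3, 300, 300), (4, 301, 300), (5, 302, 301)]
print(ivt_fixations(samples))  # two fixation centroids
```

I-DT works analogously but thresholds the spatial dispersion of a sliding window rather than velocity; I-VDT combines both criteria.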
APA, Harvard, Vancouver, ISO, and other styles
43

McCabe, Kathryn, Dominique Rich, Carmel Maree Loughland, Ulrich Schall, and Linda Elisabet Campbell. "Visual scanpath abnormalities in 22q11.2 deletion syndrome: Is this a face specific deficit?" Psychiatry Research 189, no. 2 (September 2011): 292–98. http://dx.doi.org/10.1016/j.psychres.2011.06.012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Nigel Teik Ming, Laurenn Maree Thomas, Patrick Joseph Fraser Clarke, Ian Bernard Hickie, and Adam John Guastella. "Hyperscanning and avoidance in social anxiety disorder: The visual scanpath during public speaking." Psychiatry Research 225, no. 3 (February 2015): 667–72. http://dx.doi.org/10.1016/j.psychres.2014.11.025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Horley, Kaye, Leanne M. Williams, Craig Gonsalvez, and Evian Gordon. "Face to face: visual scanpath evidence for abnormal processing of facial expressions in social phobia." Psychiatry Research 127, no. 1-2 (June 2004): 43–53. http://dx.doi.org/10.1016/j.psychres.2004.02.016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Loughland, C. "Visual scanpath dysfunction in first-degree relatives of schizophrenia probands: evidence for a vulnerability marker?" Schizophrenia Research 67, no. 1 (March 1, 2004): 11–21. http://dx.doi.org/10.1016/s0920-9964(03)00094-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Toh, Wei Lin, Susan L. Rossell, and David J. Castle. "Current visual scanpath research: a review of investigations into the psychotic, anxiety, and mood disorders." Comprehensive Psychiatry 52, no. 6 (November 2011): 567–79. http://dx.doi.org/10.1016/j.comppsych.2010.12.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

FUJITA, Toyomi. "1A1-M07 Characteristics of Visual Regions-of-Interest in Robot Hand Movement Based on Scanpath Theory." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2009 (2009): _1A1—M07_1—_1A1—M07_4. http://dx.doi.org/10.1299/jsmermd.2009._1a1-m07_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Han, Xueyan, Yang Shao, Shaowei Yang, and Peng Yu. "Entropy-Based Effect Evaluation of Delineators in Tunnels on Drivers’ Gaze Behavior." Entropy 22, no. 1 (January 17, 2020): 113. http://dx.doi.org/10.3390/e22010113.

Full text
Abstract:
Driving safety in tunnels has always been an issue of great concern. Installing delineators to improve drivers’ instantaneous cognition of the surrounding environment in tunnels can effectively enhance driver safety. Through a simulation study, this paper explored how delineators affect drivers’ gaze behavior (including fixation and scanpath) in tunnels. In addition to analyzing typical parameters, such as fixation position and fixation duration in areas of interest (AOIs), this paper modeled drivers’ switching between AOIs as a Markov chain and computed the Shannon entropy of the fitted Markov model, thereby quantifying the complexity of individual switching patterns between AOIs under different delineator configurations and road alignments. A total of 25 subjects participated in this research. The results show that setting delineators in tunnels can attract drivers’ attention and make them focus on the pavement. When driving in tunnels equipped with delineators, especially tunnels with both wall delineators and pavement delineators, the participants exhibited smaller transition entropy (Ht) and stationary entropy (Hs), which can greatly reduce drivers’ visual fatigue. Compared with the left-curve and right-curve sections, participants obtained higher Ht and Hs values in the straight section.
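The abstract above fits a first-order Markov chain to the sequence of AOI visits and summarizes it with a transition entropy (Ht) and a stationary entropy (Hs). A hedged sketch of that computation is below: Ht is the stationary-weighted conditional entropy of the transition rows, and Hs is the entropy of the stationary distribution, here estimated from empirical visit frequencies. The estimator details are assumptions; the paper's exact fitting procedure may differ.

```python
import math
from collections import Counter

def gaze_entropies(aoi_sequence):
    """Fit a first-order Markov chain to an AOI visit sequence and
    return (Ht, Hs) in bits: the stationary-weighted entropy of the
    transition rows, and the entropy of the stationary distribution
    (estimated here from empirical visit frequencies)."""
    states = sorted(set(aoi_sequence))
    counts = Counter(aoi_sequence)
    n = len(aoi_sequence)
    pi = {s: counts[s] / n for s in states}            # stationary estimate
    hs = -sum(p * math.log2(p) for p in pi.values() if p > 0)

    trans = Counter(zip(aoi_sequence, aoi_sequence[1:]))
    ht = 0.0
    for s in states:
        row_total = sum(c for (a, _), c in trans.items() if a == s)
        if row_total == 0:
            continue                                   # absorbing last state
        row_entropy = -sum((c / row_total) * math.log2(c / row_total)
                           for (a, _), c in trans.items() if a == s)
        ht += pi[s] * row_entropy                      # weight by visit prob.
    return ht, hs

# Perfectly regular alternation between two AOIs: transitions are
# fully predictable (Ht = 0) while visits are evenly split (Hs = 1 bit).
ht, hs = gaze_entropies(list("ABABABAB"))
print(ht, hs)  # 0.0 1.0
```

Lower Ht means more predictable switching between AOIs; lower Hs means attention concentrated on fewer AOIs, matching the paper's interpretation of reduced visual load.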
APA, Harvard, Vancouver, ISO, and other styles
50

Loughland, C., K. McCabe, S. Quinn, M. Hunter, and T. Lewin. "01-04 Visual scanpath comparisons between those people with and without comorbid cannabis abuse: the implications for eye movement research in schizophrenia." Acta Neuropsychiatrica 18, no. 6 (December 2006): 314. http://dx.doi.org/10.1017/s0924270800031860.

Full text
APA, Harvard, Vancouver, ISO, and other styles
