To view other types of publications on this topic, follow the link: Gaze Pattern.

Journal articles on the topic "Gaze Pattern"

Consult the top 50 journal articles for your research on the topic "Gaze Pattern".

Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, provided these details are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Yuan, Guoliang, Yafei Wang, Huizhu Yan, and Xianping Fu. "Self-calibrated driver gaze estimation via gaze pattern learning." Knowledge-Based Systems 235 (January 2022): 107630. http://dx.doi.org/10.1016/j.knosys.2021.107630.

2

Lu, Feng, Xiaowu Chen, and Yoichi Sato. "Appearance-Based Gaze Estimation via Uncalibrated Gaze Pattern Recovery." IEEE Transactions on Image Processing 26, no. 4 (April 2017): 1543–53. http://dx.doi.org/10.1109/tip.2017.2657880.

3

Jang, Gukhwa, and Saehoon Kim. "INVESTIGATING THE EFFECT OF A RAISED CYCLE TRACK, PHYSICAL SEPARATION, LAND USE AND NUMBER OF PEDESTRIAN ON CYCLISTS’ GAZE BEHAVIOR." JOURNAL OF ARCHITECTURE AND URBANISM 43, no. 1 (July 4, 2019): 112–22. http://dx.doi.org/10.3846/jau.2019.3786.

Abstract:
Contemporary cities are home to an increasing number of cyclists. The gaze behavior of cyclists has an important impact on cyclist safety and experience, yet this behavior has not been studied to assess its potential implications for urban design. This study aims to identify the eye-gaze pattern of cyclists and to examine its potential relationships with urban environmental characteristics, such as a raised cycle track, physical separation, land use, and the number of pedestrians. We measured and analyzed 40 cyclists' gaze patterns using an eye tracker; the results were as follows. First, cyclists presented a T-shaped gaze pattern with two spots of frequent eye fixation; this pattern may afford cyclists greater safety and better awareness of the road situation, helping them avoid crashes. Second, more active horizontal gaze dispersion within the T-shaped gaze pattern was observed when participants cycled on a shared, non-raised bikeway. This indicates that the most suitable gaze behavior, and its limitations, differ with environmental characteristics. Therefore, bicycle facilities should be designed with the T-shaped gaze area and the change in cyclists' gaze behavior in each environment in mind, to increase their effectiveness.
4

Brusa, Giulia, Sandro Meneghini, Aldo Piccardo, and Nicola Pizio. "Regressive pattern of horizontal gaze palsy." Neuro-Ophthalmology 7, no. 5 (January 1987): 301–6. http://dx.doi.org/10.3109/01658108708996007.

5

Murnani, Suatmi, Noor Akhmad Setiawan, and Sunu Wibirama. "Spontaneous gaze interaction based on smooth pursuit eye movement using difference gaze pattern method." Communications in Science and Technology 7, no. 1 (July 31, 2022): 8–14. http://dx.doi.org/10.21924/cst.7.1.2022.739.

Abstract:
Human gaze is a promising input modality for natural, touchless user interfaces, a need highlighted during the Covid-19 pandemic. Spontaneous gaze interaction is required to allow participants to interact directly with an application without prior eye-tracking calibration. Smooth pursuit eye movement is commonly used in this kind of spontaneous gaze-based interaction. Many studies have focused on various object selection techniques in smooth pursuit-based gaze interaction; however, challenges in spatial accuracy and implementation complexity have not yet been resolved. To address these problems, we propose an approach that uses difference patterns between gaze and dynamic objects' trajectories for object selection, named the Difference Gaze Pattern (DGP) method. Based on the experimental results, our proposed method yielded the best object selection accuracy of and success time of ms. The experimental results also showed that object selection using difference patterns is robust with respect to spatial accuracy and is relatively simple to implement. The results suggest that our proposed method can contribute to spontaneous gaze interaction.
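The selection principle behind smooth pursuit-based difference patterns can be sketched briefly: when the eye follows a moving target, the sample-wise difference between the (uncalibrated) gaze trajectory and that target's trajectory stays roughly constant, so the candidate object whose difference pattern has the lowest variance is the one being followed. A minimal illustration in Python; the function name, data layout, and variance criterion are assumptions for this sketch, not the paper's exact formulation:

```python
from statistics import pvariance

def select_object(gaze, objects):
    """Pick the moving object the user is following with smooth pursuit.

    gaze: list of (x, y) gaze samples (may be offset by calibration error).
    objects: dict mapping object name -> list of (x, y) positions,
             sampled at the same times as the gaze data.
    Returns the name of the object whose gaze-minus-object difference
    pattern is most constant (lowest variance).
    """
    def difference_variance(trajectory):
        # Sample-wise difference between gaze and object position.
        dx = [g[0] - p[0] for g, p in zip(gaze, trajectory)]
        dy = [g[1] - p[1] for g, p in zip(gaze, trajectory)]
        # A constant offset (pure calibration error) gives zero variance.
        return pvariance(dx) + pvariance(dy)

    return min(objects, key=lambda name: difference_variance(objects[name]))


# Gaze drifts rightward with a constant 1-unit offset from object "A";
# object "B" moves upward instead, so its difference pattern varies.
gaze = [(float(i), 0.0) for i in range(10)]
objects = {
    "A": [(float(i) + 1.0, 0.0) for i in range(10)],
    "B": [(0.0, float(i)) for i in range(10)],
}
```

Because only the variance of the difference matters, a constant calibration offset does not affect the choice, which is why such methods work without prior calibration.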
6

Liu, Bing, Weihua Dong, Zhicheng Zhan, Shengkai Wang, and Liqiu Meng. "Differences in the Gaze Behaviours of Pedestrians Navigating between Regular and Irregular Road Patterns." ISPRS International Journal of Geo-Information 9, no. 1 (January 15, 2020): 45. http://dx.doi.org/10.3390/ijgi9010045.

Abstract:
While a road pattern influences wayfinding and navigation, its influence on the gaze behaviours of navigating pedestrians is not well documented. In this study, we compared gaze behaviour differences between regular and irregular road patterns using eye-tracking technology. Twenty-one participants performed orientation (ORI) and shortest route selection (SRS) tasks with both road patterns. We used accuracy of answers and response time to estimate overall performance, and time to first fixation, average fixation duration, fixation count, and fixation duration to estimate gaze behaviour. The results showed that participants answered more accurately with irregular road patterns. For both tasks and both road patterns, the Label areas of interest (AOIs) (including shops and signs) received quicker or greater attention. The road patterns influenced gaze behaviour for both Road AOIs and Label AOIs but exhibited a greater influence on Road AOIs in both tasks. In summary, for orientation and route selection, users are more likely to rely on labels, and irregular road patterns are important. These findings may serve as an anchor point for determining how people's gaze behaviours differ depending on road pattern and indicate that labels and unique road patterns should be highlighted for better wayfinding and navigation.
7

Yoo, Sangbong, Seongmin Jeong, and Yun Jang. "Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels." Sensors 21, no. 14 (July 8, 2021): 4686. http://dx.doi.org/10.3390/s21144686.

Abstract:
Many gaze data visualization techniques intuitively show eye movement together with visual stimuli. The eye tracker records a large number of eye movements within a short period. Therefore, visualizing raw gaze data with the visual stimulus appears complicated and obscured, making it difficult to gain insight through visualization. To avoid this complication, fixation identification algorithms are often employed for more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with the scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing. However, it is difficult to determine how fixation identification algorithms affect gaze movement pattern visualizations. Additionally, scientists often spend much time manually adjusting parameters in the fixation identification algorithms. In this paper, we propose a gaze behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and machine learning-based and behavior-based models on various visualizations at each abstraction level, such as the attention map, scanpath, and abstract gaze movement visualization.
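Velocity-threshold identification (I-VT), one of the classic fixation identification algorithms compared in the entry above, can be summarised in a few lines: consecutive gaze samples whose point-to-point velocity stays below a threshold are grouped into one fixation, reported as a time span plus a centroid. A rough sketch, assuming a hypothetical sample layout of (time, x, y) tuples and an illustrative threshold value:

```python
import math

def ivt_fixations(samples, velocity_threshold=50.0):
    """Velocity-threshold fixation identification (I-VT).

    samples: list of (t, x, y) tuples; t in seconds, x/y e.g. in degrees.
    velocity_threshold: samples moving slower than this (units/s) are
        treated as part of a fixation; faster movement is a saccade.
    Returns a list of (start_t, end_t, centroid_x, centroid_y) fixations.
    """
    def summarise(group):
        xs = [s[1] for s in group]
        ys = [s[2] for s in group]
        return (group[0][0], group[-1][0],
                sum(xs) / len(xs), sum(ys) / len(ys))

    fixations = []
    current = []  # samples belonging to the fixation being built
    for prev, cur in zip(samples, samples[1:]):
        dt = cur[0] - prev[0]
        dist = math.hypot(cur[1] - prev[1], cur[2] - prev[2])
        velocity = dist / dt if dt > 0 else float("inf")
        if velocity < velocity_threshold:
            if not current:       # start a new fixation at the pair's
                current.append(prev)  # first sample
            current.append(cur)
        elif current:             # saccade detected: close the fixation
            fixations.append(summarise(current))
            current = []
    if current:
        fixations.append(summarise(current))
    return fixations


# Two stable gaze positions separated by one fast jump (a saccade):
demo = [(0.00, 0, 0), (0.02, 0, 0), (0.04, 0, 0),
        (0.06, 10, 10), (0.08, 10, 10), (0.10, 10, 10)]
```

As the entry notes, the choice of threshold changes the abstraction dramatically: a lower threshold splits fixations apart, a higher one merges saccades into them, which is exactly why the paper compares several identification algorithms.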
8

Thompson, Robin L., Karen Emmorey, and Robert Kluender. "Learning to look: The acquisition of eye gaze agreement during the production of ASL verbs." Bilingualism: Language and Cognition 12, no. 4 (September 16, 2009): 393–409. http://dx.doi.org/10.1017/s1366728909990277.

Abstract:
In American Sign Language (ASL), native signers use eye gaze to mark agreement (Thompson, Emmorey and Kluender, 2006). Such agreement is unique (it is articulated with the eyes) and complex (it occurs with only two out of three verb types, and marks verbal arguments according to a noun phrase accessibility hierarchy). In a language production experiment using head-mounted eye-tracking, we investigated the extent to which eye gaze agreement can be mastered by late second-language (L2) learners. The data showed that proficient late learners (with an average of 18.8 years signing experience) mastered a cross-linguistically prevalent pattern (NP-accessibility) within the eye gaze agreement system but ignored an idiosyncratic feature (marking agreement on only a subset of verbs). Proficient signers produced a grammar for eye gaze agreement that diverged from that of native signers but was nonetheless consistent with language universals. A second experiment examined the eye gaze patterns of novice signers with less than two years of ASL exposure and of English-speaking non-signers. The results provided further evidence that the pattern of acquisition found for proficient L2 learners is directly related to language learning, and does not stem from more general cognitive processes for eye gaze outside the realm of language.
9

Guterstam, Arvid, Andrew I. Wilterson, Davis Wachtell, and Michael S. A. Graziano. "Other people’s gaze encoded as implied motion in the human brain." Proceedings of the National Academy of Sciences 117, no. 23 (May 26, 2020): 13162–67. http://dx.doi.org/10.1073/pnas.2003110117.

Abstract:
Keeping track of other people’s gaze is an essential task in social cognition and key for successfully reading other people’s intentions and beliefs (theory of mind). Recent behavioral evidence suggests that we construct an implicit model of other people’s gaze, which may incorporate physically incoherent attributes such as a construct of force-carrying beams that emanate from the eyes. Here, we used functional magnetic resonance imaging and multivoxel pattern analysis to test the prediction that the brain encodes gaze as implied motion streaming from an agent toward a gazed-upon object. We found that a classifier, trained to discriminate the direction of visual motion, significantly decoded the gaze direction in static images depicting a sighted face, but not a blindfolded one, from brain activity patterns in the human motion-sensitive middle temporal complex (MT+) and temporo-parietal junction (TPJ). Our results demonstrate a link between the visual motion system and social brain mechanisms, in which the TPJ, a key node in theory of mind, works in concert with MT+ to encode gaze as implied motion. This model may be a fundamental aspect of social cognition that allows us to efficiently connect agents with the objects of their attention. It is as if the brain draws a quick visual sketch with moving arrows to help keep track of who is attending to what. This implicit, fluid-flow model of other people’s gaze may help explain culturally universal myths about the mind as an energy-like, flowing essence.
10

Nation, Kate, and Sophia Penny. "Sensitivity to eye gaze in autism: Is it normal? Is it automatic? Is it social?" Development and Psychopathology 20, no. 1 (2008): 79–97. http://dx.doi.org/10.1017/s0954579408000047.

Abstract:
Children with autism are developmentally delayed in following the direction of another person's gaze in social situations. A number of studies have measured reflexive orienting to eye gaze cues using Posner-style laboratory tasks in children with autism. Some studies observe normal patterns of cueing, suggesting that children with autism are alert to the significance of the eyes, whereas other studies reveal an atypical pattern of cueing. We review this contradictory evidence to consider the extent to which sensitivity to gaze is normal, and ask whether apparently normal performance may be a consequence of atypical (nonsocial) mechanisms. Our review concludes by highlighting the importance of adopting a developmental perspective if we are to understand the reasons why people with autism process eye gaze information atypically.
11

Lansing, Charissa R., and George W. McConkie. "Attention to Facial Regions in Segmental and Prosodic Visual Speech Perception Tasks." Journal of Speech, Language, and Hearing Research 42, no. 3 (June 1999): 526–39. http://dx.doi.org/10.1044/jslhr.4203.526.

Abstract:
Two experiments were conducted to test the hypothesis that visual information related to segmental versus prosodic aspects of speech is distributed differently on the face of the talker. In the first experiment, eye gaze was monitored for 12 observers with normal hearing. Participants made decisions about segmental and prosodic categories for utterances presented without sound. The first experiment found that observers spend more time looking at and direct more gazes toward the upper part of the talker's face in making decisions about intonation patterns than about the words being spoken. The second experiment tested the Gaze Direction Assumption underlying Experiment 1—that is, that people direct their gaze to the stimulus region containing information required for their task. In this experiment, 18 observers with normal hearing made decisions about segmental and prosodic categories under conditions in which face motion was restricted to selected areas of the face. The results indicate that information in the upper part of the talker's face is more critical for intonation pattern decisions than for decisions about word segments or primary sentence stress, thus supporting the Gaze Direction Assumption. Visual speech perception proficiency requires learning where to direct visual attention for cues related to different aspects of speech.
12

Brown, Christopher. "Gaze controls cooperating through prediction." Image and Vision Computing 8, no. 1 (February 1990): 10–17. http://dx.doi.org/10.1016/0262-8856(90)90050-f.

13

Ho, Yuen Wan, and Derek Isaacowitz. "The Role of Attentional Control in Attention to Emotional Stimuli Among Middle-Aged and Older Adults." Innovation in Aging 4, Supplement_1 (December 1, 2020): 503. http://dx.doi.org/10.1093/geroni/igaa057.1624.

Abstract:
Prior studies have examined age differences in attention to emotional stimuli; in the current study, we considered how this might relate to dispositional measures of attentional control across age groups. Participants were 116 middle-aged (aged 35–64 years) and 39 older (aged 65–86 years) adults in the United States. Participants filled in the Emotional Attentional Control Scale and then watched fearful, happy, neutral, and disgusting videos; gaze time for each video was measured with an eye tracker. Results did not show significant age differences in attention to the happy and neutral videos. However, middle-aged adults gazed relatively more at the disgusting video and relatively less at the fearful video, t(115) = 2.16, p = .03; the opposite pattern was found among older adults, t(38) = 5.85, p < .001. Self-reported emotional attentional control was not significantly related to attention in either age group. These findings suggest that different stimuli may yield age differences in fixation that are less consistent with the age-related positivity effect reported in previous studies, and that self-reported emotional attentional control may not relate to gaze patterns.
14

Suzuki, Kota, and Yohsuke Yoshioka. "EFFECT OF DISTANCE BETWEEN CORNER AND STEP ON GAZE PATTERN AND WALKING PATTERN." AIJ Journal of Technology and Design 26, no. 62 (February 20, 2020): 267–71. http://dx.doi.org/10.3130/aijt.26.267.

15

Oh, Young Hoon, and Da Young Ju. "Age-Related Differences in Fixation Pattern on a Companion Robot." Sensors 20, no. 13 (July 7, 2020): 3807. http://dx.doi.org/10.3390/s20133807.

Abstract:
Recent studies have addressed the various benefits of companion robots and expanded the research scope to their design. However, the viewpoints of older adults have not been deeply investigated. Therefore, this study aimed to examine the distinctive viewpoints of older adults by comparing them with those of younger adults. Thirty-one older and thirty-one younger adults participated in an eye-tracking experiment to investigate their impressions of a bear-like robot mockup. They also completed interviews and surveys to help us understand their viewpoints on the robot design. The gaze behaviors and the impressions of the two groups were significantly different. Older adults focused significantly more on the robot’s face and paid little attention to the rest of the body. In contrast, the younger adults gazed at more body parts and viewed the robot in more detail than the older adults. Furthermore, the older adults rated physical attractiveness and social likeability of the robot significantly higher than the younger adults. The specific gaze behavior of the younger adults was linked to considerable negative feedback on the robot design. Based on these empirical findings, we recommend that impressions of older adults be considered when designing companion robots.
16

Melnyk, Joseph W., David M. McCord, and Jamie Vaske. "Gaze Pattern Variations among Men When Assessing Female Attractiveness." Evolutionary Psychology 12, no. 1 (January 2014): 147470491401200. http://dx.doi.org/10.1177/147470491401200113.

17

Wang, Peng, and Zhi Qiang Liu. "Study on Driver's Unsafe Gaze Behavior Detection Technology." Applied Mechanics and Materials 427-429 (September 2013): 1903–6. http://dx.doi.org/10.4028/www.scientific.net/amm.427-429.1903.

Abstract:
A system for detecting and evaluating a driver's gaze behavior was proposed, and a system for recognizing unsafe driver gaze behavior was established using multi-level information and a fusion decision method. Because the driving environment and conditions, as well as the gaze behavior characteristics, are complex, a solution consisting of pattern classification and multi-information decision-level fusion was put forward to estimate the different models of the driver's gaze behavior. To test the proposed strategies, a real-time driver gaze behavior detection system was built. The T characteristic curve, derived from the abnormal-behavior parameters of the transverse width between the eyes and the vertical distance between the mouth and the midpoint of the two eyes, combined with the driver's eyelid closure and the proportion and location characteristics of the iris and sclera, was studied to characterize the driver's gaze status. The simulation results indicate that adaptability, accuracy, and the level of intelligence are significantly improved by using pattern classification and decision-making technology with multi-source information fusion.
18

Wynn, Jordana S., Jennifer D. Ryan, and Bradley R. Buchsbaum. "Eye movements support behavioral pattern completion." Proceedings of the National Academy of Sciences 117, no. 11 (March 2, 2020): 6246–54. http://dx.doi.org/10.1073/pnas.1917586117.

Abstract:
The ability to recall a detailed event from a simple reminder is supported by pattern completion, a cognitive operation performed by the hippocampus wherein existing mnemonic representations are retrieved from incomplete input. In behavioral studies, pattern completion is often inferred through the false endorsement of lure (i.e., similar) items as old. However, evidence that such a response is due to the specific retrieval of a similar, previously encoded item is severely lacking. We used eye movement (EM) monitoring during a partial-cue recognition memory task to index reinstatement of lure images behaviorally via the recapitulation of encoding-related EMs or gaze reinstatement. Participants reinstated encoding-related EMs following degraded retrieval cues and this reinstatement was negatively correlated with accuracy for lure images, suggesting that retrieval of existing representations (i.e., pattern completion) underlies lure false alarms. Our findings provide evidence linking gaze reinstatement and pattern completion and advance a functional role for EMs in memory retrieval.
19

Caruana, Nathan, Kiley Seymour, Jon Brock, and Robyn Langdon. "Responding to joint attention bids in schizophrenia: An interactive eye-tracking study." Quarterly Journal of Experimental Psychology 72, no. 8 (February 21, 2019): 2068–83. http://dx.doi.org/10.1177/1747021819829718.

Abstract:
This study investigated social cognition in schizophrenia using a virtual reality paradigm to capture the dynamic processes of evaluating and responding to eye gaze as an intentional communicative cue. A total of 21 patients with schizophrenia and 21 age-, gender-, and IQ-matched healthy controls completed an interactive computer game with an on-screen avatar that participants believed was controlled by an off-screen partner. On social trials, participants were required to achieve joint attention by correctly interpreting and responding to gaze cues. Participants also completed non-social trials in which they responded to an arrow cue within the same task context. While patients and controls took equivalent time to process communicative intent from gaze shifts, patients made significantly more errors than controls when responding to the directional information conveyed by gaze, but not arrow, cues. Despite no differences in response times to gaze cues between groups, patients were significantly slower than controls when responding to arrow cues. This is the opposite pattern of results previously observed in autistic adults using the same task and suggests that, despite general impairments in attention orienting or oculomotor control, patients with schizophrenia demonstrate a facilitation effect when responding to communicative gaze cues. Findings indicate a hyper-responsivity to gaze cues of communicative intent in schizophrenia. The possible effects of self-referential biases when evaluating gaze direction are discussed, as are clinical implications.
20

Wang, Jian-Gang, and Eric Sung. "Gaze determination via images of irises." Image and Vision Computing 19, no. 12 (October 2001): 891–911. http://dx.doi.org/10.1016/s0262-8856(01)00051-8.

21

McDonough, Kim, Pavel Trofimovich, Phung Dao, and Alexandre Dion. "EYE GAZE AND PRODUCTION ACCURACY PREDICT ENGLISH L2 SPEAKERS’ MORPHOSYNTACTIC LEARNING." Studies in Second Language Acquisition 39, no. 4 (December 1, 2016): 851–68. http://dx.doi.org/10.1017/s0272263116000395.

Abstract:
This study investigated the relationship between second language (L2) speakers’ success in learning a new morphosyntactic pattern and characteristics of one-on-one learning activities, including opportunities to comprehend and produce the target pattern, receive feedback from an interlocutor, and attend to the meaning of the pattern through self- and interlocutor-initiated eye-gaze behaviors. L2 English students (N = 48) were exposed to the transitive construction in Esperanto (e.g., filino mordas pomon [SVO] or pomon mordas filino [OVS] “girl bites apple”) through comprehension and production activities with an interlocutor, receiving feedback in the form of recasts for their Esperanto errors. The L2 speakers’ interpretation and production of Esperanto transitives were then tested using known and novel lexical items. The results indicated that OVS test performance was predicted by the duration of self-initiated eye gaze to images illustrating the OVS pattern during the comprehension learning activity and by accurate production of OVS sentences during the production learning activity. The findings suggest important roles for eye-gaze behavior and production opportunities in L2 pattern learning.
22

Kar, Anuradha. "MLGaze: Machine Learning-Based Analysis of Gaze Error Patterns in Consumer Eye Tracking Systems." Vision 4, no. 2 (May 7, 2020): 25. http://dx.doi.org/10.3390/vision4020025.

Abstract:
Analyzing the gaze accuracy characteristics of an eye tracker is a critical task, as its gaze data are frequently affected by non-ideal operating conditions in various consumer eye tracking applications. In previous research on pattern analysis of gaze data, efforts were made to model human visual behaviors and cognitive processes. What remains relatively unexplored are questions related to identifying gaze error sources as well as quantifying and modeling their impacts on the data quality of eye trackers. In this study, gaze error patterns produced by a commercial eye tracking device were studied with the help of machine learning algorithms, such as classifiers and regression models. Gaze data were collected from a group of participants under multiple conditions that commonly affect eye trackers operating on desktop and handheld platforms. These conditions (referred to here as error sources) include user distance, head pose, and eye-tracker pose variations, and the collected gaze data were used to train the classifier and regression models. While the impacts of the different error sources on gaze data characteristics were nearly impossible to distinguish by visual inspection or from data statistics, the machine learning models were successful in identifying the impact of each error source and predicting the variability in gaze error levels due to these conditions. The objective of this study was to investigate the efficacy of machine learning methods for the detection and prediction of gaze error patterns, which would enable an in-depth understanding of the data quality and reliability of eye trackers under unconstrained operating conditions. Coding resources for all the machine learning methods adopted in this study are included in an open repository named MLGaze to allow researchers to replicate the principles presented here using data from their own eye trackers.
23

Espino Palma, Carlos, Vicente Luis del Campo, and Diego Muñoz Marín. "Visual Behaviours of Expert Padel Athletes When Playing on Court: An In Situ Approach with a Portable Eye Tracker." Sensors 23, no. 3 (January 28, 2023): 1438. http://dx.doi.org/10.3390/s23031438.

Abstract:
Eye-tracking research has allowed the characterisation of gaze behaviours in some racket sports (e.g., tennis, badminton), both in controlled laboratory settings and in real-world scenarios. However, there are no studies about the visual patterns displayed by athletes in padel. Method: The aim of this exploratory case study was to address the visual behaviours of eight young expert padel athletes when playing match games on a padel court. Specifically, their gaze behaviours were examined with an in situ approach while they returned trays/smashes, serves, and volleys performed by their counterparts. Gaze patterns were registered with an SMI Eye Tracking Glasses 2 Wireless. Results: The participants' gaze was mainly focused on the ball-flight trajectory and on the upper body of the opponents, as these were the two visual locations with the larger number of fixations and longer fixation time. No differences were found in these variables for each type of visual location when the three return situations were compared, or independently of them. Conclusions: Padel players displayed a similar gaze behaviour across different representative return situations. This visual pattern was characterised by fixating on the ball and on the opponents' upper-body kinematics (head, shoulders, trunk, and the arm–hand–racket region) while performing real interceptive actions against them on a padel court.
24

Neugebauer, Alexander, Katarina Stingl, Iliya Ivanov, and Siegfried Wahl. "Influence of Systematic Gaze Patterns in Navigation and Search Tasks with Simulated Retinitis Pigmentosa." Brain Sciences 11, no. 2 (February 12, 2021): 223. http://dx.doi.org/10.3390/brainsci11020223.

Abstract:
People living with a degenerative retinal disease such as retinitis pigmentosa are oftentimes faced with difficulties navigating in crowded places and avoiding obstacles due to their severely limited field of view. The study aimed to assess the potential of different patterns of eye movement (scanning patterns) to (i) increase the effective area of perception of participants with simulated retinitis pigmentosa scotoma and (ii) maintain or improve performance in visual tasks. Using a virtual reality headset with eye tracking, we simulated tunnel vision of 20° in diameter in visually healthy participants (n = 9). Employing this setup, we investigated how different scanning patterns influence the dynamic field of view—the average area over time covered by the field of view—of the participants in an obstacle avoidance task and in a search task. One of the two tested scanning patterns showed a significant improvement in both dynamic field of view (navigation 11%, search 7%) and collision avoidance (33%) when compared to trials without the suggested scanning pattern. However, participants took significantly longer (31%) to finish the navigation task when applying this scanning pattern. No significant improvements in search task performance were found when applying scanning patterns.
25

Lipunova, O. A., I. L. Plisov, V. V. Cherhykh, N. G. Antsiferova, V. B. Pushchina, and G. V. Gladysheva. "Exophoria: clinical features, diagnosis, treatment. The modern view on the problem. Literature review." Modern technologies in ophtalmology, no. 2 (June 15, 2021): 52–55. http://dx.doi.org/10.25276/2312-4911-2021-2-52-55.

Abstract:
Purpose. To create a summary classification of exophoria and to propose an optimal algorithm for optometric and surgical treatment. A modern view of the problem. By the state of the vergence-duction balance, exophoria is best subdivided into divergence excess, basic exophoria, convergence insufficiency, divergence pseudo-excess, and lateral gaze incomitance; by the degree of compensation, into compensated, subcompensated, uncompensated, and decompensated; and by combination with an alphabetic pattern, into exophoria without a pattern and exophoria combined with a horizontal type A, vertical type A, horizontal type V, or vertical type V pattern. The features of optimal optical and prismatic correction depend on the state of the vergence-duction balance. In exophoria without a pattern, surgical treatment is carried out during the transition from subcompensation to non-compensation. In exophoria with a horizontal-type alphabetic pattern, combined horizontal-transpositional surgery is optimal: elimination of the exophoria, with the protocol based on the amount of deviation in the direct gaze position, and elimination of the pattern, with the protocol based on vertical transposition of the horizontally acting muscles. In exophoria with a vertical-type pattern, staged vertical-horizontal surgery is necessary: stage 1, elimination of vertical heterotropia in adduction; stage 2, elimination of the exophoria (protocol based on the amount of deviation in the direct gaze position). Conclusions. The treatment protocol should rest on a reliably established diagnosis and, at the pre-surgical stage, consist of optimal optical and prismatic correction and the prescription of orthoptic-diploptic-prismatic treatment. The effectiveness of treatment is assessed by the dynamics of the condition: the magnitude of exodeviation and the stage of compensation. The surgical treatment protocol must be well founded and timely. Key words: exophoria, divergence excess, convergence insufficiency, basic exotropia, lateral gaze incomitance, alphabet pattern.
26

Kawahara, Misako, Hisako Yamamoto, and Akihiro Tanaka. "Cultural Differences of Eye Gaze Pattern during Multisensory Emotion Perception." Proceedings of the Annual Convention of the Japanese Psychological Association 83 (September 11, 2019): 2C—037–2C—037. http://dx.doi.org/10.4992/pacjpa.83.0_2c-037.

27

Zahno, Stephan. "Creativity and gaze behaviour in football." Current Issues in Sport Science (CISS) 8, no. 2 (February 14, 2023): 044. http://dx.doi.org/10.36950/2023.2ciss044.

Abstract:
Performing creative actions is considered a decisive element in football. Recent studies have suggested that football players’ creativity is underpinned by a specific visual search strategy (e.g., Roca et al., 2021). Specifically, studies showed that players who scored high in a football-specific creativity task used more fixations of shorter durations than players who scored low. In the creativity task, players were asked to name as many solution ideas as possible. From an applied perspective, the questions arise: is this gaze strategy generally beneficial to perform creative actions? Should this gaze pattern be trained to improve creativity? In contrast to the idea of one single creativity-related gaze pattern, eye-tracking research in sports suggests that optimal gaze behaviour is highly dependent on situational task demands (Vater et al., 2020). Moreover, generally increasing the number of short fixations is expected to be dysfunctional to both motor accuracy and to the perception of task-relevant opportunities due to saccade-related costs. Accordingly, we hypothesized that many fixations of short duration are positively associated with players’ ability to generate many ideas in the specific creativity task but not with performing creative actions. Fifteen footballers participated in an experiment with two conditions. In one condition (DT-condition), we replicated Roca et al.’s studies: participants stood in front of a large screen and with a ball in front of them. They were asked to view 20 videos of attacking situations and imagine themselves as the player in ball possession. At key moments of the situation, the videos were occluded. At this point, players’ task was to physically play the ball and verbally confirm their decision. Subsequently, the last frame of the video reappeared, and their task was to name as many solution ideas as possible within 45 s. Moreover, a second condition was added (action-condition). 
While the first part of the task remained identical, the last frame did not reappear after playing the ball. In the action condition, the task-instruction was—as in a real game—to perform the most promising solution. In both conditions, eye tracking data were recorded. Results indicate that many fixations of short durations were linked to generating more ideas in the DT-condition—replicating Roca et al.’s (2021) finding—, however, not with performing creative solutions in the action-condition. As predicted, in the action-condition, the pattern was reversed: Players that performed more functional and creative actions used less fixations per second. For practice the results challenge the idea of training a specific creativity-related gaze pattern to improve creativity. Rather, our findings suggest that training should provide learning opportunities to acquire functional gaze strategies that are optimally adapted to situational task demands. References Roca, A., Ford, P. R., & Memmert, D. (2021). Perceptual-cognitive processes underlying creative expert performance in soccer. Psychological Research, 85(3), 1146–1155. https://doi.org/10.1007/s00426-020-01320-5 Vater, C., Williams, A. M., & Hossner, E.-J. (2020). What do we see out of the corner of our eye? The role of visual pivots and gaze anchors in sport. International Review of Sport and Exercise Psychology, 13(1), 81–103. https://doi.org/10.1080/1750984X.2019.1582082
28

Jin, Nan, Sébastien Mavromatis, Jean Sequeira, and Stéphane Curcio. "A Robust Method of Eye Torsion Measurement for Medical Applications." Information 11, no. 9 (August 21, 2020): 408. http://dx.doi.org/10.3390/info11090408.

Abstract:
The detection of eye torsion is an important element for diagnosis of balance disorders, although it is rarely available in existing eye tracking systems. A novel method is proposed in this paper to provide robust measurement of torsional eye movements. A numerical approach is presented to estimate the iris boundary only according to the gaze direction, so the segmentation of the iris is more robust against occlusions and ambiguities. The perspective distortion of the iris pattern at eccentric eye positions is also corrected, benefiting from the transformation relation that is established for the iris estimation. The angle of the eye torsion is next measured on the unrolled iris patterns via a TM (Template Matching) technique. The principle of the proposed method is validated and its robustness in practice is assessed. A very low mean FPR (False Positive Rate) is reported (i.e., 3.3%) in a gaze test when testing on five participants with very different eye morphologies. The present method always gave correct measurement on the iris patterns with simulated eye torsions and rarely provided mistaken detections in the absence of eye torsion in practical conditions. Therefore, it shows a good potential to be further applied in medical applications.
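The core of the method described in this abstract, unrolling the iris annulus into a rectangular strip and finding the circular shift that best aligns two strips, can be sketched as follows. This is a simplified NumPy illustration, not the authors' implementation; the sampling grid, correlation score, and search range are assumptions:

```python
import numpy as np

def unroll_iris(image, center, r_inner, r_outer, n_angles=360, n_radii=20):
    """Sample the iris annulus on a polar grid, producing a rectangular
    (n_radii x n_angles) strip with one column per angular position."""
    cx, cy = center
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(r_inner, r_outer, n_radii)
    strip = np.empty((n_radii, n_angles))
    for i, r in enumerate(radii):
        xs = np.clip((cx + r * np.cos(angles)).round().astype(int), 0, image.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(angles)).round().astype(int), 0, image.shape[0] - 1)
        strip[i] = image[ys, xs]
    return strip

def torsion_angle(ref_strip, cur_strip, max_shift_deg=15):
    """Estimate torsion as the circular shift (degrees) that maximises the
    normalised cross-correlation between two unrolled iris strips."""
    n_angles = ref_strip.shape[1]
    deg_per_col = 360.0 / n_angles
    max_cols = int(max_shift_deg / deg_per_col)
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_cols, max_cols + 1):
        # undo a hypothesised torsion of `shift` columns and score the match
        shifted = np.roll(cur_strip, -shift, axis=1)
        a = ref_strip - ref_strip.mean()
        b = shifted - shifted.mean()
        score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift * deg_per_col
```

A real system would add the gaze-dependent iris-boundary estimation and perspective correction that the paper describes; the sketch only covers the unroll-and-match step.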
29

Han, J., and S. Lee. "RESIDENT'S SATISFACTION IN STREET LANDSCAPE USING THE IMMERSIVE VIRTUAL ENVIRONMENT-BASED EYE-TRACKING TECHNIQUE AND DEEP LEARNING MODEL." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-4/W4-2022 (October 14, 2022): 45–52. http://dx.doi.org/10.5194/isprs-archives-xlviii-4-w4-2022-45-2022.

Abstract:
Abstract. Virtual reality technology provides a significant clue to understanding the human visual perception process by enabling the interaction between humans and computers. In addition, deep learning techniques in the visual field provide analysis methods for image classification, processing, and segmentation. This study reviewed the applicability of gaze movement and deep learning-based satisfaction evaluation on the landscape using an immersive virtual reality-based eye-tracking device. To this end, the following research procedures were established and analysed. First, the gaze movement of the test taker is measured using an immersive virtual environment-based eye tracker. The relationship between the gaze movement pattern of the test taker and the satisfaction evaluation result for the landscape image is analysed. Second, using the Convolutional Neural Networks (CNN)-based Class Activation Map (CAM) technique, a model for estimating the satisfaction evaluation result is constructed, and the gaze pattern of the test taker is derived. Third, we compare and analyse the similarity between the gaze heat map derived through the immersive virtual environment-based gaze tracker and the heat map generated by CAM. This study suggests the applicability of urban environment technology and deep learning methods to understand landscape planning factors that affect urban landscape satisfaction, resulting from the three-dimensional and immediate visual cognitive activity.
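The CAM step in this pipeline is compact enough to illustrate: with a global-average-pooling head, the class activation map is the classifier-weighted sum of the final convolutional feature maps. A minimal NumPy sketch (array shapes and the ReLU/normalisation choices are conventional assumptions, not details from the paper):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a CAM: weight each of the K final conv feature maps
    (K x H x W) by the classifier weight linking it to the chosen
    class, sum them, and normalise for heat-map display."""
    w = fc_weights[class_idx]                     # (K,) weights for this class
    cam = np.tensordot(w, feature_maps, axes=1)   # (H, W) weighted sum
    cam = np.maximum(cam, 0)                      # keep positive contributions
    if cam.max() > 0:
        cam /= cam.max()                          # scale to [0, 1]
    return cam
```

The resulting (H, W) map is what would be upsampled and compared against the eye-tracking heat map, as the study does.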
30

Yang, Shiyan, Brook Shiferaw, Trey Roady, Jonny Kuo, and Michael G. Lenné. "Drivers Glance Like Lizards during Cell Phone Distraction in Assisted Driving." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (September 2021): 1410–14. http://dx.doi.org/10.1177/1071181321651147.

Abstract:
Head pose has been proposed as a surrogate for eye movement to predict areas of interest (AOIs) where drivers allocate their attention. However, head pose may disassociate with AOIs in glance behavior involving zero or subtle head movements, commonly known as “lizard” glance pattern. In contrast, “owl” glance pattern is used to describe glance behavior along with larger head movements. It remains unclear which glance pattern is prevalent during driver cell phone distraction and what are appropriate metrics to detect such distraction. To address this gap, we analyzed the gaze direction and head pose of 36 participants who completed an email-sorting task using a cell phone while driving a Tesla on the test track in Autopilot mode. The dispersion-threshold algorithm identified driver gaze fixations and synchronized them with head movements. The results showed that when using a cell phone either near the lap or behind the steering wheel, participants exhibited a dominant lizard-type glance pattern with minimal shift in head position. As a result, head pose alone may not provide sufficient information for cell phone distraction detection, and gaze metrics should be involved in enhancing this application.
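The dispersion-threshold fixation identification mentioned above (I-DT) can be sketched as follows; the thresholds are illustrative placeholders, not the values used in the study:

```python
def idt_fixations(points, times, max_dispersion=1.0, min_duration=0.1):
    """Identify fixations with the dispersion-threshold (I-DT) algorithm:
    grow a window while its bounding-box dispersion (width + height) stays
    under threshold and its duration exceeds the minimum.
    Returns (start_time, end_time, centroid) tuples."""
    fixations, start, n = [], 0, len(points)
    while start < n:
        end = start
        # expand the window until it covers the minimum duration
        while end < n and times[end] - times[start] < min_duration:
            end += 1
        if end >= n:
            break
        def dispersion(s, e):
            xs = [p[0] for p in points[s:e + 1]]
            ys = [p[1] for p in points[s:e + 1]]
            return (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion(start, end) <= max_dispersion:
            # grow the fixation while dispersion stays under threshold
            while end + 1 < n and dispersion(start, end + 1) <= max_dispersion:
                end += 1
            xs = [p[0] for p in points[start:end + 1]]
            ys = [p[1] for p in points[start:end + 1]]
            centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
            fixations.append((times[start], times[end], centroid))
            start = end + 1
        else:
            start += 1
    return fixations
```

The detected fixation centroids are what would then be synchronised against head-pose samples, as in the study.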
31

Sohmiya, Tamotsu, and Kazuko Sohmiya. "Explanation of Illusory Contours in Terms of Strength of Pattern and its Spread Effect." Perceptual and Motor Skills 81, no. 3 (December 1995): 1003–20. http://dx.doi.org/10.2466/pms.1995.81.3.1003.

Abstract:
The generation of illusory contours is closely related to distinct parts of a pattern such as dots, line ends, and corner points. On the other hand, the remarkable property is that gaze at one point of the contours diminishes the illusion and a return of gaze to the whole pattern restores it. Therefore, illusory contours depend on local parts and the whole pattern formed by the parts, and fitting data on the two aspects is necessary to clarify underlying mechanisms. We have obtained such data from the experiments performed to elucidate other visual phenomena. On the basis of the data, the concepts of strength of pattern, strength of its spread effect, ridgelines of the spread effect, and a hollow of the spread effect are introduced and then various phenomena on illusory contours, including the Kanizsa triangle, are explained in terms of these concepts.
32

Wasaki, Natsuko, and Tatsuto Takeuchi. "The effect of previous experience on gaze pattern in visual search." Proceedings of the Annual Convention of the Japanese Psychological Association 82 (September 25, 2018): 1AM—076–1AM—076. http://dx.doi.org/10.4992/pacjpa.82.0_1am-076.

33

Han, Kiwan, Jungeun Shin, Sang Young Yoon, Dong-Pyo Jang, and Jae-Jin Kim. "Deficient gaze pattern during virtual multiparty conversation in patients with schizophrenia." Computers in Biology and Medicine 49 (June 2014): 60–66. http://dx.doi.org/10.1016/j.compbiomed.2014.03.012.

34

Cristina, Stefania, and Kenneth P. Camilleri. "Unobtrusive and pervasive video-based eye-gaze tracking." Image and Vision Computing 74 (June 2018): 21–40. http://dx.doi.org/10.1016/j.imavis.2018.04.002.

35

Peacock, Candace E., Ben Lafreniere, Ting Zhang, Stephanie Santosa, Hrvoje Benko, and Tanya R. Jonker. "Gaze as an Indicator of Input Recognition Errors." Proceedings of the ACM on Human-Computer Interaction 6, ETRA (May 13, 2022): 1–18. http://dx.doi.org/10.1145/3530883.

Abstract:
Input recognition errors are common in gesture- and touch-based recognition systems, and negatively affect user experience and performance. When errors occur, systems are unaware of them, but the user's gaze following an error may provide valuable cues for error detection. A study was conducted using a manual serial selection task to investigate whether gaze could be used to discriminate user-initiated selections from injected false positive selection errors. Logistic regression models of gaze dynamics could successfully identify injected selection errors as early as 50 milliseconds following a selection, with performance peaking at 550 milliseconds. A two-phase gaze pattern was observed in which users exhibited high gaze motion immediately following errors, and then decreased gaze motion as the error was noticed. Together, these results provide the first demonstration that gaze dynamics can be used to detect input recognition errors, and open new possibilities for systems that can assist with error recovery.
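The two-phase pattern described above suggests a simple detector: summarise post-selection gaze dynamics as features and feed them to a logistic-regression classifier. A minimal NumPy sketch with assumed features and a hand-rolled training loop (the study's models and feature set are richer; the 0.55 s window below is only borrowed from the reported peak):

```python
import numpy as np

def gaze_motion_features(gaze_xy, times, t_select, window=0.55):
    """Summarise gaze dynamics in a window after a selection event:
    mean and peak point-to-point gaze speed."""
    mask = (times >= t_select) & (times <= t_select + window)
    pts = gaze_xy[mask]
    if len(pts) < 2:
        return np.zeros(2)
    speeds = np.linalg.norm(np.diff(pts, axis=0), axis=1) / np.diff(times[mask])
    return np.array([speeds.mean(), speeds.max()])

def train_logreg(X, y, lr=0.1, steps=2000):
    """Fit a small logistic-regression classifier by gradient descent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted error probability
        w -= lr * Xb.T @ (p - y) / len(y)       # cross-entropy gradient step
    return w

def predict_error(w, x):
    """Probability that a selection was an injected recognition error."""
    return 1.0 / (1.0 + np.exp(-(np.append(x, 1.0) @ w)))
```

High post-selection gaze motion maps to a high error probability, mirroring the paper's finding that gaze motion spikes after an injected error.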
36

van Maarseveen, Mariëtte J. J., Raôul R. D. Oudejans, David L. Mann, and Geert J. P. Savelsbergh. "Perceptual-cognitive skill and the in situ performance of soccer players." Quarterly Journal of Experimental Psychology 71, no. 2 (January 1, 2018): 455–70. http://dx.doi.org/10.1080/17470218.2016.1255236.

Abstract:
Many studies have shown that experts possess better perceptual-cognitive skills than novices (e.g., in anticipation, decision making, pattern recall), but it remains unclear whether a relationship exists between performance on those tests of perceptual-cognitive skill and actual on-field performance. In this study, we assessed the in situ performance of skilled soccer players and related the outcomes to measures of anticipation, decision making, and pattern recall. In addition, we examined gaze behaviour when performing the perceptual-cognitive tests to better understand whether the underlying processes were related when those perceptual-cognitive tasks were performed. The results revealed that on-field performance could not be predicted on the basis of performance on the perceptual-cognitive tests. Moreover, there were no strong correlations between the level of performance on the different tests. The analysis of gaze behaviour revealed differences in search rate, fixation duration, fixation order, gaze entropy, and percentage viewing time when performing the test of pattern recall, suggesting that it is driven by different processes to those used for anticipation and decision making. Altogether, the results suggest that the perceptual-cognitive tests may not be as strong determinants of actual performance as may have previously been assumed.
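Among the gaze measures compared above, gaze entropy has a compact definition: the Shannon entropy of the distribution of fixations over areas of interest (AOIs). A small illustrative helper (the AOI labels below are hypothetical, not the study's coding scheme):

```python
import math
from collections import Counter

def gaze_entropy(fixation_aois):
    """Shannon entropy (bits) of the distribution of fixations across
    areas of interest: 0 when all gaze falls on one AOI, log2(k) when
    spread evenly over k AOIs."""
    counts = Counter(fixation_aois)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Higher values indicate a more dispersed scan pattern, which is how entropy separates the recall task from anticipation and decision making in the study.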
37

Wang, Peng, and Zhi Qiang Liu. "Study of Driver's Gaze Behavior Recognition." Applied Mechanics and Materials 488-489 (January 2014): 1011–14. http://dx.doi.org/10.4028/www.scientific.net/amm.488-489.1011.

Abstract:
The reliability and accuracy of driver gaze behavior detection were improved by a multi-dimensional feature fusion method. In view of the complexity of the driving environment, the variety of working conditions, and the diversity of gaze behavior characteristics, multi-dimensional feature decision-level fusion based on support vector machine (SVM) theory was proposed to estimate different models of the driver's gaze behavior. The results show that a T characteristic curve built from the gaze behavior parameters of the transverse width between the eyes and the vertical distance between the mouth and the midpoint of the two eyes, combined with the driver's eyelid closure and the proportion and location characteristics of the iris and sclera, was used to characterize the driver's gaze status. The simulation results indicate that the adaptability and accuracy, as well as the intelligence level of the screening of poor-fixation characterization information, are significantly improved by using the pattern classification and decision technology of multi-dimensional feature fusion.
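The decision-level fusion idea can be illustrated independently of the SVM details: each single-feature classifier casts a vote, and the votes are combined with weights. A deliberately minimal sketch (the vote encoding and weights are assumptions, not the paper's protocol):

```python
def fuse_decisions(decisions, weights=None):
    """Decision-level fusion: combine per-feature classifier outputs
    (+1 = gazing ahead, -1 = not gazing) by a weighted vote."""
    if weights is None:
        weights = [1.0] * len(decisions)
    score = sum(w * d for w, d in zip(weights, decisions))
    return 1 if score >= 0 else -1
```

In practice the weights would reflect each feature classifier's reliability, so a strong cue (e.g. eyelid closure) can outvote weaker geometric cues.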
38

Hausegger, Thomas, Christian Vater, and Ernst-Joachim Hossner. "Peripheral Vision in Martial Arts Experts: The Cost-Dependent Anchoring of Gaze." Journal of Sport and Exercise Psychology 41, no. 3 (June 17, 2019): 137–45. http://dx.doi.org/10.1123/jsep.2018-0091.

Abstract:
Research on martial arts has suggested that gaze anchoring is functional for optimizing the use of peripheral visual information. The current study predicted that the height of gaze anchoring on the opponent’s body would depend on the potential attacking locations that need to be monitored. To test this prediction, the authors compared high-level athletes in kung fu (Qwan Ki Do), who attack with their arms and legs, with Tae Kwon Do fighters, who attack mostly with their legs. As predicted, the results show that Qwan Ki Do athletes anchor their gaze higher than Tae Kwon Do athletes do before and even during the first attack. In addition, gaze anchoring seems to depend on 3 factors: the particulars of the evolving situation, crucial cues, and specific visual costs (especially suppressed information pickup during saccades). These 3 factors should be considered in future studies on gaze behavior in sports to find the most functional, that is, cost-benefit-optimized, gaze pattern.
39

Kim, Haena, Jung Eun Shin, Yeon-Ju Hong, Yu-Bin Shin, Young Seok Shin, Kiwan Han, Jae-Jin Kim, and Soo-Hee Choi. "Aversive eye gaze during a speech in virtual environment in patients with social anxiety disorder." Australian & New Zealand Journal of Psychiatry 52, no. 3 (June 14, 2017): 279–85. http://dx.doi.org/10.1177/0004867417714335.

Abstract:
Objective: One of the main characteristics of social anxiety disorder is excessive fear of social evaluation. In such situations, anxiety can influence gaze behaviour. Thus, the current study adopted virtual reality to examine eye gaze pattern of social anxiety disorder patients while presenting different types of speeches. Methods: A total of 79 social anxiety disorder patients and 51 healthy controls presented prepared speeches on general topics and impromptu speeches on self-related topics to a virtual audience while their eye gaze was recorded. Their presentation performance was also evaluated. Results: Overall, social anxiety disorder patients showed less eye gaze towards the audience than healthy controls. Types of speech did not influence social anxiety disorder patients’ gaze allocation towards the audience. However, patients with social anxiety disorder showed significant correlations between the amount of eye gaze towards the audience while presenting self-related speeches and social anxiety cognitions. Conclusion: The current study confirms that eye gaze behaviour of social anxiety disorder patients is aversive and that their anxiety symptoms are more dependent on the nature of topic.
40

Kanda, Daigo, Shin Kawai, and Hajime Nobuhara. "Visualization Method Corresponding to Regression Problems and Its Application to Deep Learning-Based Gaze Estimation Model." Journal of Advanced Computational Intelligence and Intelligent Informatics 24, no. 5 (September 20, 2020): 676–84. http://dx.doi.org/10.20965/jaciii.2020.p0676.

Abstract:
The human gaze contains substantial personal information and can be extensively employed in several applications if its relevant factors can be accurately measured. Further, several fields could be substantially innovated if the gaze could be analyzed using popular and familiar smart devices. Deep learning-based methods are robust, making them crucial for gaze estimation on smart devices. However, because internal functions in deep learning are black boxes, deep learning systems often make estimations for unclear reasons. In this paper, we propose a visualization method corresponding to a regression problem to solve the black box problem of the deep learning-based gaze estimation model. The proposed visualization method can clarify which region of an image contributes to deep learning-based gaze estimation. We visualized the gaze estimation model proposed by a research group at the Massachusetts Institute of Technology. The accuracy of the estimation was low, even when the facial features important for gaze estimation were recognized correctly. The effectiveness of the proposed method was further determined through quantitative evaluation using the area over the MoRF perturbation curve (AOPC).
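The AOPC evaluation used above has a simple form: perturb the most relevant image regions step by step, record the model output after each step, and average the drop from the unperturbed output. A sketch under the assumption that the per-step outputs are already available:

```python
def aopc(scores):
    """Area over the perturbation curve: mean drop in model output as the
    regions ranked most relevant are successively perturbed.
    scores[0] is the unperturbed output; scores[k] is the output after
    k perturbation steps (MoRF order: most relevant first)."""
    base = scores[0]
    drops = [base - s for s in scores[1:]]
    return sum(drops) / len(drops)
```

A faithful relevance map should make the output fall quickly, giving a large AOPC; a random ranking gives a value near zero.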
41

Kenangil, Gulay, Dilek Necioglu Orken, Destina Yalcin, Lale Gündogdu, and Hulki Forta. "Triphasic EEG Pattern in Bilateral Paramedian Thalamic Infarction." Clinical EEG and Neuroscience 39, no. 4 (October 2008): 185–90. http://dx.doi.org/10.1177/155005940803900407.

Abstract:
Two cases of bilateral paramedian thalamic infarction (BPTI) showing triphasic waves (TWs) on the electroencephalogram (EEG) at acute stage are presented in this study. BPTI is a rare syndrome with decreased level of consciousness, gaze abnormalities and cognitive deterioration. TWs are nonspecific EEG findings occurring in both metabolic and nonmetabolic conditions. The TWs in BPTI might be related to level of consciousness and does not always predict a poor prognosis in BPTI.
42

Ramsey, Richard, Emily S. Cross, and Antonia F. de C. Hamilton. "Eye Can See What You Want: Posterior Intraparietal Sulcus Encodes the Object of an Actor's Gaze." Journal of Cognitive Neuroscience 23, no. 11 (November 2011): 3400–3409. http://dx.doi.org/10.1162/jocn_a_00074.

Abstract:
In a social setting, seeing Sally look at a clock means something different to seeing her gaze longingly at a slice of chocolate cake. In both cases, her eyes and face might be turned rightward, but the information conveyed is markedly different, depending on the object of her gaze. Numerous studies have examined brain systems underlying the perception of gaze direction, but less is known about the neural basis of perceiving gaze shifts to specific objects. During fMRI, participants observed an actor look toward one of two objects, each occupying a distinct location. Video stimuli were sequenced to obtain repetition suppression (RS) for object identity, independent of spatial location. In a control condition, a spotlight highlighted one of the objects, but no actor was present. Observation of the human actor's gaze compared with the spotlight engaged frontal, parietal, and temporal cortices, consistent with a broad action observation network. RS for gazed object in the human condition was found in posterior intraparietal sulcus (pIPS). RS for highlighted object in the spotlight condition was found in middle occipital, inferior temporal, medial fusiform gyri, and superior parietal lobule. These results suggest that human pIPS is specifically sensitive to the type object that an observed actor looks at (tool vs. food), irrespective of the observed actor's gaze location (left vs. right). A general attention or lower-level object feature processing mechanism cannot account for the findings because a very different response pattern was seen in the spotlight control condition. Our results suggest that, in addition to spatial orienting, human pIPS has an important role in object-centered social orienting.
43

Stefic, Daria, and Ioannis Patras. "Action recognition using saliency learned from recorded human gaze." Image and Vision Computing 52 (August 2016): 195–205. http://dx.doi.org/10.1016/j.imavis.2016.06.006.

44

Goffart, Laurent, Denis Pélisson, and Alain Guillaume. "Orienting Gaze Shifts During Muscimol Inactivation of Caudal Fastigial Nucleus in the Cat. II. Dynamics and Eye-Head Coupling." Journal of Neurophysiology 79, no. 4 (April 1, 1998): 1959–76. http://dx.doi.org/10.1152/jn.1998.79.4.1959.

Abstract:
Goffart, Laurent, Denis Pélisson, and Alain Guillaume. Orienting gaze shifts during muscimol inactivation of caudalfastigial nucleus in the cat. II. Dynamics and eye-head coupling. J. Neurophysiol. 79: 1959–1976, 1998. We have shown in the companion paper that muscimol injection in the caudal part of the fastigial nucleus (cFN) consistently leads to dysmetria of visually triggered gaze shifts that depends on movement direction. Based on the observations of a constant error and misdirected movements toward the inactivated side, we have proposed that the cFN contributes to the specification of the goal of the impending ipsiversive gaze shift. To test this hypothesis and also to better define the nature of the hypometria that affects contraversive gaze shifts, we report in this paper on various aspects of movement dynamics and of eye/head coordination patterns. Unilateral muscimol injection in cFN leads to a slight modification in the dynamics of both ipsiversive and contraversive gaze shifts (average velocity decrease = 55°/s). This slowing in gaze displacements results from changes in both eye and head. In some experiments, a larger gaze velocity decrease is observed for ipsiversive gaze shifts as compared with contraversive ones, and this change is restricted to the deceleration phase. For two particular experiments testing the effect of visual feedback, we have observed a dramatic decrease in the velocity of ipsiversive gaze shifts after the animal had received visual information about its inaccurate gaze responses; but virtually no change in hypermetria was noted. These observations suggest that there is no obvious causal relationship between changes in dynamics and in accuracy of gaze shifts after muscimol injection in the cFN. Eye and head both contribute to the dysmetria of gaze. Indeed, muscimol injection leads to parallel changes in amplitude of both ocular and cephalic components. 
As a global result, the relative contribution of eye and head to the amplitude of ipsiversive gaze shifts remains statistically indistinguishable from that of control responses, and a small (1.6°) increase in the head contribution to contraversive gaze shifts is found. The delay between eye and head movement onsets is increased by 7.3 ± 7.4 ms for contraversive and decreased by 8.3 ± 10.1 ms for ipsiversive gaze shifts, corresponding respectively to an increased or decreased lead time of head movement initiation. The modest changes in gaze dynamics, the absence of a link between eventual dynamics changes and dysmetria, and a similar pattern of eye-head coordination to that of control responses, altogether are compatible with the hypothesis that the hypermetria of ipsiversive gaze shifts results from an impaired specification of the metrics of the impending gaze shift. Regarding contraversive gaze shifts, the weak changes in head contribution do not seem to reflect a pathological coordination between eye and head but would rather result from the tonic deviations of gaze and head toward the inactivated side. Hence, our data suggest that the hypometria of contraversive gaze shifts also might result largely from an alteration of processes that specify the goal rather than the on-going trajectory, of saccadic gaze shifts.
45

Krebs, Christine, Michael Falkner, Joel Niklaus, Luca Persello, Stefan Klöppel, Tobias Nef, and Prabitha Urwyler. "Application of Eye Tracking in Puzzle Games for Adjunct Cognitive Markers: Pilot Observational Study in Older Adults." JMIR Serious Games 9, no. 1 (March 22, 2021): e24151. http://dx.doi.org/10.2196/24151.

Abstract:
Background Recent studies suggest that computerized puzzle games are enjoyable, easy to play, and engage attentional, visuospatial, and executive functions. They may help mediate impairments seen in cognitive decline in addition to being an assessment tool. Eye tracking provides a quantitative and qualitative analysis of gaze, which is highly useful in understanding visual search behavior. Objective The goal of the research was to test the feasibility of eye tracking during a puzzle game and develop adjunct markers for cognitive performance using eye-tracking metrics. Methods A desktop version of the Match-3 puzzle game with 15 difficulty levels was developed using Unity 3D (Unity Technologies). The goal of the Match-3 puzzle was to find configurations (target patterns) that could be turned into a row of 3 identical game objects (tiles) by swapping 2 adjacent tiles. Difficulty levels were created by manipulating the puzzle board size (all combinations of width and height from 4 to 8) and the number of unique tiles on the puzzle board (from 4 to 8). Each level consisted of 4 boards (ie, target patterns to match) with one target pattern each. In this study, the desktop version was presented on a laptop computer setup with eye tracking. Healthy older subjects were recruited to play a full set of 15 puzzle levels. A paper-pencil–based assessment battery was administered prior to the Match-3 game. The gaze behavior of all participants was recorded during the game. Correlation analyses were performed on eye-tracking data correcting for age to examine if gaze behavior pertains to target patterns and distractor patterns and changes with puzzle board size (set size). Additionally, correlations between cognitive performance and eye movement metrics were calculated. Results A total of 13 healthy older subjects (mean age 70.67 [SD 4.75] years; range 63 to 80 years) participated in this study. In total, 3 training and 12 test levels were played by the participants. 
Eye tracking recorded 672 fixations in total, 525 fixations on distractor patterns and 99 fixations on target patterns. Significant correlations were found between executive functions (Trail Making Test B) and number of fixations on distractor patterns (P=.01) and average fixations (P=.005). Conclusions Overall, this study shows that eye tracking in puzzle games can act as a supplemental source of data for cognitive performance. The relationship between a paper-pencil test for executive functions and fixations confirms that both are related to the same cognitive processes. Therefore, eye movement metrics might be used as an adjunct marker for cognitive abilities like executive functions. However, further research is needed to evaluate the potential of the various eye movement metrics in combination with puzzle games as visual search and attentional marker.
46

White, Robert L., and Lawrence H. Snyder. "A Neural Network Model of Flexible Spatial Updating." Journal of Neurophysiology 91, no. 4 (April 2004): 1608–19. http://dx.doi.org/10.1152/jn.00277.2003.

Abstract:
Neurons in many cortical areas involved in visuospatial processing represent remembered spatial information in retinotopic coordinates. During a gaze shift, the retinotopic representation of a target location that is fixed in the world (world-fixed reference frame) must be updated, whereas the representation of a target fixed relative to the center of gaze (gaze-fixed) must remain constant. To investigate how such computations might be performed, we trained a 3-layer recurrent neural network to store and update a spatial location based on a gaze perturbation signal, and to do so flexibly based on a contextual cue. The network produced an accurate readout of target position when cued to either reference frame, but was less precise when updating was performed. This output mimics the pattern of behavior seen in animals performing a similar task. We tested whether updating would preferentially use gaze position or gaze velocity signals, and found that the network strongly preferred velocity for updating world-fixed targets. Furthermore, we found that gaze position gain fields were not present when velocity signals were available for updating. These results have implications for how updating is performed in the brain.
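The updating rule the network must learn can be stated compactly: a world-fixed target's retinotopic vector is shifted opposite to the gaze displacement (integrated from the gaze velocity signal), while a gaze-fixed target's vector is left unchanged. A toy sketch of that computation under these assumptions (not the trained network itself):

```python
import numpy as np

def update_target(retinotopic, gaze_velocity, dt, context):
    """Update a remembered target location (eye-centred coordinates)
    across a gaze shift. 'world' targets are counter-shifted by the
    integrated gaze displacement; 'gaze' targets move with the eye."""
    gaze_shift = gaze_velocity * dt          # integrate velocity over the step
    if context == "world":
        return retinotopic - gaze_shift      # compensate for the eye movement
    if context == "gaze":
        return retinotopic                   # fixed relative to centre of gaze
    raise ValueError("context must be 'world' or 'gaze'")
```

The study's finding that the trained network prefers velocity signals corresponds to the integration step here being driven by velocity rather than by absolute gaze position.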
47

Miyake, Hidenori, and Kazuki Sekine. "The gaze pattern in the process of gesture-speech integration in children." Proceedings of the Annual Convention of the Japanese Psychological Association 84 (September 8, 2020): PO—051—PO—051. http://dx.doi.org/10.4992/pacjpa.84.0_po-051.

48

Oh, Jooyoung, Ji-Won Chun, Jung Lee, and Jae-Jin Kim. "Relationship between abstract thinking and eye gaze pattern in patients with schizophrenia." Behavioral and Brain Functions 10, no. 1 (2014): 13. http://dx.doi.org/10.1186/1744-9081-10-13.

49

Zhao, Shuo, Shota Uono, Sayaka Yoshimura, Yasutaka Kubota, and Motomi Toichi. "Atypical Gaze Cueing Pattern in a Complex Environment in Individuals with ASD." Journal of Autism and Developmental Disorders 47, no. 7 (April 8, 2017): 1978–86. http://dx.doi.org/10.1007/s10803-017-3116-2.

50

Wang, Jian-Gang, Eric Sung, and Ronda Venkateswarlu. "Estimating the eye gaze from one eye." Computer Vision and Image Understanding 98, no. 1 (April 2005): 83–103. http://dx.doi.org/10.1016/j.cviu.2004.07.008.

