Journal articles on the topic "Gaze Coordination"

To view other types of publications on this topic, follow the link: Gaze Coordination.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.


Consult the top 50 journal articles for your research on the topic "Gaze Coordination".

Next to every entry in the list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse journal articles from many disciplines and compile your bibliography correctly.

1

Arora, Harbandhan Kaur, Vishal Bharmauria, Xiaogang Yan, Saihong Sun, Hongying Wang, and John Douglas Crawford. "Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques." Journal of Neurophysiology 122, no. 5 (November 1, 2019): 1946–61. http://dx.doi.org/10.1152/jn.00072.2019.

Abstract:
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination—eye-head-hand coordination—has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized “chair” that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the Reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target. NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task. Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
2

Tweed, D., B. Glenn, and T. Vilis. "Eye-head coordination during large gaze shifts." Journal of Neurophysiology 73, no. 2 (February 1, 1995): 766–79. http://dx.doi.org/10.1152/jn.1995.73.2.766.

Abstract:
1. Three-dimensional (3D) eye and head rotations were measured with the use of the magnetic search coil technique in six healthy human subjects as they made large gaze shifts. The aims of this study were 1) to see whether the kinematic rules that constrain eye and head orientations to two degrees of freedom between saccades also hold during movements; 2) to chart the curvature and looping in eye and head trajectories; and 3) to assess whether the timing and paths of eye and head movements are more compatible with a single gaze error command driving both movements, or with two different feedback loops. 2. Static orientations of the eye and head relative to space are known to resemble the distribution that would be generated by a Fick gimbal (a horizontal axis moving on a fixed vertical axis). We show that gaze point trajectories during eye-head gaze shifts fit the Fick gimbal pattern, with horizontal movements following straight "line of latitude" paths and vertical movements curving like lines of longitude. However, horizontal (and to a lesser extent vertical) movements showed direction-dependent looping, with rightward and leftward (and up and down) saccades tracing slightly different paths. Plots of facing direction (the analogue of gaze direction for the head) also showed the latitude/longitude pattern, without looping. In radial saccades, the gaze point initially moved more vertically than the target direction and then curved; head trajectories were straight. 3. The eye and head components of randomly sequenced gaze shifts were not time locked to one another. The head could start moving at any time from slightly before the eye until 200 ms after, and the standard deviation of this interval could be as large as 80 ms. The head continued moving for a long (up to 400 ms) and highly variable time after the gaze error had fallen to zero. For repeated saccades between the same targets, peak eye and head velocities were directly, but very weakly, correlated; fast eye movements could accompany slow head movements and vice versa. Peak head acceleration and deceleration were also very weakly correlated with eye velocity. Further, the head rotated about an essentially fixed axis, with a smooth bell-shaped velocity profile, whereas the axis of eye rotation relative to the head varied throughout the movement and the velocity profiles were more ragged. 4. Plots of 3D eye orientation revealed strong and consistent looping in eye trajectories relative to space.(ABSTRACT TRUNCATED AT 400 WORDS)
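For readers unfamiliar with the Fick gimbal invoked above, the decomposition can be written compactly. This is the standard kinematic identity, not a formula taken from the paper: the orientation of the head (or of the gaze line) is obtained by first rotating about a space-fixed vertical axis and then about the carried horizontal axis, with no torsion about the line of sight,

$$R_{\mathrm{Fick}}(\theta_H,\theta_V) \;=\; R_{\mathrm{vertical}}(\theta_H)\,R_{\mathrm{horizontal}}(\theta_V), \qquad \text{torsion} \approx 0.$$

Varying $\theta_H$ at fixed $\theta_V$ traces a "line of latitude," and varying $\theta_V$ at fixed $\theta_H$ traces a "line of longitude," which is exactly the trajectory pattern reported above for gaze and facing direction.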
3

Freedman, Edward G., and David L. Sparks. "Eye-Head Coordination During Head-Unrestrained Gaze Shifts in Rhesus Monkeys." Journal of Neurophysiology 77, no. 5 (May 1, 1997): 2328–48. http://dx.doi.org/10.1152/jn.1997.77.5.2328.

Abstract:
Freedman, Edward G. and David L. Sparks. Eye-head coordination during head-unrestrained gaze shifts in rhesus monkeys. J. Neurophysiol. 77: 2328–2348, 1997. We analyzed gaze shifts made by trained rhesus monkeys with completely unrestrained heads during performance of a delayed gaze shift task. Subjects made horizontal, vertical, and oblique gaze shifts to visual targets. We found that coordinated eye-head movements are characterized by a set of lawful relationships, and that the initial position of the eyes in the orbits and the direction of the gaze shift are two factors that influence these relationships. Head movements did not contribute to the change in gaze position during small gaze shifts (<20°) directed along the horizontal meridian, when the eyes were initially centered in the orbits. For larger gaze shifts (25–90°), the head contribution to the gaze shift increased linearly with increasing gaze shift amplitude, and eye movement amplitude saturated at an asymptotic amplitude of ∼35°. When the eyes began deviated in the orbits contralateral to the direction of the ensuing gaze shift, the head contributed less and the eyes more to amplitude-matched gaze shifts. The relative timing of eye and head movements was altered by initial eye position; head latency relative to gaze onset increased as the eyes began in more contralateral initial positions. The direction of the gaze shift also affected the relative amplitudes of eye and head movements; as gaze shifts were made in progressively more vertical directions, eye amplitude increased and head contribution declined systematically. Eye velocity was a saturating function of gaze amplitude for movements without a head contribution (gaze amplitude <20°). As head contribution increased with increasing gaze amplitude (20–60°), peak eye velocity declined by >200°/s and head velocity increased by 100°/s. For constant-amplitude eye movements (∼30°), eye velocity declined as the velocity of the concurrent head movement increased. On the basis of these relationships, it is possible to accurately predict gaze amplitude, the amplitudes of the eye and head components of the gaze shift, and gaze, eye, and head velocities, durations and latencies if the two-dimensional displacement of the target and the initial position of the eyes in the orbits are known. These data indicate that signals related to the initial positions of the eyes in the orbits and the direction of the gaze shift influence separate eye and head movement commands. The hypothesis that this divergence of eye and head commands occurs downstream from the superior colliculus is supported by recent electrical stimulation and single-unit recording data.
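The "lawful relationships" above lend themselves to a simple quantitative illustration. The toy model below is our own sketch, not the authors' code; the threshold, slope, and saturation constants are illustrative values read off the abstract (no head contribution below ~20°, eye amplitude saturating near ~35°), and the paper's direction- and eye-position-dependent effects are omitted.

```python
# Toy model of the eye/head amplitude split described by Freedman & Sparks.
# Constants are illustrative, taken loosely from the abstract.

HEAD_THRESHOLD = 20.0  # deg: gaze amplitude below which the head stays still
HEAD_SLOPE = 0.8       # assumed linear growth of head contribution above threshold
EYE_SATURATION = 35.0  # deg: asymptotic eye-in-orbit amplitude

def eye_head_split(gaze_amplitude_deg: float) -> tuple[float, float]:
    """Return (eye_amplitude, head_contribution) for a horizontal gaze shift."""
    head = max(0.0, HEAD_SLOPE * (gaze_amplitude_deg - HEAD_THRESHOLD))
    eye = min(gaze_amplitude_deg - head, EYE_SATURATION)
    head = gaze_amplitude_deg - eye  # the head makes up whatever the eye cannot
    return eye, head

# A 60 deg gaze shift splits into roughly 28 deg of eye and 32 deg of head.
print(eye_head_split(60.0))
```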
4

Hayashi, Yugo. "Gaze awareness and metacognitive suggestions by a pedagogical conversational agent: an experimental investigation on interventions to support collaborative learning process and performance." International Journal of Computer-Supported Collaborative Learning 15, no. 4 (December 2020): 469–98. http://dx.doi.org/10.1007/s11412-020-09333-3.

Abstract:
Research on collaborative learning has revealed that peer-collaboration explanation activities facilitate reflection and metacognition and that establishing common ground and successful coordination are keys to realizing effective knowledge-sharing in collaborative learning tasks. Studies on computer-supported collaborative learning have investigated how awareness tools can facilitate coordination within a group and how the use of external facilitation scripts can elicit elaborated knowledge during collaboration. However, the separate and joint effects of these tools on the nature of the collaborative process and performance have rarely been investigated. This study investigates how two facilitation methods—coordination support via learner gaze-awareness feedback and metacognitive suggestion provision via a pedagogical conversational agent (PCA)—are able to enhance the learning process and learning gains. Eighty participants, organized into dyads, were enrolled in a 2 × 2 between-subject study. The first and second factors were the presence of real-time gaze feedback (no vs. visible gaze) and that of a suggestion-providing PCA (no vs. visible agent), respectively. Two evaluation methods were used: namely, dialog analysis of the collaborative process and evaluation of learning gains. The real-time gaze feedback and PCA suggestions facilitated the coordination process, while gaze was relatively more effective in improving the learning gains. Learners in the Gaze-feedback condition achieved superior learning gains upon receiving PCA suggestions. A successful coordination/high learning performance correlation was noted solely for learners receiving visible gaze feedback and PCA suggestions simultaneously (visible gaze/visible agent). This finding has the potential to yield improved collaborative processes and learning gains through integration of these two methods as well as contributing towards design principles for collaborative-learning support systems more generally.
5

Paquette, Caroline, and Joyce Fung. "Old age affects gaze and postural coordination." Gait & Posture 33, no. 2 (February 2011): 227–32. http://dx.doi.org/10.1016/j.gaitpost.2010.11.010.

6

Stamenkovic, Alexander, Paul J. Stapley, Rebecca Robins, and Mark A. Hollands. "Do postural constraints affect eye, head, and arm coordination?" Journal of Neurophysiology 120, no. 4 (October 1, 2018): 2066–82. http://dx.doi.org/10.1152/jn.00200.2018.

Abstract:
If a whole body reaching task is produced when standing or adopting challenging postures, it is unclear whether changes in attentional demands or the sensorimotor integration necessary for balance control influence the interaction between visuomotor and postural components of the movement. Is gaze control prioritized by the central nervous system (CNS) to produce coordinated eye movements with the head and whole body regardless of movement context? Considering the coupled nature of visuomotor and whole body postural control during action, this study aimed to understand how changing equilibrium constraints (in the form of different postural configurations) influenced the initiation of eye, head, and arm movements. We quantified the eye-head metrics and segmental kinematics as participants executed either isolated gaze shifts or whole body reaching movements to visual targets. In total, four postural configurations were compared: seated, natural stance, with the feet together (narrow stance), or while balancing on a wooden beam. Contrary to our initial predictions, the lack of distinct changes in eye-head metrics; timing of eye, head, and arm movement initiation; and gaze accuracy, in spite of kinematic differences, suggests that the CNS integrates postural constraints into the control necessary to initiate gaze shifts. This may be achieved by adopting a whole body gaze strategy that allows for the successful completion of both gaze and reaching goals. NEW & NOTEWORTHY Differences in sequence of movement among the eye, head, and arm have been shown across various paradigms during reaching. Here we show that distinct changes in eye characteristics and movement sequence, coupled with stereotyped profiles of head and gaze movement, are not observed when adopting postures requiring changes to balance constraints. This suggests that a whole body gaze strategy is prioritized by the central nervous system with postural control subservient to gaze stability requirements.
7

Soechting, John F., Kevin C. Engel, and Martha Flanders. "The Duncker Illusion and Eye–Hand Coordination." Journal of Neurophysiology 85, no. 2 (February 1, 2001): 843–54. http://dx.doi.org/10.1152/jn.2001.85.2.843.

Abstract:
A moving background alters the perceived direction of target motion (the Duncker illusion). To test whether this illusion also affects pointing movements to remembered/extrapolated target locations, we constructed a display in which a target moved in a straight line and disappeared behind a band of moving random dots. Subjects were required to touch the spot where the target would emerge from the occlusion. The four directions of random-dot motion induced pointing errors that were predictable from the Duncker illusion. Because it has been previously established that saccadic direction is influenced by this illusion, gaze was subsequently recorded in a second series of experiments while subjects performed the pointing task and a similar task with eye-tracking only. In the pointing task, subjects typically saccaded to the lower border of the occlusion zone as soon as the target disappeared and then tried to maintain fixation at that spot. However, it was particularly obvious in the eye-tracking-only condition that horizontally moving random dots generally evoked an appreciable ocular following response, altering the gaze direction. Hand-pointing errors were related to the saccadic gaze error but were more highly correlated with final gaze errors (resulting from the initial saccade and the subsequent ocular following response). The results suggest a model of limb control in which gaze position can provide the target signal for limb movement.
8

Monteon, Jachin A., Marie Avillac, Xiaogang Yan, Hongying Wang, and J. Douglas Crawford. "Neural mechanisms for predictive head movement strategies during sequential gaze shifts." Journal of Neurophysiology 108, no. 10 (November 15, 2012): 2689–707. http://dx.doi.org/10.1152/jn.00222.2012.

Abstract:
Humans adopt very different head movement strategies for different gaze behaviors, for example, when playing sports versus watching sports on television. Such strategy switching appears to depend on both context and expectation of future gaze positions. Here, we explored the neural mechanisms for such behaviors by training three monkeys to make head-unrestrained gaze shifts toward eccentric radial targets. A randomized color cue provided predictive information about whether that target would be followed by either a return gaze shift to center or another, more eccentric gaze shift, but otherwise animals were allowed to develop their own eye-head coordination strategy. In the first two animals we then stimulated the frontal eye fields (FEF) in conjunction with the color cue, and in the third animal we recorded from neurons in the superior colliculus (SC). Our results show that 1) monkeys can optimize eye-head coordination strategies from trial to trial, based on learned associations between color cues and future gaze sequences, 2) these cue-dependent coordination strategies were preserved in gaze saccades evoked during electrical stimulation of the FEF, and 3) two types of SC responses (the saccade burst and a more prolonged response related to head movement) modulated with these cue-dependent strategies, although only one (the saccade burst) varied in a predictive fashion. These data show that from one moment to the next, the brain can use contextual sensory cues to set up internal “coordination states” that convert fixed cortical gaze commands into the brain stem signals required for predictive head motion.
9

Abekawa, Naotoshi, and Hiroaki Gomi. "Online gain update for manual following response accompanied by gaze shift during arm reaching." Journal of Neurophysiology 113, no. 4 (February 15, 2015): 1206–16. http://dx.doi.org/10.1152/jn.00281.2014.

Abstract:
To capture objects by hand, online motor corrections are required to compensate for self-body movements. Recent studies have shown that background visual motion, usually caused by body movement, plays a significant role in such online corrections. Visual motion applied during a reaching movement induces a rapid and automatic manual following response (MFR) in the direction of the visual motion. Importantly, the MFR amplitude is modulated by the gaze direction relative to the reach target location (i.e., foveal or peripheral reaching). That is, the brain specifies the adequate visuomotor gain for an online controller based on gaze-reach coordination. However, the time or state point at which the brain specifies this visuomotor gain remains unclear. More specifically, does the gain change occur even during the execution of reaching? In the present study, we measured MFR amplitudes during a task in which the participant performed a saccadic eye movement that altered the gaze-reach coordination during reaching. The results indicate that the MFR amplitude immediately after the saccade termination changed according to the new gaze-reach coordination, suggesting a flexible online updating of the MFR gain during reaching. An additional experiment showed that this gain updating mostly started before the saccade terminated. Therefore, the MFR gain updating process would be triggered by an ocular command related to saccade planning or execution based on forthcoming changes in the gaze-reach coordination. Our findings suggest that the brain flexibly updates the visuomotor gain for an online controller even during reaching movements based on continuous monitoring of the gaze-reach coordination.
10

Constantin, Alina G., Hongying Wang, and J. Douglas Crawford. "Role of Superior Colliculus in Adaptive Eye–Head Coordination During Gaze Shifts." Journal of Neurophysiology 92, no. 4 (October 2004): 2168–84. http://dx.doi.org/10.1152/jn.00103.2004.

Abstract:
The goal of this study was to determine which aspects of adaptive eye–head coordination are implemented upstream or downstream from the motor output layers of the superior colliculus (SC). Two monkeys were trained to perform head-free gaze shifts while looking through a 10° aperture in opaque, head-fixed goggles. This training produced context-dependent alterations in eye–head coordination, including a coordinated pattern of saccade–vestibuloocular reflex (VOR) eye movements that caused eye position to converge toward the aperture, and an increased contribution of head movement to the gaze shift. One would expect the adaptations that were implemented downstream from the SC to be preserved in gaze shifts evoked by SC stimulation. To test this, we analyzed gaze shifts evoked from 19 SC sites in monkey 1 and 38 sites in monkey 2, both with and without goggles. We found no evidence that the goggle paradigm altered the basic gaze position–dependent spatial coding of the evoked movements (i.e., gaze was still coded in an eye-centered frame). However, several aspects of the context-dependent coordination strategy were preserved during stimulation, including the adaptive convergence of final eye position toward the goggles aperture, and the position-dependent patterns of eye and head movement required to achieve this. For example, when initial eye position was offset from the learned aperture location at the time of stimulation, a coordinated saccade–VOR eye movement drove it back to the original aperture, and the head compensated to preserve gaze kinematics. Some adapted amplitude–velocity relationships in eye, gaze, and head movement also may have been preserved. In contrast, context-dependent changes in overall eye and head contribution to gaze amplitude were not preserved during SC stimulation. We conclude that 1) the motor output command from the SC to the brain stem can be adapted to produce different position-dependent coordination strategies for different behavioral contexts, particularly for eye-in-head position, but 2) these brain stem coordination mechanisms implement only the default (normal) level of head amplitude contribution to the gaze shift. We propose that a parallel cortical drive, absent during SC stimulation, is required to adjust the overall head contribution for different behavioral contexts.
11

Fasold, Frowin, Benjamin Noël, Fabian Wolf, and Stefanie Hüttermann. "Coordinated gaze behaviour of handball referees: a practical exploration with focus on the methodical implementation." Movement & Sport Sciences - Science & Motricité, no. 102 (2018): 71–79. http://dx.doi.org/10.1051/sm/2018029.

Abstract:
Though the interaction of team members in sport has already been considered when analysing team expertise and performance, there is no comparable research addressing the interplay of referee teams as part of their expertise. Based on lab-based research on coordinated gaze behaviour, we assumed that orchestrating referees’ gaze is an important way of improving referee performances. To first scrutinize if handball referees coordinate their gaze, the gaze fixations of a handball referee team were analysed while they were presiding over a game. Results showed that referees mostly fixated the same aspects of game action (75%) and behaved differently from what is stated in existing guidelines for refereeing in handball. That is, the current results indicate that handball referees’ coordination of gaze behaviour seems far from optimal (they focused on the same aspects of game action too often) and should be considered when thinking about avenues to performance improvement. Furthermore, we discuss potentials and limitations of the current research approach for future studies that seem necessary to gain more insight into the expertise interplay of referees.
12

Feng, Xingyang, Qingbin Wang, Hua Cong, Yu Zhang, and Mianhao Qiu. "Gaze Point Tracking Based on a Robotic Body–Head–Eye Coordination Method." Sensors 23, no. 14 (July 11, 2023): 6299. http://dx.doi.org/10.3390/s23146299.

Abstract:
When the magnitude of a gaze is too large, human beings change the orientation of their head or body to assist their eyes in tracking targets because saccade alone is insufficient to keep a target at the center region of the retina. To make a robot gaze at targets rapidly and stably (as a human does), it is necessary to design a body–head–eye coordinated motion control strategy. A robot system equipped with eyes and a head is designed in this paper. Gaze point tracking problems are divided into two sub-problems: in situ gaze point tracking and approaching gaze point tracking. In the in situ gaze tracking state, the desired positions of the eye, head and body are calculated on the basis of minimizing resource consumption and maximizing stability. In the approaching gaze point tracking state, the robot is expected to approach the object at a zero angle. In the process of tracking, the three-dimensional (3D) coordinates of the object are obtained by the bionic eye and then converted to the head coordinate system and the mobile robot coordinate system. The desired positions of the head, eyes and body are obtained according to the object’s 3D coordinates. Then, using sophisticated motor control methods, the head, eyes and body are controlled to the desired position. This method avoids the complex process of adjusting control parameters and does not require the design of complex control algorithms. Based on this strategy, in situ gaze point tracking and approaching gaze point tracking experiments are performed by the robot. The experimental results show that body–head–eye coordination gaze point tracking based on the 3D coordinates of an object is feasible. This paper provides a new method that differs from the traditional two-dimensional image-based method for robotic body–head–eye gaze point tracking.
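As a rough illustration of the kind of body-head-eye allocation the abstract describes, the sketch below converts an object's 3D coordinates in the robot's base frame into total gaze angles and distributes the horizontal component across eye, head, and body. The joint limits and the eye-first ordering (cheap, fast effectors absorb the error before slow, expensive ones) are our assumptions for illustration, not the paper's published controller.

```python
import numpy as np

# Assumed joint ranges (radians); illustrative, not taken from the paper.
EYE_PAN_LIMIT = np.radians(25.0)
HEAD_PAN_LIMIT = np.radians(60.0)

def allocate_gaze(target_xyz):
    """Split the gaze angles toward a 3D target across eye, head, and body.

    target_xyz: (x, y, z) of the object in the robot base frame,
    with x forward, y left, z up.
    """
    x, y, z = target_xyz
    pan = np.arctan2(y, x)                # total horizontal gaze angle
    tilt = np.arctan2(z, np.hypot(x, y))  # total vertical gaze angle

    # Fast, low-cost effectors take the error first; the body turns the rest.
    eye_pan = np.clip(pan, -EYE_PAN_LIMIT, EYE_PAN_LIMIT)
    head_pan = np.clip(pan - eye_pan, -HEAD_PAN_LIMIT, HEAD_PAN_LIMIT)
    body_pan = pan - eye_pan - head_pan
    return eye_pan, head_pan, body_pan, tilt

# Object 1 m behind-left: eye and head saturate, the body absorbs the rest.
print(allocate_gaze((-1.0, 2.0, 0.3)))
```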
13

Richardson, Daniel C., Rick Dale, and John M. Tomlinson. "Conversation, Gaze Coordination, and Beliefs About Visual Context." Cognitive Science 33, no. 8 (November 2009): 1468–82. http://dx.doi.org/10.1111/j.1551-6709.2009.01057.x.

14

Pedrono, C., G. Obrecht, and L. Stark. "Eye-Head Coordination with Laterally “Modulated” Gaze Field." Optometry and Vision Science 64, no. 11 (November 1987): 853–60. http://dx.doi.org/10.1097/00006324-198711000-00009.

15

Murillo, Eva, and Mercedes Belinchón. "Gestural-vocal coordination." Gesture 12, no. 1 (September 7, 2012): 16–39. http://dx.doi.org/10.1075/gest.12.1.02mur.

Abstract:
The aim of this study was to examine longitudinally gestural and vocal coordination in multimodal communicative patterns during the period of transition to first words, and its role in early lexical development. Eleven monolingual Spanish children were observed from 9 to 12 and 15 months of age in a semi-structured play situation. We obtained three main findings: (1) the use of multimodal patterns of communication increases significantly with age during the period studied; (2) the rate of use of those multimodal patterns at 12 months predicts lexical development at 15 months; and (3) the use of the pointing gesture at 12 months, especially when it is accompanied with vocalization and social use of gaze, is the best predictor of lexical outcome at 15 months. Our findings support the idea that gestures, gazes and vocalizations are part of an integrated and developing system that children use flexibly to communicate from early on. The coordination of these three types of elements, especially when a pointing gesture is involved, has a predictive value on early lexical development and appears as a key for progress in language development.
16

Crawford, J. D., W. P. Medendorp, and J. J. Marotta. "Spatial Transformations for Eye–Hand Coordination." Journal of Neurophysiology 92, no. 1 (July 2004): 10–19. http://dx.doi.org/10.1152/jn.00117.2004.

Abstract:
Eye–hand coordination is complex because it involves the visual guidance of both the eyes and hands, while simultaneously using eye movements to optimize vision. Since only hand motion directly affects the external world, eye movements are the slave in this system. This eye– hand visuomotor system incorporates closed-loop visual feedback but here we focus on early feedforward mechanisms that allow primates to make spatially accurate reaches. First, we consider how the parietal cortex might store and update gaze-centered representations of reach targets during a sequence of gaze shifts and fixations. Recent evidence suggests that such representations might be compared with hand position signals within this early gaze-centered frame. However, the resulting motor error commands cannot be treated independently of their frame of origin or the frame of their destined motor command. Behavioral experiments show that the brain deals with the nonlinear aspects of such reference frame transformations, and incorporates internal models of the complex linkage geometry of the eye–head–shoulder system. These transformations are modeled as a series of vector displacement commands, rotated by eye and head orientation, and implemented between parietal and frontal cortex through efficient parallel neuronal architectures. Finally, we consider how this reach system might interact with the visually guided grasp system through both parallel and coordinated neural algorithms.
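A minimal numerical sketch of the two operations highlighted above, under our own simplifying assumptions (rotations about the vertical axis only, vectors in arbitrary units): a stored gaze-centered target is remapped across a gaze shift, and a gaze-centered motor error is rotated by eye and head orientation into a shoulder frame. This illustrates the idea, not the authors' model.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the vertical axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def remap_target(target_in_eye, gaze_shift):
    """Update a stored eye-centered target across a gaze shift (rotation matrix)."""
    return gaze_shift.T @ target_in_eye  # counter-rotate so it stays accurate

def reach_command(target_in_eye, hand_in_eye, R_eye_in_head, R_head_on_trunk):
    """Compare target and hand in the gaze-centered frame, then rotate the
    resulting motor error by eye and head orientation into a shoulder frame."""
    error_eye = target_in_eye - hand_in_eye
    return R_head_on_trunk @ R_eye_in_head @ error_eye

# Example: a 20 deg leftward gaze shift remaps the stored target.
target = np.array([0.1, 0.0, 0.0])
print(remap_target(target, rot_z(np.radians(20))))
```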
17

André-Deshays, C., I. Israël, O. Charade, A. Berthoz, K. Popov, and M. Lipshits. "Gaze Control in Microgravity: 1. Saccades, Pursuit, Eye-Head Coordination." Journal of Vestibular Research 3, no. 3 (September 1, 1993): 331–43. http://dx.doi.org/10.3233/ves-1993-3313.

Abstract:
During the long-duration spaceflight Aragatz on board the Mir station, an experiment exploring the different oculomotor subsystems involved in gaze control during orientation to a fixed target or when tracking a moving target was executed by two cosmonauts. Gaze orientation: with head fixed, the “main sequence” relationships of primary horizontal saccades were modified; peak velocity was higher and saccade duration was shorter in flight than on earth, latency was decreased, and saccade accuracy was better in flight. With head free, gaze orientation toward the target was achieved by coordinated eye and head movements, and their timing was maintained in the horizontal plane; when gaze was stabilized on the target, there was a trend toward a larger eye than head contribution not seen in preflight tests. Pursuit: horizontal pursuit at 0.25 and 0.5 Hz remained smooth with a 0.98 gain and minor phase lag, on earth and in flight. In the vertical plane, the eye did not track the target with a pure smooth pursuit eye movement; the saccadic system contributed to gaze control. Upward tracking was mainly achieved with a succession of saccades, whereas downward tracking was due to combined smooth pursuit and catch-up saccades. This asymmetry was maintained during flight in head-fixed and head-free situations. On earth, head peak velocity was maximal upward, and in flight it was maximal downward.
18

Zubair, Humza N., Kevin M. I. Chu, Justin L. Johnson, Trevor J. Rivers, and Irina N. Beloozerova. "Gaze coordination with strides during walking in the cat." Journal of Physiology 597, no. 21 (October 6, 2019): 5195–229. http://dx.doi.org/10.1113/jp278108.

19

Guitton, D. "Control of eye-head coordination during orienting gaze shifts." Trends in Neurosciences 15, no. 5 (May 1992): 174–79. http://dx.doi.org/10.1016/0166-2236(92)90169-9.

20

Di Fabio, Richard P., Cris Zampieri, and Paul Tuite. "Gaze Control and Foot Kinematics During Stair Climbing: Characteristics Leading to Fall Risk in Progressive Supranuclear Palsy." Physical Therapy 88, no. 2 (February 1, 2008): 240–50. http://dx.doi.org/10.2522/ptj.20070159.

Abstract:
Background and Purpose: Does gaze control influence lower-extremity motor coordination in people with neurological deficits? The purpose of this study was to determine whether foot kinematics during stair climbing are influenced by gaze shifts prior to stair step initiation. Subjects and Methods: Twelve subjects with gaze palsy (mild versus severe) secondary to progressive supranuclear palsy were evaluated during a stair-climbing task in a cross-sectional study of mechanisms influencing eye-foot coordination. Infrared oculography and electromagnetic tracking sensors measured eye and foot kinematics, respectively. The primary outcome measures were vertical gaze fixation scores, foot lift asymmetries, and sagittal-plane foot trajectories. Results: The subjects with severe gaze palsy had significantly lower lag foot lift relative to lead foot lift than those with a mild form of gaze palsy. The lag foot trajectory for the subjects with severe gaze palsy tended to be low, with a heading toward contact with the edge of the stair. Subjects with severe gaze palsy were 28 times more likely to experience “fixation intrusion” (high vertical gaze fixation score) during an attempted shift of gaze downward than those with mild ocular motor deficits (odds ratio [OR]=28.3, 95% confidence interval [CI]=6.4–124.8). Subjects with severe gaze shift deficits also were 4 times more likely to have lower lag foot lift with respect to lead foot lift than those with mild ocular motor dysfunction (OR=4.0, 95% CI=1.7–9.7). Discussion and Conclusion: The small number of subjects and the variation in symptom profiles make the generalization of findings preliminary. Deficits in gaze control may influence stepping behaviors and increase the risk of trips or falls during stair climbing. Neural and kinematic hypotheses are discussed as possible contributing mechanisms.
21

Goffart, Laurent, Denis Pélisson, and Alain Guillaume. "Orienting Gaze Shifts During Muscimol Inactivation of Caudal Fastigial Nucleus in the Cat. II. Dynamics and Eye-Head Coupling." Journal of Neurophysiology 79, no. 4 (April 1, 1998): 1959–76. http://dx.doi.org/10.1152/jn.1998.79.4.1959.

Abstract:
Goffart, Laurent, Denis Pélisson, and Alain Guillaume. Orienting gaze shifts during muscimol inactivation of caudal fastigial nucleus in the cat. II. Dynamics and eye-head coupling. J. Neurophysiol. 79: 1959–1976, 1998. We have shown in the companion paper that muscimol injection in the caudal part of the fastigial nucleus (cFN) consistently leads to dysmetria of visually triggered gaze shifts that depends on movement direction. Based on the observations of a constant error and misdirected movements toward the inactivated side, we have proposed that the cFN contributes to the specification of the goal of the impending ipsiversive gaze shift. To test this hypothesis and also to better define the nature of the hypometria that affects contraversive gaze shifts, we report in this paper on various aspects of movement dynamics and of eye/head coordination patterns. Unilateral muscimol injection in cFN leads to a slight modification in the dynamics of both ipsiversive and contraversive gaze shifts (average velocity decrease = 55°/s). This slowing in gaze displacements results from changes in both eye and head. In some experiments, a larger gaze velocity decrease is observed for ipsiversive gaze shifts as compared with contraversive ones, and this change is restricted to the deceleration phase. For two particular experiments testing the effect of visual feedback, we have observed a dramatic decrease in the velocity of ipsiversive gaze shifts after the animal had received visual information about its inaccurate gaze responses; but virtually no change in hypermetria was noted. These observations suggest that there is no obvious causal relationship between changes in dynamics and in accuracy of gaze shifts after muscimol injection in the cFN. Eye and head both contribute to the dysmetria of gaze. Indeed, muscimol injection leads to parallel changes in amplitude of both ocular and cephalic components. As a global result, the relative contribution of eye and head to the amplitude of ipsiversive gaze shifts remains statistically indistinguishable from that of control responses, and a small (1.6°) increase in the head contribution to contraversive gaze shifts is found. The delay between eye and head movement onsets is increased by 7.3 ± 7.4 ms for contraversive and decreased by 8.3 ± 10.1 ms for ipsiversive gaze shifts, corresponding respectively to an increased or decreased lead time of head movement initiation. The modest changes in gaze dynamics, the absence of a link between eventual dynamics changes and dysmetria, and a similar pattern of eye-head coordination to that of control responses, altogether are compatible with the hypothesis that the hypermetria of ipsiversive gaze shifts results from an impaired specification of the metrics of the impending gaze shift. Regarding contraversive gaze shifts, the weak changes in head contribution do not seem to reflect a pathological coordination between eye and head but would rather result from the tonic deviations of gaze and head toward the inactivated side. Hence, our data suggest that the hypometria of contraversive gaze shifts also might result largely from an alteration of processes that specify the goal, rather than the on-going trajectory, of saccadic gaze shifts.
22

Klier, Eliana M., Hongying Wang, and J. Douglas Crawford. "Three-Dimensional Eye-Head Coordination Is Implemented Downstream From the Superior Colliculus." Journal of Neurophysiology 89, no. 5 (May 1, 2003): 2839–53. http://dx.doi.org/10.1152/jn.00763.2002.

Abstract:
How the brain transforms two-dimensional visual signals into multi-dimensional motor commands, and subsequently how it constrains the redundant degrees of freedom, are fundamental problems in sensorimotor control. During fixations between gaze shifts, the redundant torsional degree of freedom is determined by various neural constraints. For example, the eye- and head-in-space are constrained by Donders' law, whereas the eye-in-head obeys Listing's law. However, where and how the brain implements these laws is not yet known. In this study, we show that eye and head movements, elicited by unilateral microstimulations of the superior colliculus (SC) in head-free monkeys, obey the same Donders' strategies observed in normal behavior (i.e., Listing's law for final eye positions and the Fick strategy for the head). Moreover, these evoked movements showed a pattern of three-dimensional eye-head coordination, consistent with normal behavior, where the eye is driven purposely out of Listing's plane during the saccade portion of the gaze shift in opposition to a subsequent torsional vestibuloocular reflex slow phase, such that the final net torsion at the end of each head-free gaze shift is zero. The required amount of saccade-related torsion was highly variable, depending on the initial position of the eye and head prior to a gaze shift and the size of the gaze shift, pointing to a neural basis of torsional control. Because these variable, context-appropriate torsional saccades were correctly elicited by fixed SC commands during head-free stimulations, this shows that the SC only encodes the horizontal and vertical components of gaze, leaving the complexity of torsional organization to downstream control systems. Thus we conclude that Listing's and Donders' laws of the eyes and head, and their three-dimensional coordination mechanisms, must be implemented after the SC.
23

Populin, Luis C., and Abigail Z. Rajala. "Target modality determines eye-head coordination in nonhuman primates: implications for gaze control." Journal of Neurophysiology 106, no. 4 (October 2011): 2000–2011. http://dx.doi.org/10.1152/jn.00331.2011.

Abstract:
We have studied eye-head coordination in nonhuman primates with acoustic targets after finding that they are unable to make accurate saccadic eye movements to targets of this type with the head restrained. Three male macaque monkeys with experience in localizing sounds for rewards by pointing their gaze to the perceived location of sources served as subjects. Visual targets were used as controls. The experimental sessions were configured to minimize the chances that the subject would be able to predict the modality of the target as well as its location and time of presentation. The data show that eye and head movements are coordinated differently to generate gaze shifts to acoustic targets. Chiefly, the head invariably started to move before the eye and contributed more to the gaze shift. These differences were more striking for gaze shifts of <20–25° in amplitude, to which the head contributes very little or not at all when the target is visual. Thus acoustic and visual targets trigger gaze shifts with different eye-head coordination. This, coupled to the fact that anatomic evidence involves the superior colliculus as the link between auditory spatial processing and the motor system, suggests that separate signals are likely generated within this midbrain structure.
24

Chen, Xiao-Lin, and Wen-Jun Hou. "Gaze-Based Interaction Intention Recognition in Virtual Reality." Electronics 11, no. 10 (May 21, 2022): 1647. http://dx.doi.org/10.3390/electronics11101647.

Abstract:
With the increasing need for eye tracking in head-mounted virtual reality displays, the gaze-based modality has the potential to predict user intention and unlock intuitive new interaction schemes. In the present work, we explore whether gaze-based data and hand-eye coordination data can predict a user’s interaction intention with the digital world, which could be used to develop predictive interfaces. We validate it on the eye-tracking data collected from 10 participants in item selection and teleporting tasks in virtual reality. We demonstrate successful prediction of the onset of item selection and teleporting with a 0.943 F1-score using a Gradient Boosting Decision Tree, which is the best among the four classifiers compared, while the model size of the Support Vector Machine is the smallest. It is also proven that hand-eye-coordination-related features can improve interaction intention recognition in virtual reality environments.
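For orientation, a classifier of the kind named above can be set up in a few lines. This sketch uses scikit-learn with synthetic stand-in data; the study's actual feature set, preprocessing, and evaluation protocol are not reproduced here, and the feature names in the comment are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: rows = gaze samples, columns = hypothetical features
# (e.g. fixation duration, saccade amplitude, gaze-hand angular offset).
rng = np.random.default_rng(0)
X = rng.random((1000, 6))
y = rng.integers(0, 2, 1000)  # 1 = interaction intention onset, 0 = none

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))  # ~0.5 on random data
```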
25

Skantze, Gabriel. "Real-Time Coordination in Human-Robot Interaction Using Face and Voice." AI Magazine 37, no. 4 (January 17, 2017): 19–31. http://dx.doi.org/10.1609/aimag.v37i4.2686.

Abstract:
When humans interact and collaborate with each other, they coordinate their turn-taking behaviors using verbal and nonverbal signals, expressed in the face and voice. If robots of the future are supposed to engage in social interaction with humans, it is essential that they can generate and understand these behaviors. In this article, I give an overview of several studies that show how humans in interaction with a humanlike robot make use of the same coordination signals typically found in studies on human-human interaction, and that it is possible to automatically detect and combine these cues to facilitate real-time coordination. The studies also show that humans react naturally to such signals when used by a robot, without being given any special instructions. They follow the gaze of the robot to disambiguate referring expressions, they conform when the robot selects the next speaker using gaze, and they respond naturally to subtle cues, such as gaze aversion, breathing, facial gestures and hesitation sounds.
26

Perrott, David R. "Auditory Psychomotor Coordination." Proceedings of the Human Factors Society Annual Meeting 32, no. 2 (October 1988): 81–85. http://dx.doi.org/10.1177/154193128803200217.

Abstract:
A series of choice-reaction time experiments is described in which subjects were required to locate and identify the information contained on a small visual target. Across trials, the lateral position of the target was randomly varied across a 240° region (± 120° relative to the subject's initial line of gaze). The vertical position of the target was either fixed at 0° elevation or varied by ± 46°. Whether the target was in the forward or lateral field, a significant reduction in the visual search period was evident when an acoustic signal indicated the location of the visual target. Auditory spatial information was particularly effective in improving performance when the position of the target was varied in elevation or the target was located in the rear field. The current results support the notion that the auditory system can be used to direct eye-head movements toward a remote visual target.
27

Streeck, Jürgen. "Gesture as communication I: Its coordination with gaze and speech." Communication Monographs 60, no. 4 (December 1993): 275–99. http://dx.doi.org/10.1080/03637759309376314.

28

de Croon, G., E. O. Postma, and H. J. van den Herik. "A situated model for sensory–motor coordination in gaze control." Pattern Recognition Letters 27, no. 11 (August 2006): 1181–90. http://dx.doi.org/10.1016/j.patrec.2005.07.016.

29

McCluskey, Meaghan K., and Kathleen E. Cullen. "Eye, Head, and Body Coordination During Large Gaze Shifts in Rhesus Monkeys: Movement Kinematics and the Influence of Posture." Journal of Neurophysiology 97, no. 4 (April 2007): 2976–91. http://dx.doi.org/10.1152/jn.00822.2006.

Abstract:
Coordinated movements of the eye, head, and body are used to redirect the axis of gaze between objects of interest. However, previous studies of eye-head gaze shifts in head-unrestrained primates generally assumed the contribution of body movement to be negligible. Here we characterized eye-head-body coordination during horizontal gaze shifts made by trained rhesus monkeys to visual targets while they sat upright in a standard primate chair and assumed a more natural sitting posture in a custom-designed chair. In both postures, gaze shifts were characterized by the sequential onset of eye, head, and body movements, which could be described by predictable relationships. Body motion made a small but significant contribution to gaze shifts that were ≥40° in amplitude. Furthermore, as gaze shift amplitude increased (40–120°), body contribution and velocity increased systematically. In contrast, peak eye and head velocities plateaued at velocities of ∼250–300°/s, and the rotation of the eye-in-orbit and head-on-body remained well within the physical limits of ocular and neck motility during large gaze shifts, saturating at ∼35 and 60°, respectively. Gaze shifts initiated with the eye more contralateral in the orbit were accompanied by smaller body as well as head movement amplitudes, and velocities were greater when monkeys were seated in the more natural body posture. Taken together, our findings show that body movement makes a predictable contribution to gaze shifts that is systematically influenced by factors such as orbital position and posture. We conclude that body movements are part of a coordinated series of motor events that are used to voluntarily reorient gaze and that these movements can be significant even in a typical laboratory setting. Our results emphasize the need for caution in the interpretation of data from neurophysiological studies of the control of saccadic eye movements and/or eye-head gaze shifts because single neurons can code motor commands to move the body as well as the head and eyes.
30

Rotman, Gerben, Nikolaus F. Troje, Roland S. Johansson, and J. Randall Flanagan. "Eye Movements When Observing Predictable and Unpredictable Actions." Journal of Neurophysiology 96, no. 3 (September 2006): 1358–69. http://dx.doi.org/10.1152/jn.00227.2006.

Abstract:
We previously showed that, when observers watch an actor performing a predictable block-stacking task, the coordination between the observer's gaze and the actor's hand is similar to the coordination between the actor's gaze and hand. Both the observer and the actor direct gaze to forthcoming grasp and block landing sites and shift their gaze to the next grasp or landing site at around the time the hand contacts the block or the block contacts the landing site. Here we compare observers' gaze behavior in a block manipulation task when the observers did and when they did not know, in advance, which of two blocks the actor would pick up first. In both cases, observers managed to fixate the target ahead of the actor's hand and showed proactive gaze behavior. However, these target fixations occurred later, relative to the actor's movement, when observers did not know the target block in advance. In perceptual tests, in which observers watched animations of the actor reaching partway to the target and had to guess which block was the target, we found that the time at which observers were able to correctly do so was very similar to the time at which they would make saccades to the target block. Overall, our results indicate that observers use gaze in a fashion that is appropriate for hand movement planning and control. This in turn suggests that they implement representations of the manual actions required in the task and representations that direct task-specific eye movements.
31

Yamazaki, Akiko, Keiichi Yamazaki, Keiko Ikeda, Matthew Burdelski, Mihoko Fukushima, Tomoyuki Suzuki, Miyuki Kurihara, Yoshinori Kuno, and Yoshinori Kobayashi. "Interactions between a quiz robot and multiple participants." Interaction Studies 14, no. 3 (December 31, 2013): 366–89. http://dx.doi.org/10.1075/is.14.3.04yam.

Abstract:
This paper reports on a quiz robot experiment in which we explore similarities and differences in human participant speech, gaze, and bodily conduct in responding to a robot’s speech, gaze, and bodily conduct across two languages. Our experiment involved three-person groups of Japanese and English-speaking participants who stood facing the robot and a projection screen that displayed pictures related to the robot’s questions. The robot was programmed so that its speech was coordinated with its gaze, body position, and gestures in relation to transition relevance places (TRPs), key words, and deictic words and expressions (e.g. this, this picture) in both languages. Contrary to findings on human interaction, we found that the frequency of English speakers’ head nodding was higher than that of Japanese speakers in human-robot interaction (HRI). Our findings suggest that the coordination of the robot’s verbal and non-verbal actions surrounding TRPs, key words, and deictic words and expressions is important for facilitating HRI irrespective of participants’ native language. Keywords: coordination of verbal and non-verbal actions; robot gaze comparison between English and Japanese; human-robot interaction (HRI); transition relevance place (TRP); conversation analysis
32

Martin, Tod A., Scott A. Norris, Bradley E. Greger, and W. Thomas Thach. "Dynamic Coordination of Body Parts During Prism Adaptation." Journal of Neurophysiology 88, no. 4 (October 1, 2002): 1685–94. http://dx.doi.org/10.1152/jn.2002.88.4.1685.

Abstract:
We studied coordination across body parts in throwing during adaptation to prisms. Human subjects threw balls at a target before, during, and after wearing laterally shifting prism eyeglasses. Positions of head, shoulders, arm, and ball were video-recorded continuously. We computed body angles of eyes-in-head, head-on-trunk, trunk-on-arm, and arm-on-ball. In each subject, the gaze-throw adjustment during adaptation was distributed across all sets of coupled body parts. The distribution of coupling changed unpredictably from throw to throw within a single session. The angular variation among coupled body parts was typically significantly larger than angular variation of on-target hits. Thus coupled body parts changed interdependently to account for the high accuracy of ball-on-target. Principal components and Monte Carlo analyses showed variability in body angles across throws with a wide range of variability/stereotypy across subjects. The data support a model of a dynamic and generalized solution as evidenced by the distribution of the gaze-throw adjustment across body parts.
33

Belton, T., and R. A. McCrea. "Contribution of the Cerebellar Flocculus to Gaze Control during Active Head Movements." Journal of Neurophysiology 81, no. 6 (June 1, 1999): 3105–9. http://dx.doi.org/10.1152/jn.1999.81.6.3105.

Abstract:
Contribution of the cerebellar flocculus to gaze control during active head movements. The flocculus and ventral paraflocculus are adjacent regions of the cerebellar cortex that are essential for controlling smooth pursuit eye movements and for altering the performance of the vestibulo-ocular reflex (VOR). The question addressed in this study is whether these regions of the cerebellum are more globally involved in controlling gaze, regardless of whether eye or active head movements are used to pursue moving visual targets. Single-unit recordings were obtained from Purkinje (Pk) cells in the floccular region of squirrel monkeys that were trained to fixate and pursue small visual targets. Cell firing rate was recorded during smooth pursuit eye movements, cancellation of the VOR, combined eye-head pursuit, and spontaneous gaze shifts in the absence of targets. Pk cells were found to be much less sensitive to gaze velocity during combined eye–head pursuit than during ocular pursuit. They were not sensitive to gaze or head velocity during gaze saccades. Temporary inactivation of the floccular region by muscimol injection compromised ocular pursuit but had little effect on the ability of monkeys to pursue visual targets with head movements or to cancel the VOR during active head movements. Thus the signals produced by Pk cells in the floccular region are necessary for controlling smooth pursuit eye movements but not for coordinating gaze during active head movements. The results imply that individual functional modules in the cerebellar cortex are less involved in the global organization and coordination of movements than with parametric control of movements produced by a specific part of the body.
34

Mole, Callum D., Otto Lappi, Oscar Giles, Gustav Markkula, Franck Mars, and Richard M. Wilkie. "Getting Back Into the Loop: The Perceptual-Motor Determinants of Successful Transitions out of Automated Driving." Human Factors: The Journal of the Human Factors and Ergonomics Society 61, no. 7 (March 6, 2019): 1037–65. http://dx.doi.org/10.1177/0018720819829594.

Abstract:
Objective: To present a structured, narrative review highlighting research into human perceptual-motor coordination that can be applied to automated vehicle (AV)–human transitions. Background: Manual control of vehicles is made possible by the coordination of perceptual-motor behaviors (gaze and steering actions), where active feedback loops enable drivers to respond rapidly to ever-changing environments. AVs will change the nature of driving to periods of monitoring followed by the human driver taking over manual control. The impact of this change is currently poorly understood. Method: We outline an explanatory framework for understanding control transitions based on models of human steering control. This framework can be summarized as a perceptual-motor loop that requires (a) calibration and (b) gaze and steering coordination. A review of the current experimental literature on transitions is presented in the light of this framework. Results: The success of transitions are often measured using reaction times, however, the perceptual-motor mechanisms underpinning steering quality remain relatively unexplored. Conclusion: Modeling the coordination of gaze and steering and the calibration of perceptual-motor control will be crucial to ensure safe and successful transitions out of automated driving. Application: This conclusion poses a challenge for future research on AV-human transitions. Future studies need to provide an understanding of human behavior that will be sufficient to capture the essential characteristics of drivers reengaging control of their vehicle. The proposed framework can provide a guide for investigating specific components of human control of steering and potential routes to improving manual control recovery.
35

Abekawa, Naotoshi, Hiroaki Gomi, and Jörn Diedrichsen. "Gaze control during reaching is flexibly modulated to optimize task outcome." Journal of Neurophysiology 126, no. 3 (September 1, 2021): 816–26. http://dx.doi.org/10.1152/jn.00134.2021.

Abstract:
During visually guided reaching, our eyes usually fixate the target, and saccades elsewhere are delayed (“gaze anchoring”). Here we show that the degree of gaze anchoring is flexibly modulated by the reward contingencies of saccade latency and reach accuracy. Reach error became larger when saccades occurred earlier. These results suggest that early saccades are costly for reaching and that the brain modulates inhibitory online coordination from the hand system to the eye system depending on task requirements.
36

Deng, Shujie, Jian Chang, Julie A. Kirkby, and Jian J. Zhang. "Gaze–mouse coordinated movements and dependency with coordination demands in tracing." Behaviour & Information Technology 35, no. 8 (May 11, 2016): 665–79. http://dx.doi.org/10.1080/0144929x.2016.1181209.

37

Sidenmark, Ludwig, and Hans Gellersen. "Eye, Head and Torso Coordination During Gaze Shifts in Virtual Reality." ACM Transactions on Computer-Human Interaction 27, no. 1 (January 23, 2020): 1–40. http://dx.doi.org/10.1145/3361218.

38

Becker, W., and A. F. Fuchs. "Lid-eye coordination during vertical gaze changes in man and monkey." Journal of Neurophysiology 60, no. 4 (October 1, 1988): 1227–52. http://dx.doi.org/10.1152/jn.1988.60.4.1227.

Abstract:
1. To investigate the coordination between the upper lid and the eye during vertical gaze changes, the movements of the lid and the eye were measured by the electromagnetic search-coil technique in three humans and two monkeys. 2. In both man and monkey, there was a close correspondence between the metrics of the lid movement and those of the concomitant eye movement during vertical fixation, smooth pursuit, and saccades. 3. During steady fixation, the eye and lid assumed essentially equal average positions; however, in man the lid would often undergo small idiosyncratic movements of up to 5 degrees when the eye was completely stationary. 4. During sinusoidal smooth pursuit between 0.2 and 1.0 Hz, the gain and phase shift of eye and lid movements were remarkably similar. The smaller gain and larger phase lag of downward smooth pursuit eye movements were mirrored in a similarly reduced gain and increased phase lag of downward lid movements. 5. The time course of vertical lid movements associated with saccades was generally a faithful replica of the time course of the concomitant saccade; the similarity was especially impressive when the details of the velocity profiles were compared. Consequently, lid movements associated with vertical eye saccades are called lid saccades. 6. On average, lid saccades start some 5 ms later than the concomitant eye saccades but reach peak velocity at about the same time as the eye saccade. Concurrent lid and eye saccades in the downward direction have similar amplitudes and velocities. Lid saccades in the upward direction are often smaller and slower than the concomitant eye saccades. The relations of peak velocity versus amplitude and of duration versus amplitude are similar for lid and eye saccades. 7. To investigate the neural signal responsible for lid saccades, isometric tension and EMG activity were recorded from the lids of the two authors. 8. The isometric tensions during upward lid saccades exceeded the tensions required to hold the lid in its final position. (ABSTRACT TRUNCATED AT 400 WORDS)
39

Andrist, Sean, A. R. Ruis, and David Williamson Shaffer. "A network analytic approach to gaze coordination during a collaborative task." Computers in Human Behavior 89 (December 2018): 339–48. http://dx.doi.org/10.1016/j.chb.2018.07.017.

40

Warlop, Griet, Pieter Vansteenkiste, Matthieu Lenoir, Jérôme Van Causenbroeck, and Frederik J. A. Deconinck. "Gaze behaviour during walking in young adults with developmental coordination disorder." Human Movement Science 71 (June 2020): 102616. http://dx.doi.org/10.1016/j.humov.2020.102616.

41

Crawford, J. Douglas, Melike Z. Ceylan, Eliana M. Klier, and Daniel Guitton. "Three-Dimensional Eye-Head Coordination During Gaze Saccades in the Primate." Journal of Neurophysiology 81, no. 4 (April 1, 1999): 1760–82. http://dx.doi.org/10.1152/jn.1999.81.4.1760.

Abstract:
Three-dimensional eye-head coordination during gaze saccades in the primate. The purpose of this investigation was to describe the neural constraints on three-dimensional (3-D) orientations of the eye in space (Es), head in space (Hs), and eye in head (Eh) during visual fixations in the monkey and the control strategies used to implement these constraints during head-free gaze saccades. Dual scleral search coil signals were used to compute 3-D orientation quaternions, two-dimensional (2-D) direction vectors, and 3-D angular velocity vectors for both the eye and head in three monkeys during the following visual tasks: radial to/from center, repetitive horizontal, nonrepetitive oblique, random (wide 2-D range), and random with pin-hole goggles. Although 2-D gaze direction (of Es) was controlled more tightly than the contributing 2-D Hs and Eh components, the torsional standard deviation of Es was greater (mean 3.55°) than that of Hs (3.10°), which in turn was greater than that of Eh (1.87°) during random fixations. Thus the 3-D Es range appeared to be the byproduct of Hs and Eh constraints, resulting in a pseudoplanar Es range that was twisted (in orthogonal coordinates) like the zero-torsion range of Fick coordinates. The Hs fixation range was similarly Fick-like, whereas the Eh fixation range was quasiplanar. The latter Eh range was maintained through exquisite saccade/slow-phase coordination: during each head movement, multiple anticipatory saccades drove the eye torsionally out of the planar range such that subsequent slow phases drove the eye back toward the fixation range. The Fick-like Hs constraint was maintained by the following strategies: first, during purely vertical/horizontal movements, the head rotated about constantly oriented axes that closely resembled physical Fick gimbals, i.e., about head-fixed horizontal axes and space-fixed vertical axes, respectively (although in one animal the latter constraint was relaxed during repetitive horizontal movements, allowing for trajectory optimization). However, during large oblique movements, head orientation made transient but dramatic departures from the zero-torsion Fick surface, taking the shortest path between two torsionally eccentric fixation points on the surface. Moreover, in the pin-hole goggle task, the head-orientation range flattened significantly, suggesting a task-dependent default strategy similar to Listing’s law. These and previous observations suggest two quasi-independent brain stem circuits: an oculomotor 2-D to 3-D transformation that coordinates anticipatory saccades with slow phases to uphold Listing’s law, and a flexible “Fick operator” that selects head motor error; both nested within a dynamic gaze feedback loop.
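The "twisted" zero-torsion Fick surface mentioned in this abstract has a compact algebraic form, sketched below under assumed axis conventions (x along the line of sight, z up); the script is an illustration, not code from the paper. A Fick gimbal first rotates about a space-fixed vertical axis (horizontal angle) and then about the new head-fixed horizontal axis (vertical angle); composing the two quaternions and converting to a rotation vector shows that the torsional component equals minus the product of the vertical and horizontal components, i.e., a surface flat in Fick coordinates but twisted in orthogonal rotation-vector coordinates.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 + y1*w2 + z1*x2 - x1*z2,
        w1*z2 + z1*w2 + x1*y2 - y1*x2,
    ])

def fick_quat(theta, phi):
    """Zero-torsion Fick rotation: theta about space-fixed z (up),
    then phi about the rotated y axis (x = line of sight)."""
    qz = np.array([np.cos(theta/2), 0.0, 0.0, np.sin(theta/2)])
    qy = np.array([np.cos(phi/2), 0.0, np.sin(phi/2), 0.0])
    return quat_mul(qz, qy)  # rotating-axis sequence: append on the right

theta, phi = np.radians(30.0), np.radians(20.0)
q = fick_quat(theta, phi)
r = q[1:] / q[0]             # rotation vector, tan(half-angle) scaled
# Twist of the zero-torsion surface: r[0] == -r[1] * r[2]
print(r[0], -r[1] * r[2])    # the two values agree
```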
42

Hanes, Douglas A., and Gin McCollum. "Variables Contributing to the Coordination of Rapid Eye/Head Gaze Shifts." Biological Cybernetics 94, no. 4 (March 15, 2006): 300–324. http://dx.doi.org/10.1007/s00422-006-0049-9.

43

Wu, Qiyou, and Changyuan Wang. "Research on the Estimation of Gaze Location for Head-eye Coordination Movement." International Journal of Advanced Network, Monitoring and Controls 7, no. 1 (January 1, 2022): 116–31. http://dx.doi.org/10.2478/ijanmc-2022-0009.

Abstract:
Sight is the main channel through which humans obtain information from the outside world. Because of the structure of the human eye [1], the range of human sight is limited, so people must continually shift their line of sight when observing the surrounding environment and a target; this movement of the sight is based on the coordinated movement of the head and the eyes [2]. The key issue for gaze research is therefore how to correctly establish the relationship between head-eye movement and gaze movement. Taking a simulated flight environment as the research background, this paper collects a large number of head-eye images with the purpose-built “three-camera, eight-light-source” head-eye data acquisition platform and proposes a gaze estimation method based on the combination of appearance and features, which effectively captures the relationship of coordinated head-eye movement. A ResNet-18 deep residual network and a traditional BP neural network are then used to fuse head-pose and eye features recorded while subjects capture sight targets, enabling accurate estimation of the gaze point with an average accuracy of up to 89.9%.
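A minimal sketch of the kind of appearance-plus-feature fusion this abstract describes is given below, assuming PyTorch: ResNet-18 encodes eye appearance, and a small fully connected network (standing in for the "traditional BP neural network") fuses those features with the head pose to regress the gaze point. The layer sizes, the 3-DoF head-pose input, the 2-D output, and the class name GazeNet are hypothetical stand-ins, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class GazeNet(nn.Module):
    def __init__(self, head_pose_dim=3):
        super().__init__()
        backbone = resnet18(weights=None)   # eye-appearance encoder
        backbone.fc = nn.Identity()         # expose the 512-d features
        self.backbone = backbone
        self.fusion = nn.Sequential(        # BP-style fusion network
            nn.Linear(512 + head_pose_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2),              # (x, y) gaze point
        )

    def forward(self, eye_image, head_pose):
        feats = self.backbone(eye_image)
        return self.fusion(torch.cat([feats, head_pose], dim=1))

# Smoke test with random data.
model = GazeNet()
eye = torch.randn(4, 3, 224, 224)   # batch of eye-region crops
pose = torch.randn(4, 3)            # yaw, pitch, roll (assumed encoding)
print(model(eye, pose).shape)       # torch.Size([4, 2])
```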
44

Guitton, D., and M. Volle. "Gaze control in humans: eye-head coordination during orienting movements to targets within and beyond the oculomotor range." Journal of Neurophysiology 58, no. 3 (September 1, 1987): 427–59. http://dx.doi.org/10.1152/jn.1987.58.3.427.

Abstract:
Gaze, the direction of the visual axis in space, is the sum of the eye position relative to the head (E) plus head position relative to space (H). In the old explanation of how a rapid orienting gaze shift is controlled, which we call the oculocentric motor strategy, it is assumed that 1) a saccadic eye movement is programmed with an amplitude equal to the target's offset angle, 2) this eye movement is programmed without reference to whether a head movement is planned, 3) if the head turns simultaneously the saccade is reduced in size by an amount equal to the head's contribution, and 4) the saccade is attenuated by the vestibuloocular reflex (VOR) slow phase. Humans have an oculomotor range (OMR) of about +/- 55 degrees. The use of the oculocentric motor strategy to acquire targets lying beyond the OMR requires programming saccades that cannot be made physically. We studied, in normal human subjects, rapid horizontal gaze shifts to visible and remembered targets situated within and beyond the OMR at offsets ranging from 30 to 160 degrees. Heads were attached to an apparatus that permitted short unexpected perturbations of the head trajectory. The acceleration and deceleration phases of the head perturbation could be timed to occur at different points in the eye movement. Single-step rapid gaze shifts of all sizes up to at least 160 degrees (the limit studied) could be accomplished with the classic single-eye saccade and an accompanying saccadelike head movement. In gaze shifts less than approximately 45 degrees, when head motion was prevented totally by the brake, the eye attained the target. For larger target eccentricities the gaze shift was interrupted by the brake and the average eye saccade amplitude was approximately 45 degrees, well short of the OMR. Thus saccadic eye movement amplitude was neurally, not mechanically, limited. When the head's motion was not perturbed by the brake, the eye saccade amplitude was a function of head velocity: for a given target offset, the faster the head the smaller the saccade. For gaze shifts to targets beyond the OMR and when head velocity was low, the eye frequently attained the 45 degrees position limit and remained there, immobile, until gaze attained the target. (ABSTRACT TRUNCATED AT 400 WORDS)
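The decomposition at the heart of this abstract, G = E + H with an eye saccade neurally limited to roughly 45 degrees despite a mechanical OMR of about +/-55 degrees, can be illustrated with a short numeric sketch. All numbers are illustrative, and the fixed split is a simplification: in the paper the split is dynamic, with faster head movements shrinking the saccade via the VOR.

```python
# Gaze in space = eye-in-head + head-in-space: G = E + H.
EYE_SACCADE_LIMIT = 45.0  # approximate neural limit on saccade size (deg)

def split_gaze_shift(target_offset_deg):
    """Split a desired gaze shift into eye and head contributions."""
    eye = min(target_offset_deg, EYE_SACCADE_LIMIT)
    head = target_offset_deg - eye   # remainder carried by the head
    return eye, head

for target in (30.0, 80.0, 160.0):
    e, h = split_gaze_shift(target)
    print(f"G = {target:5.1f} deg -> E = {e:4.1f}, H = {h:5.1f}  (G = E + H)")
```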
45

Haji-Abolhassani, Iman, Daniel Guitton, and Henrietta L. Galiana. "Modeling eye-head gaze shifts in multiple contexts without motor planning." Journal of Neurophysiology 116, no. 4 (October 1, 2016): 1956–85. http://dx.doi.org/10.1152/jn.00605.2015.

Abstract:
During gaze shifts, the eyes and head collaborate to rapidly capture a target (saccade) and fixate it. Accordingly, models of gaze shift control should embed both saccadic and fixation modes and a mechanism for switching between them. We demonstrate a model in which the eye and head platforms are driven by a shared gaze error signal. To limit the number of free parameters, we implement a model reduction approach in which steady-state cerebellar effects at each of their projection sites are lumped with the parameter of that site. The model topology is consistent with anatomy and neurophysiology, and can replicate eye-head responses observed in multiple experimental contexts: 1) observed gaze characteristics across species and subjects can emerge from this structure with minor parametric changes; 2) gaze can move to a goal while in the fixation mode; 3) ocular compensation for head perturbations during saccades could rely on vestibular-only cells in the vestibular nuclei with postulated projections to burst neurons; 4) two nonlinearities suffice, i.e., the experimentally determined mapping of tectoreticular cells onto brain stem targets and the increased recruitment of the head for larger target eccentricities; 5) the effects of initial conditions on eye/head trajectories are due to neural circuit dynamics, not planning; and 6) “compensatory” ocular slow phases exist even after semicircular canal plugging, because of interconnections linking eye-head circuits. Our model structure also simulates classical vestibulo-ocular reflex and pursuit nystagmus, and provides novel neural circuit and behavioral predictions, notably that both eye-head coordination and segmental limb coordination are possible without trajectory planning.
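A toy simulation of the shared gaze-error topology described here is sketched below: a single gaze error signal drives both the eye and head plants, the head is recruited more strongly for larger target eccentricities (the paper's second nonlinearity), and a VOR-like term lets the eye counter head motion. The gains, the recruitment rule, and the integration scheme are my assumptions, not parameters of the authors' model.

```python
DT = 0.001                   # integration step (s)
K_EYE, K_HEAD = 20.0, 13.0   # error-to-velocity gains (1/s), assumed

def gaze_shift(target_deg, duration=0.5):
    eye, head = 0.0, 0.0     # eye-in-head, head-in-space (deg)
    # Head recruitment grows with target eccentricity (assumed rule).
    head_weight = min(1.0, abs(target_deg) / 60.0)
    for _ in range(int(duration / DT)):
        err = target_deg - (eye + head)       # shared gaze error
        head_vel = K_HEAD * head_weight * err
        eye_vel = K_EYE * err - head_vel      # VOR-like head compensation
        eye += eye_vel * DT
        head += head_vel * DT
    return eye, head

for target in (10.0, 40.0, 90.0):
    e, h = gaze_shift(target)
    print(f"target {target:5.1f} deg: eye {e:5.1f} + head {h:5.1f} = {e + h:5.1f}")
```

Because both effectors are slaved to the same error signal, gaze lands on target for every eccentricity while the eye/head split emerges from the loop dynamics rather than from a planned trajectory, which is the point the abstract emphasizes.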
46

Belton, Timothy, and Robert A. McCrea. "Role of the Cerebellar Flocculus Region in the Coordination of Eye and Head Movements During Gaze Pursuit." Journal of Neurophysiology 84, no. 3 (September 1, 2000): 1614–26. http://dx.doi.org/10.1152/jn.2000.84.3.1614.

Abstract:
The contribution of the flocculus region of the cerebellum to horizontal gaze pursuit was studied in squirrel monkeys. When the head was free to move, the monkeys pursued targets with a combination of smooth eye and head movements, with the majority of the gaze velocity produced by smooth tracking head movements. In the accompanying study we reported that the flocculus region was necessary for cancellation of the vestibuloocular reflex (VOR) evoked by passive whole body rotation. The question addressed in this study was whether the flocculus region of the cerebellum also plays a role in canceling the VOR produced by active head movements during gaze pursuit. The firing behavior of 121 Purkinje (Pk) cells that were sensitive to horizontal smooth pursuit eye movements was studied. The sample included 66 eye velocity Pk cells and 55 gaze velocity Pk cells. All of the cells remained sensitive to smooth pursuit eye movements during combined eye and head tracking. Eye velocity Pk cells were insensitive to smooth pursuit head movements. Gaze velocity Pk cells were nearly as sensitive to active smooth pursuit head movements as they were to passive whole body rotation, but they were less than half as sensitive (≈43%) to smooth pursuit head movements as they were to smooth pursuit eye movements. Considered as a whole, the Pk cells in the flocculus region of the cerebellar cortex were <20% as sensitive to smooth pursuit head movements as they were to smooth pursuit eye movements, which suggests that this region does not produce signals sufficient to cancel the VOR during smooth head tracking. The comparative effect of injections of muscimol into the flocculus region on smooth pursuit eye and head movements was studied in two monkeys. Muscimol inactivation of the flocculus region profoundly affected smooth pursuit eye movements but had little effect on smooth pursuit head movements or on smooth tracking of visual targets when the head was free to move. We conclude that the signals produced by flocculus region Pk cells are neither necessary nor sufficient to cancel the VOR during gaze pursuit.
47

Walton, Mark M. G., Bernard Bechara, and Neeraj J. Gandhi. "Effect of Reversible Inactivation of Superior Colliculus on Head Movements." Journal of Neurophysiology 99, no. 5 (May 2008): 2479–95. http://dx.doi.org/10.1152/jn.01112.2007.

Abstract:
Because of limitations in the oculomotor range, many gaze shifts must be accomplished using coordinated movements of the eyes and head. Stimulation and recording data have implicated the primate superior colliculus (SC) in the control of these gaze shifts. The precise role of this structure in head movement control, however, is not known. The present study uses reversible inactivation to gain insight into the role of this structure in the control of head movements, including those that accompany gaze shifts and those that occur in the absence of a change in gaze. Forty-five lidocaine injections were made in two monkeys that had been trained on a series of behavioral tasks that dissociate movements of the eyes and head. Reversible inactivation resulted in clear impairments in the animals’ ability to perform gaze shifts, manifested by increased reaction times, lower peak velocities, and increased durations. In contrast, comparable effects were not found for head movements (with or without gaze shifts) with the exception of a very small increase in reaction times of head movements associated with gaze shifts. Eye-head coordination was clearly affected by the injections with gaze onset occurring relatively later with respect to head onset. Following the injections, the head contributed slightly more to the gaze shift. These results suggest that head movements (with and without gaze shifts) can be controlled by pathways that do not involve SC.
48

Richard, Nancy B. "Interaction Between Mothers and Infants with Down Syndrome." Topics in Early Childhood Special Education 6, no. 3 (October 1986): 54–71. http://dx.doi.org/10.1177/027112148600600305.

Abstract:
Studies of mother-infant dyads indicate that individual differences of both partners contribute to the development of reciprocal interaction. When an infant is born with Down syndrome, infant responses are reported to be delayed. Infant characteristics that contribute to social interaction with caregivers differ between nonhandicapped infants and those with Down syndrome. In this review, studies of infant characteristics, including temperament, state control, gaze, gesture, and vocalization, are discussed. Although infants with Down syndrome, like nonhandicapped infants, develop social communication behaviors, vulnerabilities are found in these characteristics. Differences in the development of state control, gaze patterns, coordination of gesture, gaze, and vocalization, and frequency of vocalization have implications for parents and professionals in early intervention.
49

Oben, Bert, and Geert Brône. "What you see is what you do: on the relationship between gaze and gesture in multimodal alignment." Language and Cognition 7, no. 4 (November 2, 2015): 546–62. http://dx.doi.org/10.1017/langcog.2015.22.

Abstract:
Interactive language use inherently involves a process of coordination, which often leads to matching behaviour between interlocutors in different semiotic channels. We study this process of interactive alignment from a multimodal perspective: using data from head-mounted eye-trackers in a corpus of face-to-face conversations, we measure what effect gaze fixations by speakers (on their own gestures, condition 1) and fixations by interlocutors (on the gestures of those speakers, condition 2) have on subsequent gesture production by those interlocutors. The results show a significant effect of interlocutor gaze (condition 2), but not of speaker gaze (condition 1), on the amount of gestural alignment, with an interaction between the conditions.
50

Macdonald, Ross G., and Benjamin W. Tatler. "Gaze in a real-world social interaction: A dual eye-tracking study." Quarterly Journal of Experimental Psychology 71, no. 10 (January 1, 2018): 2162–73. http://dx.doi.org/10.1177/1747021817739221.

Abstract:
People communicate using verbal and non-verbal cues, including gaze cues. Gaze allocation can be influenced by social factors; however, most research on gaze cueing has not considered these factors. The presence of social roles was manipulated in a natural, everyday collaborative task while eye movements were measured. In pairs, participants worked together to make a cake. Half of the pairs were given roles (“Chef” or “Gatherer”) and the other half were not. Across all participants we found, contrary to the results of static-image experiments, that participants spent very little time looking at each other, challenging the generalisability of the conclusions from lab-based paradigms. However, participants were more likely than not to look at their partner when receiving an instruction, highlighting the typical coordination of gaze cues and verbal communication in natural interactions. The mean duration of instances in which the partners looked at each other (partner gaze) was longer in the roles condition, and these participants were quicker to align their gaze with their partners (shared gaze). In addition, we found some indication that when hearing spoken instructions, listeners in the roles condition looked at the speaker more than listeners in the no roles condition. We conclude that social context can affect our gaze behaviour during a social interaction.
