Journal articles on the topic "Guided gaze"

Follow this link to see other types of publications on the topic: Guided gaze.

Create a precise citation in APA, MLA, Chicago, Harvard, and other styles

Browse the top 50 journal articles for your research on the topic "Guided gaze".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Prime, S. L., and J. J. Marotta. "Gaze strategies during visually-guided and memory-guided grasping". Journal of Vision 11, no. 11 (September 23, 2011): 967. http://dx.doi.org/10.1167/11.11.967.

2

Prime, Steven L., and Jonathan J. Marotta. "Gaze strategies during visually-guided versus memory-guided grasping". Experimental Brain Research 225, no. 2 (December 13, 2012): 291–305. http://dx.doi.org/10.1007/s00221-012-3358-3.

3

Powers, Alice S., Michele A. Basso, and Craig Evinger. "Blinks slow memory-guided saccades". Journal of Neurophysiology 109, no. 3 (February 1, 2013): 734–41. http://dx.doi.org/10.1152/jn.00746.2012.

Abstract
Memory-guided saccades are slower than visually guided saccades. The usual explanation for this slowing is that the absence of a visual drive reduces the discharge of neurons in the superior colliculus. We tested a related hypothesis: that the slowing of memory-guided saccades was due also to the more frequent occurrence of gaze-evoked blinks with memory-guided saccades compared with visually guided saccades. We recorded gaze-evoked blinks in three monkeys while they performed visually guided and memory-guided saccades and compared the kinematics of the different saccade types with and without blinks. Gaze-evoked blinks were more common during memory-guided saccades than during visually guided saccades, and the well-established relationship between peak and average velocity for saccades was disrupted by blinking. The occurrence of gaze-evoked blinks was associated with a greater slowing of memory-guided saccades compared with visually guided saccades. Likewise, when blinks were absent, the peak velocity of visually guided saccades was only slightly higher than that of memory-guided saccades. Our results reveal interactions between circuits generating saccades and blink-evoked eye movements. The interaction leads to increased curvature of saccade trajectories and a corresponding decrease in saccade velocity. Consistent with this interpretation, the amount of saccade curvature and slowing increased with gaze-evoked blink amplitude. Thus, although the absence of vision decreases the velocity of memory-guided saccades relative to visually guided saccades somewhat, the cooccurrence of gaze-evoked blinks produces the majority of slowing for memory-guided saccades.
4

Vesterby, Tore, Jonas C. Voss, John Paulin Hansen, Arne John Glenstrup, Dan Witzner Hansen, and Mark Rudolph. "Gaze-guided viewing of interactive movies". Digital Creativity 16, no. 4 (January 2005): 193–204. http://dx.doi.org/10.1080/14626260500476523.

5

Baker, Justin T., Timothy M. Harper, and Lawrence H. Snyder. "Spatial Memory Following Shifts of Gaze. I. Saccades to Memorized World-Fixed and Gaze-Fixed Targets". Journal of Neurophysiology 89, no. 5 (May 1, 2003): 2564–76. http://dx.doi.org/10.1152/jn.00610.2002.

Abstract
During a shift of gaze, an object can move along with gaze or stay fixed in the world. To examine the effect of an object's reference frame on spatial working memory, we trained monkeys to memorize locations of visual stimuli as either fixed in the world or fixed to gaze. Each trial consisted of an initial reference frame instruction, followed by a peripheral visual flash, a memory-period gaze shift, and finally a memory-guided saccade to the location consistent with the instructed reference frame. The memory-period gaze shift was either rapid (a saccade) or slow (smooth pursuit or whole body rotation). This design allowed a comparison of memory-guided saccade performance under various conditions. Our data indicate that after a rotation or smooth-pursuit eye movement, saccades to memorized world-fixed targets are more variable than saccades to memorized gaze-fixed targets. In contrast, memory-guided saccades to world- and gaze-fixed targets are equally variable following a visually guided saccade. Across all conditions, accuracy, latency, and main sequence characteristics of memory-guided saccades are not influenced by the target's reference frame. Memory-guided saccades are, however, more accurate after fast compared with slow gaze shifts. These results are most consistent with an eye-centered representational system for storing the spatial locations of memorized objects but suggest that the visual system may engage different mechanisms to update the stored signal depending on how gaze is shifted.
6

Messmer, Nathan, Nathan Leggett, Melissa Prince, and Jason S. McCarley. "Gaze Linking in Visual Search: A Help or a Hindrance?" Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1376–79. http://dx.doi.org/10.1177/1541931213601828.

Abstract
Gaze linking allows team members in a collaborative visual task to scan separate computer monitors simultaneously while their eye movements are tracked and projected onto each other’s displays. The present study explored the benefits of gaze linking to performance in unguided and guided visual search tasks. Participants completed either an unguided or guided serial search task as both independent and gaze-linked searchers. Although it produced shorter mean response times than independent search, gaze linked search was highly inefficient, and gaze linking did not differentially affect performance in guided and unguided groups. Results suggest that gaze linking is likely to be of little value in improving applied visual search.
7

Fowler, Garth A., and Helen Sherk. "Gaze during visually-guided locomotion in cats". Behavioural Brain Research 139, no. 1-2 (February 2003): 83–96. http://dx.doi.org/10.1016/s0166-4328(02)00096-7.

8

Prime, Steven L., and Jonathan J. Marotta. "Erratum to: Gaze strategies during visually-guided versus memory-guided grasping". Experimental Brain Research 225, no. 2 (February 13, 2013): 307. http://dx.doi.org/10.1007/s00221-013-3432-5.

9

Stone, Lesego S., and Gyan P. Nyaupane. "The Tourist Gaze: Domestic versus International Tourists". Journal of Travel Research 58, no. 5 (June 27, 2018): 877–91. http://dx.doi.org/10.1177/0047287518781890.

Abstract
This article investigates domestic and international tourists’ “gaze” using tourism imagery. Domestic and international tourists’ preferences are critically examined using the concept of the “tourist gaze” and “local gaze.” Through qualitative, in-depth photo-elicitation interviews (PEIs) guided by 16 photographs covering various tourist attractions in Botswana, results indicate dissimilar tourist gazes between international and domestic tourists. Culture, livelihoods, and crowded spaces, with a variety of activities, influence domestic tourists’ gaze, whereas privacy, tranquility, and quietness influence the international tourists’ gaze. The tourist gaze thus can be seen as a culturally contingent concept that is not universal. Despite the differences, results indicate the continued promotion of an international tourist’s gaze. Results help explain low visitation by domestic tourists to protected areas in Botswana and Africa. In view of the study’s results, theoretical and policy implications are also discussed.
10

B. N., Pavan Kumar, Adithya Balasubramanyam, Ashok Kumar Patil, Chethana B., and Young Ho Chai. "GazeGuide: An Eye-Gaze-Guided Active Immersive UAV Camera". Applied Sciences 10, no. 5 (March 1, 2020): 1668. http://dx.doi.org/10.3390/app10051668.

Abstract
Over the years, gaze input modality has been an easy and demanding human–computer interaction (HCI) method for various applications. The research of gaze-based interactive applications has advanced considerably, as HCIs are no longer constrained to traditional input devices. In this paper, we propose a novel immersive eye-gaze-guided camera (called GazeGuide) that can seamlessly control the movements of a camera mounted on an unmanned aerial vehicle (UAV) from the eye-gaze of a remote user. The video stream captured by the camera is fed into a head-mounted display (HMD) with a binocular eye tracker. The user’s eye-gaze is the sole input modality to maneuver the camera. A user study was conducted considering the static and moving targets of interest in a three-dimensional (3D) space to evaluate the proposed framework. GazeGuide was compared with a state-of-the-art input modality remote controller. The qualitative and quantitative results showed that the proposed GazeGuide performed significantly better than the remote controller.
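The abstract does not spell out how gaze is mapped to camera motion. As a rough, hypothetical illustration of the general idea (the function name, gains, and dead zone below are assumptions, not the paper's implementation), gaze offsets from the video-frame center could be turned into proportional pan/tilt commands for the UAV-mounted camera:

```python
# Minimal sketch, not the GazeGuide implementation: convert a remote user's
# normalized gaze point on the HMD video frame into incremental pan/tilt
# commands. A real system would add filtering, rate limits, and safety checks.

def gaze_to_pan_tilt(gaze_x, gaze_y, gain_deg=10.0, dead_zone=0.1):
    """gaze_x, gaze_y in [-1, 1], measured from the video-frame center."""
    def step(offset):
        if abs(offset) < dead_zone:      # ignore small fixational jitter
            return 0.0
        return gain_deg * offset         # proportional command, degrees per update

    return step(gaze_x), step(-gaze_y)   # pan right for rightward gaze, tilt up for upward gaze

pan_cmd, tilt_cmd = gaze_to_pan_tilt(0.5, 0.0)   # user looks halfway to the right edge -> (5.0, 0.0)
```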
11

Arora, Harbandhan Kaur, Vishal Bharmauria, Xiaogang Yan, Saihong Sun, Hongying Wang, and John Douglas Crawford. "Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques". Journal of Neurophysiology 122, no. 5 (November 1, 2019): 1946–61. http://dx.doi.org/10.1152/jn.00072.2019.

Abstract
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination—eye-head-hand coordination—has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized “chair” that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the Reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target. NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task. Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
12

Dirik, Castillo, and Kocamaz. "Gaze-Guided Control of an Autonomous Mobile Robot Using Type-2 Fuzzy Logic". Applied System Innovation 2, no. 2 (April 24, 2019): 14. http://dx.doi.org/10.3390/asi2020014.

Abstract
Motion control of mobile robots in a cluttered environment with obstacles is an important problem, and traditional control algorithms are unsatisfactory for controlling a robot's motion in a complex environment in real time. Gaze-tracking technology brings an important perspective to this issue: guiding a vehicle from eye movements makes the task more natural for the operator. This paper presents an intelligent vision-based gaze-guided robot control (GGC) platform in which a gaze-tracking user-computer interface enables a user to control the motion of a mobile robot using eye-gaze coordinates as inputs to the system. An overhead camera, an eye-tracking device, a differential-drive mobile robot, and vision and interval type-2 fuzzy inference system (IT2FIS) tools are utilized. The methodology incorporates two basic behaviors: map generation and go-to-goal behavior. The go-to-goal behavior based on an IT2FIS handles uncertainty in the data more smoothly and steadily, yielding better performance. The algorithms are implemented in an indoor environment with obstacles present. Experiments and simulation results indicate that the GGC system can be applied successfully and that the IT2FIS can infer operator intention and modulate speed and direction accordingly.
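As a minimal sketch of the go-to-goal behavior described here, the gazed-at point can be treated as the goal of a differential-drive controller. A plain proportional controller stands in for the paper's interval type-2 fuzzy inference system, and all names and gains are hypothetical:

```python
import math

# Minimal sketch, not the paper's IT2FIS controller: the gaze point, projected
# into the map frame by an overhead camera, becomes the goal of a go-to-goal
# behavior for a differential-drive robot.

def go_to_goal(robot_x, robot_y, robot_theta, gaze_x, gaze_y, k_lin=0.5, k_ang=2.0):
    """Return (linear_velocity, angular_velocity) toward the gazed-at goal."""
    dx, dy = gaze_x - robot_x, gaze_y - robot_y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - robot_theta
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))  # wrap to [-pi, pi]
    return k_lin * distance, k_ang * heading_error

v, w = go_to_goal(0.0, 0.0, 0.0, 1.0, 1.0)   # goal ahead-left of heading -> advance while turning left
```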
13

Yan, Qiunv, Weiwei Zhang, Wenhao Hu, Guohua Cui, Dan Wei, and Jiejie Xu. "Gaze dynamics with spatiotemporal guided feature descriptor for prediction of driver’s maneuver behavior". Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 235, no. 12 (March 31, 2021): 3051–65. http://dx.doi.org/10.1177/09544070211007807.

Abstract
At different levels of driving automation, the driver's gaze remains indispensable for semantic perception of the surroundings. In this work, we model gaze dynamics and clarify their relationship with the driver's maneuver behaviors from the perspective of personalized driving style. First, this paper proposes an Occlusion-immune Face Detector (OFD) for facial landmark detection, which adaptively handles the facial occlusion introduced by the body and the glasses frame in real-world driving scenarios. Meanwhile, an Eye-head Coordination Model is introduced to bridge the error gap in gaze direction by determining a fused pattern of eye pose and head pose. Then, a vectorized spatiotemporal guidance feature (STGF) descriptor combining gaze accumulation and gaze transition frequency is proposed to characterize gaze dynamics within a time window. Finally, we predict the driver's maneuver behavior from the STGF descriptor while accounting for different driving styles, clarifying the relationship between gaze dynamics and the driving maneuver task. Natural driving data were sampled in a dedicated instrumented vehicle testbed with 15 participating drivers of three driving styles. Experimental results show that the prediction model achieves the best performance compared with other approaches, estimating the driver's behavior an average of 1 s ahead of the actual behavior with 83.6% accuracy when driving style is considered.
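A descriptor in the spirit of the STGF can be sketched as follows; the area-of-interest (AOI) set and the normalization are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

# Sketch of a spatiotemporal gaze descriptor: within one time window, count how
# long gaze dwells in each AOI (gaze accumulation) and how often it jumps
# between AOIs (gaze transition frequency), then flatten both into one vector.

AOIS = ["road_ahead", "left_mirror", "right_mirror", "rear_mirror", "speedometer"]

def stgf_descriptor(aoi_sequence):
    """aoi_sequence: per-frame AOI labels within one time window."""
    idx = {name: i for i, name in enumerate(AOIS)}
    accumulation = np.zeros(len(AOIS))
    transitions = np.zeros((len(AOIS), len(AOIS)))
    for curr in aoi_sequence:
        accumulation[idx[curr]] += 1                      # dwell count per AOI
    for prev, curr in zip(aoi_sequence, aoi_sequence[1:]):
        if prev != curr:
            transitions[idx[prev], idx[curr]] += 1        # transition counts between AOIs
    accumulation /= max(len(aoi_sequence), 1)             # normalize to dwell fractions
    return np.concatenate([accumulation, transitions.ravel()])

window = ["road_ahead"] * 20 + ["left_mirror"] * 5 + ["road_ahead"] * 10
features = stgf_descriptor(window)   # vectorized input to a maneuver-behavior classifier
```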
14

Abdelkarim, M., M. K. Abbas, Alaa Osama, Dalia Anwar, Mostafa Azzam, M. Abdelalim, H. Mostafa, Samah El-Tantawy, and Ibrahim Sobh. "GG-Net: Gaze Guided Network for Self-driving Cars". Electronic Imaging 2021, no. 17 (January 18, 2021): 171–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.17.avm-171.

Abstract
Imitation learning is used massively in autonomous driving for training networks to predict steering commands from frames using annotated data collected by an expert driver. Believing that the frames taken from a front-facing camera are completely mimicking the driver’s eyes raises the question of how eyes and the complex human vision system attention mechanisms perceive the scene. This paper proposes the idea of incorporating eye gaze information with the frames into an end-to-end deep neural network in the lane-following task. The proposed novel architecture, GG-Net, is composed of a spatial transformer network (STN), and a multitask network to predict steering angle as well as the gaze map for the input frame. The experimental results of this architecture show a great improvement in steering angle prediction accuracy of 36% over the baseline with inference time of 0.015 seconds per frame (66 fps) using NVIDIA K80 GPU enabling the proposed model to operate in real-time. We argue that incorporating gaze maps enhances the model generalization capability to the unseen environments. Additionally, a novel course-steering angle conversion algorithm with a complementing mathematical proof is proposed.
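A hedged sketch of the multitask training objective implied by the abstract is shown below; the loss terms and the weighting are assumptions for illustration, not the GG-Net code:

```python
import numpy as np

# Sketch only: a multitask network predicts both the steering angle and a gaze
# map per frame. One standard way to train it is a weighted sum of a steering
# regression loss and a gaze-map loss.

def multitask_loss(pred_angle, true_angle, pred_gaze_map, true_gaze_map, gaze_weight=0.5):
    steering_loss = (pred_angle - true_angle) ** 2             # scalar steering regression
    gaze_loss = np.mean((pred_gaze_map - true_gaze_map) ** 2)  # per-pixel gaze-map regression
    return steering_loss + gaze_weight * gaze_loss

loss = multitask_loss(0.12, 0.10, np.zeros((36, 64)), np.zeros((36, 64)))
```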
15

Kokubu, M., J. C. Dessing, and J. D. Crawford. "Hand-specificity in gaze-dependent memory-guided reach errors". Journal of Vision 12, no. 9 (August 10, 2012): 427. http://dx.doi.org/10.1167/12.9.427.

16

Dits, Joyce, Johan J. M. Pel, Angelique Remmers, and Johannes van der Steen. "Version–Vergence Interactions during Memory-Guided Binocular Gaze Shifts". Investigative Ophthalmology & Visual Science 54, no. 3 (March 5, 2013): 1656. http://dx.doi.org/10.1167/iovs.12-10680.

17

Chen, L. Longtang. "Head Movements Evoked by Electrical Stimulation in the Frontal Eye Field of the Monkey: Evidence for Independent Eye and Head Control". Journal of Neurophysiology 95, no. 6 (June 2006): 3528–42. http://dx.doi.org/10.1152/jn.01320.2005.

Abstract
When the head is free to move, electrical stimulation in the frontal eye field (FEF) evokes eye and head movements. However, it is unclear whether FEF stimulation-evoked head movements contribute to shifting the line of sight, like visually guided coordinated eye-head gaze shifts. Here we investigated this issue by systematically varying initial eye (IEP) and head (IHP) positions at stimulation onset. Despite the large variability of IEP and IHP and the extent of stimulation-evoked gaze amplitudes, gaze displacement was entirely accounted for by eye (re head) displacement. Overall, the majority (3/4) of stimulation-evoked gaze shifts consisted of eye-alone movements, in which head movements were below the detection threshold. When head movements did occur, they often started late (re gaze shift onset) and coincided with rapid eye deceleration, resulting in little change in the ensuing gaze amplitudes. These head movements often reached their peak velocities over 100 ms after the end of gaze shifts, indicating that the head velocity profile was temporally dissociated from the gaze drive. Interestingly, head movements were sometimes evoked by FEF stimulation in the absence of gaze shifts, particularly when IEP was deviated contralaterally (re the stimulated side) at stimulation onset. Furthermore, head movements evoked by FEF stimulation resembled a subset of head movements occurring during visually guided gaze shifts. These unique head movements minimized the eye deviation from the center of the orbit and contributed little to gaze shifts. The results suggest that head motor control may be independent from eye control in the FEF.
18

Flanagan, J. Randall, Gerben Rotman, Andreas F. Reichelt, and Roland S. Johansson. "The role of observers' gaze behaviour when watching object manipulation tasks: predicting and evaluating the consequences of action". Philosophical Transactions of the Royal Society B: Biological Sciences 368, no. 1628 (October 19, 2013): 20130063. http://dx.doi.org/10.1098/rstb.2013.0063.

Abstract
When watching an actor manipulate objects, observers, like the actor, naturally direct their gaze to each object as the hand approaches and typically maintain gaze on the object until the hand departs. Here, we probed the function of observers' eye movements, focusing on two possibilities: (i) that observers' gaze behaviour arises from processes involved in the prediction of the target object of the actor's reaching movement and (ii) that this gaze behaviour supports the evaluation of mechanical events that arise from interactions between the actor's hand and objects. Observers watched an actor reach for and lift one of two presented objects. The observers' task was either to predict the target object or judge its weight. Proactive gaze behaviour, similar to that seen in self-guided action–observation, was seen in the weight judgement task, which requires evaluating mechanical events associated with lifting, but not in the target prediction task. We submit that an important function of gaze behaviour in self-guided action observation is the evaluation of mechanical events associated with interactions between the hand and object. By comparing predicted and actual mechanical events, observers, like actors, can gain knowledge about the world, including information about objects they may subsequently act upon.
19

Freedman, E. G., T. R. Stanford, and D. L. Sparks. "Combined eye-head gaze shifts produced by electrical stimulation of the superior colliculus in rhesus monkeys". Journal of Neurophysiology 76, no. 2 (August 1, 1996): 927–52. http://dx.doi.org/10.1152/jn.1996.76.2.927.

Abstract
1. We electrically stimulated the intermediate and deep layers of the superior colliculus (SC) in two rhesus macaques free to move their heads both vertically and horizontally (head unrestrained). Stimulation of the primate SC can elicit high-velocity, combined, eye-head gaze shifts that are similar to visually guided gaze shifts of comparable amplitude and direction. The amplitude of gaze shifts produced by collicular stimulation depends on the site of stimulation and on the parameters of stimulation (frequency, current, and duration of the stimulation train). 2. The maximal amplitude gaze shifts, produced by electrical stimulation at 56 sites in the SC of two rhesus monkeys, ranged in amplitude from approximately 7 to approximately 80 deg. Because the head was unrestrained, stimulation-induced gaze shifts often included movements of the head. Head movements produced at the 56 stimulation sites ranged in amplitude from 0 to approximately 70 deg. 3. The relationships between peak velocity and amplitude and between duration and amplitude of stimulation-induced head movements and gaze shifts were comparable with the relationships observed during visually guided gaze shifts. The relative contributions of the eyes and head to visually guided and stimulation-induced gaze shifts were also similar. 4. As was true for visually guided gaze shifts, the head contribution to stimulation-induced gaze shifts depended on the position of the eyes relative to the head at the onset of stimulation. When the eyes were deviated in the direction of the ensuing gaze shift, the head contribution increased and the latency to head movement onset was decreased. 5. We systematically altered the duration of stimulation trains (10-400 ms) while stimulation frequency and current remained constant. Increases in stimulation duration systematically increased the amplitude of the evoked gaze shift until a site specific maximal amplitude was reached. Further increases in stimulation duration did not increase gaze amplitude. There was a high correlation between the end of the stimulation train and the end of the evoked gaze shift for movements smaller than the site-specific maximal amplitude. 6. Unlike the effects of stimulation duration on gaze amplitude, the amplitude and duration of evoked head movements did not saturate for the range of durations tested (10-400 ms), but continued to increase linearly with increases in stimulation duration. 7. The frequency of stimulation was systematically varied (range: 63-1,000 Hz) while other stimulation parameters remained constant. The velocity of evoked gaze shifts was related to the frequency of stimulation; higher stimulation frequencies resulted in higher peak velocities. The maximal, site-specific amplitude was independent of stimulation frequency. 8. When stimulating a single collicular site using identical stimulation parameters, the amplitude and direction of stimulation-induced gaze shifts, initiated from different initial positions, were relatively constant. In contrast, the amplitude and direction of the eye component of these fixed vector gaze shifts depended upon the initial position of the eyes in the orbits; the endpoints of the eye movements converged on an orbital region, or "goal," that depended on the site of collicular stimulation. 9. 
When identical stimulation parameters were used and when the eyes were centered initially in the orbits, the gaze shifts produced by caudal collicular stimulation when the head was restrained were typically smaller than those evoked from the same site when the head was unrestrained. This attenuation occurred because stimulation drove the eyes to approximately the same orbital position when the head was restrained or unrestrained. Thus movements produced when the head was restrained were reduced in amplitude by approximately the amount that the head would have contributed if free to move. 10. When the head was restrained, only the eye component of the intended gaze shift
20

Israël, I., C. André-Deshays, O. Charade, A. Berthoz, K. Popov, and M. Lipshits. "Gaze Control in Microgravity: 2. Sequences of Saccades toward Memorized Visual Targets". Journal of Vestibular Research 3, no. 3 (September 1, 1993): 345–60. http://dx.doi.org/10.3233/ves-1993-3314.

Abstract
The reproduction, in complete darkness, of sequences of 5 horizontal saccades towards previously presented visual targets has been investigated in human subjects on the ground (control subjects) and one cosmonaut in microgravity. The incidence of corrective saccades during the execution of the memory-guided saccades in darkness has been examined. It was quite large for the control subjects (more than half of all saccades) and increased during the flight, while the incidence of corrective visually guided saccades decreased. Direction errors occurred in about a third of all sequences on the ground, and this parameter also increased in microgravity. Memory-guided sequences were mostly hypermetric. Whereas the absolute error continuously increased with the target rank, this was not the case for the amplitude ratio, which presented a peak at the third rank, that is, at the middle of the sequence. The accuracy of the reproduction of the sequences depended on the sequence pattern as much as on the subject. Some learning was observed in repeated reproduction of the same pattern. Although the average error did not change in microgravity, the linear regression coefficient between the visually guided and memory-guided saccades decreased.
21

Seara, J. F., and G. Schmidt. "Intelligent gaze control for vision-guided humanoid walking: methodological aspects". Robotics and Autonomous Systems 48, no. 4 (October 2004): 231–48. http://dx.doi.org/10.1016/j.robot.2004.07.003.

22

Zhang, Ruohan. "Attention Guided Imitation Learning and Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9906–7. http://dx.doi.org/10.1609/aaai.v33i01.33019906.

Abstract
We propose a framework that uses learned human visual attention model to guide the learning process of an imitation learning or reinforcement learning agent. We have collected high-quality human action and eye-tracking data while playing Atari games in a carefully controlled experimental setting. We have shown that incorporating a learned human gaze model into deep imitation learning yields promising results.
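One common way to fold a learned gaze model into imitation learning, sketched here as an assumption rather than the author's architecture, is to modulate the input frame with the predicted gaze map before it reaches the behavior-cloning policy:

```python
import numpy as np

# Illustrative sketch: the predicted human gaze map re-weights the game frame
# so that gazed-at regions are emphasized before the frame is fed to the policy.

def gaze_modulated_input(frame, gaze_map, floor=0.2):
    """frame: HxWxC image; gaze_map: HxW saliency values in [0, 1]."""
    weights = floor + (1.0 - floor) * gaze_map      # never zero out pixels entirely
    return frame * weights[..., None]               # emphasize gazed-at regions

frame = np.random.rand(84, 84, 3)
gaze_map = np.random.rand(84, 84)
policy_input = gaze_modulated_input(frame, gaze_map)   # fed to the imitation-learning policy
```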
23

Meerhoff, L. A., J. Bruneau, A. Vu, A. H. Olivier, and J. Pettré. "Guided by gaze: Prioritization strategy when navigating through a virtual crowd can be assessed through gaze activity". Acta Psychologica 190 (October 2018): 248–57. http://dx.doi.org/10.1016/j.actpsy.2018.07.009.

24

Brägger, L., L. Baumgartner, K. Koebel, J. Scheidegger, and A. Çöltekin. "INTERACTION AND VISUALIZATION DESIGN CONSIDERATIONS FOR GAZE-GUIDED COMMUNICATION IN COLLABORATIVE EXTENDED REALITY". ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-4-2022 (May 18, 2022): 205–12. http://dx.doi.org/10.5194/isprs-annals-v-4-2022-205-2022.

Abstract
There is evidence in the literature that collaborative work while using digital tools could benefit from visualizing the real-time eye movements of a selected participant, or possibly several participants. In this study, we examine alternative gaze interaction and visualization design prototypes in a digital collaboration scenario, in which the assumed collaboration environment is a co-located mixed reality environment. Specifically, we implemented a virtual pointer as a baseline, and representations of gaze as a line, a cursor, and an ‘automated line’ where the line and cursor are automatically alternated based on occlusion detection. These prototypes are then evaluated in a series of usability studies with additional exploratory observations for a spatial communication scenario. In this scenario, participants either describe routes to someone else or learn them from someone else for navigational planning. In this paper we describe the alternative interaction design prototypes, as well as various visualization designs for the gaze itself (continuous line and dashed line) and the point of regard (donut, dashed donut, sphere, rectangle) to guide collaboration, and report our findings from several usability studies (n=6). We also interviewed our participants, which allows us to make some qualitative observations on the potential function and usefulness of these visualization and interaction prototypes. Overall, the outcomes suggest that gaze visualization solutions in general are promising approaches to assist communication in collaborative XR, although, not surprisingly, how they are designed is important.
25

Hayhoe, Mary M., and Jonathan Samir Matthis. "Control of gaze in natural environments: effects of rewards and costs, uncertainty and memory in target selection". Interface Focus 8, no. 4 (June 15, 2018): 20180009. http://dx.doi.org/10.1098/rsfs.2018.0009.

Abstract
The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.
26

Duinkharjav, Budmonde, Kenneth Chen, Abhishek Tyagi, Jiayi He, Yuhao Zhu, and Qi Sun. "Color-Perception-Guided Display Power Reduction for Virtual Reality". ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–16. http://dx.doi.org/10.1145/3550454.3555473.

Abstract
Battery life is an increasingly urgent challenge for today's untethered VR and AR devices. However, the power efficiency of head-mounted displays is naturally at odds with growing computational requirements driven by better resolution, refresh rate, and dynamic ranges, all of which reduce the sustained usage time of untethered AR/VR devices. For instance, the Oculus Quest 2, under a fully-charged battery, can sustain only 2 to 3 hours of operation time. Prior display power reduction techniques mostly target smartphone displays. Directly applying smartphone display power reduction techniques, however, degrades the visual perception in AR/VR with noticeable artifacts. For instance, the "power-saving mode" on smartphones uniformly lowers the pixel luminance across the display and, as a result, presents an overall darkened visual perception to users if directly applied to VR content. Our key insight is that VR display power reduction must be cognizant of the gaze-contingent nature of high field-of-view VR displays. To that end, we present a gaze-contingent system that, without degrading luminance, minimizes the display power consumption while preserving high visual fidelity when users actively view immersive video sequences. This is enabled by constructing 1) a gaze-contingent color discrimination model through psychophysical studies, and 2) a display power model (with respect to pixel color) through real-device measurements. Critically, due to the careful design decisions made in constructing the two models, our algorithm is cast as a constrained optimization problem with a closed-form solution, which can be implemented as a real-time, image-space shader. We evaluate our system using a series of psychophysical studies and large-scale analyses on natural images. Experiment results show that our system reduces the display power by as much as 24% (14% on average) with little to no perceptual fidelity degradation.
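The fitted perceptual and power models cannot be reconstructed from the abstract; the sketch below, with made-up constants, only illustrates the structure of a per-pixel, gaze-contingent power-saving adjustment of the kind the paper describes:

```python
import numpy as np

# Structural sketch only: the paper pairs a gaze-contingent color-discrimination
# model with a per-color display power model and solves a constrained
# optimization per pixel. Here the power model is a simple weighted sum of RGB,
# and the perceptual budget grows with eccentricity from the gaze point;
# neither corresponds to the paper's fitted models.

POWER_WEIGHTS = np.array([0.2, 0.7, 0.1])   # assumed per-channel power cost (OLED-like)

def power_saving_shift(rgb, eccentricity_deg, budget_per_deg=0.002):
    """Darken the costliest channels within a gaze-dependent perceptual budget."""
    budget = budget_per_deg * eccentricity_deg                       # larger changes allowed in the periphery
    shift = np.minimum(rgb, budget * POWER_WEIGHTS / POWER_WEIGHTS.max())
    return rgb - shift                                               # lower power, small perceived change

out = power_saving_shift(np.array([0.5, 0.8, 0.3]), eccentricity_deg=30.0)
```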
27

Petit, Laurent, and Michael S. Beauchamp. "Neural Basis of Visually Guided Head Movements Studied With fMRI". Journal of Neurophysiology 89, no. 5 (May 1, 2003): 2516–27. http://dx.doi.org/10.1152/jn.00988.2002.

Abstract
We used event-related fMRI to measure brain activity while subjects performed saccadic eye, head, and gaze movements to visually presented targets. Two distinct patterns of response were observed. One set of areas was equally active during eye, head, and gaze movements and consisted of the superior and inferior subdivisions of the frontal eye fields, the supplementary eye field, the intraparietal sulcus, the precuneus, area MT in the lateral occipital sulcus and subcortically in basal ganglia, thalamus, and the superior colliculus. These areas have been previously observed in functional imaging studies of human eye movements, suggesting that a common set of brain areas subserves both oculomotor and head movement control in humans, consistent with data from single-unit recording and microstimulation studies in nonhuman primates that have described overlapping eye- and head-movement representations in oculomotor control areas. A second set of areas was active during head and gaze movements but not during eye movements. This set of areas included the posterior part of the planum temporale and the cortex at the temporoparietal junction, known as the parieto-insular vestibular cortex (PIVC). Activity in PIVC has been observed during imaging studies of invasive vestibular stimulation, and we confirm its role in processing the vestibular cues accompanying natural head movements. Our findings demonstrate that fMRI can be used to study the neural basis of head movements and show that areas that control eye movements also control head movements. In addition, we provide the first evidence for brain activity associated with vestibular input produced by natural head movements as opposed to invasive caloric or galvanic vestibular stimulation.
28

Williamson, Shawn S., Ari Z. Zivotofsky, and Michele A. Basso. "Modulation of Gaze-Evoked Blinks Depends Primarily on Extraretinal Factors". Journal of Neurophysiology 93, no. 1 (January 2005): 627–32. http://dx.doi.org/10.1152/jn.00820.2004.

Abstract
Gaze-evoked blinks are contractions of the orbicularis oculi (OO)—the lid closing muscle—occurring during rapid movements of the head and eyes and result from a common drive to the gaze and blink motor systems. However, blinks occurring during shifts of gaze are often suppressed when the gaze shift is made to an important visual stimulus, suggesting that the visual system can modulate the occurrence of these blinks. In head-stabilized, human subjects, we tested the hypothesis that the presence of a visual stimulus was sufficient, but not necessary, to modulate OO EMG (OOemg) activity during saccadic eye movements. Rapid, reorienting movements of the eyes (saccades) were made to visual targets that remained illuminated (visually guided trials) or were briefly flashed (memory-guided trials) at different amplitudes along the horizontal meridian. We measured OOemg activity and found that the magnitude and probability of OOemg activity occurrence was reduced when a saccade was made to the memory of the spatial location as well as to the actual visual stimulus. The reduced OOemg activity occurred only when the location of the target was previously cued. OOemg activity occurred reliably with spontaneous saccades that were made to locations with no explicit visual stimulus, generally, back to the fixation location. Thus the modulation of gaze-evoked OOemg activity does not depend on the presence of visual information per se, but rather, results from an extraretinal signal.
29

Abekawa, Naotoshi, Hiroaki Gomi, and Jörn Diedrichsen. "Gaze control during reaching is flexibly modulated to optimize task outcome". Journal of Neurophysiology 126, no. 3 (September 1, 2021): 816–26. http://dx.doi.org/10.1152/jn.00134.2021.

Abstract
During visually guided reaching, our eyes usually fixate the target and saccades elsewhere are delayed (“gaze anchoring”). We here show that the degree of gaze anchoring is flexibly modulated by the reward contingencies of saccade latency and reach accuracy. Reach error became larger when saccades occurred earlier. These results suggest that early saccades are costly for reaching and the brain modulates inhibitory online coordination from the hand to the eye system depending on task requirements.
30

Cheng, Yihua, Shiyao Huang, Fei Wang, Chen Qian, and Feng Lu. "A Coarse-to-Fine Adaptive Network for Appearance-Based Gaze Estimation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10623–30. http://dx.doi.org/10.1609/aaai.v34i07.6636.

Abstract
Human gaze is essential for various appealing applications. Aiming at more accurate gaze estimation, a series of recent works propose to utilize face and eye images simultaneously. Nevertheless, face and eye images only serve as independent or parallel feature sources in those works, the intrinsic correlation between their features is overlooked. In this paper we make the following contributions: 1) We propose a coarse-to-fine strategy which estimates a basic gaze direction from face image and refines it with corresponding residual predicted from eye images. 2) Guided by the proposed strategy, we design a framework which introduces a bi-gram model to bridge gaze residual and basic gaze direction, and an attention component to adaptively acquire suitable fine-grained feature. 3) Integrating the above innovations, we construct a coarse-to-fine adaptive network named CA-Net and achieve state-of-the-art performances on MPIIGaze and EyeDiap.
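The coarse-to-fine decomposition can be sketched as follows; the two predictors below are placeholders standing in for CA-Net's CNN branches, not the network itself:

```python
import numpy as np

# Minimal sketch of the coarse-to-fine idea: a basic gaze direction is estimated
# from the face image and then refined by a residual estimated from the eyes.

def estimate_gaze(face_image, left_eye, right_eye, face_net, eye_net):
    basic_gaze = face_net(face_image)                     # coarse (pitch, yaw) from the face
    residual = eye_net(left_eye, right_eye, basic_gaze)   # fine correction from the eye images
    return basic_gaze + residual

# Dummy stand-ins so the sketch runs end to end.
face_net = lambda img: np.array([0.05, -0.10])
eye_net = lambda le, re, g: np.array([-0.01, 0.02])
gaze = estimate_gaze(None, None, None, face_net, eye_net)   # -> array([ 0.04, -0.08])
```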
31

Herter, Troy M., and Daniel Guitton. "Human Head-Free Gaze Saccades to Targets Flashed Before Gaze-Pursuit Are Spatially Accurate". Journal of Neurophysiology 80, no. 5 (November 1, 1998): 2785–89. http://dx.doi.org/10.1152/jn.1998.80.5.2785.

Abstract
Herter, Troy M. and Daniel Guitton. Human head-free gaze saccades to targets flashed before gaze-pursuit are spatially accurate. J. Neurophysiol. 80: 2785–2789, 1998. Previous studies have shown that accurate saccades can be generated, in the dark, that compensate for movements of the visual axis that result from movements of either the eyes alone or the head alone that intervene between target presentation and saccade onset. We have carried out experiments with human subjects to test whether gaze saccades (gaze = eye-in-space = eye-in-head + head-in-space) can be generated that compensate for smooth pursuit movements of gaze that intervene between target onset and gaze-saccade onset. In both head-unrestrained (head-free) and -restrained (head-fixed) conditions, subjects were asked to make gaze shifts, in the dark, to the remembered location of a briefly flashed target. On most trials, during the memory period, the subjects carried out intervening head-free gaze pursuit or head-fixed ocular pursuit along the horizontal meridian. On the remaining (control) trials, subjects did not carry out intervening pursuit movements during the memory period; this was the classical memory-guided saccade task. We found that the subjects accurately compensated for intervening movements of the visual axis in both the head-free and head-fixed conditions. We conclude that the human gaze-motor system is able to monitor on-line changes in gaze position and add them to initial retinal error, to program spatially accurate gaze saccades.
32

Ekici Cilkin, Remziye, and Beykan Cizel. "Tourist gazes through photographs". Journal of Vacation Marketing 28, no. 2 (October 22, 2021): 188–210. http://dx.doi.org/10.1177/13567667211038955.

Abstract
This article investigates the tourism experiences reflected on the photographs according to the tourist gaze theory. Tourists’ experiences are critically examined using the concept of the “romantic gaze” and “collective gaze.” Through qualitative, in-depth photo elicitation interviews (PEIs) guided by their own 185 photographs covering various tourist attractions in Kaleici (Antalya), results indicate similar and dissimilar tourism experiences between romantic and collective gaze. The experiences of tourists are categorized as tangible cultural heritage, intangible cultural heritage, natural heritage, atmosphere, contrast, living species, authenticity, emotion, sensory perception, interest, similarity, and touristic activities. While the feeling of curiosity, difference and interest, a sense of self-awareness, authenticity, and nostalgia are prominent in romantic gaze; tourists with a collective gaze reflect their group, friends, family, similarities and hedonistic feelings (entertainment, consumption, rest and interaction) to the photographs. The tourist gaze, which changes according to the society, social group, and historical period, is built on these differences. There is simply no universal experience that is always available to all tourists. In view of the results of this research, which aims to develop the theory about questioning the tourism experience, which is one of the most important issues for today's tourism marketers in practice through the tourist gaze, theoretical and policy implications are also discussed.
33

Powell, Nathaniel V., Xavier Marshall, Gabriel J. Diaz, and Brett R. Fajen. "Coordination of gaze and action during high-speed steering and obstacle avoidance". PLOS ONE 19, no. 3 (March 8, 2024): e0289855. http://dx.doi.org/10.1371/journal.pone.0289855.

Abstract
When humans navigate through complex environments, they coordinate gaze and steering to sample the visual information needed to guide movement. Gaze and steering behavior have been extensively studied in the context of automobile driving along a winding road, leading to accounts of movement along well-defined paths over flat, obstacle-free surfaces. However, humans are also capable of visually guiding self-motion in environments that are cluttered with obstacles and lack an explicit path. An extreme example of such behavior occurs during first-person view drone racing, in which pilots maneuver at high speeds through a dense forest. In this study, we explored the gaze and steering behavior of skilled drone pilots. Subjects guided a simulated quadcopter along a racecourse embedded within a custom-designed forest-like virtual environment. The environment was viewed through a head-mounted display equipped with an eye tracker to record gaze behavior. In two experiments, subjects performed the task in multiple conditions that varied in terms of the presence of obstacles (trees), waypoints (hoops to fly through), and a path to follow. Subjects often looked in the general direction of things that they wanted to steer toward, but gaze fell on nearby objects and surfaces more often than on the actual path or hoops. Nevertheless, subjects were able to perform the task successfully, steering at high speeds while remaining on the path, passing through hoops, and avoiding collisions. In conditions that contained hoops, subjects adapted how they approached the most immediate hoop in anticipation of the position of the subsequent hoop. Taken together, these findings challenge existing models of steering that assume that steering is tightly coupled to where actors look. We consider the study’s broader implications as well as limitations, including the focus on a small sample of highly skilled subjects and inherent noise in measurement of gaze direction.
34

Dominguez-Zamora, Javier, Shaila Gunn, and Daniel Marigold. "Does uncertainty about the terrain explain gaze behavior during visually guided walking?" Journal of Vision 17, no. 10 (August 31, 2017): 709. http://dx.doi.org/10.1167/17.10.709.

35

Han, Yangtianyue. "The Male Gaze and Its Impact on Women's Public Living Space". Communications in Humanities Research 5, no. 1 (September 14, 2023): 145–50. http://dx.doi.org/10.54254/2753-7064/5/20230148.

Abstract
Under the gaze of men, the motive of objectifying the opposite sex to please one's own sex inevitably causes the derogation of women. Through the shaping of female images by media and literature, women are sexualized, objectified and alienated, and integrated into a society of male values as outsiders. When the general trend of society excludes women and citizens lack a social identity for women, women are marginalized by society. This gaze is known as the male gaze, defined as interpreting women from a male perspective and objectifying them to please the same sex. When the criteria for evaluating female bodies depend on a male-guided aesthetic, it creates the illusion that men hold the power to judge female strangers.
36

Yang, Yaokun, Yihan Yin, and Feng Lu. "Gaze Target Detection by Merging Human Attention and Activity Cues". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 6585–93. http://dx.doi.org/10.1609/aaai.v38i7.28480.

Abstract
Despite achieving impressive performance, current methods for detecting gaze targets, which depend on visual saliency and spatial scene geometry, continue to face challenges when it comes to detecting gaze targets within intricate image backgrounds. One of the primary reasons for this lies in the oversight of the intricate connection between human attention and activity cues. In this study, we introduce an innovative approach that amalgamates the visual saliency detection with the body-part & object interaction both guided by the soft gaze attention. This fusion enables precise and dependable detection of gaze targets amidst intricate image backgrounds. Our approach attains state-of-the-art performance on both the Gazefollow benchmark and the GazeVideoAttn benchmark. In comparison to recent methods that rely on intricate 3D reconstruction of a single input image, our approach, which solely leverages 2D image information, still exhibits a substantial lead across all evaluation metrics, positioning it closer to human-level performance. These outcomes underscore the potent effectiveness of our proposed method in the gaze target detection task.
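A minimal sketch of the fusion idea is shown below; the weighting scheme is an assumption for illustration, not the paper's network, which learns this fusion end to end:

```python
import numpy as np

# Illustrative sketch: a visual-saliency map and a body-part & object
# interaction map are both modulated by a soft gaze-attention map and merged
# into a gaze-target heatmap, whose peak gives the predicted target location.

def fuse_gaze_target(saliency, interaction, gaze_attention, alpha=0.5):
    """All inputs are HxW maps in [0, 1]; returns a fused heatmap."""
    fused = gaze_attention * (alpha * saliency + (1.0 - alpha) * interaction)
    return fused / (fused.max() + 1e-8)               # normalize for peak picking

heatmap = fuse_gaze_target(np.random.rand(64, 64), np.random.rand(64, 64), np.random.rand(64, 64))
target_yx = np.unravel_index(heatmap.argmax(), heatmap.shape)   # predicted gaze-target pixel
```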
37

Yoshida, Hanako, Aakash Patel, and Joseph Burling. "Gaze as a Window to the Process of Novel Adjective Mapping". Languages 4, no. 2 (June 3, 2019): 33. http://dx.doi.org/10.3390/languages4020033.

Abstract
This study evaluated two explanations for how learning of novel adjectives is facilitated when all the objects are from the same category (e.g., exemplar and testing objects are all CUPS) and the object category is known to the children. One explanation (the category knowledge account) focuses on early knowledge of syntax–meaning correspondence, and another (the attentional account) focuses on the role of repeated perceptual properties. The first account presumes implicit understanding that all the objects belong to the same category, and the second account presumes only that redundant perceptual experiences minimize distraction from irrelevant features and thus guide children's attention directly to the correct item. The present study tests the two accounts by documenting moment-to-moment attention allocation (e.g., looking at the experimenter's face, exemplar object, target object) during a novel adjective learning task with 50 3-year-olds. The results suggest that children's attention was guided directly to the correct item during the adjective mapping and that such direct attention allocation to the correct item predicted children's adjective mapping performance. Results are discussed in relation to their implications for children's active looking as the determinant of the process for mapping new words to their meanings.
38

Pritchett, Lisa M., Michael J. Carnevale, and Laurence R. Harris. "Body and gaze centered coding of touch locations during a dynamic task". Seeing and Perceiving 25 (2012): 195. http://dx.doi.org/10.1163/187847612x648242.

Abstract
We have previously reported that head position affects the perceived location of touch differently depending on the dynamics of the task the subject is involved in. When touch was delivered and responses were made with the head rotated, touch location shifted in the opposite direction to the head position, consistent with body-centered coding. When touch was delivered with the head rotated but the response was made with the head centered, touch shifted in the same direction as the head, consistent with gaze-centered coding. Here we tested whether moving the head between touch and response would modulate the effects of head position on touch location. Each trial consisted of three periods: in the first, arrows and LEDs guided the subject to a randomly chosen head orientation (90° left, right, or center) and a vibration stimulus was delivered. Next, they were either guided to turn their head or to remain in the same location. In the final period they were again guided to turn or to remain in the same location before reporting the perceived location of the touch on a visual scale using a mouse and computer screen. Reported touch location was shifted in the opposite direction of head orientation during touch presentation regardless of the orientation during response or whether a movement was made before the response. The size of the effect was much reduced compared to our previous results. These results are consistent with touch location being coded in both a gaze-centered and a body-centered reference frame during dynamic conditions.
39

Reingold, Eyal M., Lester C. Loschky, George W. McConkie, and David M. Stampe. "Gaze-Contingent Multiresolutional Displays: An Integrative Review". Human Factors: The Journal of the Human Factors and Ergonomics Society 45, no. 2 (June 2003): 307–28. http://dx.doi.org/10.1518/hfes.45.2.307.27235.

Abstract
Gaze-contingent multiresolutional displays (GCMRDs) center high-resolution information on the user's gaze position, matching the user's area of interest (AOI). Image resolution and details outside the AOI are reduced, lowering the requirements for processing resources and transmission bandwidth in demanding display and imaging applications. This review provides a general framework within which GCMRD research can be integrated, evaluated, and guided. GCMRDs (or “moving windows”) are analyzed in terms of (a) the nature of their images (i.e., “multiresolution,” “variable resolution,” “space variant,” or “level of detail”), and (b) the movement of the AOI (i.e., “gaze contingent,” “foveated,” or “eye slaved”). We also synthesize the known human factors research on GCMRDs and point out important questions for future research and development. Actual or potential applications of this research include flight, medical, and driving simulators; virtual reality; remote piloting and teleoperation; infrared and indirect vision; image transmission and retrieval; telemedicine; video teleconferencing; and artificial vision systems.
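As an illustrative sketch (not a model from the review), a gaze-contingent multiresolutional display can assign a coarser level of detail to pixels farther from the gaze point; the eccentricity-to-level mapping and constants below are assumptions:

```python
import numpy as np

# Sketch of the GCMRD idea: full resolution inside the area of interest around
# the gaze point, progressively coarser levels of detail with eccentricity.

def level_of_detail(px, py, gaze_x, gaze_y, pixels_per_degree=30.0,
                    fovea_deg=2.0, degrees_per_level=5.0, max_level=4):
    eccentricity = np.hypot(px - gaze_x, py - gaze_y) / pixels_per_degree
    level = (eccentricity - fovea_deg) / degrees_per_level
    return int(np.clip(np.ceil(level), 0, max_level))   # 0 = full resolution at the fovea

lod = level_of_detail(px=960, py=540, gaze_x=400, gaze_y=500)   # peripheral pixel -> coarser level
```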
40

Quinet, Julie, and Laurent Goffart. "Saccade Dysmetria in Head-Unrestrained Gaze Shifts After Muscimol Inactivation of the Caudal Fastigial Nucleus in the Monkey". Journal of Neurophysiology 93, no. 4 (April 2005): 2343–49. http://dx.doi.org/10.1152/jn.00705.2004.

Abstract
Lesions in the caudal fastigial nucleus (cFN) severely impair the accuracy of visually guided saccades in the head-restrained monkey. Is the saccade dysmetria a central perturbation in issuing commands for orienting gaze (eye in space) or is it a more peripheral impairment in generating oculomotor commands? This question was investigated in two head-unrestrained monkeys by analyzing the effect of inactivating one cFN on horizontal gaze shifts generated from a straight ahead fixation light-emitting diode (LED) toward a 40° eccentric target LED. After muscimol injections, when viewing the fixation LED, the starting position of the head was changed (ipsilesional and upward deviations). Ipsilesional gaze shifts were associated with a 24% increase in the eye saccade amplitude and a 58% reduction in the amplitude of the head contribution. Contralesional gaze shifts were associated with a decrease in the amplitude of both eye and head components (40 and 37% reduction, respectively). No correlation between the changes in the eye amplitude and in head contribution was observed. The amplitude of the complete head movement was decreased for ipsilesional movements (57% reduction) and unaffected for contralesional movements. For both ipsilesional and contralesional gaze shifts, the changes in eye saccade amplitude were strongly correlated with the changes in gaze amplitude and largely accounted for the gaze dysmetria. These results indicate a major role of cFN in the generation of appropriate saccadic oculomotor commands during head-unrestrained gaze shifts.
41

Flanagan, J. Randall, Yasuo Terao, and Roland S. Johansson. "Gaze Behavior When Reaching to Remembered Targets". Journal of Neurophysiology 100, no. 3 (September 2008): 1533–43. http://dx.doi.org/10.1152/jn.90518.2008.

Abstract
People naturally direct their gaze to visible hand movement goals. Doing so improves reach accuracy through use of signals related to gaze position and visual feedback of the hand. Here, we studied where people naturally look when acting on remembered target locations. Four targets were presented on a screen, in peripheral vision, while participants fixed a central cross (encoding phase). Four seconds later, participants used a pen to mark the remembered locations while free to look wherever they wished (recall phase). Visual references, including the screen and the cross, were present throughout. During recall, participants neither looked at the marked locations nor prevented eye movements. Instead, gaze behavior was erratic and was comprised of gaze shifts loosely coupled in time and space with hand movements. To examine whether eye and hand movements during encoding affected gaze behavior during recall, in additional encoding conditions, participants marked the visible targets with either free gaze or with central cross fixation or just looked at the targets. All encoding conditions yielded similar erratic gaze behavior during recall. Furthermore, encoding mode did not influence recall performance, suggesting that participants, during recall, did not exploit sensorimotor memories related to hand and gaze movements during encoding. Finally, we recorded a similar loose coupling between hand and eye movements during an object manipulation task performed in darkness after participants had viewed the task environment. We conclude that acting on remembered versus visible targets can engage fundamentally different control strategies, with gaze largely decoupled from movement goals during memory-guided actions.
APA, Harvard, Vancouver, ISO, and other styles
42

Wohlgemuth, Melville, Angeles Salles, and Cynthia Moss. "Sonar-guided attention in natural tasks". Molecular Psychology: Brain, Behavior, and Society 1 (June 26, 2023): 4. http://dx.doi.org/10.12688/molpsychol.17488.2.

Full text
Abstract
Little is known about neural dynamics that accompany rapid shifts in spatial attention in freely behaving animals, primarily because reliable, fine scale indicators of attention are lacking in standard model organisms engaged in natural tasks. The echolocating bat can serve to bridge this gap, as it exhibits robust dynamic behavioral indicators of spatial attention while it explores its environment. In particular, the bat actively shifts the aim of its sonar beam to inspect objects in different directions, akin to eye movements and foveation in humans and other visually dominant animals. Further, the bat adjusts the temporal features of sonar calls to attend to objects at different distances, yielding a direct metric of acoustic gaze along the range axis. Thus, an echolocating bat’s call features not only convey the information it uses to probe its surroundings, but also reveal its auditory attention to objects in 3D space. These explicit metrics of spatial attention provide a powerful and robust system for analyzing changes in attention at a behavioral level, as well as the underlying neural mechanisms.
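The "acoustic gaze along the range axis" described above can be read out from call timing: target range follows from echo delay (range = speed of sound × delay / 2), and attention to nearer objects is reflected in shorter calls and shorter inter-pulse intervals. A toy Python illustration with assumed numbers, not the authors' measurements:

SPEED_OF_SOUND = 343.0  # m/s

def range_from_echo_delay(delay_s: float) -> float:
    """Target range implied by the delay between call emission and echo."""
    return SPEED_OF_SOUND * delay_s / 2.0

def attended_range_from_ipi(ipi_s: float, duty_margin: float = 0.8) -> float:
    """Rough upper bound on the range a bat can unambiguously attend to,
    assuming it waits for the echo before the next call (hypothetical margin)."""
    return range_from_echo_delay(ipi_s * duty_margin)

for ipi_ms in (100.0, 50.0, 20.0, 6.0):   # approach sequence, buzz-like at the end
    print(f"IPI {ipi_ms:5.1f} ms -> attended range <= "
          f"{attended_range_from_ipi(ipi_ms / 1000.0):5.2f} m")
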
APA, Harvard, Vancouver, ISO, and other styles
43

Wohlgemuth, Melville, Angeles Salles, and Cynthia Moss. "Sonar-guided attention in natural tasks". Molecular Psychology: Brain, Behavior, and Society 1 (August 7, 2023): 4. http://dx.doi.org/10.12688/molpsychol.17488.3.

Full text
Abstract
Little is known about neural dynamics that accompany rapid shifts in spatial attention in freely behaving animals, primarily because reliable, fine scale indicators of attention are lacking in standard model organisms engaged in natural tasks. The echolocating bat can serve to bridge this gap, as it exhibits robust dynamic behavioral indicators of spatial attention while it explores its environment. In particular, the bat actively shifts the aim of its sonar beam to inspect objects in different directions, akin to eye movements and foveation in humans and other visually dominant animals. Further, the bat adjusts the temporal features of sonar calls to attend to objects at different distances, yielding a direct metric of acoustic gaze along the range axis. Thus, an echolocating bat’s call features not only convey the information it uses to probe its surroundings, but also reveal its auditory attention to objects in 3D space. These explicit metrics of spatial attention provide a powerful and robust system for analyzing changes in attention at a behavioral level, as well as the underlying neural mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
44

Fu, Xianping, Yuxiao Yan, Yang Yan, Jinjia Peng, and Huibing Wang. "Purifying real images with an attention-guided style transfer network for gaze estimation". Engineering Applications of Artificial Intelligence 91 (May 2020): 103609. http://dx.doi.org/10.1016/j.engappai.2020.103609.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Wu, Chenglin, Huanqiang Hu, Kean Lin, Qing Wang, Tianjian Liu, and Guannan Chen. "Attention-guided and fine-grained feature extraction from face images for gaze estimation". Engineering Applications of Artificial Intelligence 126 (November 2023): 106994. http://dx.doi.org/10.1016/j.engappai.2023.106994.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Lee, Seung Won, Hwan Kim, Taeha Yi, and Kyung Hoon Hyun. "BIGaze: An eye-gaze action-guided Bayesian information gain framework for information exploration". Advanced Engineering Informatics 58 (October 2023): 102159. http://dx.doi.org/10.1016/j.aei.2023.102159.

Full text
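The "Bayesian information gain" named in this title admits a compact generic illustration: score a candidate action (e.g., probing an item and observing the gaze response) by the expected reduction in uncertainty about the user's target. A sketch of that standard computation, not the authors' BIGaze implementation; the likelihood model below is hypothetical:

import numpy as np

def expected_information_gain(prior, likelihoods):
    """Expected entropy reduction (bits) about a hidden target.

    prior: shape (n_targets,) belief over targets.
    likelihoods: shape (n_outcomes, n_targets), p(outcome | target) for one
    candidate action.
    """
    prior = np.asarray(prior, dtype=float)
    like = np.asarray(likelihoods, dtype=float)
    h_prior = -np.sum(prior * np.log2(prior + 1e-12))
    p_outcome = like @ prior                          # marginal over outcomes
    post = like * prior / p_outcome[:, None]          # Bayes' rule per outcome
    h_post = -np.sum(post * np.log2(post + 1e-12), axis=1)
    return h_prior - np.sum(p_outcome * h_post)

prior = np.array([0.25, 0.25, 0.25, 0.25])
# Hypothetical gaze-response model: outcome "looked" is likely only for target 0.
likelihoods = np.array([[0.9, 0.2, 0.2, 0.2],    # p(looked | target)
                        [0.1, 0.8, 0.8, 0.8]])   # p(did not look | target)
print("expected information gain (bits):",
      expected_information_gain(prior, likelihoods))
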
APA, Harvard, Vancouver, ISO, and other styles
47

Schall, Jeffrey D. "The neural selection and control of saccades by the frontal eye field". Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 357, no. 1424 (August 29, 2002): 1073–82. http://dx.doi.org/10.1098/rstb.2002.1098.

Full text
Abstract
Recent research has provided new insights into the neural processes that select the target for and control the production of a shift of gaze. Being a key node in the network that subserves visual processing and saccade production, the frontal eye field (FEF) has been an effective area in which to monitor these processes. Certain neurons in the FEF signal the location of conspicuous or meaningful stimuli that may be the targets for saccades. Other neurons control whether and when the gaze shifts. The existence of distinct neural processes for visual selection and saccade production is necessary to explain the flexibility of visually guided behaviour.
APA, Harvard, Vancouver, ISO, and other styles
48

Hayet, Jean-Bernard, Claudia Esteves, Gustavo Arechavaleta, Olivier Stasse, and Eiichi Yoshida. "Humanoid Locomotion Planning for Visually Guided Tasks". International Journal of Humanoid Robotics 09, no. 02 (June 2012): 1250009. http://dx.doi.org/10.1142/s0219843612500090.

Full text
Abstract
In this work, we propose a landmark-based navigation approach that integrates (1) high-level motion planning capabilities that take into account the landmarks' position and visibility and (2) a stack of feasible visual servoing tasks based on footprints to follow. The path planner computes a collision-free path that considers sensory, geometric, and kinematic constraints that are specific to humanoid robots. Based on recent results in movement neuroscience suggesting that most humans exhibit nonholonomic constraints when walking in open spaces, the humanoid's steering behavior is modeled as that of a differential-drive wheeled robot (DDR). The obtained paths are made of geometric primitives that are the shortest in distance in free spaces. The footprints around the path and the positions of the landmarks to which the gaze must be directed are used within a stack-of-tasks (SoT) framework to compute the whole-body motion of the humanoid. We provide experiments that verify the effectiveness of the proposed strategy on the HRP-2 platform.
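The differential-drive (unicycle) model used here to capture nonholonomic steering can be summarized in a few lines. A minimal simulation sketch with made-up velocity commands, not the planner's output:

import math

def step_ddr(x, y, theta, v, omega, dt):
    """One Euler step of differential-drive (unicycle) kinematics:
    x' = v cos(theta), y' = v sin(theta), theta' = omega."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Follow a simple geometric primitive: a straight segment, then an arc.
x, y, theta = 0.0, 0.0, 0.0
for _ in range(100):                      # 1 s straight ahead at 0.5 m/s
    x, y, theta = step_ddr(x, y, theta, v=0.5, omega=0.0, dt=0.01)
for _ in range(157):                      # quarter turn along a 0.5 m radius arc
    x, y, theta = step_ddr(x, y, theta, v=0.5, omega=1.0, dt=0.01)
print(f"pose: x={x:.2f} m, y={y:.2f} m, theta={math.degrees(theta):.1f} deg")
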
APA, Harvard, Vancouver, ISO, and other styles
49

Norris, Scott A., Emily N. Hathaway, Jordan A. Taylor, and W. Thomas Thach. "Cerebellar inactivation impairs memory of learned prism gaze-reach calibrations". Journal of Neurophysiology 105, no. 5 (May 2011): 2248–59. http://dx.doi.org/10.1152/jn.01009.2010.

Full text
Abstract
Three monkeys performed a visually guided reach-touch task with and without laterally displacing prisms. The prisms offset the normally aligned gaze/reach and subsequent touch. Naive monkeys showed adaptation, such that on repeated prism trials the gaze-reach angle widened and touches hit nearer the target. On the first subsequent no-prism trial the monkeys exhibited an aftereffect, such that the widened gaze-reach angle persisted and touches missed the target in the direction opposite that of initial prism-induced error. After 20–30 days of training, monkeys showed long-term learning and storage of the prism gaze-reach calibration: they switched between prism and no-prism and touched the target on the first trials without adaptation or aftereffect. Injections of lidocaine into posterolateral cerebellar cortex or muscimol or lidocaine into dentate nucleus temporarily inactivated these structures. Immediately after injections into cortex or dentate, reaches were displaced in the direction of prism-displaced gaze, but no-prism reaches were relatively unimpaired. There was little or no adaptation on the day of injection. On days after injection, there was no adaptation and both prism and no-prism reaches were horizontally, and often vertically, displaced. A single permanent lesion (kainic acid) in the lateral dentate nucleus of one monkey immediately impaired only the learned prism gaze-reach calibration and in subsequent days disrupted both learning and performance. This effect persisted for the 18 days of observation, with little or no adaptation.
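Trial-by-trial prism adaptation of the kind described here is often summarized with a simple error-driven update of the gaze-reach calibration. A minimal sketch of such a single-state model with illustrative parameters, not values fitted to these data:

# Single-state error-driven model of prism adaptation:
# calibration[n+1] = retention * calibration[n] + learning_rate * error[n],
# where error is the horizontal miss on each trial and the prism adds a
# constant lateral displacement to the visual target.
PRISM_SHIFT = 10.0      # deg, lateral displacement introduced by the prism (assumed)
RETENTION = 0.99        # fraction of the calibration carried over per trial
LEARNING_RATE = 0.2     # fraction of the error corrected on the next trial

calibration = 0.0       # learned gaze-reach offset, deg
for trial in range(30):
    error = PRISM_SHIFT - calibration       # miss relative to the target
    calibration = RETENTION * calibration + LEARNING_RATE * error
print(f"calibration after 30 prism trials: {calibration:.1f} deg")

# First no-prism trial: the persisting calibration produces the aftereffect,
# a miss in the direction opposite the initial prism-induced error.
print(f"aftereffect on first no-prism trial: {-calibration:.1f} deg")
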
APA, Harvard, Vancouver, ISO, and other styles
50

Kühnlenz, Kolja, and Martin Buss. "Multi-Focal Vision and Gaze Control Improve Navigation Performance". International Journal of Advanced Robotic Systems 9, no. 1 (January 1, 2012): 25. http://dx.doi.org/10.5772/50920.

Full text
Abstract
Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. An implementation of the novel concept is done considering a humanoid robot navigation scenario where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.
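A core ingredient of multi-focal localization is that cameras with different fields of view deliver measurements of very different accuracy, which can be fused by weighting each by its inverse variance. A generic sketch of that fusion step with assumed noise figures, not the paper's calibration or estimator:

import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of scalar landmark bearing estimates."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Hypothetical bearing (deg) to the same landmark from a wide-angle camera
# (coarse) and a telephoto camera (accurate but with a narrow field of view).
wide_bearing, wide_var = 12.4, 4.0      # deg, deg^2
tele_bearing, tele_var = 11.1, 0.25

bearing, var = fuse([wide_bearing, tele_bearing], [wide_var, tele_var])
print(f"fused bearing: {bearing:.2f} deg (variance {var:.2f} deg^2)")
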
APA, Harvard, Vancouver, ISO, and other styles