
Journal articles on the topic 'Visual attention in time'


Consult the top 50 journal articles for your research on the topic 'Visual attention in time.'


1

Zhou, Yan-Bang, Qiang Li, and Hong-Zhi Liu. "Visual attention and time preference reversals." Judgment and Decision Making 16, no. 4 (July 2021): 1010–38. http://dx.doi.org/10.1017/s1930297500008068.

Abstract:
Time preference reversal refers to systematic inconsistencies between preferences and bids for intertemporal options. In two eye-tracking studies (N1 = 60, N2 = 110), we examined the underlying mechanisms of time preference reversal. We replicated the reversal effect in which individuals facing a pair of intertemporal options choose the smaller-sooner option but assign a higher value to the larger-later one. Results revealed that the mean fixation duration and the proportion of gaze time on the outcome attribute varied across the choice and bid tasks. In addition, time preference reversals correlated with individual differences in maximizing tendencies. Findings support the contingent weighting hypothesis and the strategy compatibility hypothesis and allow for improved theoretical understanding of the potential mechanisms and processes involved in time preference reversals.
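For orientation, the two gaze measures named here (mean fixation duration and the proportion of gaze time on the outcome attribute) reduce to simple aggregations over a fixation table. The toy Python sketch below illustrates them with invented data and hypothetical column names; it is not the authors' analysis code.

```python
# Toy sketch of the two gaze measures named in the abstract: mean fixation
# duration and proportion of gaze time on the outcome attribute.
# The fixation data and column names are invented placeholders.
import pandas as pd

fix = pd.DataFrame({
    "duration_ms": [220, 180, 305, 150, 260],
    "aoi": ["outcome", "delay", "outcome", "delay", "outcome"],  # area of interest per fixation
})

mean_fixation = fix["duration_ms"].mean()
outcome_share = fix.loc[fix["aoi"] == "outcome", "duration_ms"].sum() / fix["duration_ms"].sum()
print(f"mean fixation ≈ {mean_fixation:.0f} ms, gaze share on outcome ≈ {outcome_share:.2f}")
```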
2

Busse, L. "The Time Course of Shifting Visual Attention." Journal of Neuroscience 26, no. 15 (April 12, 2006): 3885–86. http://dx.doi.org/10.1523/jneurosci.0459-06.2006.

3

Egeth, Howard E., and Steven Yantis. "Visual Attention: Control, Representation, and Time Course." Annual Review of Psychology 48, no. 1 (February 1997): 269–97. http://dx.doi.org/10.1146/annurev.psych.48.1.269.

4

Ruhnau, E., and V. Haase. "Space-time structure of selective visual attention." International Journal of Psychophysiology 14, no. 2 (February 1993): 146. http://dx.doi.org/10.1016/0167-8760(93)90239-l.

5

Ward, Robert, John Duncan, and Kimron Shapiro. "The Slow Time-Course of Visual Attention." Cognitive Psychology 30, no. 1 (February 1996): 79–109. http://dx.doi.org/10.1006/cogp.1996.0003.

6

Chun, Marvin M. "Visual working memory as visual attention sustained internally over time." Neuropsychologia 49, no. 6 (May 2011): 1407–9. http://dx.doi.org/10.1016/j.neuropsychologia.2011.01.029.

7

Srivastava, Priyanka, and Narayanan Srinivasan. "Time course of visual attention with emotional faces." Attention, Perception, & Psychophysics 72, no. 2 (February 2010): 369–77. http://dx.doi.org/10.3758/app.72.2.369.

8

Chastain, Garvin. "Time-course of location changes of visual attention." Bulletin of the Psychonomic Society 29, no. 5 (May 1991): 425–28. http://dx.doi.org/10.3758/bf03333960.

9

Couffe, C., R. Mizzi, and G. A. Michael. "Salience-based progression of visual attention: Time course." Psychologie Française 61, no. 3 (September 2016): 163–75. http://dx.doi.org/10.1016/j.psfr.2015.04.003.

10

Drisdelle, Brandi L., Greg L. West, and Pierre Jolicoeur. "The deployment of visual spatial attention during visual search predicts response time." NeuroReport 27, no. 16 (November 2016): 1237–42. http://dx.doi.org/10.1097/wnr.0000000000000684.

11

Mathôt, Sebastiaan, and Jan Theeuwes. "Visual attention and stability." Philosophical Transactions of the Royal Society B: Biological Sciences 366, no. 1564 (February 27, 2011): 516–27. http://dx.doi.org/10.1098/rstb.2010.0187.

Abstract:
In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world.
12

Rodriguez-Sanchez, Antonio J., Evgueni Simine, and John K. Tsotsos. "Attention and Visual Search." International Journal of Neural Systems 17, no. 4 (August 2007): 275–88. http://dx.doi.org/10.1142/s0129065707001135.

Abstract:
Selective Tuning (ST) presents a framework for modeling attention, and in this work we show how it performs in covert visual search tasks by comparing its performance to human performance. Two implementations of ST have been developed. The Object Recognition Model recognizes and attends to simple objects formed by the conjunction of various features, and the Motion Model recognizes and attends to motion patterns. The validity of the Object Recognition Model was first tested by successfully duplicating the results of Nagy and Sanchez. A second experiment was aimed at an evaluation of the model's performance against the observed continuum of search slopes for feature-conjunction searches of varying difficulty. The Motion Model was tested against two experiments dealing with searches in the visual motion domain. A simple odd-man-out search for counter-clockwise rotating octagons among identical clockwise rotating octagons produced a linear increase in search time with set size. The second experiment was similar to one described by Thornton and Gilden. The results from both implementations agreed with the psychophysical data from the simulated experiments. We conclude that ST provides a valid explanatory mechanism for human covert visual search performance, an explanation going far beyond conventional saliency-map-based explanations.
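As a side note, the "search slope" against which the model is evaluated is simply the linear rate at which response time grows with display set size. The minimal NumPy sketch below (with invented RT values, not data from this paper or an implementation of Selective Tuning) shows how such a slope is typically estimated.

```python
# Illustrative sketch: estimating a visual-search slope (ms per item) by
# fitting response time against set size. RT values are invented placeholders.
import numpy as np

set_sizes = np.array([4, 8, 12, 16])                  # number of items in the display
mean_rt_ms = np.array([520.0, 610.0, 705.0, 790.0])   # hypothetical mean response times

slope, intercept = np.polyfit(set_sizes, mean_rt_ms, deg=1)
print(f"search slope ≈ {slope:.1f} ms/item, intercept ≈ {intercept:.0f} ms")
# Shallow slopes (roughly < 10 ms/item) are usually read as efficient/parallel
# search; steep slopes as inefficient/serial search.
```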
13

Gottlob, Lawrence R. "Location cuing and response time distributions in visual attention." Perception & Psychophysics 66, no. 8 (November 2004): 1293–302. http://dx.doi.org/10.3758/bf03194999.

14

Watanabe, Katsumi. "Maintaining Visual Attention over Time: Effects of Object Continuity." i-Perception 2, no. 4 (May 2011): 207. http://dx.doi.org/10.1068/ic207.

15

Carlson, Thomas A., Hinze Hogendoorn, and Frans A. J. Verstraten. "The speed of visual attention: What time is it?" Journal of Vision 6, no. 12 (December 12, 2006): 6. http://dx.doi.org/10.1167/6.12.6.

16

Scalf, Paige, Elexa St. John-Saaltink, Markus Barth, Hakwan Lau, and Floris de Lange. "Time-resolved fMRI tracks attention through the visual field." Journal of Vision 16, no. 12 (September 1, 2016): 907. http://dx.doi.org/10.1167/16.12.907.

17

Tünnermann, Jan, and Bärbel Mertsching. "Region-Based Artificial Visual Attention in Space and Time." Cognitive Computation 6, no. 1 (June 27, 2013): 125–43. http://dx.doi.org/10.1007/s12559-013-9220-5.

18

Kim, Yeongbin, Joongchol Shin, Hasil Park, and Joonki Paik. "Real-Time Visual Tracking with Variational Structure Attention Network." Sensors 19, no. 22 (November 9, 2019): 4904. http://dx.doi.org/10.3390/s19224904.

Abstract:
Online training frameworks based on discriminative correlation filters for visual tracking have recently shown significant improvements in both accuracy and speed. However, correlation filter-based discriminative approaches share a common problem of tracking performance degradation when the local structure of a target is distorted by the boundary effect. The shape distortion of the target is mainly caused by the circulant structure used in Fourier-domain processing, which makes the correlation filter learn distorted training samples. In this paper, we present a structure–attention network to preserve the target structure from the distortion caused by the boundary effect. More specifically, we adopt a variational auto-encoder as a structure–attention network to generate varied and representative target structures. We also propose two denoising criteria using a novel reconstruction loss for the variational auto-encoding framework to capture more robust structures even under the boundary condition. Through the proposed structure–attention framework, discriminative correlation filters can learn robust structure information about targets during online training, with enhanced discriminative performance and adaptability. Experimental results on major visual tracking benchmark datasets show that the proposed method performs better than or comparably to state-of-the-art tracking methods at a real-time processing speed of more than 80 frames per second.
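For readers unfamiliar with the machinery, a generic variational auto-encoder with a reconstruction-plus-KL loss is sketched below in PyTorch. The layer sizes, names, and plain MSE reconstruction term are assumptions for illustration only; this is not the paper's structure-attention network or its proposed denoising criteria.

```python
# Minimal, generic VAE sketch (not the authors' architecture). Encodes a
# flattened feature vector, samples a latent code, and reconstructs the input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureVAE(nn.Module):
    def __init__(self, in_dim=1024, latent_dim=64):
        super().__init__()
        self.enc = nn.Linear(in_dim, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(recon, target, mu, logvar):
    # Reconstruction term plus KL divergence to a standard-normal prior.
    rec = F.mse_loss(recon, target, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Usage on hypothetical flattened target-patch features:
x = torch.randn(8, 1024)
model = StructureVAE()
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
```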
19

Duck, Julie M., Robert A. M. Gregson, Eileen B. J. Jones, Grant Noble, and Michael Noy. "Children's visual attention to “playschool”: A time series analysis." Australian Journal of Psychology 40, no. 4 (December 1988): 413–20. http://dx.doi.org/10.1080/00049538808260060.

20

Barnard, Philip, Cristina Ramponi, Geoffrey Battye, and Bundy Mackintosh. "Anxiety and the deployment of visual attention over time." Visual Cognition 12, no. 1 (January 2005): 181–211. http://dx.doi.org/10.1080/13506280444000139.

21

Saarinen, Jukka. "Focal Visual Attention and Pattern Discrimination." Perception 22, no. 5 (May 1993): 509–15. http://dx.doi.org/10.1068/p220509.

Abstract:
Pattern discrimination in the presence of distractor patterns is improved when the stimulus display is preceded by a precue designating the location of the target pattern. Experiments were conducted to determine how big an improvement the precue produced. The specific question of whether the observer is able to process selectively the stimulus pattern in the cued location of the display and ignore the patterns of the noncued locations was addressed. In order to study this, reaction time for pattern discrimination on a blank background (no distractors) was compared with the reaction time when the observer performed the same discrimination task in the presence of distractors and a precue had indicated the location of the stimulus pattern to be discriminated. The results showed that these two reaction times were equal if the cue preceded the stimulus patterns at intervals which were longer than some minimum time. Hence, stimuli outside the ‘aperture’ of focal attention can be ignored. These results could not be attributed to eye movements, because the longest duration of the whole sequence of precue and stimulus patterns was only 200 ms.
22

Mondor, T. A., and M. P. Bryden. "On the Relation between Visual Spatial Attention and Visual Field Asymmetries." Quarterly Journal of Experimental Psychology Section A 44, no. 3 (April 1992): 529–55. http://dx.doi.org/10.1080/14640749208401297.

Abstract:
In the typical visual laterality experiment, words and letters are more rapidly and accurately identified in the right visual field than in the left. However, while such studies usually control fixation, the deployment of visual attention is rarely restricted. The present studies investigated the influence of visual attention on the visual field asymmetries normally observed in single-letter identification and lexical decision tasks. Attention was controlled using a peripheral cue that provided advance knowledge of the location of the forthcoming stimulus. The time period between the onset of the cue and the onset of the stimulus (stimulus onset asynchrony, SOA) was varied, such that the time available for attention to focus upon the location was controlled. At short SOAs a right visual field advantage for identifying single letters and for making lexical decisions was apparent. However, at longer SOAs letters and words presented in the two visual fields were identified equally well. It is concluded that visual field advantages arise from an interaction of attentional and structural factors and that the attentional component in visual field asymmetries must be controlled in order to approximate more closely a true assessment of the relative functional capabilities of the right and left cerebral hemispheres.
23

Pu, Lei, Xinxi Feng, Zhiqiang Hou, Wangsheng Yu, and Yufei Zha. "SiamDA: Dual attention Siamese network for real-time visual tracking." Signal Processing: Image Communication 95 (July 2021): 116293. http://dx.doi.org/10.1016/j.image.2021.116293.

24

Iavecchia, Helene P., and Charles L. Folk. "Shifting Visual Attention in Stereographic Displays: A Time Course Analysis." Human Factors: The Journal of the Human Factors and Ergonomics Society 36, no. 4 (December 1994): 606–18. http://dx.doi.org/10.1177/001872089403600404.

25

Sanscartier, Shayne, Jessica Maxwell, Eric Taylor, and Penelope Lockwood. "Attachment Avoidance and Visual Attention for Emotional Faces over Time." Journal of Vision 16, no. 12 (September 1, 2016): 79. http://dx.doi.org/10.1167/16.12.79.

26

Deco, Gustavo, Olga Pollatos, and Josef Zihl. "The time course of selective visual attention: theory and experiments." Vision Research 42, no. 27 (December 2002): 2925–45. http://dx.doi.org/10.1016/s0042-6989(02)00358-9.

27

Ouerhani, Nabil, and Heinz Hügli. "Real-time visual attention on a massively parallel SIMD architecture." Real-Time Imaging 9, no. 3 (June 2003): 189–96. http://dx.doi.org/10.1016/s1077-2014(03)00036-6.

28

Battistoni, Elisa, Daniel Kaiser, Clayton Hickey, and Marius V. Peelen. "The time course of spatial attention during naturalistic visual search." Cortex 122 (January 2020): 225–34. http://dx.doi.org/10.1016/j.cortex.2018.11.018.

29

Madden, D. J. "Adult Age Differences in the Time Course of Visual Attention." Journal of Gerontology 45, no. 1 (January 1, 1990): P9–P16. http://dx.doi.org/10.1093/geronj/45.1.p9.

30

Dombrowe, Isabel, Christian N. L. Olivers, and Mieke Donk. "The time course of working memory effects on visual attention." Visual Cognition 18, no. 8 (May 17, 2010): 1089–112. http://dx.doi.org/10.1080/13506281003651146.

31

Srivastava, Priyanka, Devpriya Kumar, and Narayanan Srinivasan. "Time course of visual attention across perceptual levels and objects." Acta Psychologica 135, no. 3 (November 2010): 335–42. http://dx.doi.org/10.1016/j.actpsy.2010.09.001.

32

Buonocore, Antimo, Niklas Dietze, and Robert D. McIntosh. "Time-dependent inhibition of covert shifts of attention." Experimental Brain Research 239, no. 8 (July 3, 2021): 2635–48. http://dx.doi.org/10.1007/s00221-021-06164-y.

Abstract:
Visual transients can interrupt overt orienting by abolishing the execution of a planned eye movement due about 90 ms later, a phenomenon known as saccadic inhibition (SI). It is not known if the same inhibitory process might influence covert orienting in the absence of saccades, and consequently alter visual perception. In Experiment 1 (n = 14), we measured orientation discrimination during a covert orienting task in which an uninformative exogenous visual cue preceded the onset of an oriented probe by 140–290 ms. In half of the trials, the onset of the probe was accompanied by a brief irrelevant flash, a visual transient that would normally induce SI. We report a time-dependent inhibition of covert orienting in which the irrelevant flash impaired orientation discrimination accuracy when the probe followed the cue by 190 and 240 ms. The interference was more pronounced when the cue was incongruent with the probe location, suggesting an impact on the reorienting component of the attentional shift. In Experiment 2 (n = 12), we tested whether the inhibitory effect of the flash could occur within an earlier time range, or only within the later, reorienting range. We presented probes at congruent cue locations in a time window between 50 and 200 ms. Similar to Experiment 1, discrimination performance was altered at 200 ms after the cue. We suggest that covert attention may be susceptible to similar inhibitory mechanisms that generate SI, especially in later stages of attentional shifting (> 200 ms after a cue), typically associated with reorienting.
33

Furst, David M., and Gershon Tenenbaum. "Influence of Attentional Focus on Reaction Time." Psychological Reports 56, no. 1 (February 1985): 299–302. http://dx.doi.org/10.2466/pr0.1985.56.1.299.

Abstract:
It was hypothesized that attention could be directed to the emphasized task regardless of its spatial location. 20 subjects performed a simple RT to a stimulus located in foveal vision and a simple RT to four surrounding stimuli set in the visual periphery. Importance of task was manipulated through instructions. Analysis of variance showed subjects had shorter RTs to the emphasized area regardless of its spatial location. The attentional demands of the tasks and the importance assigned to the tasks were critical factors in response speed. This may help to explain the results of visual-narrowing experiments which have indicated narrowing after placing an attention-demanding task in foveal vision.
34

Xia, Ru Ting, and Xiao Yan Zhou. "Measurement on Reaction Time of Visual Attention in Depth during Driving." Applied Mechanics and Materials 319 (May 2013): 343–47. http://dx.doi.org/10.4028/www.scientific.net/amm.319.343.

Abstract:
This research aimed to reveal characteristics of the visual attention of low-vision drivers. Near and far stimuli were presented by means of a three-dimensional (3D) attention measurement system that simulated the traffic environment. We measured subjects' reaction times while attention shifted under three kinds of simulated peripheral environment illuminance (daylight, twilight, and dawn conditions). Subjects were required to judge whether the target was presented nearer than the fixation point or farther than it. The results showed that peripheral environment illuminance had a clear influence on drivers' reaction times: reaction times were slower in the dawn and twilight conditions than in the daylight condition, and the distribution of attention favored nearer space over farther space; that is, shifts of attention in 3D space showed an anisotropy in depth. The results suggested that (1) visual attention might be manipulated with both a precueing paradigm and stimulus controls that include depth information, and (2) the anisotropy of attention shifting depends on the distance over which attention moves and was more marked in the dawn condition than in the daylight and twilight conditions.
35

Roberts, Ian D., Yi Yang Teoh, and Cendri A. Hutcherson. "Time to Pay Attention? Information Search Explains Amplified Framing Effects Under Time Pressure." Psychological Science 33, no. 1 (December 3, 2021): 90–104. http://dx.doi.org/10.1177/09567976211026983.

Abstract:
Decades of research have established the ubiquity and importance of choice biases, such as the framing effect, yet why these seemingly irrational behaviors occur remains unknown. A prominent dual-system account maintains that alternate framings bias choices because of the unchecked influence of quick, affective processes, and findings that time pressure increases the framing effect have provided compelling support. Here, we present a novel alternative account of magnified framing biases under time pressure that emphasizes shifts in early visual attention and strategic adaptations in the decision-making process. In a preregistered direct replication (N = 40 adult undergraduates), we found that time constraints produced strong shifts in visual attention toward reward-predictive cues that, when combined with truncated information search, amplified the framing effect. Our results suggest that an attention-guided, strategic information-sampling process may be sufficient to explain prior results and raise challenges for using time pressure to support some dual-system accounts.
36

Mason, Deanna J., Glyn W. Humphreys, and Lindsey S. Kent. "Exploring selective attention in ADHD: visual search through space and time." Journal of Child Psychology and Psychiatry 44, no. 8 (October 16, 2003): 1158–76. http://dx.doi.org/10.1111/1469-7610.00204.

37

Wei, Chun-Chun, and Min-Yuan Ma. "Influences of Visual Attention and Reading Time on Children and Adults." Reading & Writing Quarterly 33, no. 2 (April 25, 2016): 97–108. http://dx.doi.org/10.1080/10573569.2015.1092100.

38

Blough, Donald S. "Reaction time drifts identify objects of attention in pigeon visual search." Journal of Experimental Psychology: Animal Behavior Processes 19, no. 2 (1993): 107–20. http://dx.doi.org/10.1037/0097-7403.19.2.107.

39

Ekanayake, Jinendra, Chloe Hutton, Gerard Ridgway, Frank Scharnowski, Nikolaus Weiskopf, and Geraint Rees. "Real-time decoding of covert attention in higher-order visual areas." NeuroImage 169 (April 2018): 462–72. http://dx.doi.org/10.1016/j.neuroimage.2017.12.019.

40

Moya, L., S. Shomstein, A. Bagic, and M. Behrmann. "The time course of neural activity in object-based visual attention." Journal of Vision 8, no. 6 (March 27, 2010): 549. http://dx.doi.org/10.1167/8.6.549.

41

Ewen, J. B., D. M. Caggiano, B. M. Lakshmanan, H. Rosen, and S. Yantis. "Time-Course of Top-Down Shifts of Covert Visual Spatial Attention." NeuroImage 47 (July 2009): S66. http://dx.doi.org/10.1016/s1053-8119(09)70361-1.

42

deBettencourt, M. T., R. F. Lee, J. D. Cohen, K. A. Norman, and N. B. Turk-Browne. "Externalizing internal states with real-time neurofeedback to train visual attention." Journal of Vision 13, no. 9 (July 25, 2013): 1132. http://dx.doi.org/10.1167/13.9.1132.

43

Danno, Mikio, Matti Kutila, and Juha M. Kortelainen. "Measurement of Driver’s Visual Attention Capabilities Using Real-Time UFOV Method." International Journal of Intelligent Transportation Systems Research 9, no. 3 (June 8, 2011): 115–27. http://dx.doi.org/10.1007/s13177-011-0033-1.

44

Konstantinova, M. V., V. N. Anisimov, L. V. Tereshchenko, and A. V. Latanov. "The Link between Visual Attention and the Subjective Perception of Time." Neuroscience and Behavioral Physiology 49, no. 9 (November 2019): 1145–49. http://dx.doi.org/10.1007/s11055-019-00851-8.

45

Zuber, Irena, and Bo Ekehammar. "Personality, time of day and visual perception: Preferences and selective attention." Personality and Individual Differences 9, no. 2 (1988): 345–52. http://dx.doi.org/10.1016/0191-8869(88)90097-9.

46

Kashiwase, Yoshiyuki, Kazumichi Matsumiya, Ichiro Kuriki, and Satoshi Shioiri. "Time Courses of Attentional Modulation in Neural Amplification and Synchronization Measured with Steady-state Visual-evoked Potentials." Journal of Cognitive Neuroscience 24, no. 8 (August 2012): 1779–93. http://dx.doi.org/10.1162/jocn_a_00212.

Abstract:
Endogenous attention modulates the amplitude and phase coherence of steady-state visual-evoked potentials (SSVEPs). In efforts to decipher the neural mechanisms of attentional modulation, we compared the time course of attentional modulation of SSVEP amplitude (thought to reflect the magnitude of neural population activity) and phase coherence (thought to reflect neural response synchronization). We presented two stimuli flickering at different frequencies in the left and right visual hemifields and asked observers to shift their attention to either stimulus. Our results demonstrated that attention increased SSVEP phase coherence earlier than it increased SSVEP amplitude, with a positive correlation between the attentional modulations of SSVEP phase coherence and amplitude. Furthermore, the behavioral dynamics of attention shifts were more closely associated with changes in phase coherence than with changes in amplitude. These results are consistent with the possibility that attention increases neural response synchronization, which in turn leads to increased neural population activity.
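For context, SSVEP amplitude and inter-trial phase coherence at a flicker frequency are commonly computed from per-trial Fourier spectra, as in the NumPy sketch below. The sampling rate, tagging frequency, and data are placeholders, and this is one common definition rather than the authors' analysis pipeline.

```python
# Sketch: SSVEP amplitude and inter-trial phase coherence at a flicker frequency.
# The EEG array and acquisition parameters are invented placeholders.
import numpy as np

fs = 500.0                        # sampling rate in Hz (assumed)
flicker_hz = 12.0                 # tagging frequency of the attended stimulus (assumed)
eeg = np.random.randn(40, 1000)   # trials x samples, placeholder single-channel EEG

spectrum = np.fft.rfft(eeg, axis=1)
freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
k = np.argmin(np.abs(freqs - flicker_hz))        # FFT bin nearest the flicker frequency

amplitude = np.abs(spectrum[:, k]).mean()        # mean single-trial SSVEP amplitude
phase = np.angle(spectrum[:, k])
coherence = np.abs(np.mean(np.exp(1j * phase)))  # inter-trial phase coherence in [0, 1]
print(f"SSVEP amplitude ≈ {amplitude:.2f}, phase coherence ≈ {coherence:.2f}")
```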
47

Kida, Tetsuo, Koji Inui, Toshiaki Wasaka, Kosuke Akatsuka, Emi Tanaka, and Ryusuke Kakigi. "Time-Varying Cortical Activations Related to Visual–Tactile Cross-Modal Links in Spatial Selective Attention." Journal of Neurophysiology 97, no. 5 (May 2007): 3585–96. http://dx.doi.org/10.1152/jn.00007.2007.

Abstract:
The neural mechanisms underlying unimodal spatial attention have long been studied, but the cortical processes underlying cross-modal links remain a matter of debate. To reveal the cortical processes underlying the cross-modal links between vision and touch in spatial attention, we recorded magnetoencephalographic (MEG) responses to electrocutaneous stimuli when subjects directed attention to an electrocutaneous or visual stimulus presented randomly in the left or right space. Neural responses recorded around the bilateral sylvian fissures at 85 and 100 ms after the electrocutaneous stimulus were significantly enhanced by spatial attention in both the touch-irrelevant and -relevant modalities. Source analysis revealed that the sylvian responses were generated in the secondary somatosensory cortex (SII). An early response, M50c, generated in the contralateral primary somatosensory cortex (SI), was not modulated by attention. There were no significant attentional changes in the source location or magnetic field distribution, suggesting attentional facilitation of the neural activity in SII itself, rather than a tonic bias effect or overlapping of separate neuronal populations. The results show that spatial attention enhances responses to tactile inputs in SII, independent of sensory modality attended. The underlying mechanism remains to be determined, but may be an increase in gain.
48

Lee, A. C., J. P. Harris, and J. E. Calvert. "Overt Visual Attention in Parkinson's Disease." Perception 25, no. 1_suppl (August 1996): 93. http://dx.doi.org/10.1068/v96p0303.

Abstract:
The ability of Parkinsonian (PD) patients to control overt visual attention was investigated by measuring reaction time to a visual stimulus presented at different distances (1.5 deg, 6 deg, and 12 deg) and directions (left or right) from a central fixation point. Prior to the onset of the target stimulus (a square), a cue stimulus (an arrow) appeared just above the fixation point. With equal probability, the arrow pointed to the left, or to the right, or was ambiguous (with two heads). On 20% of their presentations, the left and right arrows pointed in the direction opposite to where the target was to appear. Subjects were informed that 20% of cues would be misleading, and correcting lenses were used to optimise their visual acuity. In previous work with a similar paradigm, only one target eccentricity was used, and subjects were not refracted, leaving open the possibility that PD subjects had more difficulty in seeing the cues and targets. The eight PD subjects had longer reaction times than age-matched normal controls (and were relatively slower for the more eccentric targets), but made fewer errors in all conditions. In particular, they were more accurate than the controls on the presentations when the cue was misleading or ambiguous, suggesting that the PD group were ignoring the cue. It seems unlikely that the subjects could not discriminate the direction of the cues, given the use of optical correction, and they reported seeing the cues. Our data are consistent with those of other workers who have described a similar ‘disengagement of attention’ in PD (Clark et al., 1989, Neuropsychologia 27, 131–140) and attributed it to decreased catecholaminergic activity following destruction of midbrain structures (Wright et al., 1990, Neuropsychologia 28, 151–159).
49

Hoang Dinh, Thang, Tuan Do Ngoc, Kien Thai Trung, and Long Tran Quoc. "Real-time Siamese visual object tracking using attention and anchor-free mechanism." Journal of Military Science and Technology, no. 80 (June 28, 2022): 132–41. http://dx.doi.org/10.54939/1859-1043.j.mst.80.2022.132-141.

Abstract:
Trackers based on Siamese networks have consistently demonstrated superior performance in visual object tracking. The majority of existing trackers compute the features of the target template and the search image independently and then estimate the target's scale and aspect ratio using either a multi-scale searching scheme or pre-defined anchor boxes. This paper proposes a Siamese attention network for visual object tracking. An attention fusion mechanism is generated using pixel-level matching of template and search features. The proposed framework is anchor-free, making it both simple and effective. Extensive experiments on the visual tracking benchmarks VOT2018 and UAV123 demonstrate that our tracker operates at 42 fps and achieves state-of-the-art performance.
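As a rough illustration of "pixel-level matching of template and search features", the PyTorch sketch below fuses the two feature maps with single-head dot-product attention. The tensor shapes and the exact formulation are assumptions, not the paper's network.

```python
# Sketch: pixel-level attention fusion between template and search feature maps.
import torch

def pixel_attention_fusion(template_feat, search_feat):
    # template_feat: (B, C, Ht, Wt), search_feat: (B, C, Hs, Ws)
    B, C, Ht, Wt = template_feat.shape
    _, _, Hs, Ws = search_feat.shape
    t = template_feat.flatten(2)                 # (B, C, Ht*Wt)
    s = search_feat.flatten(2)                   # (B, C, Hs*Ws)
    # Each search-pixel attends over all template pixels (scaled dot product).
    attn = torch.softmax(s.transpose(1, 2) @ t / C ** 0.5, dim=-1)   # (B, Hs*Ws, Ht*Wt)
    fused = (attn @ t.transpose(1, 2)).transpose(1, 2)               # (B, C, Hs*Ws)
    return fused.reshape(B, C, Hs, Ws)           # template info injected per search pixel

# Hypothetical feature maps from a shared backbone:
z = torch.randn(1, 256, 7, 7)       # template branch
x = torch.randn(1, 256, 31, 31)     # search branch
out = pixel_attention_fusion(z, x)  # would feed anchor-free classification/regression heads
```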
50

Li, Ning, and Linda Ng Boyle. "Allocation of Driver Attention for Varying In-Vehicle System Modalities." Human Factors: The Journal of the Human Factors and Ergonomics Society 62, no. 8 (December 30, 2019): 1349–64. http://dx.doi.org/10.1177/0018720819879585.

Abstract:
Objective: This paper examines drivers’ allocation of attention using response time to a tactile detection response task (TDRT) while interacting with an in-vehicle information system (IVIS) over time.
Background: Longer TDRT response time is associated with higher cognitive workload. However, it is not clear what role is assumed by the human and system in response to varying in-vehicle environments over time.
Method: A driving simulator study with 24 participants was conducted with a restaurant selection task of two difficulty levels (easy and hard) presented in three modalities (audio only, visual only, hybrid). A linear mixed-effects model was applied to identify factors that affect TDRT response time. A nonparametric time-series model was also used to explore the visual attention allocation under the hybrid mode over time.
Results: The visual-only mode significantly increased participants’ response time compared with the audio-only mode. Females took longer to respond to the TDRT when engaged with an IVIS. The study showed that participants tend to use the visual component more toward the end of the easy tasks, whereas the visual mode was used more at the beginning of the harder tasks.
Conclusion: The visual-only mode of the IVIS increased drivers’ cognitive workload when compared with the auditory-only mode. Drivers showed different visual attention allocation during the easy and hard restaurant selection tasks in the hybrid mode.
Application: The findings can help guide the design of automotive user interfaces and help manage cognitive workload.
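For illustration, the linear mixed-effects analysis described in the Method section could be set up along the following lines with statsmodels. The synthetic data and column names are hypothetical placeholders, not the study's dataset or exact model specification.

```python
# Sketch: linear mixed-effects model of TDRT response time with a random
# intercept per participant. Data are synthetic; names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_trials = 24, 12
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    "modality": rng.choice(["audio", "visual", "hybrid"], n_participants * n_trials),
    "difficulty": rng.choice(["easy", "hard"], n_participants * n_trials),
})
# Fake TDRT response times: baseline + a small visual-mode penalty + noise.
df["rt_ms"] = 400 + 50 * (df["modality"] == "visual") + rng.normal(0, 30, len(df))

# Fixed effects for IVIS modality and task difficulty; the random intercept
# per participant absorbs individual baseline differences in response time.
model = smf.mixedlm("rt_ms ~ C(modality) + C(difficulty)", data=df,
                    groups=df["participant"])
print(model.fit().summary())
```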