Academic literature on the topic 'Visual attention in time'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visual attention in time.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Visual attention in time"
Zhou, Yan-Bang, Qiang Li, and Hong-Zhi Liu. "Visual attention and time preference reversals." Judgment and Decision Making 16, no. 4 (July 2021): 1010–38. http://dx.doi.org/10.1017/s1930297500008068.
Busse, L. "The Time Course of Shifting Visual Attention." Journal of Neuroscience 26, no. 15 (April 12, 2006): 3885–86. http://dx.doi.org/10.1523/jneurosci.0459-06.2006.
Egeth, Howard E., and Steven Yantis. "VISUAL ATTENTION: Control, Representation, and Time Course." Annual Review of Psychology 48, no. 1 (February 1997): 269–97. http://dx.doi.org/10.1146/annurev.psych.48.1.269.
Ruhnau, E., and V. Haase. "Space-time structure of selective visual attention." International Journal of Psychophysiology 14, no. 2 (February 1993): 146. http://dx.doi.org/10.1016/0167-8760(93)90239-l.
Ward, Robert, John Duncan, and Kimron Shapiro. "The Slow Time-Course of Visual Attention." Cognitive Psychology 30, no. 1 (February 1996): 79–109. http://dx.doi.org/10.1006/cogp.1996.0003.
Chun, Marvin M. "Visual working memory as visual attention sustained internally over time." Neuropsychologia 49, no. 6 (May 2011): 1407–9. http://dx.doi.org/10.1016/j.neuropsychologia.2011.01.029.
Srivastava, Priyanka, and Narayanan Srinivasan. "Time course of visual attention with emotional faces." Attention, Perception, & Psychophysics 72, no. 2 (February 2010): 369–77. http://dx.doi.org/10.3758/app.72.2.369.
Chastain, Garvin. "Time-course of location changes of visual attention." Bulletin of the Psychonomic Society 29, no. 5 (May 1991): 425–28. http://dx.doi.org/10.3758/bf03333960.
Couffe, C., R. Mizzi, and G. A. Michael. "Salience-based progression of visual attention: Time course." Psychologie Française 61, no. 3 (September 2016): 163–75. http://dx.doi.org/10.1016/j.psfr.2015.04.003.
Drisdelle, Brandi L., Greg L. West, and Pierre Jolicoeur. "The deployment of visual spatial attention during visual search predicts response time." NeuroReport 27, no. 16 (November 2016): 1237–42. http://dx.doi.org/10.1097/wnr.0000000000000684.
Dissertations / Theses on the topic "Visual attention in time"
Jefferies, Lisa N. "Tracking attention in space and time : the dynamics of human visual attention." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/11564.
Sutton, Jennifer E. "Attention to time, space, and visual pattern by the pigeon." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0002/MQ30771.pdf.
Braithwaite, Jason John. "Visual search in space and time : where attention and inattention collide?" Thesis, University of Birmingham, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269885.
Azevêdo, Adriana Medeiros Sales de. "Mapeamento espacial da atenção visual mobilizada pela via visual ventral." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/42/42137/tde-25032010-145400/.
Full textVisual processing has two pathways: Dorsal (localization/movement) mobilized for Simple Reaction Time tasks (SRT); Ventral (shape/color) mobilized for Choice Reaction Time tasks (CRT). We presented an approach to investigate visual attentional resources. Usual psychophysical methods sample many times few points. We opted to sample many points few times aiming to enlarge the sampled visual field. It was obtained major details of the attentional distribution. Voluntary attention task: I. SRT, for Dorsal pathway. Stimuli were different in color answered triggering a button, in a diffusion attention paradigm. II. CRT, for Ventral pathway. Stimuli were two different color answered by triggering a button for each color in a diffuse paradigm. III. CRT, experimental subject instructed to focus attention in two frames for a splitted attention paradigm. Results showed anisotropy in the diffuse attention distribution, favouring the lower hemifield for SRT and superior hemifield for CRT. The splitted attention paradigm evidenced the presence of two attentional focuses.
Soares, Sandra C. "Fear commands attention: snakes as the archetypal fear stimulus?" Stockholm, 2010. http://diss.kib.ki.se/2010/978-91-7409-824-2/.
Contenças, Thaís Santos. "É possível uma divisão da atenção visual automática no espaço?" Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/42/42137/tde-01072009-123939/.
Full textSeveral studies demonstrated that voluntary visual attention can be divided. The possibility that this also occurs for automatic visual attention was investigated here. In the first and second experiments of this study the possibility of attention division in the same hemifield was examined. In the third and fouth experiments the possibility of attention division between hemifields was examined. The results suggest that automatic visual attention can not divide in the same hemifield but may divide between hemifields.
Correani, Alessia. "Normal and abnormal attentional dwell time : constrains of temporal coding in visual attention in neurological patients and normal individuals." Thesis, University of Birmingham, 2011. http://etheses.bham.ac.uk//id/eprint/1781/.
Montassier, Ana Beatriz Sacomano. "Atenção visual em crianças e adolescentes com distúrbio de aprendizagem." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/25/25143/tde-04122013-091051/.
Full textAttention is included in the group of psychic functions, grouped under the name of cognitive functions, and that support the learning process in school. Regarding the visual attention, literature has pointed to the existence of differences in reaction time to visual stimuli between students with and without learning disabilities (LD). In this sense, the purpose of this study was to characterize the visual attentional function in children with LD. A total of 50 students, including 25 with learning disorders without signs of Attention Deficit Disorder and Hyperactivity Disorder (ADHD), forming the study group (SG), and 25 children without impairments, forming the control group (CG) , aged between eight and 14 years old. The instruments used were the Test of Visual Attention (TAVIS4), computerized test consists of three tasks to assess the ability to sustain attentional, select and change the focus of attention to visual stimuli and motor impulsivity, and the Scale of the Conners abbreviated form for teachers, appropriate to discriminate children with behavior problems and ADHD signs. The results showed that the hit reaction time (HRT) of the CG was significantly less than the SG in the sustained attention task. SG also showed statistically significant differences in the alternating attention, with less number of right answers (RA), higher number of omission errors (OE) and higher number of commissions errors (CE). Scale of the Conners scores of GE was higher than the GC. There was a correlation between tests in alternating attention tasks and sustained attention tasks to the number of right answers (RA), omission errors (OE) and commission errors (CE). We may deduce that children with LD have deficits of attentional processes, although they cannot be characterized with ADHD. In the subgroup of adolescents was significant difference in selective attention to the number of omission errors (OE), the HRT of sustained attention and alternating attention to the number of omission errors (OE) and commission errors (CE). There was a correlation between tests, this subgroup of SG and CG in selective attention to the number of right answers (RA), omission errors (EO) and commissions errors (CE). So, the highest rates indicated on the scale (attentional deficits) are associated with worse outcomes of participants in the tasks of sustained and alternating attention. It can be observed that the higher the age of the participants, the better the ability of selective attention, sustained attention and alternating attention. That way, the HMT less subgroup of adolescents compared with the overall group may show an improvement in attention to development. However, adolescents SG improved their attentional capacity, but some changes persist especially when compared to CG suggesting a dysfunction of neuropsychological mechanisms underlying the processing of visual attention in adolescents with LD.
Righi, Luana Lira. "Características do efeito da atenção intermodal automática." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/42/42137/tde-17042013-105052/.
Full textThe current work examined the possible contribution of signal to noise ratio, the asynchrony between the onsets of the cue and the target (SOA) and the kind of task performed by the observer to the manifestation of crossmodal attentional effects. The Experiments 1 and 2 showed that crossmodal attentional effect appears when there is visual noise, but it does not appear when there is no visual noise at 133 ms SOA. The Experiment 3 showed that when the SOA is longer than 133 ms, the crossmodal attentional effect appears when there is no visual noise. The Experiment 4 showed that in a localization task, the crossmodal attentional effect appears even in a short SOA (133 ms). Taken together, the results indicate that crossmodal attentional effects appear when there is visual noise and when there is no visual noise. However, in the later condition and when the target has to be identified, the crossmodal attentional effect takes longer to appear.
Li, Hui. "Experiments on the dynamics of attention: Perception of visual rhythm and the time course of inhibition of return in the visual field." Diss., Ludwig-Maximilians-Universität München, 2014. http://nbn-resolving.de/urn:nbn:de:bvb:19-172019.
Full textWie Aufmerksamkeit kontrolliert wird, ist eine der besonderen Herausforderungen in den kognitiven Neurowissenschaften und der Psychologie. Für räumlich repräsentierte Reize konnte gezeigt werden, dass bestimmte Aspekte visueller Reize wie verschiedene Farben sofort hervorstechen („pop-out“), während für andere Reize serielle Such-Strategien notwendig sind, die also mentalen Aufwand erfordern. Es ist eine offene Frage, ob dynamische Merkmale von Reizen ohne besonderen Aufwand verarbeitet werden, oder ob serielle Prozesse erforderlich sind, um sie zu erkennen. Diese Frage wurde in Experimenten über Rhythmus-Wahrnehmung mit periodisch sich bewegenden Reizen untersucht, und ein visuelles Such-Paradigma wurde angewandt. Es wurden auf einem Display vertikal sich bewegende Punkte gezeigt, wobei einer der Punkte sich mit einer anderen Periode, schneller oder langsamer, bewegte, und diese Punkte mussten so schnell wie möglich erkannt werden. Um nur die Periode als kritische Variable zu untersuchen, wurde die Phase und die Amplitude der anderen Reizpunkte randomisiert. Es wurde festgestellt, dass die unterschiedliche Periode allein nicht zu einem „pop-out“-Effekt führt. Damit ein abweichender, sich bewegender dynamischer Reiz erkannt wird, müssen offenbar Periode, Phase und Amplitude übereinstimmen. Reize mit einer kürzeren Periode als die Hintergrundreize wurden deutlich schneller erkannt. In weiteren Experimenten konnte beispielsweise gezeigt werden, dass akustische Information die Extraktion rhythmisch sich bewegender visueller Reize deutlich verbessert, was auf intermodale Effekte hinweist. In einer weiteren Studie wurde untersucht, ob die neuronale Aufmerksamkeits-Maschinerie gemeinsamen zeitlichen Prinzipien gehorcht. Versuche zum Phänomen des „Inhibition of Return“ (IOR, Hemmung der Aufmerksamkeits-Wiederkehr) haben ergeben, dass die Mechanismen der Aufmerksamkeits-Steuerung im perifovealen Bereich anderen Gesetzen gehorchen als in der Peripherie des Gesichtsfeldes. Dieser „Ekzentrizitäts-Effekt“ wirft die Frage auf, ob die zeitlichen Prozesse der Aufmerksamkeits-Kontrolle in der Peripherie durch längere Zeitkonstanten gekennzeichnet sind, da die inhibitorische Kontrolle dort ausgeprägter ist. Es zeigt sich allerdings, dass die beiden Aufmerksamkeits-Systeme das gleiche Zeitfenster von etwa drei Sekunden nutzen. Diese Beobachtungen stützen das Konzept der funktionellen Inhomogenität des Gesichtsfeldes, die aber durch einen gemeinsamen zeitlichen Mechanismus in eine kognitive Einheit gebracht wird.
Books on the topic "Visual attention in time"
Sinclair, Joshua James. The effects of target type and expectancy on attention, as a function of time and accuracy for a visual search task. Sudbury, Ont: Laurentian University, Department of Psychology, 1997.
Wright, Richard D., ed. Visual attention. New York: Oxford University Press, 1998.
Cantoni, Virginio, Maria Marinaro, and Alfredo Petrosino, eds. Visual Attention Mechanisms. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4615-0111-4.
Zhang, Liming, and Weisi Lin. Selective Visual Attention. Singapore: John Wiley & Sons (Asia) Pte Ltd, 2013. http://dx.doi.org/10.1002/9780470828144.
Full textV, Cantoni, Marinaro M, and Petrosino Alfredo, eds. Visual attention mechanisms. New York: Kluwer Academic/Plenum Publishers, 2002.
Find full textGiovanni, Berlucchi, and Rizzolatti G, eds. Selective visual attention. Oxford: Pergamon, 1987.
Find full textCantoni, V. Visual Attention Mechanisms. Boston, MA: Springer US, 2002.
Find full textH, Zangemeister W., Stiehl H. S, and Freksa C, eds. Visual attention and cognition. Amsterdam: Elsevier, 1996.
Find full textJochen, Braun, Koch Christof, and Davis Joel L. 1942-, eds. Visual attention and cortical circuits. Cambridge, Mass: MIT Press, 2001.
Attention and time. Oxford: Oxford University Press, 2010.
Book chapters on the topic "Visual attention in time"
Burch, Michael. "Time-Preserving Visual Attention Maps." In Intelligent Decision Technologies 2016, 273–83. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-39627-9_24.
Bonnet, Claude. "Time Factors in the Processing of Visual Movement Information." In Attention and Performance VII, 25–41. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003310228-3.
Batista, Jorge P. "A Real-Time Driver Visual Attention Monitoring System." In Pattern Recognition and Image Analysis, 200–208. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11492429_25.
Bernardino, Alexandre, and José Santos-Victor. "A Real-Time Gabor Primal Sketch for Visual Attention." In Pattern Recognition and Image Analysis, 335–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11492429_41.
Stasse, Olivier, Yasuo Kuniyoshi, and Gordon Cheng. "Development of a Biologically Inspired Real-Time Visual Attention System." In Biologically Motivated Computer Vision, 150–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45482-9_15.
Hou, Xiaodi, and Liqing Zhang. "A Time-Dependent Model of Information Capacity of Visual Attention." In Neural Information Processing, 127–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11893028_15.
Choi, Byung Geun, and Kyung Joo Cheoi. "Development of a Biologically Inspired Real-Time Spatiotemporal Visual Attention System." In Intelligent Information and Database Systems, 416–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20039-7_42.
Pu, Lei, Xinxi Feng, Zhiqiang Hou, Wangsheng Yu, Yufei Zha, and Zhiqiang Jiao. "MHASiam: Mixed High-Order Attention Siamese Network for Real-Time Visual Tracking." In Pattern Recognition and Computer Vision, 445–56. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60639-8_37.
Palenichka, Roman M., and Peter Zinterhof. "Time-Effective Detection of Objects of Interest in Images by Means of A Visual Attention Mechanism." In Human and Machine Perception 3, 113–22. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1361-2_9.
Bee, Nikolaus, Helmut Prendinger, Arturo Nakasone, Elisabeth André, and Mitsuru Ishizuka. "AutoSelect: What You Want Is What You Get: Real-Time Processing of Visual Attention and Affect." In Perception and Interactive Technologies, 40–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11768029_5.
Conference papers on the topic "Visual attention in time"
Li, Zhichao, Yi Yang, Xiao Liu, Feng Zhou, Shilei Wen, and Wei Xu. "Dynamic Computational Time for Visual Attention." In 2017 IEEE International Conference on Computer Vision Workshop (ICCVW). IEEE, 2017. http://dx.doi.org/10.1109/iccvw.2017.145.
Zeng, Yingsen, Xiaoqiang Guo, Haiying Wang, Mingjin Geng, and Ting Lu. "Efficient Dual Attention Module for Real-Time Visual Tracking." In 2019 IEEE Visual Communications and Image Processing (VCIP). IEEE, 2019. http://dx.doi.org/10.1109/vcip47243.2019.8965683.
Zhang, Qieshi, Dian Lin, Ziliang Ren, Yuhang Kang, Fuxiang Wu, and Jun Cheng. "Attention Mechanism-based Monocular Depth Estimation and Visual Odometry." In 2021 IEEE International Conference on Real-time Computing and Robotics (RCAR). IEEE, 2021. http://dx.doi.org/10.1109/rcar52367.2021.9517422.
Huang, Huiwen, Jinling Chen, Hong Xue, Yaping Huang, and Tiesong Zhao. "Time-Variant Visual Attention in 360-Degree Video Playback." In 2018 IEEE International Symposium on Haptic, Audio and Visual Environments and Games (HAVE). IEEE, 2018. http://dx.doi.org/10.1109/have.2018.8547419.
Geng, Mingjin, Haiying Wang, and Yingsen Zeng. "Enhanced Semantic Features via Attention for Real-Time Visual Tracking." In 2019 IEEE Visual Communications and Image Processing (VCIP). IEEE, 2019. http://dx.doi.org/10.1109/vcip47243.2019.8965870.
Mackeben, Manfred. "The Topography of Visual Focal Attention." In Vision Science and its Applications. Washington, D.C.: Optica Publishing Group, 1996. http://dx.doi.org/10.1364/vsia.1996.fa.4.
Liu, Shengzhong, Xinzhe Fu, Maggie Wigness, Philip David, Shuochao Yao, Lui Sha, and Tarek Abdelzaher. "Self-Cueing Real-Time Attention Scheduling in Criticality-Aware Visual Machine Perception." In 2022 IEEE 28th Real-Time and Embedded Technology and Applications Symposium (RTAS). IEEE, 2022. http://dx.doi.org/10.1109/rtas54340.2022.00022.
Sun, Yaoru, Xingui Hu, Jinhua Zeng, and Zuo Zhang. "Tracking Humans in Real-Time by Opponent-Motion and Visual Attention." In 2010 WASE International Conference on Information Engineering (ICIE 2010). IEEE, 2010. http://dx.doi.org/10.1109/icie.2010.61.
Zeng, Jinhua, and Yaoru Sun. "Real-time pedestrian tracking by visual attention and human knowledge learning." In 2010 International Conference on Progress in Informatics and Computing (PIC). IEEE, 2010. http://dx.doi.org/10.1109/pic.2010.5687433.
Duan, Qiyu, Hua Zhang, Jing Zhang, HuiLong Zhu, Fengtian Tian, and Jian Zhou. "Global Attention Visual-Tactile Fusion Algorithm Based on Time Series Modeling." In 2022 2nd International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI). IEEE, 2022. http://dx.doi.org/10.1109/cei57409.2022.9950127.
Reports on the topic "Visual attention in time"
Dutra, Lauren M., James Nonnemaker, Nathaniel Taylor, Ashley Feld, Brian Bradfield, John Holloway, Edward (Chip) Hill, and Annice Kim. Visual Attention to Tobacco-Related Stimuli in a 3D Virtual Store. RTI Press, May 2020. http://dx.doi.org/10.3768/rtipress.2020.rr.0036.2005.
Hoffman, James E. Visual Selective Attention. Fort Belvoir, VA: Defense Technical Information Center, February 1990. http://dx.doi.org/10.21236/ada219204.
Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.
Shulman, Gordon L. Relating Attention to Visual Mechanisms. Fort Belvoir, VA: Defense Technical Information Center, February 1989. http://dx.doi.org/10.21236/ada206452.
Reeves, Adam. A Model for Visual Attention. Fort Belvoir, VA: Defense Technical Information Center, February 1987. http://dx.doi.org/10.21236/ada179589.
Reeves, Adam. A Model for Visual Attention. Fort Belvoir, VA: Defense Technical Information Center, April 1988. http://dx.doi.org/10.21236/ada193061.
Wolfe, Jeremy M. The Deployment of Visual Attention. Fort Belvoir, VA: Defense Technical Information Center, August 2003. http://dx.doi.org/10.21236/ada416391.
Wolfe, Jeremy M. The Deployment of Visual Attention. Fort Belvoir, VA: Defense Technical Information Center, June 2009. http://dx.doi.org/10.21236/ada510413.
Koch, Christoff. Toward a Neurobiological Theory of Visual Attention. Fort Belvoir, VA: Defense Technical Information Center, January 1995. http://dx.doi.org/10.21236/ada299945.
Koch, Christof. Toward a Neurobiological Theory of Visual Attention. Fort Belvoir, VA: Defense Technical Information Center, September 1993. http://dx.doi.org/10.21236/ada270724.