Academic literature on the topic "Visual tasks analysis"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Visual tasks analysis".
Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Visual tasks analysis":
Alexiev, Kiril, and T. Teodorvakarelsky. "Eye movement analysis in simple visual tasks". Computer Science and Information Systems, no. 00 (2021): 65. http://dx.doi.org/10.2298/csis210418065a.
Fukuda, Kyosuke. "Analysis of Eyeblink Activity during Discriminative Tasks". Perceptual and Motor Skills 79, no. 3_suppl (December 1994): 1599–608. http://dx.doi.org/10.2466/pms.1994.79.3f.1599.
Goodall, John R. "An Evaluation of Visual and Textual Network Analysis Tools". Information Visualization 10, no. 2 (April 2011): 145–57. http://dx.doi.org/10.1057/ivs.2011.2.
Taylor, Donald H. "An Analysis of Visual Watchkeeping". Journal of Navigation 44, no. 2 (May 1991): 152–58. http://dx.doi.org/10.1017/s0373463300009899.
Cole, Jason C., Lisa A. Fasnacht-Hill, Scott K. Robinson, and Caroline Cordahi. "Differentiation of Fluid, Visual, and Simultaneous Cognitive Tasks". Psychological Reports 89, no. 3 (December 2001): 541–46. http://dx.doi.org/10.2466/pr0.2001.89.3.541.
Shimizu, Toshiya, Yoriko Oguchi, Kiyotaka Hoshiai, Keiko Nagashima, Kiyoyuki Yamazaki, Takashi Itoh, and Katsuro Okamoto. "Analysis of cognitive functioning during visual target monitoring tasks (1)". Japanese journal of ergonomics 30, Supplement (1994): 210–11. http://dx.doi.org/10.5100/jje.30.supplement_210.
Oguchi, Yoriko, Toshiya Shimizu, Kiyotaka Hoshiai, Keiko Nagashima, Kiyoyuki Yamazaki, Takashi Itoh, and Katsuro Okamoto. "Analysis of cognitive functioning during visual target monitoring tasks (2)". Japanese journal of ergonomics 30, Supplement (1994): 212–13. http://dx.doi.org/10.5100/jje.30.supplement_212.
Mateeff, Stefan, Biljana Genova, and Joachim Hohnsbein. "Visual Analysis of Changes of Motion in Reaction-Time Tasks". Perception 34, no. 3 (March 2005): 341–56. http://dx.doi.org/10.1068/p5184.
Mecklinger, Axel, Burkhard Maess, Bertram Opitz, Erdmut Pfeifer, Douglas Cheyne, and Harold Weinberg. "A MEG analysis of the P300 in visual discrimination tasks". Electroencephalography and Clinical Neurophysiology/Evoked Potentials Section 108, no. 1 (January 1998): 45–56. http://dx.doi.org/10.1016/s0168-5597(97)00092-0.
Shin, Bok-Suk, Zezhong Xu, and Reinhard Klette. "Visual lane analysis and higher-order tasks: a concise review". Machine Vision and Applications 25, no. 6 (April 12, 2014): 1519–47. http://dx.doi.org/10.1007/s00138-014-0611-8.
Theses on the topic "Visual tasks analysis":
Kerracher, Natalie. "Tasks and visual techniques for the exploration of temporal graph data". Thesis, Edinburgh Napier University, 2017. http://researchrepository.napier.ac.uk/Output/977758.
Kang, Youn Ah. "Informing design of visual analytics systems for intelligence analysis: understanding users, user tasks, and tool usage". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44847.
Mukherjee, Anuradha. "Effect of Secondary Motor and Cognitive Tasks on Timed Up and Go Test in Older Adults". University of Toledo / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1375713209.
Eziolisa, Ositadimma Nnanna. "Investigation of Capabilities of Observers in a Watch Window Study". Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1401889055.
Benkirane, Fatima Ezzahra. "Integration of contextual knowledge in deep Learning modeling for vision-based scene analysis". Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCA002.
Computer vision has evolved considerably, from traditional methods to advanced Deep Learning (DL) models. One of the goals of computer vision tasks is to effectively emulate human perception. The classical DL pipeline depends entirely on visual features, which reflect only how humans visually perceive their surroundings. However, for humans to comprehensively understand their environment, their reasoning depends not only on what they see but also on their pre-acquired knowledge. Addressing this gap is essential, as achieving human-like reasoning requires a seamless combination of data-driven and knowledge-driven methods.
In this thesis, we propose new approaches to improve the performance of DL models by integrating Knowledge-Based Systems (KBS) within Deep Neural Networks (DNNs). The goal is to empower these networks to make informed decisions by leveraging both visual features and knowledge to emulate human-like visual analysis. These methodologies involve two main axes: first, defining the representation of the KBS so it incorporates information useful for a specific computer vision task; second, investigating how to integrate this knowledge into DNNs to enhance their performance. To do so, we worked on two main contributions.
The first contribution focuses on monocular depth estimation. Humans, for example, can estimate their distance to seen objects even using just one eye, based on what are called monocular cues. Our contribution integrates these monocular cues as human-like reasoning for monocular depth estimation within DNNs. For this purpose, we investigate the possibility of directly integrating geometric and semantic information into the monocular depth estimation process. We suggest using an ontology model in a DL context to represent the environment as a structured set of concepts linked by semantic relationships. Monocular-cue information is extracted through reasoning performed on the proposed ontology and is fed, together with the RGB image, in a multi-stream way into the DNNs. Our approach is validated and evaluated on widespread benchmark datasets.
The second contribution focuses on panoptic segmentation, which aims to identify and analyze all objects captured in an image. More precisely, we propose a new informed deep learning approach that combines the strengths of DNNs with additional knowledge about spatial relationships between objects. We chose spatial-relationship knowledge for this task because it can provide useful cues for resolving ambiguities, distinguishing between overlapping or similar object instances, and capturing the holistic structure of the scene. We propose a novel training methodology that integrates this knowledge directly into the DNN optimization process: a procedure extracts and represents spatial-relationship knowledge, which is then incorporated into training through a specially designed loss function. The performance of the proposed method was evaluated on various challenging datasets.
To validate the effectiveness of the proposed approaches for combining KBS and DNNs across different methodologies, we chose the urban environment and autonomous vehicles as our main use case. This domain is particularly interesting because it is a challenging, novel field in continuous development, with significant implications for human safety, comfort, and mobility. In conclusion, the proposed approaches show that integrating knowledge-driven and data-driven methods consistently leads to improved results: the integration improves the learning process of DNNs and enhances the results of computer vision tasks, yielding more accurate predictions. The challenge always lies in choosing the relevant knowledge for each task, representing it in the best structure to leverage meaningful information, and integrating it optimally into the DNN architecture.
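The knowledge-integrated training idea in this abstract (a data-driven loss augmented by a penalty for predictions that contradict known spatial relations) can be illustrated with a minimal sketch. This is not the thesis's actual formulation: the penalty form, class ids, relation set, and weighting factor are all illustrative assumptions, using a toy "sky appears above road" prior from urban scenes.

```python
# Illustrative sketch only: penalty form, classes, and weight are assumptions.

def knowledge_penalty(pred_labels, above_relations):
    """Count 'A above B' spatial relations violated by a per-pixel label map.

    pred_labels: list of rows of class ids (row 0 = top of the image).
    above_relations: (class_above, class_below) pairs from the knowledge base.
    """
    def mean_row(cls):
        rows = [r for r, row in enumerate(pred_labels) for v in row if v == cls]
        return sum(rows) / len(rows) if rows else None

    violations = 0
    for cls_above, cls_below in above_relations:
        ra, rb = mean_row(cls_above), mean_row(cls_below)
        # The 'above' class sitting lower on average contradicts the prior.
        if ra is not None and rb is not None and ra > rb:
            violations += 1
    return float(violations)

def combined_loss(task_loss, pred_labels, above_relations, lam=0.1):
    """Data-driven loss plus a weighted knowledge-violation penalty."""
    return task_loss + lam * knowledge_penalty(pred_labels, above_relations)

# Toy 4x4 urban scene: class 0 = sky (top half), class 1 = road (bottom half).
SKY, ROAD = 0, 1
labels = [[SKY] * 4, [SKY] * 4, [ROAD] * 4, [ROAD] * 4]
consistent = combined_loss(0.5, labels, [(SKY, ROAD)])      # prior satisfied
violated = combined_loss(0.5, labels[::-1], [(SKY, ROAD)])  # sky below road
```

In an actual DNN training loop the penalty would have to be a differentiable function of the network outputs rather than a discrete violation count; the sketch only shows how prior knowledge can enter the objective alongside the task loss.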
Huang, Xiaoke. "USING GRAPH MODELING IN SEVERAL VISUAL ANALYTIC TASKS". Kent State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=kent1467738860.
Mordeglia, Cristina. "The Home-Office Lighting Kit". Thesis, KTH, Ljusdesign, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297959.
Miller, Robert Howard. "A component task analysis of stereoscopic displays". Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/39685.
Tanner, Ashley E. "Implementation of a Task Analysis to Increase Reliability of the Visual Inspection of Functional Analysis Results". OpenSIUC, 2014. https://opensiuc.lib.siu.edu/theses/1430.
Zeried, Ferial M. "Effects of optical blur on visual performance and comfort of computer users". Thesis, Birmingham, Ala. : University of Alabama at Birmingham, 2007. http://www.mhsl.uab.edu/dt/2007p/zeried.pdf.
Books on the topic "Visual tasks analysis":
Chipman, Laure J., and United States. National Aeronautics and Space Administration, eds. A Graph theoretic approach to scene matching. [Washington, DC]: National Aeronautics and Space Administration, 1991.
A. H. C. van der Heijden. Attention in vision: Perception, communication, and action. New York: Psychology Press, 2003.
Martin, Graham R. What Drives Bird Senses? Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199694532.003.0008.
Baele, Stephane J., Katharine A. Boyd, and Travis G. Coan, eds. ISIS Propaganda. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190932459.001.0001.
Bosse, Heinrich, and Ursula Renner, eds. Literaturwissenschaft. Rombach Wissenschaft – ein Verlag in der Nomos Verlagsgesellschaft, 2021. http://dx.doi.org/10.5771/9783968217970.
A. H. C. van der Heijden. Attention in Vision: Perception, Communication and Action. Taylor & Francis Group, 2004.
Baker, Courtney R., ed. Emmett Till, Justice, and the Task of Recognition. University of Illinois Press, 2017. http://dx.doi.org/10.5406/illinois/9780252039485.003.0004.
Book chapters on the topic "Visual tasks analysis":
Lin, Liang, Dongyu Zhang, Ping Luo, and Wangmeng Zuo. "Human-Centric Visual Analysis: Tasks and Progress". In Human Centric Visual Analysis with Deep Learning, 15–25. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-2387-4_2.
Nunes, Afonso, Rui Figueiredo, and Plinio Moreno. "Learning to Perform Visual Tasks from Human Demonstrations". In Pattern Recognition and Image Analysis, 346–58. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31321-0_30.
Bhatia, Nitesh, Dibakar Sen, and Anand V. Pathak. "Visual Behavior Analysis of Human Performance in Precision Tasks". In Engineering Psychology and Cognitive Ergonomics, 95–106. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20373-7_10.
Conder, Jonathan, Josephine Jefferson, Nathan Pages, Khurram Jawed, Alireza Nejati, and Mark Sagar. "Efficient Transfer Learning for Visual Tasks via Continuous Optimization of Prompts". In Image Analysis and Processing – ICIAP 2022, 297–309. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06427-2_25.
Andrienko, Gennady, Natalia Andrienko, Fabian Patterson, Siming Chen, Robert Weibel, Haosheng Huang, Christos Doulkeridis et al. "Visual Analytics for Characterizing Mobility Aspects of Urban Context". In Urban Informatics, 727–55. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8983-6_40.
Bai, Lianfa, Jing Han, and Jiang Yue. "Multi-visual Tasks Based on Night-Vision Data Structure and Feature Analysis". In Night Vision Processing and Understanding, 45–85. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-1669-2_3.
Wang, Liang, and Jianxin Zhao. "Performance Accelerators". In Architecture of Advanced Numerical Analysis Systems, 191–213. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8853-5_7.
Heinemann, Moritz, Filip Sadlo, and Thomas Ertl. "Interactive Visualization of Droplet Dynamic Processes". In Fluid Mechanics and Its Applications, 29–46. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09008-0_2.
Silva, Eduardo L., Ana Filipa Sampaio, Luís F. Teixeira, and Maria João M. Vasconcelos. "Cervical Cancer Detection and Classification in Cytology Images Using a Hybrid Approach". In Advances in Visual Computing, 299–312. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-90436-4_24.
McGee, Fintan, Mohammad Ghoniem, Benoît Otjacques, Benjamin Renoust, Daniel Archambault, Andreas Kerren, Bruno Pinaud, Guy Melançon, Margit Pohl, and Tatiana von Landesberger. "Task Taxonomy for Multilayer Networks". In Visual Analysis of Multilayer Networks, 37–44. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-031-02608-9_4.
Conference proceedings on the topic "Visual tasks analysis":
Hihoud, Chaima, Beatriz Rey, Noura Aknin, Vera Pakhutik, Salhi El Mekki, Jose Tembl, and Mariano Alcaniz. "Analysis of brain activation during visual tasks". In 2012 International Conference on Multimedia Computing and Systems (ICMCS). IEEE, 2012. http://dx.doi.org/10.1109/icmcs.2012.6320185.
Wang, Changhan, Anirudh Jain, Danlu Chen, and Jiatao Gu. "VizSeq: a visual analysis toolkit for text generation tasks". In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-3043.
Rosero-Rodriguez, Christian Camilo, and Wilfredo Alfonso-Morales. "Automated Preprocessing Pipeline for EEG Analysis in Visual Imagery Tasks". In 2021 IEEE Colombian Conference on Applications of Computational Intelligence (ColCACI). IEEE, 2021. http://dx.doi.org/10.1109/colcaci52978.2021.9469578.
Båth, Magnus, Sara Zachrisson, and Lars Gunnar Månsson. "VGC analysis: application of the ROC methodology to visual grading tasks". In Medical Imaging, edited by Berkman Sahiner and David J. Manning. SPIE, 2008. http://dx.doi.org/10.1117/12.770687.
Barnett, Kevin D., and Mohan M. Trivedi. "Analysis Of Thermal Infrared And Visual Images For Industrial Inspection Tasks". In SPIE 1989 Technical Symposium on Aerospace Sensing, edited by Mohan M. Trivedi. SPIE, 1989. http://dx.doi.org/10.1117/12.969297.
Laha, Bireswar, Doug A. Bowman, David H. Laidlaw, and John J. Socha. "A classification of user tasks in visual analysis of volume data". In 2015 IEEE Scientific Visualization Conference (SciVis). IEEE, 2015. http://dx.doi.org/10.1109/scivis.2015.7429485.
Fuchen, Dongxin, Ningyue Peng, Haiyan Wang, Yafeng Niu, and Chengqi Xue. "The View Switching Cost Analysis by the Visuo-auditory Dual-task Paradigm". In Human Systems Engineering and Design (IHSED 2021) Future Trends and Applications. AHFE International, 2021. http://dx.doi.org/10.54941/ahfe1001182.
Inkaew, Narongrit, Nattaphon Charoenkitkamjorn, Chongkon Yangpaiboon, Montri Phothisonothai, and Chaiwat Nuthong. "Frequency component analysis of EEG recording on various visual tasks: Steady-state visual evoked potential experiment". In 2015 7th International Conference on Knowledge and Smart Technology (KST). IEEE, 2015. http://dx.doi.org/10.1109/kst.2015.7051483.
Hashimoto, Naohisa, Wu Yanbin, and Masaki Masuda. "Analysis of Bus Driver Actions for Development of Automated Bus Passenger Safety System - Bowtie Analysis-". In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1005240.
Peres, S. Camille, and Daniel Verona. "A Task-Analysis-Based Evaluation of Sonification Designs for Two sEMG Tasks". In The 22nd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2016. http://dx.doi.org/10.21785/icad2016.038.
Reports on the topic "Visual tasks analysis":
Semerikov, Serhiy O., Mykhailo M. Mintii, and Iryna S. Mintii. Review of the course "Development of Virtual and Augmented Reality Software" for STEM teachers: implementation results and improvement potentials. [n.p.], 2021. http://dx.doi.org/10.31812/123456789/4591.
Golovko, Khrystyna. TRAVEL REPORT BY ALEKSANDER JANTA-POŁCZYNSKI «INTO THE USSR» (1932): FROG PERSPECTIVE. Ivan Franko National University of Lviv, March 2021. http://dx.doi.org/10.30970/vjo.2021.50.11091.
Yatsymirska, Mariya. Мова війни і «контрнаступальна» лексика у стислих медійних текстах [The language of war and "counteroffensive" vocabulary in concise media texts]. Ivan Franko National University of Lviv, March 2023. http://dx.doi.org/10.30970/vjo.2023.52-53.11742.
Jacobsen, Nils. Linjebussens vekst og fall i den voksende byen: en studie av bybussenes geografiske kvalitet Stavanger – Sandnes 1920 – 2010. University of Stavanger, November 2019. http://dx.doi.org/10.31265/usps.244.