Selected scientific literature on the topic "Visual tasks analysis"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Contents
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Visual tasks analysis".
Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read its abstract online, if one is present in the metadata.
Journal articles on the topic "Visual tasks analysis"
Alexiev, Kiril, and T. Teodorvakarelsky. "Eye movement analysis in simple visual tasks". Computer Science and Information Systems, no. 00 (2021): 65. http://dx.doi.org/10.2298/csis210418065a.
Fukuda, Kyosuke. "Analysis of Eyeblink Activity during Discriminative Tasks". Perceptual and Motor Skills 79, no. 3_suppl (December 1994): 1599–608. http://dx.doi.org/10.2466/pms.1994.79.3f.1599.
Goodall, John R. "An Evaluation of Visual and Textual Network Analysis Tools". Information Visualization 10, no. 2 (April 2011): 145–57. http://dx.doi.org/10.1057/ivs.2011.2.
Taylor, Donald H. "An Analysis of Visual Watchkeeping". Journal of Navigation 44, no. 2 (May 1991): 152–58. http://dx.doi.org/10.1017/s0373463300009899.
Cole, Jason C., Lisa A. Fasnacht-Hill, Scott K. Robinson, and Caroline Cordahi. "Differentiation of Fluid, Visual, and Simultaneous Cognitive Tasks". Psychological Reports 89, no. 3 (December 2001): 541–46. http://dx.doi.org/10.2466/pr0.2001.89.3.541.
Shimizu, Toshiya, Yoriko Oguchi, Kiyotaka Hoshiai, Keiko Nagashima, Kiyoyuki Yamazaki, Takashi Itoh, and Katsuro Okamoto. "Analysis of cognitive functioning during visual target monitoring tasks (1)". Japanese Journal of Ergonomics 30, Supplement (1994): 210–11. http://dx.doi.org/10.5100/jje.30.supplement_210.
Oguchi, Yoriko, Toshiya Shimizu, Kiyotaka Hoshiai, Keiko Nagashima, Kiyoyuki Yamazaki, Takashi Itoh, and Katsuro Okamoto. "Analysis of cognitive functioning during visual target monitoring tasks (2)". Japanese Journal of Ergonomics 30, Supplement (1994): 212–13. http://dx.doi.org/10.5100/jje.30.supplement_212.
Mateeff, Stefan, Biljana Genova, and Joachim Hohnsbein. "Visual Analysis of Changes of Motion in Reaction-Time Tasks". Perception 34, no. 3 (March 2005): 341–56. http://dx.doi.org/10.1068/p5184.
Mecklinger, Axel, Burkhard Maess, Bertram Opitz, Erdmut Pfeifer, Douglas Cheyne, and Harold Weinberg. "A MEG analysis of the P300 in visual discrimination tasks". Electroencephalography and Clinical Neurophysiology/Evoked Potentials Section 108, no. 1 (January 1998): 45–56. http://dx.doi.org/10.1016/s0168-5597(97)00092-0.
Shin, Bok-Suk, Zezhong Xu, and Reinhard Klette. "Visual lane analysis and higher-order tasks: a concise review". Machine Vision and Applications 25, no. 6 (April 12, 2014): 1519–47. http://dx.doi.org/10.1007/s00138-014-0611-8.
Theses / dissertations on the topic "Visual tasks analysis"
Kerracher, Natalie. "Tasks and visual techniques for the exploration of temporal graph data". Thesis, Edinburgh Napier University, 2017. http://researchrepository.napier.ac.uk/Output/977758.
Kang, Youn Ah. "Informing design of visual analytics systems for intelligence analysis: understanding users, user tasks, and tool usage". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44847.
Mukherjee, Anuradha. "Effect of Secondary Motor and Cognitive Tasks on Timed Up and Go Test in Older Adults". University of Toledo / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1375713209.
Eziolisa, Ositadimma Nnanna. "Investigation of Capabilities of Observers in a Watch Window Study". Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1401889055.
Benkirane, Fatima Ezzahra. "Integration of contextual knowledge in deep Learning modeling for vision-based scene analysis". Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCA002.
Computer vision has evolved significantly, from traditional methods to advanced Deep Learning (DL) models. One goal of computer vision is to emulate human perception effectively. The classical DL pipeline depends entirely on visual features, which reflects only how humans visually perceive their surroundings. For humans to comprehensively understand their environment, however, their reasoning depends not only on what they see but also on pre-acquired knowledge. Addressing this gap is essential, as achieving human-like reasoning requires a seamless combination of data-driven and knowledge-driven methods. In this thesis, we propose new approaches to improve the performance of DL models by integrating Knowledge-Based Systems (KBS) within Deep Neural Networks (DNNs). The goal is to enable these networks to make informed decisions by leveraging both visual features and knowledge, emulating human-like visual analysis. The methodology involves two main axes: first, defining a KBS representation that incorporates useful information for a specific computer vision task; second, investigating how to integrate this knowledge into DNNs to enhance their performance. To this end, we make two main contributions. The first focuses on monocular depth estimation. Humans, for example, can estimate their distance to seen objects even with just one eye, based on so-called monocular cues. Our contribution integrates these monocular cues, as a form of human-like reasoning, into DNN-based monocular depth estimation. For this purpose, we investigate directly incorporating geometric and semantic information into the depth estimation process, and we propose an ontology model, used in a DL context, that represents the environment as a structured set of concepts linked by semantic relationships.
Monocular cue information is extracted by reasoning over the proposed ontology and is fed, together with the RGB image, into the DNN in a multi-stream fashion. The approach is validated and evaluated on widely used benchmark datasets. The second contribution addresses panoptic segmentation, which aims to identify and analyze all objects captured in an image. More precisely, we propose a new informed deep-learning approach that combines the strengths of DNNs with additional knowledge about spatial relationships between objects. Spatial-relationship knowledge was chosen for this task because it provides useful cues for resolving ambiguities, distinguishing between overlapping or similar object instances, and capturing the holistic structure of the scene. We propose a novel training methodology that integrates this knowledge directly into the DNN optimization process: spatial-relationship knowledge is extracted, represented, and incorporated into training through a specially designed loss function. The performance of the proposed method is evaluated on several challenging datasets. To validate the effectiveness of these approaches for combining KBS and DNNs across different methodologies, we chose urban environments and autonomous vehicles as the main use case, a challenging and rapidly developing field with significant implications for human safety, comfort, and mobility. In conclusion, the proposed approaches confirm that integrating knowledge-driven and data-driven methods consistently improves results: it strengthens the learning process of DNNs and yields more accurate predictions in computer vision tasks.
The challenge always lies in choosing the relevant knowledge for each task, representing it in the structure that best conveys meaningful information, and integrating it optimally into the DNN architecture.
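The abstract's core idea, a standard task loss combined with a penalty derived from spatial-relationship knowledge, can be sketched in a minimal, framework-free form. This is an illustrative sketch, not the thesis's actual method: the centroid representation, the `expected_above` relation, and the weight `lam` are assumptions chosen only to show the structure of a knowledge-informed objective.

```python
def spatial_relation_penalty(centroids, expected_above):
    """Penalize predictions that violate known vertical ordering.

    centroids: dict mapping class name -> (x, y) predicted centroid,
               in image coordinates (smaller y = higher in the image).
    expected_above: list of (upper, lower) class-name pairs that the
                    knowledge base asserts (e.g. sky above road).
    """
    penalty = 0.0
    for upper, lower in expected_above:
        y_upper = centroids[upper][1]
        y_lower = centroids[lower][1]
        # Hinge-style term: zero when the relation holds, grows with violation.
        penalty += max(0.0, y_upper - y_lower)
    return penalty


def informed_loss(task_loss, centroids, expected_above, lam=0.1):
    """Total training loss = standard task loss + weighted knowledge penalty."""
    return task_loss + lam * spatial_relation_penalty(centroids, expected_above)


# A prediction consistent with the knowledge adds no penalty...
ok = informed_loss(1.0, {"sky": (64, 10), "road": (64, 200)}, [("sky", "road")])
# ...while one placing "sky" below "road" is penalized.
bad = informed_loss(1.0, {"sky": (64, 300), "road": (64, 200)}, [("sky", "road")])
```

In the thesis the knowledge term would be a differentiable function of network outputs inside the optimizer's loss; plain Python stands in for that machinery here to expose only the shape of the combined objective.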
Huang, Xiaoke. "Using Graph Modeling in Several Visual Analytic Tasks". Kent State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=kent1467738860.
Mordeglia, Cristina. "The Home-Office Lighting Kit". Thesis, KTH, Ljusdesign, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297959.
Miller, Robert Howard. "A component task analysis of stereoscopic displays". Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/39685.
Tanner, Ashley E. "Implementation of a Task Analysis to Increase Reliability of the Visual Inspection of Functional Analysis Results". OpenSIUC, 2014. https://opensiuc.lib.siu.edu/theses/1430.
Zeried, Ferial M. "Effects of optical blur on visual performance and comfort of computer users". Thesis, Birmingham, Ala.: University of Alabama at Birmingham, 2007. http://www.mhsl.uab.edu/dt/2007p/zeried.pdf.
Texto completo da fonteLivros sobre o assunto "Visual tasks analysis"
Chipman, Laure J., and United States National Aeronautics and Space Administration, eds. A Graph Theoretic Approach to Scene Matching. [Washington, DC]: National Aeronautics and Space Administration, 1991.
A. H. C. van der Heijden. Attention in Vision: Perception, Communication, and Action. New York: Psychology Press, 2003.
Martin, Graham R. What Drives Bird Senses? Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199694532.003.0008.
Baele, Stephane J., Katharine A. Boyd, and Travis G. Coan, eds. ISIS Propaganda. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190932459.001.0001.
Bosse, Heinrich, and Ursula Renner, eds. Literaturwissenschaft. Rombach Wissenschaft – ein Verlag in der Nomos Verlagsgesellschaft, 2021. http://dx.doi.org/10.5771/9783968217970.
Texto completo da fonteA. H. C. van der Heijden. Attention in Vision: Perception, Communication and Action. Taylor & Francis Group, 2004.
Baker, Courtney R., ed. Emmett Till, Justice, and the Task of Recognition. University of Illinois Press, 2017. http://dx.doi.org/10.5406/illinois/9780252039485.003.0004.
Book chapters on the topic "Visual tasks analysis"
Lin, Liang, Dongyu Zhang, Ping Luo, and Wangmeng Zuo. "Human-Centric Visual Analysis: Tasks and Progress". In Human Centric Visual Analysis with Deep Learning, 15–25. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-2387-4_2.
Nunes, Afonso, Rui Figueiredo, and Plinio Moreno. "Learning to Perform Visual Tasks from Human Demonstrations". In Pattern Recognition and Image Analysis, 346–58. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31321-0_30.
Bhatia, Nitesh, Dibakar Sen, and Anand V. Pathak. "Visual Behavior Analysis of Human Performance in Precision Tasks". In Engineering Psychology and Cognitive Ergonomics, 95–106. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20373-7_10.
Conder, Jonathan, Josephine Jefferson, Nathan Pages, Khurram Jawed, Alireza Nejati, and Mark Sagar. "Efficient Transfer Learning for Visual Tasks via Continuous Optimization of Prompts". In Image Analysis and Processing – ICIAP 2022, 297–309. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06427-2_25.
Andrienko, Gennady, Natalia Andrienko, Fabian Patterson, Siming Chen, Robert Weibel, Haosheng Huang, Christos Doulkeridis, et al. "Visual Analytics for Characterizing Mobility Aspects of Urban Context". In Urban Informatics, 727–55. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8983-6_40.
Bai, Lianfa, Jing Han, and Jiang Yue. "Multi-visual Tasks Based on Night-Vision Data Structure and Feature Analysis". In Night Vision Processing and Understanding, 45–85. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-1669-2_3.
Wang, Liang, and Jianxin Zhao. "Performance Accelerators". In Architecture of Advanced Numerical Analysis Systems, 191–213. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8853-5_7.
Heinemann, Moritz, Filip Sadlo, and Thomas Ertl. "Interactive Visualization of Droplet Dynamic Processes". In Fluid Mechanics and Its Applications, 29–46. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09008-0_2.
Silva, Eduardo L., Ana Filipa Sampaio, Luís F. Teixeira, and Maria João M. Vasconcelos. "Cervical Cancer Detection and Classification in Cytology Images Using a Hybrid Approach". In Advances in Visual Computing, 299–312. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-90436-4_24.
McGee, Fintan, Mohammad Ghoniem, Benoît Otjacques, Benjamin Renoust, Daniel Archambault, Andreas Kerren, Bruno Pinaud, Guy Melançon, Margit Pohl, and Tatiana von Landesberger. "Task Taxonomy for Multilayer Networks". In Visual Analysis of Multilayer Networks, 37–44. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-031-02608-9_4.
Conference papers on the topic "Visual tasks analysis"
Hihoud, Chaima, Beatriz Rey, Noura Aknin, Vera Pakhutik, Salhi El Mekki, Jose Tembl, and Mariano Alcaniz. "Analysis of brain activation during visual tasks". In 2012 International Conference on Multimedia Computing and Systems (ICMCS). IEEE, 2012. http://dx.doi.org/10.1109/icmcs.2012.6320185.
Wang, Changhan, Anirudh Jain, Danlu Chen, and Jiatao Gu. "VizSeq: a visual analysis toolkit for text generation tasks". In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-3043.
Rosero-Rodriguez, Christian Camilo, and Wilfredo Alfonso-Morales. "Automated Preprocessing Pipeline for EEG Analysis in Visual Imagery Tasks". In 2021 IEEE Colombian Conference on Applications of Computational Intelligence (ColCACI). IEEE, 2021. http://dx.doi.org/10.1109/colcaci52978.2021.9469578.
Båth, Magnus, Sara Zachrisson, and Lars Gunnar Månsson. "VGC analysis: application of the ROC methodology to visual grading tasks". In Medical Imaging, edited by Berkman Sahiner and David J. Manning. SPIE, 2008. http://dx.doi.org/10.1117/12.770687.
Barnett, Kevin D., and Mohan M. Trivedi. "Analysis of Thermal Infrared and Visual Images for Industrial Inspection Tasks". In SPIE 1989 Technical Symposium on Aerospace Sensing, edited by Mohan M. Trivedi. SPIE, 1989. http://dx.doi.org/10.1117/12.969297.
Laha, Bireswar, Doug A. Bowman, David H. Laidlaw, and John J. Socha. "A classification of user tasks in visual analysis of volume data". In 2015 IEEE Scientific Visualization Conference (SciVis). IEEE, 2015. http://dx.doi.org/10.1109/scivis.2015.7429485.
Fuchen, Dongxin, Ningyue Peng, Haiyan Wang, Yafeng Niu, and Chengqi Xue. "The View Switching Cost Analysis by the Visuo-auditory Dual-task Paradigm". In Human Systems Engineering and Design (IHSED 2021): Future Trends and Applications. AHFE International, 2021. http://dx.doi.org/10.54941/ahfe1001182.
Inkaew, Narongrit, Nattaphon Charoenkitkamjorn, Chongkon Yangpaiboon, Montri Phothisonothai, and Chaiwat Nuthong. "Frequency component analysis of EEG recording on various visual tasks: Steady-state visual evoked potential experiment". In 2015 7th International Conference on Knowledge and Smart Technology (KST). IEEE, 2015. http://dx.doi.org/10.1109/kst.2015.7051483.
Hashimoto, Naohisa, Wu Yanbin, and Masaki Masuda. "Analysis of Bus Driver Actions for Development of Automated Bus Passenger Safety System – Bowtie Analysis". In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1005240.
Peres, S. Camille, and Daniel Verona. "A Task-Analysis-Based Evaluation of Sonification Designs for Two sEMG Tasks". In The 22nd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2016. http://dx.doi.org/10.21785/icad2016.038.
Texto completo da fonteRelatórios de organizações sobre o assunto "Visual tasks analysis"
Semerikov, Serhiy O., Mykhailo M. Mintii, and Iryna S. Mintii. Review of the course "Development of Virtual and Augmented Reality Software" for STEM teachers: implementation results and improvement potentials. [б. в.], 2021. http://dx.doi.org/10.31812/123456789/4591.
Golovko, Khrystyna. TRAVEL REPORT BY ALEKSANDER JANTA-POŁCZYNSKI «INTO THE USSR» (1932): FROG PERSPECTIVE. Ivan Franko National University of Lviv, March 2021. http://dx.doi.org/10.30970/vjo.2021.50.11091.
Yatsymirska, Mariya. Мова війни і «контрнаступальна» лексика у стислих медійних текстах [The language of war and "counteroffensive" vocabulary in concise media texts]. Ivan Franko National University of Lviv, March 2023. http://dx.doi.org/10.30970/vjo.2023.52-53.11742.
Jacobsen, Nils. Linjebussens vekst og fall i den voksende byen: en studie av bybussenes geografiske kvalitet Stavanger – Sandnes 1920–2010 [The rise and fall of the fixed-route bus in the growing city: a study of the geographic quality of city buses, Stavanger – Sandnes 1920–2010]. University of Stavanger, November 2019. http://dx.doi.org/10.31265/usps.244.