Selection of scholarly literature on the topic "Visual tasks analysis"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Contents
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Visual tasks analysis".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in its metadata.
Journal articles on the topic "Visual tasks analysis"
Alexiev, Kiril, and Teodor Vakarelsky. "Eye movement analysis in simple visual tasks". Computer Science and Information Systems, no. 00 (2021): 65. http://dx.doi.org/10.2298/csis210418065a.
Fukuda, Kyosuke. "Analysis of Eyeblink Activity during Discriminative Tasks". Perceptual and Motor Skills 79, no. 3_suppl (December 1994): 1599–608. http://dx.doi.org/10.2466/pms.1994.79.3f.1599.
Goodall, John R. "An Evaluation of Visual and Textual Network Analysis Tools". Information Visualization 10, no. 2 (April 2011): 145–57. http://dx.doi.org/10.1057/ivs.2011.2.
Taylor, Donald H. "An Analysis of Visual Watchkeeping". Journal of Navigation 44, no. 2 (May 1991): 152–58. http://dx.doi.org/10.1017/s0373463300009899.
Cole, Jason C., Lisa A. Fasnacht-Hill, Scott K. Robinson, and Caroline Cordahi. "Differentiation of Fluid, Visual, and Simultaneous Cognitive Tasks". Psychological Reports 89, no. 3 (December 2001): 541–46. http://dx.doi.org/10.2466/pr0.2001.89.3.541.
Shimizu, Toshiya, Yoriko Oguchi, Kiyotaka Hoshiai, Keiko Nagashima, Kiyoyuki Yamazaki, Takashi Itoh, and Katsuro Okamoto. "Analysis of cognitive functioning during visual target monitoring tasks (1)". Japanese Journal of Ergonomics 30, Supplement (1994): 210–11. http://dx.doi.org/10.5100/jje.30.supplement_210.
Oguchi, Yoriko, Toshiya Shimizu, Kiyotaka Hoshiai, Keiko Nagashima, Kiyoyuki Yamazaki, Takashi Itoh, and Katsuro Okamoto. "Analysis of cognitive functioning during visual target monitoring tasks (2)". Japanese Journal of Ergonomics 30, Supplement (1994): 212–13. http://dx.doi.org/10.5100/jje.30.supplement_212.
Mateeff, Stefan, Biljana Genova, and Joachim Hohnsbein. "Visual Analysis of Changes of Motion in Reaction-Time Tasks". Perception 34, no. 3 (March 2005): 341–56. http://dx.doi.org/10.1068/p5184.
Mecklinger, Axel, Burkhard Maess, Bertram Opitz, Erdmut Pfeifer, Douglas Cheyne, and Harold Weinberg. "A MEG analysis of the P300 in visual discrimination tasks". Electroencephalography and Clinical Neurophysiology/Evoked Potentials Section 108, no. 1 (January 1998): 45–56. http://dx.doi.org/10.1016/s0168-5597(97)00092-0.
Shin, Bok-Suk, Zezhong Xu, and Reinhard Klette. "Visual lane analysis and higher-order tasks: a concise review". Machine Vision and Applications 25, no. 6 (April 12, 2014): 1519–47. http://dx.doi.org/10.1007/s00138-014-0611-8.
Der volle Inhalt der QuelleDissertationen zum Thema "Visual tasks analysis"
Kerracher, Natalie. "Tasks and visual techniques for the exploration of temporal graph data". Thesis, Edinburgh Napier University, 2017. http://researchrepository.napier.ac.uk/Output/977758.
Kang, Youn Ah. "Informing design of visual analytics systems for intelligence analysis: understanding users, user tasks, and tool usage". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44847.
Mukherjee, Anuradha. "Effect of Secondary Motor and Cognitive Tasks on Timed Up and Go Test in Older Adults". University of Toledo / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1375713209.
Eziolisa, Ositadimma Nnanna. "Investigation of Capabilities of Observers in a Watch Window Study". Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1401889055.
Benkirane, Fatima Ezzahra. "Integration of contextual knowledge in deep learning modeling for vision-based scene analysis". Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCA002.
Der volle Inhalt der QuelleComputer vision has made an important evolution starting from traditional methods to advanced Deep Learning (DL) models. One of the goals of computer vision tasks is to effectively emulate human perception. The classical process of DL models is completely dependent on visual features, which only reflects how humans visually perceive their surroundings. However, for humans to comprehensively understand their environment, their reasoning not only depends on what they see but also on their pre-acquired knowledge. Addressing this gap is essential as achieving human-like reasoning requires a seamless combination of data-driven and knowledge-driven methods. In this thesis, we propose new approaches to improve the performance of DL models by integrating Knowledge-Based Systems (KBS) within Deep Neural Networks (DNNs). The goal is to empower these networks to make informed decisions by leveraging both visual features and knowledge to emulate human-like visual analysis. These methodologies involve two main axes. First, define the representation of KBS to incorporate useful information for a specific computer vision task. Second, investigate how to integrate this knowledge into DNNs to enhance their performance. To do so, we worked on two main contributions. The first work focuses on monocular depth estimation. Considering humans as an example, they can estimate their distance with respect to seen objects, even using just one eye, based on what is called monocular cues. Our contribution involves integrating these monocular cues as human-like reasoning for monocular depth estimation within DNNs. For this purpose, we investigate the possibility of directly integrating geometric and semantic information into the monocular depth estimation process. We suggest using an ontology model in a DL context to represent the environment as a structured set of concepts linked with semantic relationships. Monocular cues information is extracted through reasoning performed on the proposed ontology and is fed together with the RGB image in a multi-stream way into the DNNs. Our approach is validated and evaluated on widespread benchmark datasets. The second work focuses on panoptic segmentation task that aims to identify and analyze all objects captured in an image. More precisely, we propose a new informed deep learning approach that combines the strengths of DNNs with some additional knowledge about spatial relationships between objects. We have chosen spatial relationships knowledge for this task because it can provide useful cues for resolving ambiguities, distinguishing between overlapping or similar object instances, and capturing the holistic structure of the scene. More precisely, we propose a novel training methodology that integrates knowledge directly into the DNNs optimization process. Our approach includes a process for extracting and representing spatial relationships knowledge, which is incorporated into the training using a specially designed loss function. The performance of the proposed method was also evaluated on various challenging datasets. To validate the effectiveness of the proposed approaches for combining KBS and DNNs regarding different methodologies, we have chosen the urban environment and autonomous vehicles as our main use case application. This domain is particularly interesting because it is a challenging and novel field in continuous development, with significant implications for the safety, comfort and mobility of humans. 
In conclusion, the proposed approaches confirm that integrating knowledge-driven and data-driven methods consistently improves results: the integration strengthens the learning process of DNNs and yields more accurate predictions on computer vision tasks. The challenge always lies in choosing the relevant knowledge for each task, representing it in a structure that exposes meaningful information, and integrating it optimally into the DNN architecture.
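The "specially designed loss function" mentioned in this abstract is not reproduced here. As a purely illustrative sketch of the general informed-training pattern (not the thesis's actual implementation), a spatial-relationship constraint can be folded into training as a differentiable penalty added to the standard task loss. The class indices and the toy "sky must not appear below road" rule in the PyTorch-style Python below are invented for illustration:

    import torch
    import torch.nn.functional as F

    def knowledge_informed_loss(logits, targets, penalty_weight=0.1):
        # logits: (B, C, H, W) raw class scores; targets: (B, H, W) class indices.
        # Standard data-driven term: pixel-wise cross-entropy.
        task_loss = F.cross_entropy(logits, targets)

        # Knowledge-driven term (hypothetical rule): class 0 ("sky") should
        # not be predicted below class 1 ("road") in the same image column.
        probs = logits.softmax(dim=1)
        sky, road = probs[:, 0], probs[:, 1]         # each (B, H, W)
        road_mass_above = torch.cumsum(road, dim=1)  # cumulative over height
        violation = (sky * road_mass_above).mean()   # soft count of violations

        return task_loss + penalty_weight * violation

    # Example: random batch of 2 images, 3 classes, 8x8 pixels.
    logits = torch.randn(2, 3, 8, 8, requires_grad=True)
    targets = torch.randint(0, 3, (2, 8, 8))
    knowledge_informed_loss(logits, targets).backward()

Because the penalty is differentiable, gradient descent balances fitting the ground-truth labels against honoring the encoded spatial relation; an approach like the one described above would derive such constraints from a knowledge base rather than hard-coding them.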
Huang, Xiaoke. "Using Graph Modeling in Several Visual Analytic Tasks". Kent State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=kent1467738860.
Mordeglia, Cristina. "The Home-Office Lighting Kit". Thesis, KTH, Ljusdesign, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297959.
Miller, Robert Howard. "A component task analysis of stereoscopic displays". Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/39685.
Tanner, Ashley E. "Implementation of a Task Analysis to Increase Reliability of the Visual Inspection of Functional Analysis Results". OpenSIUC, 2014. https://opensiuc.lib.siu.edu/theses/1430.
Zeried, Ferial M. "Effects of optical blur on visual performance and comfort of computer users". Thesis, Birmingham, Ala.: University of Alabama at Birmingham, 2007. http://www.mhsl.uab.edu/dt/2007p/zeried.pdf.
Der volle Inhalt der QuelleBücher zum Thema "Visual tasks analysis"
Chipman, Laure J., and United States National Aeronautics and Space Administration, eds. A Graph Theoretic Approach to Scene Matching. [Washington, DC]: National Aeronautics and Space Administration, 1991.
Van der Heijden, A. H. C. Attention in Vision: Perception, Communication, and Action. New York: Psychology Press, 2003.
Martin, Graham R. What Drives Bird Senses? Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199694532.003.0008.
Baele, Stephane J., Katharine A. Boyd, and Travis G. Coan, eds. ISIS Propaganda. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190932459.001.0001.
Bosse, Heinrich, and Ursula Renner, eds. Literaturwissenschaft. Rombach Wissenschaft – ein Verlag in der Nomos Verlagsgesellschaft, 2021. http://dx.doi.org/10.5771/9783968217970.
Van der Heijden, A. H. C. Attention in Vision: Perception, Communication and Action. Taylor & Francis Group, 2004.
Baker, Courtney R., ed. Emmett Till, Justice, and the Task of Recognition. University of Illinois Press, 2017. http://dx.doi.org/10.5406/illinois/9780252039485.003.0004.
Der volle Inhalt der QuelleBuchteile zum Thema "Visual tasks analysis"
Lin, Liang, Dongyu Zhang, Ping Luo, and Wangmeng Zuo. "Human-Centric Visual Analysis: Tasks and Progress". In Human Centric Visual Analysis with Deep Learning, 15–25. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-2387-4_2.
Nunes, Afonso, Rui Figueiredo, and Plinio Moreno. "Learning to Perform Visual Tasks from Human Demonstrations". In Pattern Recognition and Image Analysis, 346–58. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31321-0_30.
Bhatia, Nitesh, Dibakar Sen, and Anand V. Pathak. "Visual Behavior Analysis of Human Performance in Precision Tasks". In Engineering Psychology and Cognitive Ergonomics, 95–106. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20373-7_10.
Conder, Jonathan, Josephine Jefferson, Nathan Pages, Khurram Jawed, Alireza Nejati, and Mark Sagar. "Efficient Transfer Learning for Visual Tasks via Continuous Optimization of Prompts". In Image Analysis and Processing – ICIAP 2022, 297–309. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06427-2_25.
Andrienko, Gennady, Natalia Andrienko, Fabian Patterson, Siming Chen, Robert Weibel, Haosheng Huang, Christos Doulkeridis, et al. "Visual Analytics for Characterizing Mobility Aspects of Urban Context". In Urban Informatics, 727–55. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8983-6_40.
Bai, Lianfa, Jing Han, and Jiang Yue. "Multi-visual Tasks Based on Night-Vision Data Structure and Feature Analysis". In Night Vision Processing and Understanding, 45–85. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-1669-2_3.
Wang, Liang, and Jianxin Zhao. "Performance Accelerators". In Architecture of Advanced Numerical Analysis Systems, 191–213. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8853-5_7.
Heinemann, Moritz, Filip Sadlo, and Thomas Ertl. "Interactive Visualization of Droplet Dynamic Processes". In Fluid Mechanics and Its Applications, 29–46. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09008-0_2.
Silva, Eduardo L., Ana Filipa Sampaio, Luís F. Teixeira, and Maria João M. Vasconcelos. "Cervical Cancer Detection and Classification in Cytology Images Using a Hybrid Approach". In Advances in Visual Computing, 299–312. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-90436-4_24.
McGee, Fintan, Mohammad Ghoniem, Benoît Otjacques, Benjamin Renoust, Daniel Archambault, Andreas Kerren, Bruno Pinaud, Guy Melançon, Margit Pohl, and Tatiana von Landesberger. "Task Taxonomy for Multilayer Networks". In Visual Analysis of Multilayer Networks, 37–44. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-031-02608-9_4.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Visual tasks analysis"
Hihoud, Chaima, Beatriz Rey, Noura Aknin, Vera Pakhutik, Salhi El Mekki, Jose Tembl, and Mariano Alcaniz. "Analysis of brain activation during visual tasks". In 2012 International Conference on Multimedia Computing and Systems (ICMCS). IEEE, 2012. http://dx.doi.org/10.1109/icmcs.2012.6320185.
Wang, Changhan, Anirudh Jain, Danlu Chen, and Jiatao Gu. "VizSeq: a visual analysis toolkit for text generation tasks". In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-3043.
Rosero-Rodriguez, Christian Camilo, and Wilfredo Alfonso-Morales. "Automated Preprocessing Pipeline for EEG Analysis in Visual Imagery Tasks". In 2021 IEEE Colombian Conference on Applications of Computational Intelligence (ColCACI). IEEE, 2021. http://dx.doi.org/10.1109/colcaci52978.2021.9469578.
Båth, Magnus, Sara Zachrisson, and Lars Gunnar Månsson. "VGC analysis: application of the ROC methodology to visual grading tasks". In Medical Imaging, edited by Berkman Sahiner and David J. Manning. SPIE, 2008. http://dx.doi.org/10.1117/12.770687.
Barnett, Kevin D., and Mohan M. Trivedi. "Analysis of Thermal Infrared and Visual Images for Industrial Inspection Tasks". In SPIE 1989 Technical Symposium on Aerospace Sensing, edited by Mohan M. Trivedi. SPIE, 1989. http://dx.doi.org/10.1117/12.969297.
Laha, Bireswar, Doug A. Bowman, David H. Laidlaw, and John J. Socha. "A classification of user tasks in visual analysis of volume data". In 2015 IEEE Scientific Visualization Conference (SciVis). IEEE, 2015. http://dx.doi.org/10.1109/scivis.2015.7429485.
Fuchen, Dongxin, Ningyue Peng, Haiyan Wang, Yafeng Niu, and Chengqi Xue. "The View Switching Cost Analysis by the Visuo-auditory Dual-task Paradigm". In Human Systems Engineering and Design (IHSED 2021): Future Trends and Applications. AHFE International, 2021. http://dx.doi.org/10.54941/ahfe1001182.
Inkaew, Narongrit, Nattaphon Charoenkitkamjorn, Chongkon Yangpaiboon, Montri Phothisonothai, and Chaiwat Nuthong. "Frequency component analysis of EEG recording on various visual tasks: Steady-state visual evoked potential experiment". In 2015 7th International Conference on Knowledge and Smart Technology (KST). IEEE, 2015. http://dx.doi.org/10.1109/kst.2015.7051483.
Hashimoto, Naohisa, Wu Yanbin, and Masaki Masuda. "Analysis of Bus Driver Actions for Development of Automated Bus Passenger Safety System - Bowtie Analysis". In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1005240.
Peres, S. Camille, and Daniel Verona. "A Task-Analysis-Based Evaluation of Sonification Designs for Two sEMG Tasks". In The 22nd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2016. http://dx.doi.org/10.21785/icad2016.038.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Visual tasks analysis"
Semerikov, Serhiy O., Mykhailo M. Mintii, and Iryna S. Mintii. Review of the course "Development of Virtual and Augmented Reality Software" for STEM teachers: implementation results and improvement potentials. [n.p.], 2021. http://dx.doi.org/10.31812/123456789/4591.
Golovko, Khrystyna. Travel Report by Aleksander Janta-Połczynski "Into the USSR" (1932): Frog Perspective. Ivan Franko National University of Lviv, March 2021. http://dx.doi.org/10.30970/vjo.2021.50.11091.
Yatsymirska, Mariya. Мова війни і «контрнаступальна» лексика у стислих медійних текстах [The language of war and "counteroffensive" vocabulary in concise media texts]. Ivan Franko National University of Lviv, March 2023. http://dx.doi.org/10.30970/vjo.2023.52-53.11742.
Jacobsen, Nils. Linjebussens vekst og fall i den voksende byen: en studie av bybussenes geografiske kvalitet Stavanger – Sandnes 1920–2010 [The rise and fall of the scheduled bus in the growing city: a study of the geographical quality of city buses, Stavanger – Sandnes 1920–2010]. University of Stavanger, November 2019. http://dx.doi.org/10.31265/usps.244.