Journal articles on the topic "Visual tasks analysis"

Follow this link to see other types of publications on the topic: Visual tasks analysis.

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

See the top 50 journal articles for research on the topic "Visual tasks analysis".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract online, if it is included in the metadata.

Browse journal articles from many scientific fields and compile an accurate bibliography.

1

Alexiev, Kiril, and Teodor Vakarelsky. "Eye movement analysis in simple visual tasks". Computer Science and Information Systems, no. 00 (2021): 65. http://dx.doi.org/10.2298/csis210418065a.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The small eye movements made during fixation on an image element provide knowledge about human visual information perception. An in-depth analysis of these movements can reveal the influence of personality, mood and mental state of the examined subject on the process of perception. Modern eye-tracking technology provides us with the necessary technical means to study these movements. Nevertheless, many problems remain open. In the present paper, two approaches for noise cancellation in the eye-tracker signal and two approaches for microsaccade detection are proposed. The analysis of the obtained results can be a good starting point for interpretation by neurobiologists of the causes of different types of movement and their dependence on the individuality of the observed person and their specific mental and physical condition.
2

Fukuda, Kyosuke. "Analysis of Eyeblink Activity during Discriminative Tasks". Perceptual and Motor Skills 79, no. 3_suppl (December 1994): 1599–608. http://dx.doi.org/10.2466/pms.1994.79.3f.1599.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
To evaluate the blinking pattern during and after cognitive processing, 10 subjects' eyeblinks were recorded by a videotape recording camera placed 100 cm from the subjects' side. The subjects' task was to discriminate two kinds of auditory tones presented serially and to discriminate two kinds of visual stimuli presented serially. Treatments were composed of the baseline condition before the experiment, the visual task with no discrimination, the visual discriminative task, the auditory task with no discrimination, and the auditory discriminative task. The blink rate in each treatment, the temporal distribution of blinks poststimulus, and the blink waveform were evaluated. Although blinks were not inhibited during tasks, frequent blinks after tasks were observed in both modalities. Blinks concentrated between 300 msec. and 800 msec. after the discriminated stimulus and formed the blink-rate peak. The closing velocity of the lid in the blink-rate peak was lower after the auditory stimulus. Moreover, the lid's opening velocity after the auditory discrimination was higher. These results indicated that the eyelid closed slowly and opened quickly after the auditory discriminative stimulus.
3

Goodall, John R. "An Evaluation of Visual and Textual Network Analysis Tools". Information Visualization 10, no. 2 (April 2011): 145–57. http://dx.doi.org/10.1057/ivs.2011.2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
User testing is an integral component of user-centered design, but has only rarely been applied to visualization for cyber security applications. This article presents the results of a comparative evaluation between a visualization-based application and a more traditional, table-based application for analyzing computer network packet captures. We conducted this evaluation as part of the user-centered design process. Participants performed both structured, well-defined tasks and exploratory, open-ended tasks with both tools. We measured accuracy and efficiency for the well-defined tasks, the number of insights for the exploratory tasks, and user perceptions for each tool. The results of this evaluation demonstrated that users performed significantly more accurately in the well-defined tasks, discovered a higher number of insights, and demonstrated a clear preference for the visualization tool. The study design presented may be useful for future researchers performing user testing on visualization for cyber security applications.
4

Taylor, Donald H. "An Analysis of Visual Watchkeeping". Journal of Navigation 44, no. 2 (May 1991): 152–58. http://dx.doi.org/10.1017/s0373463300009899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ships' bridge watchkeeping, although the prime task of the officer of the watch, makes very low task demands in terms of information processing and required speed of response compared to most other vigilance tasks. Consequently, performance of the visual lookout task is difficult to describe and evaluate systematically. Results from an on-board observational study show that lookout lapse periods can be economically described by simple mathematical models, which assist in comparison and evaluation of task performance.
5

Cole, Jason C., Lisa A. Fasnacht-Hill, Scott K. Robinson, and Caroline Cordahi. "Differentiation of Fluid, Visual, and Simultaneous Cognitive Tasks". Psychological Reports 89, no. 3 (December 2001): 541–46. http://dx.doi.org/10.2466/pr0.2001.89.3.541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The constructs of fluid reasoning and spatial visualization (Horn, 1989) as well as the construct of simultaneous processing (Luria, 1966) have been tapped by various cognitive assessment batteries. In order to determine whether these constructs could be differentiated from one another, factor analyses of subtest scores from six cognitive tasks were conducted. Fluid reasoning, spatial visualization, and simultaneous processing emerged as separate factors in the analysis, supporting the hypothesis that these constructs can be differentiated in psychoeducational testing. These results extend the findings of a preliminary study which found factorial differentiation between fluid and simultaneous reasoning.
6

Shimizu, Toshiya, Yoriko Oguchi, Kiyotaka Hoshiai, Keiko Nagashima, Kiyoyuki Yamazaki, Takashi Itoh, and Katsuro Okamoto. "Analysis of cognitive functioning during visual target monitoring tasks (1)". Japanese journal of ergonomics 30, Supplement (1994): 210–11. http://dx.doi.org/10.5100/jje.30.supplement_210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Oguchi, Yoriko, Toshiya Shimizu, Kiyotaka Hoshiai, Keiko Nagashima, Kiyoyuki Yamazaki, Takashi Itoh, and Katsuro Okamoto. "Analysis of cognitive functioning during visual target monitoring tasks (2)". Japanese journal of ergonomics 30, Supplement (1994): 212–13. http://dx.doi.org/10.5100/jje.30.supplement_212.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mateeff, Stefan, Biljana Genova, and Joachim Hohnsbein. "Visual Analysis of Changes of Motion in Reaction-Time Tasks". Perception 34, no. 3 (March 2005): 341–56. http://dx.doi.org/10.1068/p5184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Subjects observed a random-dot pattern moving uniformly in the vertical direction (vector V1). The motion vector abruptly changed to V2, both in speed and direction simultaneously. It was found that the time of simple reaction to such changes V1 → V2 can be described by a function of a single variable, |w(V1 − V2C) + (1 − w)V2N|, 0 < w < 0.5, where V2C and V2N are the components of V2 collinear with and normal to V1. The choice-reaction time for changes in direction that are accompanied by changes in speed can be described by a function solely of the absolute value of V2N. Unlike the simple-reaction time, the choice-reaction time was independent of the initial speed of motion. The processes that may be engaged in simple and choice reactions to motion are discussed.
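Since the plain-text rendering above loses the subscripts of the original expression, a typeset reconstruction may help. It assumes, as stated in the abstract, that V2C and V2N denote the components of V2 collinear with and normal to V1, and introduces f and g only as placeholders for the describing functions, which the abstract does not specify:

\[ \mathrm{RT}_{\text{simple}} = f\bigl(\,\lvert\, w\,(V_1 - V_{2C}) + (1 - w)\,V_{2N} \,\rvert\,\bigr), \qquad 0 < w < 0.5 \]
\[ \mathrm{RT}_{\text{choice}} = g\bigl(\,\lvert V_{2N} \rvert\,\bigr) \]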
9

Mecklinger, Axel, Burkhard Maess, Bertram Opitz, Erdmut Pfeifer, Douglas Cheyne, and Harold Weinberg. "A MEG analysis of the P300 in visual discrimination tasks". Electroencephalography and Clinical Neurophysiology/Evoked Potentials Section 108, no. 1 (January 1998): 45–56. http://dx.doi.org/10.1016/s0168-5597(97)00092-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Shin, Bok-Suk, Zezhong Xu, and Reinhard Klette. "Visual lane analysis and higher-order tasks: a concise review". Machine Vision and Applications 25, no. 6 (12 April 2014): 1519–47. http://dx.doi.org/10.1007/s00138-014-0611-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

CHIBA, Fumiya, Takashi OYAMA, and Teruaki ITO. "NIRS Measurement on Visual Reaction Tasks for Analysis of Esports". Proceedings of Design & Systems Conference 2022.32 (2022): 1404. http://dx.doi.org/10.1299/jsmedsd.2022.32.1404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Meinecke, Christofer, Ahmad Dawar Hakimi, and Stefan Jänicke. "Explorative Visual Analysis of Rap Music". Information 13, no. 1 (28 December 2021): 10. http://dx.doi.org/10.3390/info13010010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Detecting references and similarities in music lyrics can be a difficult task. Crowdsourced knowledge platforms such as Genius can help in this process through user-annotated information about the artist and the song, but fail to include visualizations that help users find similarities and structures on a higher and more abstract level. We propose a prototype to compute similarities between rap artists based on word embeddings of their lyrics crawled from Genius. Furthermore, the artists and their lyrics can be analyzed using an explorative visualization system applying multiple visualization methods to support domain-specific tasks.
13

Vázquez, Pere-Pau. "Visual Analysis of Research Paper Collections Using Normalized Relative Compression". Entropy 21, no. 6 (21 June 2019): 612. http://dx.doi.org/10.3390/e21060612.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The analysis of research paper collections is an interesting topic that can give insight into whether a research area is stalled on the same problems or shows a great amount of novelty every year. Previous research has addressed similar tasks by the analysis of keywords or reference lists, with different degrees of human intervention. In this paper, we demonstrate how, with the use of Normalized Relative Compression, together with a set of automated data-processing tasks, we can successfully visually compare research articles and document collections. We also achieve very similar results with Normalized Conditional Compression, which can be applied with a regular compressor. With our approach, we can group papers of different disciplines, analyze how a conference evolves throughout its different editions, or how the profile of a researcher changes over time. We provide a set of tests that validate our technique, and show that it behaves better for these tasks than other techniques previously proposed.
14

SAITO, Shin. "How to evaluate visual tasks by the analysis of eye movements". Japanese journal of ergonomics 29, Supplement (1993): 78–81. http://dx.doi.org/10.5100/jje.29.supplement_78.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Momin, Abdul, and Sudip Sanyal. "Analysis of Electrodermal Activity Signal Collected During Visual Attention Oriented Tasks". IEEE Access 7 (2019): 88186–95. http://dx.doi.org/10.1109/access.2019.2925933.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Yu, Ziya, Chi Zhang, Linyuan Wang, Li Tong, and Bin Yan. "A Comparative Analysis of Visual Encoding Models Based on Classification and Segmentation Task-Driven CNNs". Computational and Mathematical Methods in Medicine 2020 (1 August 2020): 1–15. http://dx.doi.org/10.1155/2020/5408942.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Nowadays, visual encoding models use convolutional neural networks (CNNs) with outstanding performance in computer vision to simulate the process of human information processing. However, the prediction performance of encoding models differs across networks driven by different tasks. Here, the impact of network tasks on encoding models is studied. Using functional magnetic resonance imaging (fMRI) data, the features of natural visual stimulation are extracted using a segmentation network (FCN32s) and a classification network (VGG16) with different visual tasks but similar network structure. Then, using three sets of features, i.e., segmentation, classification, and fused features, the regularized orthogonal matching pursuit (ROMP) method is used to establish the linear mapping from features to voxel responses. The analysis results indicate that encoding models based on networks performing different tasks can effectively but differently predict stimulus-induced responses measured by fMRI. The prediction accuracy of the encoding model based on VGG is found to be significantly better than that of the model based on FCN in most voxels but similar to that of fused features. The comparative analysis demonstrates that the CNN performing the classification task is more similar to human visual processing than that performing the segmentation task.
17

Jha, Anju, and Parveen Siddhique. "Do Mental Arithmetic Tasks affect Visual Evoked Potential". International Journal of Physiology 7, no. 3 (25 July 2019): 26–29. http://dx.doi.org/10.37506/ijop.v7i3.113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background & Objectives: VEP is used to evaluate the functional integrity of the visual pathway. The objective of the study is to determine whether a stimulus other than light, presented during the recording of VEP, can change the result of the VEP. Method: 200 healthy candidates aged 18–22 years of both sexes were enrolled. At first, the VEP was recorded without any disturbance. Candidates were then asked a few arithmetic questions verbally while the VEP was recorded a second time. Results: The latency of N75 and P100 and the amplitude of N75-P100 were analyzed. There was no statistically significant difference in the latency of N75 and P100, but there was a statistically significant difference in the amplitude of N75-P100. The p-value for the right eye was 0.0031 and for the left eye 0.0299 for the amplitude of N75-P100. Interpretation & conclusion: An arithmetic task makes mental processing very active and affects the result of the VEP; it must therefore be taken into consideration while recording the VEP in any patient.
18

Mirel, Barbara. "Building Network Visualization Tools to Facilitate Metacognition in Complex Analysis". Leonardo 44, no. 3 (June 2011): 248–49. http://dx.doi.org/10.1162/leon_a_00176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
If whole communities of domain analysts are to be able to use interactive network visualization tools productively and efficiently, tool design needs to adequately support the metacognition implicit in complex visual analytic tasks. Metacognition for such exploratory network-mediated tasks applies across disciplines. This essay presents metacognitive demands inherent in complex tasks aimed at uncovering relevant relationships for hypothesizing purposes and proposes network visualization tool designs that can support these metacognitive demands.
19

Li, Wangwang, Hengliang Tan, Jianwei Feng, Ming Xie, Jiao Du, Shuo Yang, and Guofeng Yan. "Kernel Reverse Neighborhood Discriminant Analysis". Electronics 12, no. 6 (10 March 2023): 1322. http://dx.doi.org/10.3390/electronics12061322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Currently, neighborhood linear discriminant analysis (nLDA) exploits reverse nearest neighbors (RNN) to avoid the assumption of linear discriminant analysis (LDA) that all samples from the same class should be independently and identically distributed (i.i.d.). nLDA performs well when a dataset contains multimodal classes. However, in complex pattern recognition tasks, such as visual classification, the complex appearance variations caused by deformation, illumination and visual angle often generate non-linearity. Furthermore, it is not easy to separate the multimodal classes in lower-dimensional feature space. One solution to these problems is to map the feature to a higher-dimensional feature space for discriminant learning. Hence, in this paper, we employ kernel functions to map the original data to a higher-dimensional feature space, where the nonlinear multimodal classes can be better classified. We give the details of the deduction of the proposed kernel reverse neighborhood discriminant analysis (KRNDA) with the kernel tricks. The proposed KRNDA outperforms the original nLDA on most datasets of the UCI benchmark database. In high-dimensional visual recognition tasks of handwritten digit recognition, object categorization and face recognition, our KRNDA achieves the best recognition results compared to several sophisticated LDA-based discriminators.
20

Arslan, Seçkin, Lucie Broc, and Fabien Mathy. "Lower verbalizability of visual stimuli modulates differences in estimates of working memory capacity between children with and without developmental language disorders". Autism & Developmental Language Impairments 5 (January 2020): 239694152094551. http://dx.doi.org/10.1177/2396941520945519.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background and aims: Children with developmental language disorder (DLD) often perform below their typically developing peers on verbal memory tasks. However, the picture is less clear on visual memory tasks. Research has generally shown that visual memory can be facilitated by verbal representations, but few studies have been conducted using visual materials that are not easy to verbalize. Therefore, we attempted to construct non-verbalizable stimuli to investigate the impact of working memory capacity. Method and results: We manipulated verbalizability in visual span tasks and tested whether minimizing verbalizability could help reduce visual recall performance differences across children with and without developmental language disorder. Visuals that could be easily verbalized or not were selected based on a pretest with non-developmental language disorder young adults. We tested groups of children with developmental language disorder (N = 23) and their typically developing peers (N = 65) using these high and low verbalizable classes of visual stimuli. The memory span of the children with developmental language disorder varied across the different stimulus conditions, but critically, although their storage capacity for visual information was virtually unimpaired, the children with developmental language disorder still had difficulty in recalling verbalizable images with simple drawings. Also, recalling complex (galaxy) images with low verbalizability proved difficult in both groups of children. An item-based analysis on correctly recalled items showed that higher levels of verbalizability enhanced visual recall in the typically developing children to a greater extent than the children with developmental language disorder. Conclusions and clinical implication: We suggest that visual short-term memory in typically developing children might be mediated with verbal encoding to a larger extent than in children with developmental language disorder, thus leading to poorer performance on visual capacity tasks. Our findings cast doubts on the idea that short-term storage impairments are limited to the verbal domain, but they also challenge the idea that visual tasks are essentially visual. Therefore, our findings suggest to clinicians working with children experiencing developmental language difficulties that visual memory deficits may not necessarily be due to reduced non-verbal skills but may be due to the high amount of verbal cues in visual stimuli, from which they do not benefit in comparison to their peers.
21

Takeda, Kazuyoshi, and Shintaro Funahashi. "Prefrontal Task-Related Activity Representing Visual Cue Location or Saccade Direction in Spatial Working Memory Tasks". Journal of Neurophysiology 87, no. 1 (1 January 2002): 567–88. http://dx.doi.org/10.1152/jn.00249.2001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
To examine what kind of information task-related activity encodes during spatial working memory processes, we analyzed single-neuron activity in the prefrontal cortex while two monkeys performed two different oculomotor delayed-response (ODR) tasks. In the standard ODR task, monkeys were required to make a saccade to the cue location after a 3-s delay, whereas in the rotatory ODR (R-ODR) task, they were required to make a saccade 90° clockwise from the cue location after the 3-s delay. By comparing the same task-related activities in these two tasks, we could determine whether such activities encoded the location of the visual cue or the direction of the saccade. One hundred twenty one neurons exhibited task-related activity in relation to at least one task event in both tasks. Among them, 41 neurons exhibited directional cue-period activity, most of which encoded the location of the visual cue. Among 56 neurons with directional delay-period activity, 86% encoded the location of the visual cue, whereas 13% encoded the direction of the saccade. Among 57 neurons with directional response-period activity, 58% encoded the direction of the saccade, whereas 35% encoded the location of the visual cue. Most neurons whose response-period activity encoded the location of the visual cue also exhibited directional delay-period activity that encoded the location of the visual cue as well. The best directions of these two activities were identical, and most of these response-period activities were postsaccadic. Therefore this postsaccadic activity can be considered a signal to terminate unnecessary delay-period activity. Population histograms encoding the location of the visual cue showed tonic sustained activation during the delay period. However, population histograms encoding the direction of the saccade showed a gradual increase in activation during the delay period. These results indicate that the transformation from visual input to motor output occurs in the dorsolateral prefrontal cortex. The analysis using population histograms suggests that this transformation occurs gradually during the delay period.
22

Holcomb, Henry, Arthi Parwani, Deborah Medoff, Ning Ma, and Carol Tamminga. "Auditory and visual discrimination, a conjunction analysis of two difficult perceptual tasks". NeuroImage 11, no. 5 (May 2000): S719. http://dx.doi.org/10.1016/s1053-8119(00)91649-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Dormann, W. U., and T. Pfeiffer. "Psychophysiological analysis of cognitive speed in visual tasks by means of ERP". International Journal of Psychophysiology 7, no. 2-4 (August 1989): 185. http://dx.doi.org/10.1016/0167-8760(89)90132-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Greenman, K., and J. Moses. "B-29: An Exploratory Factor Analysis of Verbal Mediation in Visual Assessment Tasks". Archives of Clinical Neuropsychology 33, no. 6 (1 September 2018): 703–94. http://dx.doi.org/10.1093/arclin/acy061.105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Li, Zhi Hua, and Qiu Luan Li. "Automated Alarm Based on Intelligent Visual Analysis". Applied Mechanics and Materials 340 (July 2013): 701–5. http://dx.doi.org/10.4028/www.scientific.net/amm.340.701.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abnormal event detection and automated alarm are important tasks in visual surveillance applications. In this paper, a novel automated alarm method based on intelligent visual analysis is proposed for alarm of abandoned objects and virtual cordon protection. First, the monitoring regions and cordon positions are set manually in the surveillance background scenes. The foreground motion regions are segmented based on a background subtraction model, and then are clustered by connected component analysis. After motion region segmentation and clustering, object tracking based on a discriminative appearance model for monocular multi-target tracking is utilized. According to the motion segmentation and tracking results, an alarm is triggered by comparison with the monitoring regions and cordon positions. Experimental results show that the proposed automated alarm algorithms are sufficient to detect abnormal events for alarm of abandoned objects and virtual cordon protection.
26

Rahikummahtum, Kautsar, Joko Nurkamto, and Suparno Suparno. "The Pedagogical Potential of Visual Images in Indonesian High School English Language Textbooks: A Micro-Multimodal Analysis". AL-ISHLAH: Jurnal Pendidikan 14, no. 4 (26 September 2022): 5979–90. http://dx.doi.org/10.35445/alishlah.v14i4.2171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Language textbooks mainly guide language learning and teaching activities. Most language textbooks in use comprise a range of visual texts, such as pictures, illustrations, and photos. The study adopts the Visual Grammar Theory of Kress and van Leeuwen (2006) to elucidate the pedagogical functions of visual images and explore how such images can be exploited for learning tasks from a micro-multimodal perspective. The data consist of 142 visual images in the Indonesian senior high school EFL textbook for grades ten (X) and eleven (XI). The findings point out that the textbook uses the full potential of visual images to fulfill pedagogical aims. Many visual images or texts in language textbooks serve informational and illustrative purposes rather than a merely decorative function. Visual images may help students engage effectively in learning tasks by emphasizing the meaning of information presented in images and text. This study suggests that learning activities should take multimodal texts into account so that they can contribute significantly. This research aims to enhance knowledge about the pedagogical function of visual images in textbooks and can serve as a reference for further exploration.
27

Shklyar, A. V., A. A. Zakharova, and E. V. Vekhter. "Visualization in Data Reconstruction Tasks". Scientific Visualization 16, no. 1 (April 2024): 64–81. http://dx.doi.org/10.26583/sv.16.1.06.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Many application tasks of multidimensional data analysis that describe the state of real physical or other systems face difficulties. This is a consequence of low-quality source data, including missing values, the probability of errors, or unreliability of measurements. Incomplete data can become an obstacle for research using many modern informational methods. The current work examines the potential and capabilities of visual analytics tools for preliminary preparation, correction, or complete analysis of primary data volumes. A promising area of application of the approach discussed in the study is the targeted use of visualization capabilities as a data analysis tool. The implementation of specialized visual metaphors is used to solve problems of processing and interpreting data whose sources are cyberphysical systems of different complexity levels. Such systems operate in an autonomous or partially controlled mode. A characteristic feature of these systems is the presence of a large number of sensors that collect various types of data. Such data differ in the capacity of the corresponding information channels, their speed, and their reliability. Examples of such cyberphysical systems are unmanned aerial vehicles (UAVs), robotic stations, and multimodal monitoring systems. These systems can function in conditions where it is difficult to obtain objective observation experience (deep-sea robots). The effective use of data collected by cyberphysical monitoring systems is a condition for solving a large number of application and research tasks.
28

Idrissov, Agzam, Simon Rapp, Albert Albers, and Anja M. Maier. "DEVELOPING SYSTEMS VISUALISATIONS IN DESIGN THROUGH A TYPOLOGY OF VISUAL TASKS: A MECHATRONIC CASE". Proceedings of the Design Society 1 (27 July 2021): 1213–22. http://dx.doi.org/10.1017/pds.2021.121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Visual representations are essential to design. Data-rich representations such as systems visualisations are gaining prominence in engineering practice. However, as such visualisations are often developed ad-hoc, we propose more systematically to link visual tasks with design-specific tasks for which the visualisations are used. Whereas research on such linking focuses mostly on CAD models and sketches, no such studies are yet available for systems visualisations. Thus, this paper introduces a typology of visual tasks from the Information Visualisation field to aid the development of systems visualisations in design. To build a visualisation using the typology, a case study with engineering students developing an autonomous robot was conducted. Through interviews and analysis of product representations used, design-specific tasks were identified and decomposed into visual tasks. Then, a visualisation that assisted the team in performing their design activities was created. Results illustrate the benefits of using such a typology to describe visual tasks and generate systems visualisations. The study suggests implications for researchers studying visual representations in design as well as for developers of systems visualisations.
29

Alhamdan, Areej A., Melanie J. Murphy, Hayley E. Pickering, and Sheila G. Crewther. "The Contribution of Visual and Auditory Working Memory and Non-Verbal IQ to Motor Multisensory Processing in Elementary School Children". Brain Sciences 13, no. 2 (5 February 2023): 270. http://dx.doi.org/10.3390/brainsci13020270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Although cognitive abilities have been shown to facilitate multisensory processing in adults, the development of cognitive abilities such as working memory and intelligence, and their relationship to multisensory motor reaction times (MRTs), has not been well investigated in children. Thus, the aim of the current study was to explore the contribution of age-related cognitive abilities in elementary school-age children (n = 75) aged 5–10 years, to multisensory MRTs in response to auditory, visual, and audiovisual stimuli, and a visuomotor eye–hand co-ordination processing task. Cognitive performance was measured on classical working memory tasks such as forward and backward visual and auditory digit spans, and the Raven’s Coloured Progressive Matrices (RCPM test of nonverbal intelligence). Bayesian Analysis revealed decisive evidence for age-group differences across grades on visual digit span tasks and RCPM scores but not on auditory digit span tasks. The results also showed decisive evidence for the relationship between performance on more complex visually based tasks, such as difficult items of the RCPM and visual digit span, and multisensory MRT tasks. Bayesian regression analysis demonstrated that visual WM digit span tasks together with nonverbal IQ were the strongest unique predictors of multisensory processing. This suggests that the capacity of visual memory rather than auditory processing abilities becomes the most important cognitive predictor of multisensory MRTs, and potentially contributes to the expected age-related increase in cognitive abilities and multisensory motor processing.
30

Kotenko, Igor, Maxim Kolomeec, Kseniia Zhernova, and Andrey Chechulin. "Visual Analytics for Information Security: Areas of Application, Tasks, Visualization Models". Voprosy kiberbezopasnosti, no. 4(44) (2021): 2–15. http://dx.doi.org/10.21681/2311-3456-2021-4-2-15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The purpose of the article: to identify and systematize the areas and problems of information security that are solved using visual analytics methods, as well as analysis of the applied data visualization models and their properties that affect the perception of data by the operator. Research method: a systematic analysis of the application of visual analytics methods for solving information security problems. Analysis of relevant papers in the field of information security and data visualization. The objects of research are: theoretical and practical solutions to information security problems through visual analysis. Visual analytics in the article is considered from several sides: from the point of view of the areas of application of visual analysis methods in information security, from the point of view of the tasks solved by the security analyst, from the point of view of the visualization models used and the data structures used, as well as from the point of view of the properties of data visualization models. The result: classification of visualization models is proposed, which differs from analogs in that it is based on the analysis of areas and tasks of information security and comparison of visualization models to them. The scope of the proposed approach is the creation of visualization models that can be used to increase the efficiency of operator interaction with information security applications. The proposed article will be useful both for specialists who develop information security systems and for students studying in the direction of training “Information Security”.
31

Black, Peter, and Tina Arbon Black. "Fundamentals of ophthalmic dispensing – part 17 Visual task analysis – part 3". Optician 2021, no. 4 (April 2021): 8527–1. http://dx.doi.org/10.12968/opti.2021.4.8527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this third article on visual task analysis Peter Black and Tina Arbon Black look at dispensing solutions to the visual problems in the real world of work, home and leisure activities, concentrating principally on outdoor tasks including aspects of regulations and standards relating to sports, health and safety and driving.
32

Lange-Küttner, C. "The Role of Object Violation in the Development of Visual Analysis". Perceptual and Motor Skills 90, no. 1 (February 2000): 3–24. http://dx.doi.org/10.2466/pms.2000.90.1.3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The present study investigated whether sensitivity to object violations in perception as well as in action would vary with age. Five-, 6-, and 11-yr.-old children and adults solved tasks which involved perception only, motoric indication of parts, actual assembly of parts, and drawing of a violated figure. In perception, object violation was the only factor showing change across age groups, with violations being increasingly noticed. In composition tasks involving motor components, object violation was just one factor besides quantity of parts and type of segmentation contributing to task difficulty and showing increase in performance across age groups. Analysis of object violations in visual structure required abilities similar to those needed when analysing shape interference. Improved visual detection and graphic construction of object violation seemed not to occur because segmentation increased quantitatively but more likely because fast perceptual processes came under scrutiny.
33

Höferlin, Benjamin, Markus Höferlin, Gunther Heidemann, and Daniel Weiskopf. "Scalable video visual analytics". Information Visualization 14, no. 1 (5 June 2013): 10–26. http://dx.doi.org/10.1177/1473871613488571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Video visual analytics is the research field that addresses scalable and reliable analysis of video data. The vast amount of video data in typical analysis tasks renders manual analysis by watching the video data impractical. However, automatic evaluation of video material is not reliable enough, especially when it comes to semantic abstraction from the video signal. In this article, we describe the video visual analytics method that combines the complementary strengths of human recognition and machine processing. After inspecting the challenges of scalable video analysis, we derive the main components of visual analytics for video data. Based on these components, we present our video visual analytics system that has its origins in our IEEE VAST Challenge 2009 participation.
34

Cohen, Jeremiah Y., Pierre Pouget, Geoffrey F. Woodman, Chenchal R. Subraveti, Jeffrey D. Schall, and Andrew F. Rossi. "Difficulty of Visual Search Modulates Neuronal Interactions and Response Variability in the Frontal Eye Field". Journal of Neurophysiology 98, no. 5 (November 2007): 2580–87. http://dx.doi.org/10.1152/jn.00522.2007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The frontal eye field (FEF) is involved in selecting visual targets for eye movements. To understand how populations of FEF neurons interact during target selection, we recorded activity from multiple neurons simultaneously while macaques performed two versions of a visual search task. We used a multivariate analysis in a point process statistical framework to estimate the instantaneous firing rate and compare interactions among neurons between tasks. We found that FEF neurons were engaged in more interactions during easier visual search tasks compared with harder search tasks. In particular, eye movement–related neurons were involved in more interactions than visual-related neurons. In addition, our analysis revealed a decrease in the variability of spiking activity in the FEF beginning ∼100 ms before saccade onset. The minimum in response variability occurred ∼20 ms earlier for the easier search task compared with the harder one. This difference is positively correlated with the difference in saccade reaction times for the two tasks. These findings show that a multivariate analysis can provide a measure of neuronal interactions and characterize the spiking activity of FEF neurons in the context of a population of neurons.
35

Hichisson, Andrew, George Wilcock, Georgette Eaton, Laura J. Taylor, and Jasleen K. Jolly. "Paramedic practice in low light conditions: a scoping review". Journal of Paramedic Practice 15, no. 1 (2 January 2023): 6–15. http://dx.doi.org/10.12968/jpar.2023.15.1.6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: Paramedics undertake visually demanding tasks, which may be adversely affected by low lighting conditions. Aims: The study aimed to: identify difficulties paramedics experience carrying out tasks in low light; and establish occupational health standards and adjustments that may improve working practices. Methods: A scoping review was undertaken informed by a professional panel of paramedics recruited through social media. A meta-analysis was conducted assessing visual acuity under different light levels. Findings: Difficulty in driving and in assessing/treating patients under low light conditions were reported. Sixty relevant studies were identified for review. Visual acuity reduces with decreasing luminance, causing increasing difficulties in performing critical tasks. Conclusion: Visual function testing can assess paramedics' visual health and ability to undertake critical tasks. Adjustments may help to improve conditions. Regular occupational health assessments could identify paramedics who need support. Further research should explore levels of visual function and practical adjustments needed for safe clinical practice.
36

Shim, Gyuseok, Duwon Yang, Woorim Cho, Jihyeon Kim, Hyangshin Ryu, Woong Choi, and Jaehyo Kim. "Elastic Resistance and Shoulder Movement Patterns: An Analysis of Reaching Tasks Based on Proprioception". Bioengineering 11, no. 1 (19 December 2023): 1. http://dx.doi.org/10.3390/bioengineering11010001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study departs from the conventional research on horizontal plane reach movements by examining human motor control strategies in vertical plane elastic load reach movements conducted without visual feedback. Here, participants performed shoulder presses with elastic resistances at low, moderate, and high intensities without access to visual information about their hand position, relying exclusively on proprioceptive feedback and synchronizing their movements with a metronome set at a 3 s interval. The results revealed consistent performance symmetry across different intensities in terms of the reach speed (p = 0.254–0.736), return speed (p = 0.205–0.882), and movement distance (p = 0.480–0.919). This discovery underscores the human capacity to uphold bilateral symmetry in movement execution when relying solely on proprioception. Furthermore, this study observed an asymmetric velocity profile where the reach duration remained consistent irrespective of the load (1.15 s), whereas the return duration increased with higher loads (1.39 s–1.45 s). These findings suggest that, in the absence of visual feedback, the asymmetric velocity profile does not result from the execution of the action but rather represents a deliberate deceleration post-reach aimed at achieving the target position as generated by the brain’s internal model. These findings hold significant implications for interpreting rehabilitation approaches under settings devoid of visual feedback.
37

Chung, Haeyong, Santhosh Nandhakumar, Gopinath Polasani Vasu, Austin Vickers, and Eunseok Lee. "GoCrystal: A gamified visual analytics tool for analysis and visualization of atomic configurations and thermodynamic energy models". Information Visualization 19, no. 4 (20 July 2020): 296–317. http://dx.doi.org/10.1177/1473871620925821.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this article, we present GoCrystal, a new visual analytics tool for analysis and visualization of atomic configurations and thermodynamic energy models. GoCrystal’s primary objective is to support the visual analytics tasks for finding and understanding favorable atomic patterns in a lattice using gamification. We believe the performance of visual analytics tasks can be improved by employing gamification features. Careful research was conducted in an effort to determine which gamification features would be more applicable for analyzing and exploring atomic configurations and their associated thermodynamic free energy. In addition, we conducted a user study to determine the effectiveness of GoCrystal and its gamification features in achieving this goal, comparing with a conventional visual analytics model without gamification as a control group. Finally, we report the results of the user study and demonstrate the impact that gamification features have on the performance and time necessary to understand atomic configurations.
38

Samaha, Jason, and Bradley R. Postle. "Correlated individual differences suggest a common mechanism underlying metacognition in visual perception and visual short-term memory". Proceedings of the Royal Society B: Biological Sciences 284, no. 1867 (22 November 2017): 20172035. http://dx.doi.org/10.1098/rspb.2017.2035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Adaptive behaviour depends on the ability to introspect accurately about one's own performance. Whether this metacognitive ability is supported by the same mechanisms across different tasks is unclear. We investigated the relationship between metacognition of visual perception and metacognition of visual short-term memory (VSTM). Experiments 1 and 2 required subjects to estimate the perceived or remembered orientation of a grating stimulus and rate their confidence. We observed strong positive correlations between individual differences in metacognitive accuracy between the two tasks. This relationship was not accounted for by individual differences in task performance or average confidence, and was present across two different metrics of metacognition and in both experiments. A model-based analysis of data from a third experiment showed that a cross-domain correlation only emerged when both tasks shared the same task-relevant stimulus feature. That is, metacognition for perception and VSTM were correlated when both tasks required orientation judgements, but not when the perceptual task was switched to require contrast judgements. In contrast with previous results comparing perception and long-term memory, which have largely provided evidence for domain-specific metacognitive processes, the current findings suggest that metacognition of visual perception and VSTM is supported by a domain-general metacognitive architecture, but only when both domains share the same task-relevant stimulus feature.
39

Rodríguez Toscano, E., P. Díaz-Carracedo, P. de la Higuera-González, G. Padilla, and A. de la Torre-Luque. "Memory deficits in children and adolescents with psychotic disorders: A systematic review and meta-analysis". European Psychiatry 66, S1 (March 2023): S332. http://dx.doi.org/10.1192/j.eurpsy.2023.728.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Introduction: Cognitive symptoms in psychosis represent a major unmet clinical need (Acuna-Vargas et al. Cog in Psych 2019; 21(3), 223–224). Deficit in memory has been largely described in first-episode early-onset psychosis (Mayoral et al. Eur Psych 2008; 23(5), 375-383) and has been associated with worse functionality (Øie et al. Neuropsychology 2011; 25(1), 25–35). However, results from existing studies are quite mixed on memory deficits of early psychosis patients, particularly in terms of memory contents and storage resources. Objectives: The aims of this study were 1) to examine the nature and extent of cognitive impairment in early-onset psychosis and 2) to analyze which type of memory (verbal and visual) is more affected in the disorder. Methods: The present systematic review and meta-analysis was conducted according to the PRISMA criteria (Moher et al. Systematic Reviews 2015; 4(1), 1-9). A systematic search of CINAHL, PsycInfo, PubMed, Redalyc, SCOPUS and Web of Science (published from 2000 to 2020) identified case-control studies of early-onset psychotic disorder (under 18 years old). Those studies focused on both verbal and visual memory performance. Results: Twenty articles were included in the review. A deficit in memory in child and adolescent psychotic disorders was obtained, displaying a large effect size in memory tasks (g = -0.83). Also, a medium effect size was found in visual memory tasks (g = -0.61) and a large effect size was found in verbal memory tasks (g = -1.00). Conclusions: A strong memory deficit in early psychotic disorders, already present at the onset of the illness, was observed. This deficit was stronger when verbal memory tasks were used compared to the effect found with visual memory tasks. Based on previous literature (García-Nieto et al. Jou Cli Child & Ado Psych 2011; 40(2), 266-280; Lepage et al. Eur Psych 2008; 23(5):368-74; Hui et al. Psych Med 2016; 46(11):2435-44), these results contribute to describe and characterize the cognitive symptoms of first-episode psychosis in a youth population. Disclosure of Interest: None declared
40

Gamito, Pedro, Jorge Oliveira, Diogo Morais, Matthew Pavlovic, Olivia Smyth, Inês Maia, Tiago Gomes, and Pedro J. Rosa. "Eye Movement Analysis and Cognitive Assessment". Methods of Information in Medicine 56, no. 02 (2017): 112–16. http://dx.doi.org/10.3414/me16-02-0006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: An adequate behavioral response depends on attentional and mnesic processes. When these basic cognitive functions are impaired, the use of non-immersive Virtual Reality Applications (VRAs) can be a reliable technique for assessing the level of impairment. However, most non-immersive VRAs use indirect measures to make inferences about visual attention and mnesic processes (e.g., time to task completion, error rate). Objectives: To examine whether eye movement analysis through eye tracking (ET) can be a reliable method to probe more effectively where and how attention is deployed and how it is linked with visual working memory during comparative visual search tasks (CVSTs) in non-immersive VRAs. Methods: The eye movements of 50 healthy participants were continuously recorded while CVSTs, selected from a set of cognitive tasks in the Systemic Lisbon Battery (SLB), a VRA designed to assess cognitive impairments, were randomly presented. Results: The total fixation duration, the number of visits in the areas of interest and in the interstimulus space, along with the total execution time, were significantly different as a function of the Mini Mental State Examination (MMSE) scores. Conclusions: The present study demonstrates that CVSTs in SLB, when combined with ET, can be a reliable and unobtrusive method for assessing cognitive abilities in healthy individuals, opening it to potential use in clinical samples.
41

Lee, Ji-Hyun, Jin Moon, and Sooyoung Kim. "Analysis of Occupants’ Visual Perception to Refine Indoor Lighting Environment for Office Tasks". Energies 7, no. 7 (27 June 2014): 4116–39. http://dx.doi.org/10.3390/en7074116.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Vásquez, Claudia, and Mercedes Muñetón-Ayala. "Visual and Auditory Temporal Processing in Elementary School Children". PSYCHOLINGUISTICS 34, no. 1 (26 October 2023): 85–110. http://dx.doi.org/10.31470/2309-1797-2023-34-1-85-110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Purpose. Temporal processing shows an evolutionary character in accordance with age and schooling. The purpose of this study is to analyze the role of temporal processing in children in different grades in primary school. Methods. 470 children (aged 5–13), in five school grades, were compared on a Temporal Order Judgment task. Similar visual and auditory, linguistic, and nonlinguistic stimuli were presented to them. A three-factor repeated-measures multivariate analysis of variance was used to examine the effects of Grade (1° vs. 2° vs. 3° vs. 4° vs. 5°) x Stimulus (Linguistic vs. Nonlinguistic) x Modality (Visual vs. Auditory). Results. These three factors have significant interactions. Auditory-nonlinguistic tasks were easier than auditory-linguistic tasks in every grade. Visual-nonlinguistic tasks were easier than visual-linguistic tasks in higher grades, and 1st grade differed significantly from the other school grades in all cases. The higher the school grade, the better the performance on TOJ tasks. Visual-linguistic tasks were easier than auditory-linguistic tasks. Conclusions. The present study provides evidence concerning the progressive nature of temporal processing among primary school children. This developmental trajectory is particularly noteworthy for students in lower primary school grades. Furthermore, the Temporal Order Judgment (TOJ) task exhibited robust experimental support, rendering it a valuable tool for assessing temporal processing within conventional school populations. This task offers the potential to assess TP across auditory and/or visual modalities, with diverse types of stimuli (linguistic vs. non-linguistic). Finally, the auditory modality, and especially the auditory linguistic modality, showed greater sensitivity depending on the school grade.
43

Du, Qin Jun, Xue Yi Zhang, and Xing Guo Huang. "Modeling and Analysis of a Humanoid Robot Active Stereo Vision Platform". Applied Mechanics and Materials 55-57 (May 2011): 868–71. http://dx.doi.org/10.4028/www.scientific.net/amm.55-57.868.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A humanoid robot is not only expected to walk stably but is also required to perform manipulation tasks autonomously in our work and living environments. This paper discusses the visual perception and visual-servoing-based object manipulation of a humanoid robot; an active robot vision model is built, and then the 3D location principle, the calibration method, and the precision of this model are analyzed. This active robot vision system with two DOF enlarges the visual field, and the stereo pair is the simplest camera configuration for obtaining 3D position information.
44

Roundtree, Karina A., Matthew D. Manning, and Julie A. Adams. "Analysis of Human-Swarm Visualizations". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 287–91. http://dx.doi.org/10.1177/1541931218621066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Interest in robotic swarms has increased exponentially. Prior research determined that humans perceive biological swarm motions as a single entity, rather than perceiving the individuals. An open question is how the swarm’s visual representation and the associated task impact human performance when identifying current swarm tasks. The majority of the existing swarm visualizations present each robot individually. Swarms typically incorporate large numbers of individuals, where the individuals exhibit simple behaviors, but the swarm appears to exhibit more intelligent behavior. As the swarm size increases, it becomes increasingly difficult for the human operator to understand the swarm’s current state, the emergent behaviors, and predict future outcomes. Alternative swarm visualizations are one means of mitigating high operator workload and risk of human error. Five visualizations were evaluated for two tasks, go to and avoid, in the presence or absence of obstacles. The results indicate that visualizations incorporating representations of individual agents resulted in higher accuracy when identifying tasks.
45

Weiand, Augusto, Isabel Harb Manssour, and Milene Selbach Silveira. "Visual Analysis for Monitoring Students in Distance Courses". International Journal of Distance Education Technologies 17, no. 2 (April 2019): 18–44. http://dx.doi.org/10.4018/ijdet.2019040102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With technological advances, distance education has been frequently discussed in recent years. The learning environments used in these courses usually generate a great deal of data because of the large number of students and the various tasks involving their interaction. In order to facilitate the analysis of the data, the authors conducted research to identify how interaction and visualization techniques integrated with data mining algorithms can assist teachers in predicting students' performance in learning environments. The main goal of this work is to present the results of such research and the visual analysis approach that the authors developed in this context. This approach allows data gathering on the students' interactions and provides tools to investigate and predict pass/fail rates in the courses that are being analyzed. Our main contributions are: the visualization of the resources and their use by students; the possibility of making an individual analysis of students through interactive visualizations; and the ability to compare subjects in terms of students' performance.
46

Qureshy, Ahmad, Ryuta Kawashima, Muhammad Babar Imran, Motoaki Sugiura, Ryoi Goto, Ken Okada, Kentaro Inoue et al. "Functional Mapping of Human Brain in Olfactory Processing: A PET Study". Journal of Neurophysiology 84, no. 3 (1 September 2000): 1656–66. http://dx.doi.org/10.1152/jn.2000.84.3.1656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study describes the functional anatomy of olfactory and visual naming and matching in humans, using positron emission tomography (PET). One baseline control task without olfactory or visual stimulation, one control task with simple olfactory and visual stimulation without cognition, one set of olfactory and visual naming tasks, and one set of olfactory and visual matching tasks were administered to eight normal volunteers. In the olfactory naming task (ON), odors from familiar items, associated with some verbal label, were to be named. Hence, it required long-term olfactory memory retrieval for stimulus recognition. The olfactory matching task (OM) involved differentiating a recently encoded unfamiliar odor from a sequentially presented group of unfamiliar odors. This required short-term olfactory memory retrieval for stimulus differentiation. The simple olfactory and visual stimulation resulted in activation of the left orbitofrontal region, the right piriform cortex, and the bilateral occipital cortex. During olfactory naming, activation was detected in the left cuneus, the right anterior cingulate gyrus, the left insula, and the cerebellum bilaterally. It appears that the effort to identify the origin of an odor involved semantic analysis and some degree of mental imagery. During olfactory matching, activation was observed in the left cuneus and the cerebellum bilaterally. This identified the brain areas activated during differentiation of one unlabeled odor from the others. In cross-task analysis, the region found to be specific for olfactory naming was the left cuneus. Our results show definite recruitment of the visual cortex in ON and OM tasks, most likely related to imagery component of these tasks. The cerebellar role in cognitive tasks has been recognized, but this is the first PET study that suggests that the human cerebellum may have a role in cognitive olfactory processing as well.
47

Tijerina, Louis, e Dev Kochhar. "A Measurement Systems Analysis of Total Shutter Open Time (TSOT) as a Distraction Metric for Visual-Manual Tasks". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, n. 24 (ottobre 2007): 1545–49. http://dx.doi.org/10.1177/154193120705102407.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
The Total Shutter Open Time (TSOT) metric was examined for estimating the visual-manual distraction potential of in-vehicle devices. A measurement systems analysis was carried out on TSOT using data on thirteen visual-manual tasks from the CAMP Driver Workload Metrics Project. TSOT showed low test-retest reliability but high repeatability when data were averaged across persons by task. TSOT predicted task completion time, lane keeping, speed variation, total glance time, and number of glances away from the road while driving. Tasks were classified into higher and lower workload categories based on literature, analytical modeling, and engineering judgment. TSOT showed a high percentage of statistically significant pairwise differences between higher- and lower-workload tasks. Different classification rules were also applied to TSOT. The rule that best classified tasks as higher or lower workload, consistent with the prior prediction, was one in which a mean TSOT > 7.5 seconds implied higher workload. These results illustrate a general procedure for assessing driver workload measures and demonstrate the usefulness of TSOT in particular.
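The classification rule reported above lends itself to a direct illustration: average TSOT across participants for each task and flag tasks whose mean exceeds 7.5 s as higher workload. In the sketch below, the task names and per-participant TSOT values are invented; only the 7.5-second threshold comes from the abstract.

```python
# Minimal sketch of the mean-TSOT decision rule reported in the abstract.
from statistics import mean

TSOT_THRESHOLD_S = 7.5  # seconds, from the reported classification rule

# task -> TSOT (s) observed for each participant (hypothetical data)
tsot_by_task = {
    "radio tuning":      [9.1, 10.4, 8.7, 11.2],
    "destination entry": [14.3, 12.8, 15.1, 13.9],
    "volume adjustment": [2.1, 1.8, 2.5, 2.0],
}

for task, samples in tsot_by_task.items():
    task_mean = mean(samples)
    label = "higher" if task_mean > TSOT_THRESHOLD_S else "lower"
    print(f"{task}: mean TSOT = {task_mean:.1f} s -> {label} workload")
```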
48

Yoshizaki, Kazuhito, e Yayoi Tsuji. "Benefits of Interhemispheric Integration on the Japanese Kana Script-Matching Tasks". Perceptual and Motor Skills 90, n. 1 (febbraio 2000): 153–65. http://dx.doi.org/10.2466/pms.2000.90.1.153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
We tested Banich's hypothesis that the benefits of bihemispheric processing increase with task complexity, using Japanese Kana script-matching tasks designed to overcome procedural shortcomings of previous studies. In Exp. 1, 20 right-handed subjects were given a physical-identity task (Katakana-Katakana script matching) and a name-identity task (Katakana-Hiragana script matching). On both tasks, a pair of Kana scripts was tachistoscopically presented in the left, right, and bilateral visual fields. Distractor stimuli were also presented with the target Kana scripts on both tasks to equate the processing load between the hemispheres. Analysis showed a bilateral visual-field advantage on the name-identity task but a unilateral visual-field advantage on the physical-identity task, suggesting that the benefits of bilateral hemispheric processing increased as the computational complexity of the encoding stage increased. In Exp. 2, 16 right-handed subjects were given the same physical-identity task as in Exp. 1, except that Hiragana scripts were used as distractors instead of digits to increase task difficulty. Analysis showed no difference in performance between the unilateral and bilateral visual fields. Considering the physical-identity results of Exps. 1 and 2 together, increasing task demand at the stage of ignoring distractors eliminated in Exp. 2 the unilateral visual-field advantage obtained in Exp. 1. These results supported Banich's hypothesis.
49

Arevalo, John, Angel Cruz-Roa e Fabio A. González O. "Representación de imágenes de histopatología utilizada en tareas de análisis automático: estado del arte". Revista Med 22, n. 2 (1 dicembre 2014): 79. http://dx.doi.org/10.18359/rmed.1184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
<p>This paper presents a review of the state-of-the-art in histopathology image representation used in automatic image analysis tasks. Automatic analysis of histopathology images is important for building computer-assisted diagnosis tools, automatic image enhancing systems and virtual microscopy systems, among other applications. Histopathology images have a rich mix of visual patterns with particularities that make them difficult to analyze. The paper discusses these particularities, the acquisition process and the challenges found when doing automatic analysis. Second an overview of recent works and methods addressed to deal with visual content representation in different automatic image analysis tasks is presented. Third an overview of applications of image representation methods in several medical domains and tasks is presented. Finally, the paper concludes with current trends of automatic analysis of histopathology images like digital pathology.</p>
50

Rivera, J., J. Moses, M. Davis, A. Guerra e K. Hakinson. "A-52 An Exploratory Factor Analysis Investigation of the Role of Verbal Mediation in the Interaction between Intelligence and Visual Memory Tasks". Archives of Clinical Neuropsychology 34, n. 6 (25 luglio 2019): 912. http://dx.doi.org/10.1093/arclin/acz034.52.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Objective: Examine whether verbal mediation may play a role in the interaction between visual memory tasks and the four-factor model of intelligence as operationalized by standard neuropsychological assessment instruments. Method: The assessment records of 101 American Veterans with diverse neuropsychiatric conditions were examined using Exploratory Factor and Principal Component Analyses (EFA and PCA, respectively). There were no exclusion criteria. All participants completed the Wechsler Adult Intelligence Scale, third edition (WAIS-III), Benton's Visual Retention Test (BVRT), and Multilingual Aphasia Examination (MAE). Individual assessment instruments were factored using PCA. The factor solution of the BVRT was co-factored with the scales of the WAIS-III, then the resulting factor scales were again factored with the verbal components of the MAE to identify common sources of variance. Results: A three-step analysis revealed a four-factor model explaining 69.44% of the shared variance: 1) Items 1-4 of the BVRT (BVRT-E) loaded with Verbal Comprehension and Visual Naming. 2) BVRT-E also loaded with Processing Speed and Controlled Word Association. 3) Items 5-10 of the BVRT (BVRT-L) loaded with Perceptual Organization and the Token Test. 4) Working Memory loaded with Sentence Repetition on a fourth factor. Conclusions: The results indicate a strong relationship between assessed performance on visual memory tasks and performance on measures based on the four-factor model of intelligence. The results also appear to support the idea that verbal mediation plays a role in the interaction between visual memory and intelligence, particularly when comparing performance on simple versus more complex visual memory tasks.
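As a rough illustration of the shared-variance extraction underlying such co-factoring, the sketch below runs a PCA over a standardized matrix of test scores. The score matrix is randomly generated and the four-component choice merely mirrors the four-factor solution mentioned above; none of this reproduces the study's actual three-step EFA/PCA procedure.

```python
# Minimal sketch: extract shared-variance components from standardized test scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
scores = rng.normal(size=(101, 8))   # 101 records x 8 hypothetical scale scores

z = StandardScaler().fit_transform(scores)   # standardize each scale
pca = PCA(n_components=4).fit(z)             # retain a four-component solution

print("variance explained per component:", pca.explained_variance_ratio_.round(3))
print("total variance explained:", pca.explained_variance_ratio_.sum().round(3))
```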
