Journal articles on the topic 'Visual object'

To see the other types of publications on this topic, follow the link: Visual object.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Visual object.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Tao Li, Yu Wang, Zheng Zhang, Xuezhuan Zhao, and Lishen Pei. "Visual Object Detection with Score Refinement." Journal of Internet Technology (網際網路技術學刊) 23, no. 5 (September 2022): 1163–72. http://dx.doi.org/10.53106/160792642022092305025.

Abstract:
Robustness of object detection against hard samples, especially small objects, has long been a critical and difficult problem that hinders the development of convolutional object detectors. To address this issue, we propose the Progressive Refinement Network (PRN) to reduce classification ambiguity for scale-robust object detection. In PRN, several orders of residuals for the class prediction are regressed from upper-level contexts, and the residuals are progressively added to the basic prediction stage by stage, yielding multiple refinements. A supervision signal is imposed at each stage, and an integration of all stages is performed to obtain the final score. By retaining supervision throughout the context-aggregation procedure, PRN avoids over-dependency on higher-level information and enables sufficient learning at the current scale level. The progressive residuals added for refinement adaptively reduce the ambiguity of the class prediction, and the final integration of all stages further stabilizes the predicted distribution. PRN achieves 81.3% mAP on the PASCAL VOC 2007 dataset and 31.7% AP (15.6% APS) on the MS COCO dataset, demonstrating the effectiveness and efficiency of the proposed method and its promising scale robustness.
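The staged score refinement this abstract describes can be sketched in a few lines of Python. Everything below (array shapes, the simple averaging used for the final integration, the variable names) is an illustrative assumption, not code from the paper:

```python
import numpy as np

def progressive_refinement(base_logits, residuals):
    """Sketch of staged score refinement: each stage adds a residual
    correction to the running class prediction, and the final score
    integrates all stages (here, by simple averaging)."""
    stages = [base_logits]
    current = base_logits
    for r in residuals:
        current = current + r  # this stage's residual refinement
        stages.append(current)
    # during training each stage would receive its own supervision signal;
    # at inference all stages are integrated into the final score
    final = np.mean(stages, axis=0)
    return stages, final

base = np.array([0.2, 0.5, 0.3])  # basic class prediction (3 classes)
stages, final = progressive_refinement(
    base,
    [np.array([0.1, -0.1, 0.0]), np.array([0.0, 0.2, -0.2])],
)
```

Because every stage keeps its own prediction, the integration can down-weight any single noisy refinement, which is the intuition behind the stabilized final distribution.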
2

Symes, Ed, Rob Ellis, and Mike Tucker. "Visual object affordances: Object orientation." Acta Psychologica 124, no. 2 (February 2007): 238–55. http://dx.doi.org/10.1016/j.actpsy.2006.03.005.

3

Newell, F. N. "Searching for Objects in the Visual Periphery: Effects of Orientation." Perception 25, no. 1_suppl (August 1996): 110. http://dx.doi.org/10.1068/v96l1111.

Abstract:
Previous studies have found that the recognition of familiar objects is dependent on the orientation of the object in the picture plane. Here the time taken to locate rotated objects in the periphery was examined. Eye movements were also recorded. In all experiments, familiar objects were arranged in a clock-face display. In experiment 1, subjects were instructed to locate a match to a central, upright object from amongst a set of randomly rotated objects. The target object was rotated in the frontoparallel plane. Search performance was dependent on rotation, yielding the classic ‘M’ function found in recognition tasks. When matching a single object in the periphery, match times were dependent on the angular deviation between the central and target objects and showed no advantage for the upright (experiment 2). In experiment 3 the central object was shown either upright or rotated by 120° from the upright. The target object was similarly rotated, giving four different match conditions. Distractor objects were aligned with the target object. Search times were faster when the central and target objects were aligned, and also when the central object was rotated and the target was upright. Search times were slower when matching a central upright object to a rotated target object. These results suggest that in simple tasks matching is based on image characteristics. However, in complex search tasks a contribution from the object's representation is made, which gives an advantage to the canonical, upright view in peripheral vision.
4

Zelinsky, Gregory J., and Gregory L. Murphy. "Synchronizing Visual and Language Processing: An Effect of Object Name Length on Eye Movements." Psychological Science 11, no. 2 (March 2000): 125–31. http://dx.doi.org/10.1111/1467-9280.00227.

Abstract:
Are visual and verbal processing systems functionally independent? Two experiments (one using line drawings of common objects, the other using faces) explored the relationship between the number of syllables in an object's name (one or three) and the visual inspection of that object. The tasks were short-term recognition and visual search. Results indicated more fixations and longer gaze durations on objects having three-syllable names when the task encouraged a verbal encoding of the objects (i.e., recognition). No effects of syllable length on eye movements were found when implicit naming demands were minimal (i.e., visual search). These findings suggest that implicitly naming a pictorial object constrains the oculomotor inspection of that object, and that the visual and verbal encoding of an object are synchronized so that the faster process must wait for the slower to be completed before gaze shifts to another object. Both findings imply a tight coupling between visual and linguistic processing, and highlight the utility of an oculomotor methodology to understand this coupling.
5

Logothetis, N. K., and D. L. Sheinberg. "Visual Object Recognition." Annual Review of Neuroscience 19, no. 1 (March 1996): 577–621. http://dx.doi.org/10.1146/annurev.ne.19.030196.003045.

6

Palmeri, Thomas J., and Isabel Gauthier. "Visual object understanding." Nature Reviews Neuroscience 5, no. 4 (April 2004): 291–303. http://dx.doi.org/10.1038/nrn1364.

7

Crivelli, Tomas, Patrick Perez, and Lionel Oisel. "Visual object trapping." Computer Vision and Image Understanding 153 (December 2016): 3–15. http://dx.doi.org/10.1016/j.cviu.2016.07.007.

8

Grauman, Kristen, and Bastian Leibe. "Visual Object Recognition." Synthesis Lectures on Artificial Intelligence and Machine Learning 5, no. 2 (April 19, 2011): 1–181. http://dx.doi.org/10.2200/s00332ed1v01y201103aim011.

9

Ingle, David. "Central Visual Persistences: I. Visual and Kinesthetic Interactions." Perception 34, no. 9 (September 2005): 1135–51. http://dx.doi.org/10.1068/p5408.

Abstract:
Phenomena associated with ‘central visual persistences’ (CPs) are new to both medical and psychological literature. Five subjects have reported similar CPs: positive afterimages following brief fixation of high-contrast objects or drawings and eye closure. CPs duplicate shapes and colors of single objects, lasting for about 15 s. Unlike retinal afterimages, CPs do not move with the eyes but are stable in extrapersonal space during head or body rotations. CPs may reflect sustained neural activity in neurons of association cortex, which mediate object perception. A remarkable finding is that CPs can be moved in any direction by the (unseen) hand holding the original seen object. Moreover, a CP once formed will ‘jump’ into an extended hand and ‘stick’ in that hand as it moves about. The apparent size of a CP of a single object is determined by the size of the gap between finger and thumb, even when no object is touched. These CPs can be either magnified or minified via the grip of the extended hand. The felt orientation of the hand-held object will also determine the orientation of the CP seen in that hand. Thus, kinesthetic signals from hand and arm movements can determine perceived location, size, and orientation of CPs. A neural model based on physiological studies of premotor, temporal, parietal, and prefrontal cortices is proposed to account for these novel phenomena.
10

Guo, Fei, Yuan Yang, and Yong Gao. "Optimization of Visual Information Presentation for Visual Prosthesis." International Journal of Biomedical Imaging 2018 (2018): 1–12. http://dx.doi.org/10.1155/2018/3198342.

Abstract:
Visual prostheses that apply electrical stimulation to restore visual function for the blind have promising prospects. However, because of the low resolution, limited visual field, and low dynamic range of the induced visual perception, a huge loss of information occurs when presenting daily scenes. The ability to recognize objects in real-life scenarios is therefore severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in simulated prosthetic vision has been a focus of research. This paper proposes two image-processing strategies based on a salient-object-detection technique. The two strategies enable prosthetic implants to focus on the object of interest and suppress background clutter. Psychophysical experiments show that foreground zooming with background-clutter removal and foreground edge detection with background reduction both have positive impacts on object recognition in simulated prosthetic vision. By using the edge-detection and zooming techniques, the two processing strategies significantly improve object-recognition accuracy. We conclude that a visual prosthesis using the proposed strategies can help blind users improve their ability to recognize objects. The results provide effective solutions for the further development of visual prostheses.
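As a rough illustration of the first strategy (foreground zooming with background-clutter removal), here is a minimal sketch assuming a precomputed saliency mask; the function name, the hard background zeroing, and the nearest-neighbour zoom are all hypothetical choices, not the paper's pipeline:

```python
import numpy as np

def foreground_zoom(image, saliency_mask, out_size):
    """Crop the salient foreground, suppress the background, and zoom the
    crop to the (low-resolution) output grid of the simulated prosthesis.

    image:         (H, W) grayscale array
    saliency_mask: (H, W) boolean foreground map
    out_size:      (rows, cols) of the output grid
    """
    ys, xs = np.nonzero(saliency_mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    # zero out background clutter, then crop to the foreground bounding box
    crop = np.where(saliency_mask, image, 0)[y0:y1, x0:x1]
    # nearest-neighbour resize so the foreground fills the output grid
    h, w = crop.shape
    rows = np.arange(out_size[0]) * h // out_size[0]
    cols = np.arange(out_size[1]) * w // out_size[1]
    return crop[np.ix_(rows, cols)]
```

The zoom step is what compensates for the prosthesis's small visual field: the object of interest is magnified to occupy the few available phosphenes instead of sharing them with the background.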
11

Vannucci, Manila, Giuliana Mazzoni, Carlo Chiorri, and Lavinia Cioli. "Object imagery and object identification: object imagers are better at identifying spatially-filtered visual objects." Cognitive Processing 9, no. 2 (January 24, 2008): 137–43. http://dx.doi.org/10.1007/s10339-008-0203-5.

12

Sapkota, Raju P., Shahina Pardhan, and Ian van der Linde. "Change Detection in Visual Short-Term Memory." Experimental Psychology 62, no. 4 (September 2015): 232–39. http://dx.doi.org/10.1027/1618-3169/a000294.

Abstract:
Numerous kinds of visual event challenge our ability to keep track of the objects that populate our visual environment from moment to moment. These include blinks, occlusion, shifts of visual attention, and changes to objects' visual and spatial properties over time. Such visual events may lead to objects falling out of our visual awareness, but can also lead to unnoticed changes, such as undetected object replacements and positional exchanges. Current visual memory models do not predict which visual changes are likely to be the most difficult to detect. We examine the accuracy with which switches (where two objects exchange locations) and substitutions (where one or two objects are replaced) are detected. We found inferior performance for one-object substitutions versus two-object switches, along with superior performance for two-object substitutions versus two-object switches. Our results are interpreted in terms of object-file theory, trade-offs between diffused and localized attention, and net visual change.
13

Xu, Yaoda. "Distinctive Neural Mechanisms Supporting Visual Object Individuation and Identification." Journal of Cognitive Neuroscience 21, no. 3 (March 2009): 511–18. http://dx.doi.org/10.1162/jocn.2008.21024.

Abstract:
Many everyday activities, such as driving on a busy street, require the encoding of distinctive visual objects from crowded scenes. Given resource limitations of our visual system, one solution to this difficult and challenging task is to first select individual objects from a crowded scene (object individuation) and then encode their details (object identification). Using functional magnetic resonance imaging, two distinctive brain mechanisms were recently identified that support these two stages of visual object processing. While the inferior intraparietal sulcus (IPS) selects a fixed number of about four objects via their spatial locations, the superior IPS and the lateral occipital complex (LOC) encode the features of a subset of the selected objects in great detail (object shapes in this case). Thus, the inferior IPS individuates visual objects from a crowded display and the superior IPS and higher visual areas participate in subsequent object identification. Consistent with the prediction of this theory, even when only object shape identity but not its location is task relevant, this study shows that object individuation in the inferior IPS treats four identical objects similarly as four objects that are all different, whereas object shape identification in the superior IPS and the LOC treat four identical objects as a single unique object. These results provide independent confirmation supporting the dissociation between visual object individuation and identification in the brain.
14

Todd, Steven, and Arthur F. Kramer. "Attentional Guidance in Visual Attention." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37, no. 19 (October 1993): 1378–82. http://dx.doi.org/10.1518/107118193784162290.

Abstract:
Earlier research has shown that a task-irrelevant sudden onset of an object will capture or draw an observer's visual attention to that object's location (e.g., Yantis & Jonides, 1984). In the four experiments reported here, we explore the question of whether task-irrelevant properties other than sudden-onset may capture attention. Our results suggest that a uniquely colored or luminous object, as well as an irrelevant boundary, may indeed capture or guide attention, though apparently to a lesser degree than a sudden onset: it appears that the degree of attentional capture is dependent on the relative salience of the varied, irrelevant dimension. Whereas a sudden onset is very salient, a uniquely colored object, for example, is only salient relative to the other objects within view, both to the degree that it is different in hue from its neighbors and the number of neighbors from which it differs. The relationship of these findings to work in the fields of visual momentum and visual scanning is noted.
15

Clarke, Alex, Philip J. Pell, Charan Ranganath, and Lorraine K. Tyler. "Learning Warps Object Representations in the Ventral Temporal Cortex." Journal of Cognitive Neuroscience 28, no. 7 (July 2016): 1010–23. http://dx.doi.org/10.1162/jocn_a_00951.

Abstract:
The human ventral temporal cortex (VTC) plays a critical role in object recognition. Although it is well established that visual experience shapes VTC object representations, the impact of semantic and contextual learning is unclear. In this study, we tracked changes in representations of novel visual objects that emerged after learning meaningful information about each object. Over multiple training sessions, participants learned to associate semantic features (e.g., “made of wood,” “floats”) and spatial contextual associations (e.g., “found in gardens”) with novel objects. fMRI was used to examine VTC activity for objects before and after learning. Multivariate pattern similarity analyses revealed that, after learning, VTC activity patterns carried information about the learned contextual associations of the objects, such that objects with contextual associations exhibited higher pattern similarity after learning. Furthermore, these learning-induced increases in pattern information about contextual associations were correlated with reductions in pattern information about the object's visual features. In a second experiment, we validated that these contextual effects translated to real-life objects. Our findings demonstrate that visual object representations in VTC are shaped by the knowledge we have about objects and show that object representations can flexibly adapt as a consequence of learning with the changes related to the specific kind of newly acquired information.
16

Jankowiak, Janet, Marcel Kinsbourne, Ruth S. Shalev, and David L. Bachman. "Preserved Visual Imagery and Categorization in a Case of Associative Visual Agnosia." Journal of Cognitive Neuroscience 4, no. 2 (April 1992): 119–31. http://dx.doi.org/10.1162/jocn.1992.4.2.119.

Abstract:
A patient with associative visual agnosia secondary to a penetrating bitemporooccipital lesion remained able to draw complex objects from memory but could not subsequently recognize his sketches. His retained ability to copy and draw briefly exposed objects indicates that this is not a problem of visual perception. On tasks of categorization, mental imagery, drawing, and object decision, he demonstrates many instances of preserved visual semantic memories and imagery despite a sense of unfamiliarity with the visual stimuli. We infer a preserved ability to derive internal visual images from semantic memory. Cues may help him visualize the named object, which then serves as a model for comparison with the actual stimulus. However, his adequate visual perception and mental visual imagery, even when assisted by cues, are still insufficient to correct fully his difficulty in recognizing objects. Unique to his case is an inability to match the representation of stimulus objects with his intact internal image of the same object. Deficient lateral inhibition between neural representations of similar objects may be responsible.
17

Moore, Cathleen M., Steven Yantis, and Barry Vaughan. "Object-Based Visual Selection: Evidence From Perceptual Completion." Psychological Science 9, no. 2 (March 1998): 104–10. http://dx.doi.org/10.1111/1467-9280.00019.

Abstract:
A large body of evidence suggests that visual attention selects objects as well as spatial locations. If attention is to be regarded as truly object based, then it should operate not only on object representations that are explicit in the image, but also on representations that are the result of earlier perceptual completion processes. Reporting the results of two experiments, we show that when attention is directed to part of a perceptual object, other parts of that object enjoy an attentional advantage as well. In particular, we show that this object-specific attentional advantage accrues to partly occluded objects and to objects defined by subjective contours. The results corroborate the claim that perceptual completion precedes object-based attentional selection.
18

Streri, Arlette, and Michèle Molina. "Visual–Tactual and Tactual–Visual Transfer between Objects and Pictures in 2-Month-Old Infants." Perception 22, no. 11 (November 1993): 1299–318. http://dx.doi.org/10.1068/p221299.

Abstract:
Previous studies have provided evidence for transfer of perception of object shape from touch to vision, but not from vision to touch, in young infants. Previous studies also indicate that intermodal recognition can produce a preference either for a matching or for a nonmatching object. We investigated the causes of asymmetries in intermodal transfer and of familiarity preference versus novelty preference in transfer tasks. The data support three conclusions: (i) transfer from vision to touch is possible under certain conditions and is facilitated by the use of two-dimensional (2-D) visual representations rather than three-dimensional (3-D) visual objects; (ii) the direction of preferences in a transfer task depends on the degree of dissimilarity between the haptically and visually presented objects: familiarity preferences increase with increasing difference between the object to be recognised and the familiar object; (iii) infants are able to perceive the 3-D shape of an object both visually and haptically, and they are sensitive both to commonalities and to discrepancies between the shapes of 3-D objects and of their 2-D representations. Hierarchical levels of perceptual processing are proposed to account for these findings.
19

Woods, Andrew T., Allison Moore, and Fiona N. Newell. "Canonical Views in Haptic Object Perception." Perception 37, no. 12 (January 1, 2008): 1867–78. http://dx.doi.org/10.1068/p6038.

Abstract:
Previous investigations of visual object recognition have found that some views of both familiar and unfamiliar objects promote more efficient recognition performance than other views. These views are considered canonical and are often the views that present the most information about an object's 3-D structure and features in the image. Although objects can also be efficiently recognised with touch alone, little is known about whether some views promote more efficient recognition than others. This may seem unlikely, given that the object's structure and features are readily available to the hand during object exploration. We conducted two experiments to investigate whether canonical views exist in haptic object recognition. In the first, participants were required to position each object in a way that would present the best view for learning the object with touch alone. We found a large degree of consistency in viewpoint position across participants for both familiar and unfamiliar objects. In a second experiment, we found that these consistent, or canonical, views promoted better haptic recognition performance than other, random views of the objects. Interestingly, these haptic canonical views were not necessarily the same as the canonical views normally found in visual perception. Nevertheless, our findings support the idea that the visual and tactile systems are functionally equivalent in terms of how objects are represented in memory and subsequently recognised.
20

Burnett, Margaret M. "Visual object-oriented programming." ACM SIGPLAN OOPS Messenger 5, no. 2 (April 1994): 127–29. http://dx.doi.org/10.1145/260304.261240.

21

Place, S. S., and J. M. Wolfe. "Multiple visual object juggling." Journal of Vision 5, no. 8 (March 16, 2010): 27. http://dx.doi.org/10.1167/5.8.27.

22

Vasylenko, Mykola, and Maksym Haida. "Visual Object Recognition System." Electronics and Control Systems 3, no. 73 (November 24, 2022): 9–19. http://dx.doi.org/10.18372/1990-5548.73.17007.

Abstract:
This article introduces the problem of object detection and recognition. The potential mobility of the solution, its ease of installation and initial setup, and the absence of expensive, resource-intensive, and complex image-collection and image-processing systems are presented. Candidate solutions to the problem are reviewed, along with the advantages and disadvantages of each. The system's algorithm, developed within the framework of object-recognition techniques, combines contour extraction using a filter based on the Prewitt operator with a detector of characteristic points. The reader can follow the interim and final demonstrations of the system's algorithm in this article to learn about its advantages over traditional video-surveillance systems, as well as some of its disadvantages. To illustrate how the system works, a webcam with a frame rate of 25 frames per second, a mobile phone, and a PC with the Matlab2020 programming environment installed (chosen for its convenience and built-in image-processing functions) are required.
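The contour-extraction step mentioned in this abstract relies on the standard 3x3 Prewitt kernels, which can be sketched generically as follows (a naive, unoptimized convolution for clarity; the article's full pipeline with its feature-point detector is not reproduced here):

```python
import numpy as np

def prewitt_edges(image):
    """Gradient-magnitude edge map using the 3x3 Prewitt kernels.

    Returns an (H-2, W-2) array; larger values indicate stronger
    intensity gradients (i.e., likely contours)."""
    kx = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                  # vertical gradient
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # combined gradient magnitude
```

In MATLAB the same step is a one-liner (`edge(I, 'prewitt')`), which is presumably part of why the authors chose that environment.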
23

Proklova, Daria, Daniel Kaiser, and Marius V. Peelen. "Disentangling Representations of Object Shape and Object Category in Human Visual Cortex: The Animate–Inanimate Distinction." Journal of Cognitive Neuroscience 28, no. 5 (May 2016): 680–92. http://dx.doi.org/10.1162/jocn_a_00924.

Abstract:
Objects belonging to different categories evoke reliably different fMRI activity patterns in human occipitotemporal cortex, with the most prominent distinction being that between animate and inanimate objects. An unresolved question is whether these categorical distinctions reflect category-associated visual properties of objects or whether they genuinely reflect object category. Here, we addressed this question by measuring fMRI responses to animate and inanimate objects that were closely matched for shape and low-level visual features. Univariate contrasts revealed animate- and inanimate-preferring regions in ventral and lateral temporal cortex even for individually matched object pairs (e.g., snake–rope). Using representational similarity analysis, we mapped out brain regions in which the pairwise dissimilarity of multivoxel activity patterns (neural dissimilarity) was predicted by the objects' pairwise visual dissimilarity and/or their categorical dissimilarity. Visual dissimilarity was measured as the time it took participants to find a unique target among identical distractors in three visual search experiments, where we separately quantified overall dissimilarity, outline dissimilarity, and texture dissimilarity. All three visual dissimilarity structures predicted neural dissimilarity in regions of visual cortex. Interestingly, these analyses revealed several clusters in which categorical dissimilarity predicted neural dissimilarity after regressing out visual dissimilarity. Together, these results suggest that the animate–inanimate organization of human visual cortex is not fully explained by differences in the characteristic shape or texture properties of animals and inanimate objects. Instead, representations of visual object properties and object category may coexist in more anterior parts of the visual system.
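The key analysis step in this abstract (testing whether categorical dissimilarity still predicts neural dissimilarity after regressing out visual dissimilarity) can be sketched as a partial correlation over vectorized representational dissimilarity matrices. This is a generic illustration of the analysis logic, not the authors' code; the function name and the simple least-squares residualization are assumptions:

```python
import numpy as np

def partial_rdm_correlation(neural, categorical, visual):
    """Correlate categorical and neural dissimilarity vectors after
    regressing out visual dissimilarity from both.

    All three inputs are 1-D vectors, e.g. the upper triangles of
    pairwise representational dissimilarity matrices (RDMs)."""
    def residualize(y, x):
        # residuals of y after a linear fit on x (with intercept)
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta
    n = residualize(neural, visual)
    c = residualize(categorical, visual)
    return np.corrcoef(n, c)[0, 1]
```

A positive partial correlation in some region would correspond to the paper's finding that category information survives after visual dissimilarity is accounted for.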
24

Shen, Mowei, Wenjun Yu, Xiaotian Xu, and Zaifeng Gao. "Building Blocks of Visual Working Memory: Objects or Boolean Maps?" Journal of Cognitive Neuroscience 25, no. 5 (May 2013): 743–53. http://dx.doi.org/10.1162/jocn_a_00348.

Abstract:
The nature of the building blocks of information in visual working memory (VWM) is a fundamental issue that has not been well resolved. Most researchers take objects as the building blocks, although this perspective has received criticism. The objects could be physically separated ones (strict object hypothesis) or hierarchical objects created from separated individuals (broad object hypothesis). Meanwhile, a newly proposed Boolean map theory for visual attention suggests that Boolean maps may be the building blocks of VWM (Boolean map hypothesis); this perspective could explain many critical findings of VWM. However, no previous study has examined these hypotheses. We explored this issue by focusing on a critical point on which they make distinct predictions. We asked participants to remember two distinct objects (2-object), three distinct objects (3-object), or three objects with repeated information (mixed-3-object, e.g., one red bar and two green bars, green bars could be represented as one hierarchical object) and adopted contralateral delay activity (CDA) to tap into the maintenance phase of VWM. The mixed-3-object condition could generate two Boolean maps, three objects, or three objects most of the time (hierarchical objects are created in certain trials, retaining two objects). Simple orientations (Experiment 1) and colors (Experiments 2 and 3) were used as stimuli. Although the CDA of the mixed-3-object condition was slightly lower than that of the 3-object condition, no significant difference was revealed between them. Both conditions displayed significantly higher CDAs than the 2-object condition. These findings support the broad object hypothesis. We further suggest that Boolean maps might be the unit for retrieval/comparison in VWM.
25

Turatto, Massimo, Veronica Mazza, and Carlo Umiltà. "Crossmodal object-based attention: Auditory objects affect visual processing." Cognition 96, no. 2 (June 2005): B55—B64. http://dx.doi.org/10.1016/j.cognition.2004.12.001.

26

Schweizer, Tom A., and Mike J. Dixon. "The influence of visual and nonvisual attributes in visual object identification." Journal of the International Neuropsychological Society 12, no. 2 (March 2006): 176–83. http://dx.doi.org/10.1017/s1355617706060279.

Abstract:
To elucidate the role of visual and nonvisual attribute knowledge on visual object identification, we present data from three patients, each with visual object identification impairments as a result of different etiologies. Patients were shown novel computer-generated shapes paired with different labels referencing known entities. On test trials they were shown the novel shapes alone and had to identify them by generating the label with which they were formerly paired. In all conditions the same triad of computer-generated shapes were used. In one condition, the labels (banjo, guitar, violin) referenced entities that were both visually similar and similar in terms of their nonvisual attributes within semantics. In separate conditions we used labels (e.g., spike, straw, pencil or snorkel, cane, crowbar) that referenced entities that were similar in terms of their visual attributes but were dissimilar in terms of their nonvisual attributes. The results revealed that nonvisual attribute information profoundly influenced visual object identification. Our patients performed significantly better when attempting to identify shape triads whose labels referenced objects with distinct nonvisual attributes versus shape triads whose labels referenced objects with similar nonvisual attributes. We conclude that the nonvisual aspects of meaning must be taken into consideration when assessing visual object identification impairments. (JINS, 2006, 12, 176–183.)
27

Humphreys, Glyn W. "Neural representation of objects in space: a dual coding account." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 353, no. 1373 (August 29, 1998): 1341–51. http://dx.doi.org/10.1098/rstb.1998.0288.

Abstract:
I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on (i) visual neglect and (ii) reading and counting reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification.
APA, Harvard, Vancouver, ISO, and other styles
28

Jeong, Su Keun, and Yaoda Xu. "Task-context-dependent Linear Representation of Multiple Visual Objects in Human Parietal Cortex." Journal of Cognitive Neuroscience 29, no. 10 (October 2017): 1778–89. http://dx.doi.org/10.1162/jocn_a_01156.

Full text
Abstract:
A host of recent studies have reported robust representations of visual object information in the human parietal cortex, similar to those found in ventral visual cortex. In ventral visual cortex, both monkey neurophysiology and human fMRI studies showed that the neural representation of a pair of unrelated objects can be approximated by the averaged neural representation of the constituent objects shown in isolation. In this study, we examined whether such a linear relationship between objects exists for object representations in the human parietal cortex. Using fMRI and multivoxel pattern analysis, we examined object representations in human inferior and superior intraparietal sulcus, two parietal regions previously implicated in visual object selection and encoding, respectively. We also examined responses from the lateral occipital region, a ventral object processing area. We obtained fMRI response patterns to object pairs and their constituent objects shown in isolation while participants viewed these objects and performed a 1-back repetition detection task. By measuring fMRI response pattern correlations, we found that all three brain regions contained representations for both single objects and object pairs. In the lateral occipital region, the representation for a pair of objects could be reliably approximated by the average representation of its constituent objects shown in isolation, replicating previous findings in ventral visual cortex. Such a simple linear relationship, however, was not observed in either parietal region examined. Nevertheless, when we equated the amount of task information present by examining responses from two pairs of objects, we found that representations for the average of two object pairs were indistinguishable in both parietal regions from the average of another two object pairs containing the same four component objects but with a different pairing of the objects (i.e., the average of AB and CD vs. that of AD and CB). Thus, when task information was held consistent, the same linear relationship may govern how multiple independent objects are represented in the human parietal cortex as it does in ventral visual cortex. These findings show that object and task representations coexist in the human parietal cortex and characterize one significant difference in how visual information may be represented in ventral visual and parietal regions.
APA, Harvard, Vancouver, ISO, and other styles
29

Breveglieri, Rossella, Claudio Galletti, Annalisa Bosco, Michela Gamberini, and Patrizia Fattori. "Object Affordance Modulates Visual Responses in the Macaque Medial Posterior Parietal Cortex." Journal of Cognitive Neuroscience 27, no. 7 (July 2015): 1447–55. http://dx.doi.org/10.1162/jocn_a_00793.

Full text
Abstract:
Area V6A is a visuomotor area of the dorsomedial visual stream that contains cells modulated by object observation and by grip formation. As different objects have different shapes but also evoke different grips, the response selectivity during object presentation could reflect either the coding of object geometry or object affordances. To clarify this point, we here investigate neural responses of V6A cells when monkeys observed two objects with similar visual features but different contextual information, such as the evoked grip type. We demonstrate that many V6A cells respond to the visual presentation of objects and that about 30% of them are modulated by object affordance. Given that area V6A is an early stage in the visuomotor processes underlying grasping, these data suggest that V6A may participate in the computation of object affordances. These results add to the recent literature on the role of the dorsal visual stream areas in object representation and contribute to elucidating the neural correlates of the extraction of action-relevant information from general object properties, in agreement with recent neuroimaging studies on humans showing that vision of graspable objects activates action coding in the dorsomedial visual stream.
APA, Harvard, Vancouver, ISO, and other styles
30

Zhang, Tao, Yang Cong, Gan Sun, Qianqian Wang, and Zhenming Ding. "Visual Tactile Fusion Object Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10426–33. http://dx.doi.org/10.1609/aaai.v34i06.6612.

Full text
Abstract:
Object clustering, aiming at grouping similar objects into one cluster with an unsupervised strategy, has been extensively studied among various data-driven applications. However, most existing state-of-the-art object clustering methods (e.g., single-view or multi-view clustering methods) only explore visual information, while ignoring one of the most important sensing modalities, i.e., tactile information, which can help capture different object properties and further boost the performance of the object clustering task. To effectively leverage both visual and tactile modalities for object clustering, in this paper we propose a deep Auto-Encoder-like Non-negative Matrix Factorization framework for visual-tactile fusion clustering. Specifically, deep matrix factorization constrained by an under-complete Auto-Encoder-like architecture is employed to jointly learn hierarchical expression of visual-tactile fusion data, and preserve the local structure of the data-generating distribution of visual and tactile modalities. Meanwhile, a graph regularizer is introduced to capture the intrinsic relations of data samples within each modality. Furthermore, we propose a modality-level consensus regularizer to effectively align the visual and tactile data in a common subspace in which the gap between visual and tactile data is mitigated. For the model optimization, we present an efficient alternating minimization strategy to solve our proposed model. Finally, we conduct extensive experiments on public datasets to verify the effectiveness of our framework.
APA, Harvard, Vancouver, ISO, and other styles
31

Petitjean, Sylvain. "A Computational Geometric Approach to Visual Hulls." International Journal of Computational Geometry & Applications 08, no. 04 (August 1998): 407–36. http://dx.doi.org/10.1142/s0218195998000229.

Full text
Abstract:
Recognizing 3D objects from their 2D silhouettes is a popular topic in computer vision. Object reconstruction can be performed using the volume intersection approach. The visual hull of an object is the best approximation of an object that can be obtained by volume intersection. From the point of view of recognition from silhouettes, the visual hull cannot be distinguished from the original object. In this paper, we present efficient algorithms for computing visual hulls. We start with the case of planar figures (polygons and curved objects) and base our approach on an efficient algorithm for computing the visibility graph of planar figures. We present and tackle many topics related to the query of visual hulls and to the recognition of objects equal to their visual hulls. We then move on to the 3-dimensional case and give a flavor of how it may be approached.
APA, Harvard, Vancouver, ISO, and other styles
32

van Weelden, Lisanne, Alfons Maes, Joost Schilperoord, and Marc Swerts. "How Object Shape Affects Visual Metaphor Processing." Experimental Psychology 59, no. 6 (January 1, 2012): 364–71. http://dx.doi.org/10.1027/1618-3169/a000165.

Full text
Abstract:
In order to interpret novel metaphoric relations, we have to construct ad hoc categories under which the metaphorically related concepts can be subsumed. Shape is considered to be one of the primary vehicles of object categorization. Accordingly, shape might play a prominent role in interpreting visual metaphors (i.e., two metaphorically related objects depicted in one visual array). This study explores the role of object shape in visual metaphor interpretation by 10- to 12-year-olds. The experiment shows that participants can produce more correspondences between similarly shaped objects than between dissimilarly shaped objects and that they need less thinking time to do so. These findings suggest that similarity in shape facilitates the process of interpreting visual metaphors.
APA, Harvard, Vancouver, ISO, and other styles
33

Strother, Lars, Adrian Aldcroft, Cheryl Lavell, and Tutis Vilis. "Equal Degrees of Object Selectivity for Upper and Lower Visual Field Stimuli." Journal of Neurophysiology 104, no. 4 (October 2010): 2075–81. http://dx.doi.org/10.1152/jn.00462.2010.

Full text
Abstract:
Functional MRI (fMRI) studies of the human object recognition system commonly identify object-selective cortical regions by comparing blood oxygen level–dependent (BOLD) responses to objects versus those to scrambled objects. Object selectivity distinguishes human lateral occipital cortex (LO) from earlier visual areas. Recent studies suggest that, in addition to being object selective, LO is retinotopically organized; LO represents both object and location information. Although LO responses to objects have been shown to depend on location, it is not known whether responses to scrambled objects vary similarly. This is important because it would suggest that the degree of object selectivity in LO does not vary with retinal stimulus position. We used a conventional functional localizer to identify human visual area LO by comparing BOLD responses to objects versus scrambled objects presented to either the upper (UVF) or lower (LVF) visual field. In agreement with recent findings, we found evidence of position-dependent responses to objects. However, we observed the same degree of position dependence for scrambled objects and thus object selectivity did not differ for UVF and LVF stimuli. We conclude that, in terms of BOLD response, LO discriminates objects from non-objects equally well in either visual field location, despite stronger responses to objects in the LVF.
APA, Harvard, Vancouver, ISO, and other styles
34

Yeari, Menahem, and Morris Goldsmith. "Hierarchical Navigation of Visual Attention." Experimental Psychology 62, no. 6 (November 2015): 353–70. http://dx.doi.org/10.1027/1618-3169/a000306.

Full text
Abstract:
This study explored the dynamics of attentional navigation between two hierarchically structured objects. Three experiments examined a Hierarchical Attentional Navigation (HAN) hypothesis, by which attentional navigation between two visual stimuli is constrained to follow the path linking the two stimuli in a hierarchical object-based representation. Presented with two adjacent compound-letter objects on each trial, participants successively identified the letter(s) at the specified hierarchical level (global or local) of the origin and destination object, respectively: local-local (Experiment 1), global-local (Experiment 2a), or local-global (Experiment 2b). The organizational complexity of the objects (2-level structure vs. 3-level structure) and their global size (large vs. small) were orthogonally manipulated. Results were generally consistent with the HAN hypothesis: overall response latency was positively related to the number of intervening levels of hierarchical object structure linking the two target levels. Hierarchical navigation was also suggested by the pattern of global size effects. The usefulness of the HAN framework for interpreting these and related findings in attention research is discussed.
APA, Harvard, Vancouver, ISO, and other styles
35

Rennig, Johannes, Sonja Cornelsen, Helmut Wilhelm, Marc Himmelbach, and Hans-Otto Karnath. "Preserved Expert Object Recognition in a Case of Visual Hemiagnosia." Journal of Cognitive Neuroscience 30, no. 2 (February 2018): 131–43. http://dx.doi.org/10.1162/jocn_a_01193.

Full text
Abstract:
We examined a stroke patient (HWS) with a unilateral lesion of the right medial ventral visual stream, involving the right fusiform and parahippocampal gyri. In a number of object recognition tests with lateralized presentations of target stimuli, HWS showed significant symptoms of hemiagnosia with contralesional recognition deficits for everyday objects. We further explored the patient's capacities of visual expertise acquired before the onset of his perceptual impairment. We confronted him with objects for which he had been an expert already before stroke onset and compared this performance with the recognition of familiar everyday objects. HWS was able to identify significantly more of the specific ("expert") than of the everyday objects on the affected contralesional side. This observation of better expert object recognition in visual hemiagnosia allows for several interpretations. The results may be caused by enhanced information processing for expert objects in the ventral system in the affected or the intact hemisphere. Expert knowledge could trigger top-down mechanisms supporting object recognition despite impaired basic functions of object processing. More importantly, the current work demonstrates that top-down mechanisms of visual expertise influence object recognition at an early stage, probably before visual object information propagates to modules of higher object recognition. Because HWS showed a lesion to the fusiform gyrus and spared capacities of expert object recognition, the current study emphasizes possible contributions of areas outside the ventral stream to visual expertise.
APA, Harvard, Vancouver, ISO, and other styles
36

Emrich, Stephen M., Hana Burianová, and Susanne Ferber. "Transient Perceptual Neglect: Visual Working Memory Load Affects Conscious Object Processing." Journal of Cognitive Neuroscience 23, no. 10 (October 2011): 2968–82. http://dx.doi.org/10.1162/jocn_a_00028.

Full text
Abstract:
Visual working memory (VWM) is a capacity-limited cognitive resource that plays an important role in complex cognitive behaviors. Recent studies indicate that regions subserving VWM may play a role in the perception and recognition of visual objects, suggesting that conscious object perception may depend on the same cognitive and neural architecture that supports the maintenance of visual object information. In the present study, we examined this question by testing object processing under a concurrent VWM load. Under a high VWM load, recognition was impaired for objects presented in the left visual field, in particular when two objects were presented simultaneously. Multivariate fMRI revealed that two independent but partially overlapping networks of brain regions contribute to object recognition. The first network consisted of regions involved in VWM encoding and maintenance. Importantly, these regions were also sensitive to object load. The second network comprised regions of the ventral temporal lobes traditionally associated with object recognition. Importantly, activation in both networks predicted object recognition performance. These results indicate that information processing in regions that mediate VWM may be critical to conscious visual perception. Moreover, the observation of a hemifield asymmetry in object recognition performance has important theoretical and clinical significance for the study of visual neglect.
APA, Harvard, Vancouver, ISO, and other styles
37

Kaiser, Daniel, and Radoslaw M. Cichy. "Typical visual-field locations enhance processing in object-selective channels of human occipital cortex." Journal of Neurophysiology 120, no. 2 (August 1, 2018): 848–53. http://dx.doi.org/10.1152/jn.00229.2018.

Full text
Abstract:
Natural environments consist of multiple objects, many of which repeatedly occupy similar locations within a scene. For example, hats are seen on people’s heads, while shoes are most often seen close to the ground. Such positional regularities bias the distribution of objects across the visual field: hats are more often encountered in the upper visual field, while shoes are more often encountered in the lower visual field. Here we tested the hypothesis that typical visual field locations of objects facilitate cortical processing. We recorded functional MRI while participants viewed images of objects that were associated with upper or lower visual field locations. Using multivariate classification, we show that object information can be more successfully decoded from response patterns in object-selective lateral occipital cortex (LO) when the objects are presented in their typical location (e.g., shoe in the lower visual field) than when they are presented in an atypical location (e.g., shoe in the upper visual field). In a functional connectivity analysis, we relate this benefit to increased coupling between LO and early visual cortex, suggesting that typical object positioning facilitates information propagation across the visual hierarchy. Together these results suggest that object representations in occipital visual cortex are tuned to the structure of natural environments. This tuning may support object perception in spatially structured environments. NEW & NOTEWORTHY In the real world, objects appear in predictable spatial locations. Hats, commonly appearing on people’s heads, often fall into the upper visual field. Shoes, mostly appearing on people’s feet, often fall into the lower visual field. Here we used functional MRI to demonstrate that such regularities facilitate cortical processing: Objects encountered in their typical locations are coded more efficiently, which may allow us to effortlessly recognize objects in natural environments.
APA, Harvard, Vancouver, ISO, and other styles
38

Smith, Cybelle M., and Kara D. Federmeier. "Neural Signatures of Learning Novel Object–Scene Associations." Journal of Cognitive Neuroscience 32, no. 5 (May 2020): 783–803. http://dx.doi.org/10.1162/jocn_a_01530.

Full text
Abstract:
Objects are perceived within rich visual contexts, and statistical associations may be exploited to facilitate their rapid recognition. Recent work using natural scene–object associations suggests that scenes can prime the visual form of associated objects, but it remains unknown whether this relies on an extended learning process. We asked participants to learn categorically structured associations between novel objects and scenes in a paired associate memory task while ERPs were recorded. In the test phase, scenes were first presented (2500 msec), followed by objects that matched or mismatched the scene; degree of contextual mismatch was manipulated along visual and categorical dimensions. Matching objects elicited a reduced N300 response, suggesting visuostructural priming based on recently formed associations. Amplitude of an extended positivity (onset ∼200 msec) was sensitive to visual distance between the presented object and the contextually associated target object, most likely indexing visual template matching. Results suggest recent associative memories may be rapidly recruited to facilitate object recognition in a top–down fashion, with clinical implications for populations with impairments in hippocampal-dependent memory and executive function.
APA, Harvard, Vancouver, ISO, and other styles
39

Bal, Mieke. "Visual essentialism and the object of visual culture." Journal of Visual Culture 2, no. 1 (April 2003): 5–32. http://dx.doi.org/10.1177/147041290300200101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Bal, Mieke. "Visual essentialism and the object of visual culture." Journal of Visual Culture 2, no. 1 (April 1, 2003): 5–32. http://dx.doi.org/10.1177/1470412903002001926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Hulleman, Johan, and Frans Boselie. "Visual attention and objects: New tests of two-object cost." Psychonomic Bulletin & Review 4, no. 3 (September 1997): 367–73. http://dx.doi.org/10.3758/bf03210794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Baylis, Gordon C. "Visual attention and objects: Two-object cost with equal convexity." Journal of Experimental Psychology: Human Perception and Performance 20, no. 1 (February 1994): 208–12. http://dx.doi.org/10.1037/0096-1523.20.1.208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Kushnier, A., and Z. W. Pylyshyn. "Can flashing objects grab visual indexes in multiple object tracking?" Journal of Vision 3, no. 9 (March 18, 2010): 581. http://dx.doi.org/10.1167/3.9.581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Martin-Emerson, Robin, and Arthur F. Kramer. "Capture of Attention by Visual Onsets." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 39, no. 21 (October 1995): 1385–89. http://dx.doi.org/10.1177/154193129503902106.

Full text
Abstract:
The appearance of a new object within a multiple item display has been shown to capture attention in a stimulus-driven manner. Capture may be either beneficial or detrimental to performance depending on whether the new object is a target or distractor. In the present study we show that the ability of new objects to capture attention is mediated by the number of objects that change or morph. This finding establishes a boundary condition of the phenomena of attentional capture and has implications for the design of complex displays.
APA, Harvard, Vancouver, ISO, and other styles
45

Berti, Stefan, and Peter Wühr. "Using Redundant Visual Information From Different Dimensions for Attentional Selection." Journal of Psychophysiology 26, no. 3 (January 2012): 99–104. http://dx.doi.org/10.1027/0269-8803/a000072.

Full text
Abstract:
The present study investigated the use of redundant information for attentional selection of a visual object. Each display contained two overlapping objects, and participants had to report the color of the occluding object. A baseline condition did not require object selection because the objects were identical. A single-cue condition required object selection based on spatial arrangement (i.e., occlusion) because the objects had the same shape. A double-cue condition afforded object selection by occlusion and shape because the objects consistently differed in shape. Behavioral results showed that the redundant shape cue facilitated attentional selection, although participants were never supposed to respond to shape. The Event-Related Brain Potential (ERP) results showed a posterior N2 effect in both selection conditions, and a frontal N2 effect in the double-cue condition only. These results suggest that the redundancy gain in the double-cue condition relied on processes of voluntary attention, presumably the increase of attentional weights for visual shape information.
APA, Harvard, Vancouver, ISO, and other styles
46

Quek, Genevieve L., and Marius V. Peelen. "Contextual and Spatial Associations Between Objects Interactively Modulate Visual Processing." Cerebral Cortex 30, no. 12 (August 5, 2020): 6391–404. http://dx.doi.org/10.1093/cercor/bhaa197.

Full text
Abstract:
Much of what we know about object recognition arises from the study of isolated objects. In the real world, however, we commonly encounter groups of contextually associated objects (e.g., teacup and saucer), often in stereotypical spatial configurations (e.g., teacup above saucer). Here we used electroencephalography to test whether identity-based associations between objects (e.g., teacup–saucer vs. teacup–stapler) are encoded jointly with their typical relative positioning (e.g., teacup above saucer vs. below saucer). Observers viewed a 2.5-Hz image stream of contextually associated object pairs intermixed with nonassociated pairs as every fourth image. The differential response to nonassociated pairs (measurable at 0.625 Hz in 28/37 participants) served as an index of contextual integration, reflecting the association of object identities in each pair. Over right occipitotemporal sites, this signal was larger for typically positioned object streams, indicating that spatial configuration facilitated the extraction of the objects' contextual association. This high-level influence of spatial configuration on object identity integration arose ~320 ms post-stimulus onset, with lower-level perceptual grouping (shared with inverted displays) present at ~130 ms. These results demonstrate that contextual and spatial associations between objects interactively influence object processing. We interpret these findings as reflecting the high-level perceptual grouping of objects that frequently co-occur in highly stereotyped relative positions.
APA, Harvard, Vancouver, ISO, and other styles
47

Vecera, Shaun P. "Visual object representation: An introduction." Psychobiology 26, no. 4 (December 1998): 281–308. http://dx.doi.org/10.3758/bf03330617.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Yi, Yuexian Zou, and Wenwu Wang. "Manifold-Based Visual Object Counting." IEEE Transactions on Image Processing 27, no. 7 (July 2018): 3248–63. http://dx.doi.org/10.1109/tip.2018.2799328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Spriet, Céline, Etienne Abassi, Jean-Rémy Hochmann, and Liuba Papeo. "Visual object categorization in infancy." Journal of Vision 20, no. 11 (October 20, 2020): 1079. http://dx.doi.org/10.1167/jov.20.11.1079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Seiffert, A. E. "Visual attention mediates object control." Journal of Vision 4, no. 8 (August 1, 2004): 267. http://dx.doi.org/10.1167/4.8.267.

Full text
APA, Harvard, Vancouver, ISO, and other styles