Journal articles on the topic "Invariant Object Recognition"

To see the other types of publications on this topic, follow the link: Invariant Object Recognition.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Invariant Object Recognition".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Wood, Justin N., and Samantha M. W. Wood. "The development of newborn object recognition in fast and slow visual worlds." Proceedings of the Royal Society B: Biological Sciences 283, no. 1829 (April 27, 2016): 20160166. http://dx.doi.org/10.1098/rspb.2016.0166.

Abstract:
Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world.
2

Isik, Leyla, Ethan M. Meyers, Joel Z. Leibo, and Tomaso Poggio. "The dynamics of invariant object recognition in the human visual system." Journal of Neurophysiology 111, no. 1 (January 1, 2014): 91–102. http://dx.doi.org/10.1152/jn.00394.2013.

Abstract:
The human visual system can rapidly recognize objects despite transformations that alter their appearance. The precise timing of when the brain computes neural representations that are invariant to particular transformations, however, has not been mapped in humans. Here we employ magnetoencephalography decoding analysis to measure the dynamics of size- and position-invariant visual information development in the ventral visual stream. With this method we can read out the identity of objects beginning as early as 60 ms. Size- and position-invariant visual information appear around 125 ms and 150 ms, respectively, and both develop in stages, with invariance to smaller transformations arising before invariance to larger transformations. Additionally, the magnetoencephalography sensor activity localizes to neural sources that are in the most posterior occipital regions at the early decoding times and then move temporally as invariant information develops. These results provide previously unknown latencies for key stages of human-invariant object recognition, as well as new and compelling evidence for a feed-forward hierarchical model of invariant object recognition where invariance increases at each successive visual area along the ventral stream.
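The time-resolved decoding approach described here can be sketched as a classifier retrained at every time point. The study itself used a maximum-correlation classifier on MEG sensor vectors; the snippet below is a minimal stand-in with a nearest-centroid decoder on synthetic data, so the function name, data shapes, and classifier choice are illustrative only.

```python
import numpy as np

def time_resolved_decoding(X, y, n_train):
    """Train a nearest-centroid decoder independently at each time point.

    X : array (trials, sensors, timepoints) of sensor activity
    y : array (trials,) of integer object labels
    n_train : trials per class used to build the class centroids
    Returns decoding accuracy on the held-out trials at each time point.
    """
    classes = np.unique(y)
    n_time = X.shape[2]
    acc = np.zeros(n_time)
    # split trials into train/test within each class
    train_idx, test_idx = [], []
    for c in classes:
        idx = np.where(y == c)[0]
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    train_idx, test_idx = np.array(train_idx), np.array(test_idx)
    for t in range(n_time):
        centroids = np.stack([X[train_idx][y[train_idx] == c, :, t].mean(axis=0)
                              for c in classes])
        # classify each held-out trial by its nearest class centroid
        d = np.linalg.norm(X[test_idx][:, :, t][:, None, :] - centroids[None],
                           axis=2)
        pred = classes[d.argmin(axis=1)]
        acc[t] = (pred == y[test_idx]).mean()
    return acc
```

Plotting `acc` against time shows when stimulus identity first becomes linearly readable from the sensors, which is the logic behind the latencies reported in the abstract.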
3

DiCarlo, James J., and David D. Cox. "Untangling invariant object recognition." Trends in Cognitive Sciences 11, no. 8 (August 2007): 333–41. http://dx.doi.org/10.1016/j.tics.2007.06.010.

4

Stejskal, Tomáš. "2D-Shape Analysis Using Shape Invariants." Applied Mechanics and Materials 613 (August 2014): 452–57. http://dx.doi.org/10.4028/www.scientific.net/amm.613.452.

Abstract:
High-efficiency detection of two-dimensional objects is achieved by an appropriate choice of object invariants. The aim is to show an example of the construction of an algorithm for rapid identification, even for highly complex objects. The program structure works in a similar way to animal visual systems in nature: differentiation runs from the whole to the details, and shape invariants are used at each stage. The algorithm first uses a surface-area invariant, which represents the whole; then a boundary-length invariant around the object; and finally the chord-distribution code, which represents the details of the object. The actual computational algorithms are not software-intensive and are easy to debug. The system uses the redundancy of uncertain information about the shape: in principle, a certain balance is chosen between the confidence level of recognition and the repetition of shape recognition by various methods.
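The whole-to-details invariants named in the abstract (area, boundary length, chord distribution) can be sketched for a binary shape mask as follows. This is a rough illustration under assumed conventions (4-connected boundary, a simple pairwise-distance histogram as the chord code), not the author's implementation.

```python
import numpy as np

def shape_invariants(mask, n_bins=8):
    """Coarse-to-fine invariants of a boolean 2D shape mask.

    Returns (area, boundary_length, chord_histogram); all three are
    invariant to translation and to 90-degree rotation of the shape.
    """
    area = int(mask.sum())
    # boundary: object pixels with at least one background 4-neighbour
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask & ~interior
    boundary_length = int(boundary.sum())
    # chord distribution: histogram of pairwise distances between
    # boundary points (a simple stand-in for the chord code)
    pts = np.argwhere(boundary).astype(float)
    diff = pts[:, None, :] - pts[None, :, :]
    chords = np.sqrt((diff ** 2).sum(-1))
    chords = chords[np.triu_indices(len(pts), k=1)]
    hist, _ = np.histogram(chords, bins=n_bins, range=(0, chords.max()))
    return area, boundary_length, hist / hist.sum()
```

Matching can then proceed coarse-to-fine: compare areas first, fall back to boundary lengths, and only compare chord histograms when the cheaper invariants agree.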
5

Schurgin, Mark, and Jonathan Flombaum. "Invariant object recognition enhanced by object persistence." Journal of Vision 15, no. 12 (September 1, 2015): 239. http://dx.doi.org/10.1167/15.12.239.

6

Cox, David D., Philip Meier, Nadja Oertelt, and James J. DiCarlo. "'Breaking' position-invariant object recognition." Nature Neuroscience 8, no. 9 (August 7, 2005): 1145–47. http://dx.doi.org/10.1038/nn1519.

7

Rolls, Edmund T., and Simon M. Stringer. "Invariant visual object recognition: A model, with lighting invariance." Journal of Physiology-Paris 100, no. 1-3 (July 2006): 43–62. http://dx.doi.org/10.1016/j.jphysparis.2006.09.004.

8

CHAN, LAI-WAN. "NEURAL NETWORKS FOR COLLECTIVE TRANSLATIONAL INVARIANT OBJECT RECOGNITION." International Journal of Pattern Recognition and Artificial Intelligence 06, no. 01 (April 1992): 143–56. http://dx.doi.org/10.1142/s0218001492000084.

Abstract:
A novel method using neural networks for translational invariant object recognition is described in this paper. The objective is to enable the recognition of objects in any shifted position when the objects are presented to the network in only one standard location during the training procedure. With the presence of multiple or overlapped objects in the scene, translational invariant object recognition is a very difficult task. Noise corruption of the image creates another difficulty. In this paper, a novel approach is proposed to tackle this problem, using neural networks with the consideration of multiple objects and the presence of noise. This method utilizes the secondary responses activated by the backpropagation network. A confirmative network is used to obtain the object identification and location, based on these secondary responses. Experimental results were used to demonstrate the ability of this approach.
9

Sufi karimi, Hiwa, and Karim Mohammadi. "Rotational invariant biologically inspired object recognition." IET Image Processing 14, no. 15 (December 2020): 3762–73. http://dx.doi.org/10.1049/iet-ipr.2019.1621.

10

Kim, Kye-Kyung, Jae-Hong Kim, and Jae-Yun Lee. "Illumination and Rotation Invariant Object Recognition." Journal of the Korea Contents Association 12, no. 11 (November 28, 2012): 1–8. http://dx.doi.org/10.5392/jkca.2012.12.11.001.

11

Lamdan, Y., J. T. Schwartz, and H. J. Wolfson. "Affine invariant model-based object recognition." IEEE Transactions on Robotics and Automation 6, no. 5 (1990): 578–89. http://dx.doi.org/10.1109/70.62047.

12

Liu, Xiaofan, Shaohua Tan, V. Srinivasan, S. H. Ong, and Weixin Xie. "Fuzzy pyramid-based invariant object recognition." Pattern Recognition 27, no. 5 (May 1994): 741–56. http://dx.doi.org/10.1016/0031-3203(94)90051-5.

13

You, Shingchern D., and Gary E. Ford. "Network model for invariant object recognition." Pattern Recognition Letters 15, no. 8 (August 1994): 761–67. http://dx.doi.org/10.1016/0167-8655(94)90004-3.

14

KANLAYA, Wittayathawon, Le DUNG, and Makoto MIZUKAWA. "2P1-G04 Adaptive K-Nearest Neighbor for Pose-Invariant Object Recognition." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2009 (2009): _2P1—G04_1—_2P1—G04_4. http://dx.doi.org/10.1299/jsmermd.2009._2p1-g04_1.

15

Franzius, Mathias, Niko Wilbert, and Laurenz Wiskott. "Invariant Object Recognition and Pose Estimation with Slow Feature Analysis." Neural Computation 23, no. 9 (September 2011): 2289–323. http://dx.doi.org/10.1162/neco_a_00171.

Abstract:
Primates are very good at recognizing objects independent of viewing angle or retinal position, and they outperform existing computer vision systems by far. But invariant object recognition is only one prerequisite for successful interaction with the environment. An animal also needs to assess an object's position and relative rotational angle. We propose here a model that is able to extract object identity, position, and rotation angles. We demonstrate the model behavior on complex three-dimensional objects under translation and rotation in depth on a homogeneous background. A similar model has previously been shown to extract hippocampal spatial codes from quasi-natural videos. The framework for mathematical analysis of this earlier application carries over to the scenario of invariant object recognition. Thus, the simulation results can be explained analytically even for the complex high-dimensional data we employed.
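The slowness objective at the core of this model can be illustrated with its linear special case: find unit-variance projections of the input whose temporal derivative has minimal variance. The actual model stacks quadratic SFA nodes hierarchically, so this standalone linear sketch is illustrative only.

```python
import numpy as np

def linear_sfa(X, n_components=1):
    """Linear Slow Feature Analysis.

    X : array (timesteps, n_dims)
    Returns a projection matrix whose columns extract the slowest
    unit-variance linear features, slowest first.
    """
    X = X - X.mean(axis=0)
    # whiten the input so all unit-norm projections have unit variance
    cov = X.T @ X / len(X)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > 1e-10
    white = evecs[:, keep] / np.sqrt(evals[keep])
    Z = X @ white
    # minimise the variance of the temporal derivative in whitened space
    dZ = np.diff(Z, axis=0)
    dcov = dZ.T @ dZ / len(dZ)
    devals, devecs = np.linalg.eigh(dcov)
    order = np.argsort(devals)  # slowest direction first
    return white @ devecs[:, order[:n_components]]
```

On a mixture of a slow and a fast signal, the slowest component recovers the slow source, which is the mechanism the model exploits to extract slowly varying quantities such as object identity and pose.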
16

Rolls, Edmund T., and Simon M. Stringer. "Invariant Global Motion Recognition in the Dorsal Visual System: A Unifying Theory." Neural Computation 19, no. 1 (January 2007): 139–69. http://dx.doi.org/10.1162/neco.2007.19.1.139.

Abstract:
The motion of an object (such as a wheel rotating) is seen as consistent independent of its position and size on the retina. Neurons in higher cortical visual areas respond to these global motion stimuli invariantly, but neurons in early cortical areas with small receptive fields cannot represent this motion, not only because of the aperture problem but also because they do not have invariant representations. In a unifying hypothesis with the design of the ventral cortical visual system, we propose that the dorsal visual system uses a hierarchical feedforward network architecture (V1, V2, MT, MSTd, parietal cortex) with training of the connections with a short-term memory trace associative synaptic modification rule to capture what is invariant at each stage. Simulations show that the proposal is computationally feasible, in that invariant representations of the motion flow fields produced by objects self-organize in the later layers of the architecture. The model produces invariant representations of the motion flow fields produced by global in-plane motion of an object, in-plane rotational motion, looming versus receding of the object, and object-based rotation about a principal axis. Thus, the dorsal and ventral visual systems may share some similar computational principles.
17

Nishimura, Mayu, K. Suzanne Scherf, Valentinos Zachariou, Michael J. Tarr, and Marlene Behrmann. "Size Precedes View: Developmental Emergence of Invariant Object Representations in Lateral Occipital Complex." Journal of Cognitive Neuroscience 27, no. 3 (March 2015): 474–91. http://dx.doi.org/10.1162/jocn_a_00720.

Abstract:
Although object perception involves encoding a wide variety of object properties (e.g., size, color, viewpoint), some properties are irrelevant for identifying the object. The key to successful object recognition is having an internal representation of the object identity that is insensitive to these properties while accurately representing important diagnostic features. Behavioral evidence indicates that the formation of these kinds of invariant object representations takes many years to develop. However, little research has investigated the developmental emergence of invariant object representations in the ventral visual processing stream, particularly in the lateral occipital complex (LOC) that is implicated in object processing in adults. Here, we used an fMR adaptation paradigm to evaluate age-related changes in the neural representation of objects within LOC across variations in size and viewpoint from childhood through early adulthood. We found a dissociation between the neural encoding of object size and object viewpoint within LOC: by 5–10 years of age, area LOC demonstrates adaptation across changes in size, but not viewpoint, suggesting that LOC responses are invariant to size variations, but that adaptation across changes in view is observed in LOC much later in development. Furthermore, activation in LOC was correlated with behavioral indicators of view invariance across the entire sample, such that greater adaptation was correlated with better recognition of objects across changes in viewpoint. We did not observe similar developmental differences within early visual cortex. These results indicate that LOC acquires the capacity to compute invariance specific to different sources of information at different time points over the course of development.
18

FINLAYSON, GRAHAM D., and GUI YUN TIAN. "COLOR NORMALIZATION FOR COLOR OBJECT RECOGNITION." International Journal of Pattern Recognition and Artificial Intelligence 13, no. 08 (December 1999): 1271–85. http://dx.doi.org/10.1142/s0218001499000720.

Abstract:
Color images depend on the color of the capture illuminant and on object reflectance. As such, image colors are not stable features for object recognition; stability is necessary, however, since perceived colors (the colors we see) are illuminant independent and do correlate with object identity. Before the colors in images can be compared, they must first be preprocessed to remove the effect of illumination. Two types of preprocessing have been proposed: first, run a color constancy algorithm; second, apply an invariant normalization. In color constancy preprocessing the illuminant color is estimated and then, at a second stage, the image colors are corrected to remove the color bias due to illumination. In color invariant normalization, image RGBs are redescribed, in an illuminant-independent way, relative to the context in which they are seen (e.g. RGBs might be divided by a local RGB average). In theory the color constancy approach is superior, since it works scene-independently: a color invariant normalization can be calculated post-color constancy, but the converse is not true. However, in practice color invariant normalization usually supports better indexing. In this paper we ask whether color constancy algorithms will ever deliver better indexing than color normalization. The main result of this paper is to demonstrate an equivalence between color constancy and color invariant computation. The equivalence is empirically derived from color object recognition experiments: colorful objects are imaged under several different colors of light. To remove the dependency due to illumination, these images are preprocessed using either a perfect color constancy algorithm or the comprehensive color image normalization. In the perfect color constancy algorithm the illuminant is measured rather than estimated. The import of this is that the perfect color constancy algorithm can determine the actual illuminant without error and so bounds the performance of all existing and future algorithms. After color constancy or color normalization preprocessing, the color content is used as a cue for object recognition. Counter-intuitively, perfect color constancy does not support perfect recognition. In comparison, the color invariant normalization does deliver near-perfect recognition. That the color constancy approach fails implies that the scene's effective illuminant is different from the measured illuminant. This explanation has merit, since it is well known that color constancy is more difficult in the presence of physical processes such as fluorescence and mutual illumination. Thus, in a second experiment, image colors are corrected based on a scene-dependent "effective illuminant". Here, color constancy preprocessing facilitates near-perfect recognition. Of course, if the effective light is scene dependent then optimal color constancy processing is also scene dependent and so is equally a color invariant normalization.
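The comprehensive color image normalization referenced in the abstract alternates a pixel-wise step (removing lighting geometry) with a channel-wise step (removing illuminant color) until a fixed point is reached. A sketch under the assumption of a strictly positive float RGB image; this is an illustration of the published procedure, not the authors' code.

```python
import numpy as np

def comprehensive_normalization(img, n_iter=200, tol=1e-9):
    """Iterate pixel and channel normalisation to a fixed point.

    img : float array (H, W, 3) with strictly positive entries
    Images differing only by per-pixel shading and per-channel
    illuminant scalings converge to the same normalised image.
    """
    x = img.astype(float)
    n_pixels = x.shape[0] * x.shape[1]
    for _ in range(n_iter):
        prev = x
        # each pixel scaled to unit brightness (removes shading)
        x = x / x.sum(axis=2, keepdims=True)
        # each channel scaled to equal total (removes illuminant colour)
        x = x / x.sum(axis=(0, 1), keepdims=True) * (n_pixels / 3.0)
        if np.abs(x - prev).max() < tol:
            break
    return x
```

In practice only a handful of iterations are needed; the resulting image can then be indexed directly, which is the "near-perfect recognition" baseline the abstract compares against.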
19

Peng, Mengkang, and Narendra K. Gupta. "Invariant and Occluded Object Recognition Based on Graph Matching." International Journal of Electrical Engineering & Education 32, no. 1 (January 1995): 31–38. http://dx.doi.org/10.1177/002072099503200104.

Abstract:
The paper describes an algorithm for invariant and occluded object recognition based on graph matching, using a neural network. The objects may be occluded and may undergo any change in position, orientation, and scale. The paper aims to generate project ideas for final-year students at the university and also to provide a basis for further research.
20

Lueschow, Andreas, Earl K. Miller, and Robert Desimone. "Inferior Temporal Mechanisms for Invariant Object Recognition." Cerebral Cortex 4, no. 5 (1994): 523–31. http://dx.doi.org/10.1093/cercor/4.5.523.

21

Torres-Mendez, L. A., J. C. Ruiz-Suarez, L. E. Sucar, and G. Gomez. "Translation, rotation, and scale-invariant object recognition." IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews) 30, no. 1 (2000): 125–30. http://dx.doi.org/10.1109/5326.827484.

22

Lim, Jeonghun, and Kunwoo Lee. "3D object recognition using scale-invariant features." Visual Computer 35, no. 1 (October 25, 2017): 71–84. http://dx.doi.org/10.1007/s00371-017-1453-y.

23

Hoff, Daniel J., and Peter J. Olver. "Extensions of Invariant Signatures for Object Recognition." Journal of Mathematical Imaging and Vision 45, no. 2 (June 7, 2012): 176–85. http://dx.doi.org/10.1007/s10851-012-0358-7.

24

Wallis, Guy, and Roland Baddeley. "Optimal, Unsupervised Learning in Invariant Object Recognition." Neural Computation 9, no. 4 (May 1, 1997): 883–94. http://dx.doi.org/10.1162/neco.1997.9.4.883.

Abstract:
A means for establishing transformation-invariant representations of objects is proposed and analyzed, in which different views are associated on the basis of the temporal order of the presentation of these views, as well as their spatial similarity. Assuming knowledge of the distribution of presentation times, an optimal linear learning rule is derived. Simulations of a competitive network trained on a character recognition task are then used to highlight the success of this learning rule in relation to simple Hebbian learning and to show that the theory can give accurate quantitative predictions for the optimal parameters for such networks.
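The core idea of associating views by temporal order can be sketched with a simple Hebbian matrix accumulated over consecutive time steps. The paper derives an optimal linear rule that also weights longer temporal gaps by the presentation-time distribution; the snippet below keeps only the one-step association, so it is a simplified illustration.

```python
import numpy as np

def temporal_association(X):
    """Associate patterns that occur in temporal proximity.

    X : array (timesteps, n_dims), the sequence of presented views.
    Returns a symmetric matrix M in which M[i, j] measures how often
    features i and j were active in consecutive views.
    """
    M = np.zeros((X.shape[1], X.shape[1]))
    for prev, cur in zip(X[:-1], X[1:]):
        # symmetric Hebbian term linking consecutive views
        M += np.outer(cur, prev) + np.outer(prev, cur)
    return M

def association(M, u, v):
    """Strength with which view u evokes view v under the learned matrix."""
    return u @ M @ v
```

Because different views of one object tend to follow each other in time, within-object associations come to dominate cross-object ones, which is the basis for binding views into a transformation-invariant representation.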
25

Andreadis, I., and Ph Tsalides. "Coloured object recognition using invariant spectral features." Journal of Intelligent & Robotic Systems 13, no. 1 (May 1995): 93–106. http://dx.doi.org/10.1007/bf01664757.

26

Robinson, Leigh, and Edmund T. Rolls. "Invariant visual object recognition: biologically plausible approaches." Biological Cybernetics 109, no. 4-5 (September 3, 2015): 505–35. http://dx.doi.org/10.1007/s00422-015-0658-2.

27

van Gemert, Jan C., Gertjan J. Burghouts, Frank J. Seinstra, and Jan-Mark Geusebroek. "Color invariant object recognition using entropic graphs." International Journal of Imaging Systems and Technology 16, no. 5 (2006): 146–53. http://dx.doi.org/10.1002/ima.20082.

28

Vanrie, Jan, Bert Willems, and Johan Wagemans. "Multiple Routes to Object Matching from Different Viewpoints: Mental Rotation versus Invariant Features." Perception 30, no. 9 (September 2001): 1047–56. http://dx.doi.org/10.1068/p3200.

Abstract:
Previous research has shown that object recognition from different viewpoints often yields strong effects of viewpoint. However, for some objects and experimental paradigms almost complete viewpoint invariance is obtained. This suggests the existence of multiple routes to object recognition. In this study we further strengthen this notion by designing two different conditions using the same experimental paradigm (simultaneous matching) and highly similar objects (multiblock figures). In the first condition (involving a handedness violation), strong effects of viewpoint were obtained. In the second condition (involving an invariance violation), the effects of viewpoint were negligible. This result illustrates that asking under what circumstances object recognition is viewpoint dependent or independent is more fruitful than attempting to show that object recognition is either viewpoint dependent or independent.
29

Michler, Frank, Reinhard Eckhorn, and Thomas Wachtler. "Using Spatiotemporal Correlations to Learn Topographic Maps for Invariant Object Recognition." Journal of Neurophysiology 102, no. 2 (August 2009): 953–64. http://dx.doi.org/10.1152/jn.90651.2008.

Abstract:
The retinal image of visual objects can vary drastically with changes of viewing angle. Nevertheless, our visual system is capable of recognizing objects fairly invariant of viewing angle. Under natural viewing conditions, different views of the same object tend to occur in temporal proximity, thereby generating temporal correlations in the sequence of retinal images. Such spatial and temporal stimulus correlations can be exploited for learning invariant representations. We propose a biologically plausible mechanism that implements this learning strategy using the principle of self-organizing maps. We developed a network of spiking neurons that uses spatiotemporal correlations in the inputs to map different views of objects onto a topographic representation. After learning, different views of the same object are represented in a connected neighborhood of neurons. Model neurons of a higher processing area that receive unspecific input from a local neighborhood in the map show view-invariant selectivities for visual objects. The findings suggest a functional relevance of cortical topographic maps.
30

Li, Bao Zhang, Mo Yu Sha, and Yan Ping Cui. "Study on Rotary Object Recognition Technique from the Complex Background." Advanced Materials Research 418-420 (December 2011): 494–500. http://dx.doi.org/10.4028/www.scientific.net/amr.418-420.494.

Abstract:
Target recognition from a complex background is an emphasis and a difficulty of computer vision, and rotary objects are widely used in the military and manufacturing fields. Rotary object recognition in a complex background based on an improved BP neural network is proposed in this work. A median filter is adopted to remove the noise, and an improved maximum between-class variance method is used to compute the threshold for image segmentation. A target recognition system based on the improved BP neural network is established to recognize the rotary objects, with seven invariant moments of the rotary objects serving as the input feature vector. The experimental results show that the image noise can be removed effectively and the image segmented exactly by the proposed preprocessing method, that the seven invariant moments are appropriate for characterizing rotary objects, and that the rotary object recognition system based on the BP neural network achieves an excellent recognition result.
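The seven invariant moments used as the feature vector here are Hu's moment invariants, which are unchanged by translation, scale, and rotation of the object. A standard NumPy construction (not the study's code):

```python
import numpy as np

def hu_moments(img):
    """The seven Hu moment invariants of a 2D intensity or binary image."""
    img = img.astype(float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                       # central moments (translation invariant)
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):                      # scale-normalised central moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])
```

Feeding this 7-vector to a classifier (here, a BP network) makes the recognizer insensitive to where the object sits in the image, how large it is, and how it is rotated.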
31

Parga, Néstor, and Edmund Rolls. "Transform-Invariant Recognition by Association in a Recurrent Network." Neural Computation 10, no. 6 (August 1, 1998): 1507–25. http://dx.doi.org/10.1162/089976698300017287.

Abstract:
Objects can be recognized independently of the view they present, of their position on the retina, or of their scale. It has been suggested that one basic mechanism that makes this possible is a memory effect, or a trace, that allows associations to be made between consecutive views of one object. In this work, we explore the possibility that this memory trace is provided by the sustained activity of neurons in layers of the visual pathway produced by an extensive recurrent connectivity. We describe a model that contains this high recurrent connectivity, and synaptic efficacies built with contributions from associations between pairs of views, that is simple enough to be treated analytically. The main result is that there is a change of behavior as the strength of the association between views of the same object, relative to the association within each view of an object, increases. When its value is small, sustained activity in the network is produced by the views themselves. As it increases above a threshold value, the network always reaches a particular state (which represents the object) independent of the particular view that was seen as a stimulus. In this regime, the network can still store an extensive number of objects, each defined by a finite (although possibly large) number of views.
32

Stankiewicz, B. J., and J. E. Hummel. "The Role of Attention on Viewpoint-Invariant Object Recognition." Perception 25, no. 1_suppl (August 1996): 148. http://dx.doi.org/10.1068/v96l1106.

Abstract:
Researchers in the field of visual perception have dedicated a great deal of effort to understanding how humans recognise known objects from novel viewpoints (often referred to as shape constancy). This research has produced a variety of theories—some that emphasise the use of invariant representations, others that emphasise alignment processes used in conjunction with viewpoint-specific representations. Although researchers disagree on the specifics of the representations and processes used during human object recognition, most agree that achieving shape constancy is computationally expensive—that is, it requires work. If it is assumed that attention provides the necessary resources for these computations, these theories suggest that recognition with attention should be qualitatively different from recognition without attention. Specifically, recognition with attention should be more invariant with viewpoint than recognition without attention. We recently reported a series of experiments, in which we used a response-time priming paradigm in which attention and viewpoint were manipulated, that showed attention is necessary for generating a representation of shape that is invariant with left-right reflection. We are now reporting new experiments showing that the shape representation activated without attention is not completely view-specific. These experiments demonstrate that the automatic shape representation is invariant with the size and location of an image in the visual field. The results are reported in the context of a recent model proposed by Hummel and Stankiewicz (Attention and Performance 16, in press), as well as in the context of other models of human object recognition that make explicit predictions about the role of attention in generating a viewpoint-invariant representation of object shape.
33

HYDER, MASHUD, MD MONIRUL ISLAM, M. A. H. AKHAND, and KAZUYUKI MURASE. "SYMMETRY AXIS BASED OBJECT RECOGNITION UNDER TRANSLATION, ROTATION AND SCALING." International Journal of Neural Systems 19, no. 01 (February 2009): 25–42. http://dx.doi.org/10.1142/s0129065709001811.

Abstract:
This paper presents a new approach, known as symmetry axis based feature extraction and recognition (SAFER), for recognizing objects under translation, rotation and scaling. Unlike most previous invariant object recognition (IOR) systems, SAFER puts emphasis on both the simplicity and the accuracy of the recognition system. To achieve simplicity, it uses simple formulae for extracting invariant features from an object. The scheme used in feature extraction is based on the axis of symmetry and angles of concentric circles drawn around the object. SAFER divides the extracted features into a number of groups based on their similarity. To improve recognition performance, SAFER uses a number of neural networks (NNs) instead of a single NN for training and recognition of the extracted features. The new approach, SAFER, has been tested on two real-world problems, i.e., English characters in two different fonts and images of different shapes. The experimental results show that SAFER can produce good recognition performance in comparison with other algorithms.
34

Donatti, Guillermo Sebastián. "Memory Organization for Invariant Object Recognition and Categorization." ELCVIA Electronic Letters on Computer Vision and Image Analysis 15, no. 2 (November 8, 2016): 33. http://dx.doi.org/10.5565/rev/elcvia.954.

35

Forsyth, D., J. L. Mundy, A. Zisserman, C. Coelho, A. Heller, and C. Rothwell. "Invariant descriptors for 3D object recognition and pose." IEEE Transactions on Pattern Analysis and Machine Intelligence 13, no. 10 (1991): 971–91. http://dx.doi.org/10.1109/34.99233.

36

Kyrki, Ville, Joni-Kristian Kamarainen, and Heikki Kälviäinen. "Simple Gabor feature space for invariant object recognition." Pattern Recognition Letters 25, no. 3 (February 2004): 311–18. http://dx.doi.org/10.1016/j.patrec.2003.10.008.

37

Sun, Te-Hsiu, Horng-Chyi Horng, Chi-Shuan Liu, and Fang-Chin Tien. "Invariant 2D object recognition using KRA and GRA." Expert Systems with Applications 36, no. 9 (November 2009): 11517–27. http://dx.doi.org/10.1016/j.eswa.2009.03.055.

38

Li, Wenjing, and Tong Lee. "Projective invariant object recognition by a Hopfield network." Neurocomputing 62 (December 2004): 1–18. http://dx.doi.org/10.1016/j.neucom.2003.11.009.

39

Tang, H. W., V. Srinivasan, and S. H. Ong. "Invariant object recognition using a neural template classifier." Image and Vision Computing 14, no. 7 (July 1996): 473–83. http://dx.doi.org/10.1016/0262-8856(95)01065-3.

40

Harris, Irina M., and Paul E. Dux. "Orientation-invariant object recognition: evidence from repetition blindness." Cognition 95, no. 1 (February 2005): 73–93. http://dx.doi.org/10.1016/j.cognition.2004.02.006.

41

Tadin, Duje, and Raphael Pinaud. "Visual object recognition: building invariant representations over time." Journal of Biosciences 33, no. 5 (December 2008): 639–42. http://dx.doi.org/10.1007/s12038-008-0083-y.

42

Newell, Fiona N. "Stimulus Context and View Dependence in Object Recognition." Perception 27, no. 1 (January 1998): 47–68. http://dx.doi.org/10.1068/p270047.

Abstract:
The effect of stimulus factors such as interobject similarity and stimulus density on the recognition of objects across changes in view was investigated in five experiments. The recognition of objects across views was found to depend on the degree of interobject similarity and on stimulus density: recognition was view dependent when both interobject similarity and stimulus density were high, irrespective of the familiarity of the target object. However, when stimulus density or interobject similarity was low, recognition was invariant to viewpoint. Recognition was accomplished through view-dependent procedures when discriminability between objects was low. The findings are discussed in terms of an exemplar-based model in which the dimensions used for discriminating between objects are optimised to maximise the differences between the objects. This optimisation process is characterised as a perceptual ‘ruler’ which measures interobject similarity by stretching across objects in representational space. It is proposed that the ‘ruler’ optimises the feature differences between objects in such a way that recognition is view invariant, but that such a process incurs a cost in discriminating between small feature differences, which results in view-dependent recognition performance.
43

Bebis, George N., and George M. Papadourakis. "Object recognition using invariant object boundary representations and neural network models." Pattern Recognition 25, no. 1 (January 1992): 25–44. http://dx.doi.org/10.1016/0031-3203(92)90004-3.

44

Akagündüz, E., and İ Ulusoy. "3D object recognition from range images using transform invariant object representation." Electronics Letters 46, no. 22 (2010): 1499. http://dx.doi.org/10.1049/el.2010.1818.

45

Topalova, I. "Modular Adaptive System Based on a Multi-Stage Neural Structure for Recognition of 2D Objects of Discontinuous Production." International Journal of Advanced Robotic Systems 2, no. 1 (March 1, 2005): 6. http://dx.doi.org/10.5772/5804.

Abstract:
This is a presentation of a new system for invariant recognition of 2D objects with overlapping classes that cannot be effectively recognized with traditional methods. The translation, scale and partial rotation invariant contour object description is transformed into a DCT spectrum space. The obtained frequency spectra are decomposed into frequency bands in order to feed different BPG neural nets (NNs). The NNs are structured in three stages: filtering and full rotation invariance; partial recognition; general classification. The designed multi-stage BPG neural structure shows very good accuracy and flexibility when tested with 2D objects used in discontinuous production. The speed reached and the opportunity for easy restructuring and reprogramming make the system suitable for application in different applied systems for real-time work.
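The contour-to-DCT step this abstract describes can be illustrated with a minimal sketch. The function below (`contour_dct_signature`, an assumed name) computes a centroid-distance signature and its DCT-II spectrum by hand; it shows the general idea only, not Topalova's actual multi-stage pipeline:

```python
import numpy as np

def contour_dct_signature(contour, n_coeff=16):
    """Translation- and scale-invariant contour description in a DCT
    spectrum space (illustrative sketch): the centroid-distance signature
    removes translation, dividing by its mean removes scale, and the
    low-frequency DCT coefficients summarise the shape for a classifier."""
    contour = np.asarray(contour, dtype=float)   # (N, 2) boundary points
    sig = np.linalg.norm(contour - contour.mean(axis=0), axis=1)
    sig = sig / sig.mean()                       # scale invariance
    n = len(sig)
    k = np.arange(n)
    # DCT-II of the signature, keeping only the first n_coeff coefficients
    return np.array([np.sum(sig * np.cos(np.pi * (k + 0.5) * u / n))
                     for u in range(n_coeff)])
```

A translated and uniformly scaled copy of the same contour produces the same coefficients, since the signature is unchanged after normalisation.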
46

Stringer, Simon M., and Edmund T. Rolls. "Invariant Object Recognition in the Visual System with Novel Views of 3D Objects." Neural Computation 14, no. 11 (November 1, 2002): 2585–96. http://dx.doi.org/10.1162/089976602760407982.

Abstract:
To form view-invariant representations of objects, neurons in the inferior temporal cortex may associate together different views of an object, which tend to occur close together in time under natural viewing conditions. This can be achieved in neuronal network models of this process by using an associative learning rule with a short-term temporal memory trace. It is postulated that within a view, neurons learn representations that enable them to generalize within variations of that view. When three-dimensional (3D) objects are rotated within small angles (up to, e.g., 30 degrees), their surface features undergo geometric distortion due to the change of perspective. In this article, we show how trace learning could solve the problem of in-depth rotation-invariant object recognition by developing representations of the transforms that features undergo when they are on the surfaces of 3D objects. Moreover, we show that having learned how features on 3D objects transform geometrically as the object is rotated in depth, the network can correctly recognize novel 3D variations within a generic view of an object composed of a new combination of previously learned features. These results are demonstrated in simulations of a hierarchical network model (VisNet) of the visual system that show that it can develop representations useful for the recognition of 3D objects by forming perspective-invariant representations to allow generalization within a generic view.
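The short-term memory trace rule the abstract refers to can be written down compactly. This is a schematic sketch of a trace-style Hebbian update as used in VisNet-like models; the function name, learning rate and trace constant are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def trace_learning_step(w, x, y_trace_prev, y, eta=0.1, lam=0.8):
    """One step of a trace learning rule (schematic sketch): the
    post-synaptic trace mixes the current activation with its recent
    history, so temporally adjacent views of the same object strengthen
    the same weights."""
    y_trace = (1 - lam) * y + lam * y_trace_prev   # short-term memory trace
    w = w + eta * y_trace * x                      # Hebbian update with trace
    return w / np.linalg.norm(w), y_trace          # weight normalisation
```

Because the trace decays slowly, views that follow each other in time drive overlapping updates, which is what binds different views of one object onto the same output neuron.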
47

Hassan, Waqas, Philip Birch, Bhargav Mitra, Nagachetan Bangalore, Rupert Young, and Chris Chatwin. "Illumination invariant stationary object detection." IET Computer Vision 7, no. 1 (February 2013): 1–8. http://dx.doi.org/10.1049/iet-cvi.2012.0054.

48

Lejeune, Claude, and Yunlong Sheng. "Optoneural system for invariant pattern recognition." Canadian Journal of Physics 71, no. 9-10 (September 1, 1993): 405–9. http://dx.doi.org/10.1139/p93-063.

Abstract:
An optoneural system is developed for invariant pattern recognition. The system consists of an optical correlator and a neural network. The correlator uses Fourier–Mellin spatial filters (FMF) for feature extraction. The FMF yields a unique output pattern for an input object. The present method works only with one object present in the input scene. The optical features extracted from the output pattern are shift, scale, and rotation invariant and are used as input to the neural network. The neural network is a multilayer feedforward net with a back-propagation learning rule. Because of the substantial reduction in the dimension of the feature vectors provided by the optical FMF, the small neural network is easily simulated on a personal computer. Optical experimental results are shown.
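The Fourier–Mellin principle behind the optical correlator can be sketched digitally. The code below is an illustrative digital analogue (assumed function name and sampling parameters), not the paper's optical FMF implementation: the Fourier magnitude removes translation, a log-polar resampling converts rotation and scale into shifts, and a second Fourier magnitude removes those shifts:

```python
import numpy as np

def fourier_mellin_features(img, n_r=32, n_theta=32):
    """Shift-, rotation- and scale-invariant features in the spirit of a
    Fourier-Mellin descriptor (illustrative digital sketch)."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))   # translation invariant
    cy, cx = np.array(F.shape) // 2
    r_max = min(cy, cx)
    # log-polar sampling grid (nearest neighbour for simplicity)
    log_r = np.logspace(0, np.log10(r_max - 1), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys = (cy + np.outer(log_r, np.sin(theta))).astype(int)
    xs = (cx + np.outer(log_r, np.cos(theta))).astype(int)
    lp = F[ys, xs]
    return np.abs(np.fft.fft2(lp))                  # shifts -> magnitude
```

Shift invariance is exact for circular shifts; rotation and scale invariance are approximate and depend on the sampling resolution of the log-polar grid.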
49

Avidan, Galia, Michal Harel, Talma Hendler, Dafna Ben-Bashat, Ehud Zohary, and Rafael Malach. "Contrast Sensitivity in Human Visual Areas and Its Relationship to Object Recognition." Journal of Neurophysiology 87, no. 6 (June 1, 2002): 3102–16. http://dx.doi.org/10.1152/jn.2002.87.6.3102.

Abstract:
An important characteristic of visual perception is the fact that object recognition is largely immune to changes in viewing conditions. This invariance is obtained within a sequence of ventral stream visual areas beginning in area V1 and ending in high order occipito-temporal object areas (the lateral occipital complex, LOC). Here we studied whether this transformation could be observed in the contrast response of these areas. Subjects were presented with line drawings of common objects and faces in five different contrast levels (0, 4, 6, 10, and 100%). Our results show that indeed there was a gradual trend of increasing contrast invariance moving from area V1, which manifested high sensitivity to contrast changes, to the LOC, which showed a significantly higher degree of invariance at suprathreshold contrasts (from 10 to 100%). The trend toward increased invariance could be observed for both face and object images; however, it was more complete for the face images, while object images still manifested substantial sensitivity to contrast changes. Control experiments ruled out the involvement of attention effects or hemodynamic “ceiling” in producing the contrast invariance. The transition from V1 to LOC was gradual with areas along the ventral stream becoming increasingly contrast-invariant. These results further stress the hierarchical and gradual nature of the transition from early retinotopic areas to high order ones, in the build-up of abstract object representations.
50

Strother, Lars, and Matthew Harrison. "Left-lateralized interference of letter recognition on mirror-invariant object recognition." Journal of Vision 18, no. 10 (September 1, 2018): 1165. http://dx.doi.org/10.1167/18.10.1165.
