Academic literature on the topic 'Invariant Object Recognition'


Journal articles on the topic "Invariant Object Recognition"

1. Wood, Justin N., and Samantha M. W. Wood. "The development of newborn object recognition in fast and slow visual worlds." Proceedings of the Royal Society B: Biological Sciences 283, no. 1829 (2016): 20160166. http://dx.doi.org/10.1098/rspb.2016.0166.

Abstract:
Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world.
2. Isik, Leyla, Ethan M. Meyers, Joel Z. Leibo, and Tomaso Poggio. "The dynamics of invariant object recognition in the human visual system." Journal of Neurophysiology 111, no. 1 (2014): 91–102. http://dx.doi.org/10.1152/jn.00394.2013.

Abstract:
The human visual system can rapidly recognize objects despite transformations that alter their appearance. The precise timing of when the brain computes neural representations that are invariant to particular transformations, however, has not been mapped in humans. Here we employ magnetoencephalography decoding analysis to measure the dynamics of size- and position-invariant visual information development in the ventral visual stream. With this method we can read out the identity of objects beginning as early as 60 ms. Size- and position-invariant visual information appear around 125 ms and 150 ms, respectively, and both develop in stages, with invariance to smaller transformations arising before invariance to larger transformations. Additionally, the magnetoencephalography sensor activity localizes to neural sources that are in the most posterior occipital regions at the early decoding times and then move temporally as invariant information develops. These results provide previously unknown latencies for key stages of invariant object recognition in humans, as well as new and compelling evidence for a feed-forward hierarchical model of invariant object recognition where invariance increases at each successive visual area along the ventral stream.
3. DiCarlo, James J., and David D. Cox. "Untangling invariant object recognition." Trends in Cognitive Sciences 11, no. 8 (2007): 333–41. http://dx.doi.org/10.1016/j.tics.2007.06.010.

4. Stejskal, Tomáš. "2D-Shape Analysis Using Shape Invariants." Applied Mechanics and Materials 613 (August 2014): 452–57. http://dx.doi.org/10.4028/www.scientific.net/amm.613.452.

Abstract:
High-efficiency detection of two-dimensional objects is achieved by an appropriate choice of object invariants. The aim is to demonstrate the construction of an algorithm for the rapid identification of even highly complex objects. The program is structured much like perceptual systems in nature: discrimination proceeds from the whole to the details, using shape invariants at each stage. Specifically, the algorithm first applies a surface (area) invariant, which represents the object as a whole; next, a boundary-length invariant around the object; and finally a chord-distribution code, which captures the finer details needed for object recognition. The computational algorithms involved are not software-intensive and are easy to debug. The system exploits the redundancy of uncertain shape information, striking a balance between the confidence level of recognition and the repetition of shape recognition by different methods.
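The coarse-to-fine invariants this abstract describes (surface area, boundary length, chord distribution) can be sketched for a simple polygon. This is an illustrative reconstruction, not the paper's implementation; the function names, normalization, and bin count are assumptions made here for the example.

```python
import math

def polygon_area(pts):
    # Shoelace formula: absolute area of a simple polygon — the "whole-object" invariant.
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))) / 2.0

def polygon_perimeter(pts):
    # Total boundary length around the object.
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:] + pts[:1]))

def chord_histogram(pts, bins=8):
    # Distribution of pairwise vertex distances, normalized by the longest
    # chord so the descriptor is scale-invariant — the "detail" invariant.
    chords = [math.dist(pts[i], pts[j])
              for i in range(len(pts)) for j in range(i + 1, len(pts))]
    longest = max(chords)
    hist = [0] * bins
    for c in chords:
        hist[min(int(c / longest * bins), bins - 1)] += 1
    return [h / len(chords) for h in hist]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(polygon_area(square))       # 4.0
print(polygon_perimeter(square))  # 8.0
print(chord_histogram(square))
```

All three descriptors are unchanged under translation and rotation of the polygon, which is what makes a coarse-to-fine comparison against stored object models possible.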
5. Schurgin, Mark, and Jonathan Flombaum. "Invariant object recognition enhanced by object persistence." Journal of Vision 15, no. 12 (2015): 239. http://dx.doi.org/10.1167/15.12.239.

6. Cox, David D., Philip Meier, Nadja Oertelt, and James J. DiCarlo. "'Breaking' position-invariant object recognition." Nature Neuroscience 8, no. 9 (2005): 1145–47. http://dx.doi.org/10.1038/nn1519.

7. Rolls, Edmund T., and Simon M. Stringer. "Invariant visual object recognition: A model, with lighting invariance." Journal of Physiology-Paris 100, no. 1-3 (2006): 43–62. http://dx.doi.org/10.1016/j.jphysparis.2006.09.004.

8. Chan, Lai-Wan. "Neural Networks for Collective Translational Invariant Object Recognition." International Journal of Pattern Recognition and Artificial Intelligence 6, no. 1 (1992): 143–56. http://dx.doi.org/10.1142/s0218001492000084.

Abstract:
A novel method using neural networks for translational invariant object recognition is described in this paper. The objective is to enable the recognition of objects in any shifted position when the objects are presented to the network in only one standard location during the training procedure. With the presence of multiple or overlapped objects in the scene, translational invariant object recognition is a very difficult task. Noise corruption of the image creates another difficulty. In this paper, a novel approach is proposed to tackle this problem, using neural networks with the consideration of multiple objects and the presence of noise. This method utilizes the secondary responses activated by the backpropagation network. A confirmative network is used to obtain the object identification and location, based on these secondary responses. Experimental results were used to demonstrate the ability of this approach.
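The translation-invariance problem this abstract tackles can be illustrated with a classic textbook device rather than the paper's confirmative-network method: the magnitude of the Fourier spectrum is unchanged by cyclic translation of the input (a shift only rotates the phase), so it serves as a shift-invariant descriptor. The 1-D signal and tolerance below are choices made for this sketch.

```python
import cmath

def dft_magnitude(signal):
    # Magnitude spectrum of a 1-D signal; |F[k]| is unchanged when the
    # signal is cyclically shifted, since a shift only rotates the phase.
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

pattern = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0]
shifted = pattern[3:] + pattern[:3]          # same "object", translated
a = dft_magnitude(pattern)
b = dft_magnitude(shifted)
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # True
```

A network trained on such a representation sees the same input for every position of the object, which is precisely the property the paper's approach seeks for multiple, possibly noisy objects.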
9. Sufi Karimi, Hiwa, and Karim Mohammadi. "Rotational invariant biologically inspired object recognition." IET Image Processing 14, no. 15 (2020): 3762–73. http://dx.doi.org/10.1049/iet-ipr.2019.1621.

10. Kim, Kye-Kyung, Jae-Hong Kim, and Jae-Yun Lee. "Illumination and Rotation Invariant Object Recognition." Journal of the Korea Contents Association 12, no. 11 (2012): 1–8. http://dx.doi.org/10.5392/jkca.2012.12.11.001.
