
Journal articles on the topic 'Binocular vision. Depth perception. Computer vision'


Consult the top 50 journal articles for your research on the topic 'Binocular vision. Depth perception. Computer vision.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Fazlyyyakhmatov, Marsel, Nataly Zwezdochkina, and Vladimir Antipov. "The EEG Activity during Binocular Depth Perception of 2D Images." Computational Intelligence and Neuroscience 2018 (2018): 1–7. http://dx.doi.org/10.1155/2018/5623165.

Abstract:
The central brain functions underlying stereoscopic vision have been the subject of numerous studies investigating cortical activity during binocular perception of depth. However, stereo vision is less explored as a function promoting the cognitive processes of the brain. In this work, we investigated cortical activity during a cognitive task consisting of binocular viewing of a false image, which is observed when the eyes are refocused out of the random-dot stereogram plane (the 3D phenomenon). The power of cortical activity before and after the onset of false image perception was assessed using scalp EEG recording. We found that during stereo perception of the false image the power of alpha-band activity decreased in the left parietal area and bilaterally in frontal areas of the cortex, while activity in the beta-1, beta-2, and delta frequency bands remained unchanged. We assume that this suppression of the alpha rhythm is associated with the increased attention necessary for refocusing the eyes at the plane of the false image.
2

Idesawa, Masanori. "3-D Illusory Phenomena with Binocular Viewing and Computer Vision." Journal of Robotics and Mechatronics 4, no. 3 (June 20, 1992): 249–55. http://dx.doi.org/10.20965/jrm.1992.p0249.

Abstract:
The human visual system can perceive 3-D information about an object by using the disparity between the two eyes, gradients of illumination (shading), occlusion, textures and their perspective, and so on. Consequently, the disparity and occlusion observed with binocular viewing seem to be the most important cues for obtaining 3-D information. For the artificial realization of visual functions, such as in computer vision or robot vision systems, it seems wise to learn from the human visual mechanism. Recently, the author found a new type of illusion. When the visual stimuli of disparity are given only partially along the contour of an object, the human visual system can perceive the 3-D surface (not only planar but also curved) of the object even where there are no physical visual stimuli providing depth information. Interactions between the perceived illusory surfaces (occlusion, intersection, and transparency) can also be recognized. These newly found illusory phenomena are closely related to the visual function of 3-D space perception and can provide a new paradigm in the field of computer vision and human interfaces.
3

Sakai, Ko, Mitsuharu Ogiya, and Yuzo Hirai. "Decoding of depth and motion in ambiguous binocular perception." Journal of the Optical Society of America A 28, no. 7 (June 20, 2011): 1445. http://dx.doi.org/10.1364/josaa.28.001445.

4

Yang, Fan, and Yutai Rao. "Vision-Based Intelligent Vehicle Road Recognition and Obstacle Detection Method." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 07 (October 18, 2019): 2050020. http://dx.doi.org/10.1142/s0218001420500202.

Abstract:
With the development of the world economy and the accelerating process of urbanization, cars have brought great convenience to people's lives and activities and have become an indispensable means of transportation. Intelligent vehicles can reduce traffic accidents, improve transportation capacity, and offer broad market prospects, and they can lead the future development of the automotive industry, so they have received extensive attention. In existing intelligent vehicle systems, the laser radar (lidar) is the undisputed protagonist because of its excellent speed and precision and is an indispensable part of achieving high-precision positioning, but its price remains, to some extent, a major factor hindering its commercialization. Compared with lidar sensors, vision sensors have the advantages of a fast sampling rate, light weight, low energy consumption, and low price. Therefore, many domestic and foreign research institutions have listed them as a focus of research. However, current vision-based intelligent vehicle environment-sensing technology is also susceptible to factors such as illumination, climate, and road type, resulting in insufficient accuracy and real-time performance of the algorithms. This paper takes the environment perception of intelligent vehicles as the research object and conducts in-depth research on existing problems in road recognition and obstacle detection algorithms, including vanishing-point detection in road images, road image segmentation, and binocular-vision-based three-dimensional reconstruction of road scenes and obstacle detection.
5

Surdick, R. Troy, Elizabeth T. Davis, Robert A. King, and Larry F. Hodges. "The Perception of Distance in Simulated Visual Displays: A Comparison of the Effectiveness and Accuracy of Multiple Depth Cues Across Viewing Distances." Presence: Teleoperators and Virtual Environments 6, no. 5 (October 1997): 513–31. http://dx.doi.org/10.1162/pres.1997.6.5.513.

Abstract:
The ability to effectively and accurately simulate distance in virtual and augmented reality systems is a challenge currently facing R&D. To examine this issue, we separately tested each of seven visual depth cues (relative brightness, relative size, relative height, linear perspective, foreshortening, texture gradient, and stereopsis) as well as the condition in which all seven of these cues were present and simultaneously providing distance information in a simulated display. The viewing distances were 1 and 2 m. In developing simulated displays to convey distance and depth there are three questions that arise. First, which cues provide effective depth information (so that only a small change in the depth cue results in a perceived change in depth)? Second, which cues provide accurate depth information (so that the perceived distance of two equidistant objects perceptually matches)? Finally, how does the effectiveness and accuracy of these depth cues change as a function of the viewing distance? Ten college-aged subjects were tested with each depth-cue condition at both viewing distances. They were tested using a method-of-constant-stimuli procedure and a modified Wheatstone stereoscopic display. The perspective cues (linear perspective, foreshortening, and texture gradient) were found to be more effective than other depth cues, while the effectiveness of relative brightness was vastly inferior. Moreover, relative brightness, relative height, and relative size all significantly decreased in effectiveness with an increase in viewing distance. The depth cues did not differ in terms of accuracy at either viewing distance. Finally, some subjects experienced difficulty in rapidly perceiving distance information provided by stereopsis, but no subjects had difficulty in effectively and accurately perceiving distance with the perspective information used in our experiment. A second experiment demonstrated that a previously stereo-anomalous subject could be trained to perceive stereoscopic depth in a binocular display. We conclude that the use of perspective cues in simulated displays may be more important than the other depth cues tested because these cues are the most effective and accurate cues at both viewing distances, can be easily perceived by all subjects, and can be readily incorporated into simpler, less complex displays (e.g., biocular HMDs) or more complex ones (e.g., binocular or see-through HMDs).
6

Ishimura, G. "Hand Action in a Radial Direction Captures Visual Motion in Depth." Perception 25, no. 1_suppl (August 1996): 138. http://dx.doi.org/10.1068/v96p0116.

Abstract:
Transversal hand action in the frontoparallel plane biases the perception of bistable visual motion. This has been called action capture. In daily behaviour, however, hand action in a ‘radial’ direction from the head might be more important, because we frequently reach toward an object in front of us while guiding the action with vision. The purpose of this study was to measure the strength of action capture in the radial direction. Horizontal luminance gratings were placed above and below the fixation point. Binocular disparity, perspective contour, and spatial frequency gradient cues were attached to the gratings so that they simulated the ‘ceiling’ and the ‘floor’ of a long corridor. The display was reflected on a tilted mirror to face upward. The subject looked into the display and moved his/her dominant hand toward, or away from, the face behind the mirror. Just after the action onset, detected by the computer, one of the two gratings (the ceiling or the floor) flickered for a short period to simulate bistable visual motion in depth (approaching or departing). The subject indicated the perceived motion direction in the frontoparallel plane using a 2AFC (upward or downward) method. The results showed that perceived motion was significantly biased to the ‘departing’ direction when the hand moved ‘away from’ the face, and it was biased to the ‘approaching’ direction when the hand moved ‘toward’ it. It is concluded that action capture occurs not only in transversal but also in radial movements.
7

Naceri, Abdeldjallil, Ryad Chellali, and Thierry Hoinville. "Depth Perception Within Peripersonal Space Using Head-Mounted Display." Presence: Teleoperators and Virtual Environments 20, no. 3 (June 1, 2011): 254–72. http://dx.doi.org/10.1162/pres_a_00048.

Abstract:
In this paper, we address depth perception in the peripersonal space within three virtual environments: poor environment (dark room), reduced cues environment (wireframe room), and rich cues environment (a lit textured room). Observers binocularly viewed virtual scenes through a head-mounted display and evaluated the egocentric distance to spheres using visually open-loop pointing tasks. We conducted two different experiments within all three virtual environments. The apparent size of the sphere was held constant in the first experiment and covaried with distance in the second one. The results of the first experiment revealed that observers more accurately estimated depth in the rich virtual environment compared to the visually poor and the wireframe environments. Specifically, observers' pointing errors were small in distances up to 55 cm, and increased with distance once the sphere was further than 55 cm. Individual differences were found in the second experiment. Our results suggest that the quality of virtual environments has an impact on distance estimation within reaching space. Also, manipulating the targets' size cue led to individual differences in depth judgments. Finally, our findings confirm the use of vergence as an absolute distance cue in virtual environments within the arm's reaching space.
8

Moro, Stefania S., and Jennifer K. E. Steeves. "Intact Dynamic Visual Capture in People With One Eye." Multisensory Research 31, no. 7 (2018): 675–88. http://dx.doi.org/10.1163/22134808-20181311.

Abstract:
Observing motion in one modality can influence the perceived direction of motion in a second modality (dynamic capture). For example, observing a square moving in depth can influence a sound to be perceived as increasing in loudness. The current study investigates whether people who have lost one eye are susceptible to audiovisual dynamic capture in the depth plane similarly to binocular and eye-patched viewing control participants. Partial deprivation of the visual system from the loss of one eye early in life results in changes in the remaining intact senses such as hearing. Linearly expanding or contracting discs were paired with increasing or decreasing tones, and participants were asked to indicate the direction of the auditory stimulus. The magnitude of dynamic visual capture was measured in people with one eye compared to eye-patched and binocular viewing controls. People with one eye have the same susceptibility to dynamic visual capture as controls, in that they perceived the direction of the auditory signal to be moving in the direction of the incongruent visual signal, despite previously showing a lack of visual dominance for audiovisual cues. This behaviour may be the result of directing attention to the visual modality, their partially deficient sense, in order to gain important information about approaching and receding stimuli, which in the former case could be life-threatening. These results contribute to the growing body of research showing that people with one eye display unique accommodations with respect to audiovisual processing that are likely adaptive in each unique sensory situation.
9

Bridge, Holly. "Effects of cortical damage on binocular depth perception." Philosophical Transactions of the Royal Society B: Biological Sciences 371, no. 1697 (June 19, 2016): 20150254. http://dx.doi.org/10.1098/rstb.2015.0254.

Abstract:
Stereoscopic depth perception requires considerable neural computation, including the initial correspondence of the two retinal images, comparison across the local regions of the visual field and integration with other cues to depth. The most common cause for loss of stereoscopic vision is amblyopia, in which one eye has failed to form an adequate input to the visual cortex, usually due to strabismus (deviating eye) or anisometropia. However, the significant cortical processing required to produce the percept of depth means that, even when the retinal input is intact from both eyes, brain damage or dysfunction can interfere with stereoscopic vision. In this review, I examine the evidence for impairment of binocular vision and depth perception that can result from insults to the brain, including discrete damage, temporal lobectomy, and more systemic diseases such as posterior cortical atrophy. This article is part of the themed issue ‘Vision in our three-dimensional world’.
10

YASUOKA, Akiko, and Masaaki OKURA. "Binocular depth perception of objects with peripheral vision (5):." Proceedings of the Annual Convention of the Japanese Psychological Association 74 (September 20, 2010): 2AM113. http://dx.doi.org/10.4992/pacjpa.74.0_2am113.

11

Read, Jenny. "Early computational processing in binocular vision and depth perception." Progress in Biophysics and Molecular Biology 87, no. 1 (January 2005): 77–108. http://dx.doi.org/10.1016/j.pbiomolbio.2004.06.005.

12

Harris, Julie, Harold Nefs, and Catherine Grafton. "Binocular vision and motion-in-depth." Spatial Vision 21, no. 6 (2008): 531–47. http://dx.doi.org/10.1163/156856808786451462.

13

Lungeanu, D., C. Popa, S. Hotca, and G. Macovievici. "Modelling biological depth perception in binocular vision: The local disparity estimation." Medical Informatics 23, no. 2 (January 1998): 131–43. http://dx.doi.org/10.3109/14639239808995025.

14

SHINOMIYA, Takashi, and Haruhiko SATO. "Comparisons of accommodation under monocular and binocular vision in depth perception." Japanese journal of ergonomics 30, Supplement (1994): 470–71. http://dx.doi.org/10.5100/jje.30.supplement_470.

15

Wade, Nicholas J. "On the Late Invention of the Stereoscope." Perception 16, no. 6 (December 1987): 785–818. http://dx.doi.org/10.1068/p160785.

Abstract:
It was not until 1838, when Wheatstone published his account of the stereoscope, that stereoscopic depth perception entered into the body of binocular phenomena. It is argued that the stereoscope was not invented earlier because the phenomenon of stereopsis based on disparity had not been adequately described. This was the case despite the fact that there had been earlier descriptions of tasks that could be performed better with two eyes than with one; the perceptual deficits attendant upon the loss of one eye had been remarked upon; analyses of the projections to each eye were commonplace, and binocular disparities were accurately illustrated; moreover, binocular microscopes and telescopes had been made over a century earlier. Theories of binocular vision were generally confined to accounting for singleness of vision with two eyes, and the concepts employed to account for this were visible direction, corresponding retinal points, and union in the brain. The application of these concepts inhibited any consideration of disparities, other than for yielding diplopia. When perception of the third dimension was addressed by Berkeley at the beginning of the eighteenth century, it was in the context of monocular vision and binocular convergence. Thereafter visual direction became the province for binocular vision and it was analysed in terms of geometrical optics, whereas visual distance was examined in the context of learned associations between vision and touch. This artificial division was challenged initially with respect to visual direction and later with respect to stereopsis. An additional factor delaying the invention of the stereoscope was that experiments on binocular vision generally involved abnormal convergence on extended objects. Wheatstone's accidental observation of stereopsis was under artificial conditions in which disparity alone defined the binocular depth perceived. Once invented the stereoscope was enthusiastically embraced by students of vision. It is suggested that the ease with which retinal disparity could be manipulated in stereopairs has led to an exaggeration of its importance in space perception.
16

Pepperell, Robert, and Anja Ruschkowski. "Double Vision as a Pictorial Depth Cue." Art & Perception 1, no. 1-2 (2013): 49–64. http://dx.doi.org/10.1163/22134913-00002001.

Abstract:
‘Double images’ are a little-noticed feature of human binocular vision caused by non-convergence of the eyes outside of the point of fixation. Double vision, or psychological diplopia, is closely linked to the perception of depth in natural vision as its perceived properties vary depending on proximity of the stimulus to the viewer. Very little attention, however, has been paid to double images in art or in scientific studies of pictorial depth. Double images have rarely been depicted and do not appear among the list of commonly cited monocular depth cues. In this study we discuss some attempts by artists to capture the doubled appearance of objects in pictures, and some of the relevant scientific work on double vision. We then present the results of a study designed to test whether the inclusion of double images in two-dimensional pictures can enhance the illusion of three-dimensional space. Our results suggest that double images can significantly enhance depth perception in pictures when combined with other depth cues such as blur. We conclude that double images could be added to the list of depth cues available to those wanting to create a greater sense of depth in pictures.
17

Iwamoto, Tatsuya, and Masanori Idesawa. "Volume Perception and a Processing Method of Unpaired Region in Stereo Vision." Journal of Robotics and Mechatronics 9, no. 2 (April 20, 1997): 121–25. http://dx.doi.org/10.20965/jrm.1997.p0121.

Abstract:
In the human visual system, binocular unpaired regions, where the binocular images do not correspond to each other, play a very important role in stereo perception. In our recent experiments, we found that binocular unpaired regions have a special effect on the volume perception of solid objects with curved surfaces. In this paper, we introduce these volume perception phenomena and then propose some strategies for realizing such a function in a computer vision system.
18

Wang, Hao, Sheila Gillard Crewther, and Zheng Qin Yin. "The Role of Eye Movement Driven Attention in Functional Strabismic Amblyopia." Journal of Ophthalmology 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/534719.

Abstract:
Strabismic amblyopia “blunt vision” is a developmental anomaly that affects binocular vision and results in lowered visual acuity. Strabismus is a term for a misalignment of the visual axes and is usually characterized by impaired ability of the strabismic eye to take up fixation. Such impaired fixation is usually a function of the temporally and spatially impaired binocular eye movements that normally underlie binocular shifts in visual attention. In this review, we discuss how abnormal eye movement function in children with misaligned eyes influences the development of normal binocular visual attention and results in deficits in visual function such as depth perception. We also discuss how eye movement function deficits in adult amblyopia patients can also lead to other abnormalities in visual perception. Finally, we examine how the nonamblyopic eye of an amblyope is also affected in strabismic amblyopia.
19

Shioiri, Satoshi, and Takao Sato. "Special issue. Depth perception and three dimensional imaging. 2. Depth perception. 2-1. Binocular stereopsis and monocular vision." Journal of the Institute of Television Engineers of Japan 45, no. 4 (1991): 431–37. http://dx.doi.org/10.3169/itej1978.45.431.

20

Grigo, A., and M. Lappe. "Illusory Optic Flow Transformation with Binocular Vision." Perception 25, no. 1_suppl (August 1996): 61. http://dx.doi.org/10.1068/v96p0207.

Abstract:
We investigated the influence of stereoscopic vision on the perception of optic flow fields in psychophysical experiments based on the effect of an illusory transformation found by Duffy and Wurtz (1993, Vision Research 33, 1481–1490). Human subjects are not able to determine the centre of an expanding optic flow field correctly if the expansion is transparently superimposed on a unidirectional motion pattern. Its location is instead perceived as shifted in the direction of the translational movement. Duffy and Wurtz proposed that this illusory shift is caused by the visual system taking the presented flow pattern as a flow field composed of linear self-motion and an eye rotation. As a consequence, the centre of the expansional movement is determined by compensating for the simulated eye rotation, like determining one's direction of heading (Lappe and Rauschecker, 1994, Vision Research 35, 1619–1631). In our experiments we examined the dependence of the illusory transformation on differences in depth between the superimposed movements. We presented the expansional and translational stimuli with different relative binocular disparities. In the case of zero disparity, we could confirm the results of Duffy and Wurtz. For uncrossed disparities (i.e., translation behind expansion) we found a small and nonsignificant decrease of the illusory shift. In contrast, there was a strong decrease of up to 80% in the case of crossed disparity (i.e., translation in front of expansion). These findings confirm the assumption that the motion pattern is interpreted as a self-motion flow field: only in the unrealistic case of a large rotational component present in front of an expansion are the superimposed movements interpreted separately by the visual system.
21

Kim, HyunGoo R., Dora E. Angelaki, and Gregory C. DeAngelis. "The neural basis of depth perception from motion parallax." Philosophical Transactions of the Royal Society B: Biological Sciences 371, no. 1697 (June 19, 2016): 20150256. http://dx.doi.org/10.1098/rstb.2015.0256.

Abstract:
In addition to depth cues afforded by binocular vision, the brain processes relative motion signals to perceive depth. When an observer translates relative to their visual environment, the relative motion of objects at different distances (motion parallax) provides a powerful cue to three-dimensional scene structure. Although perception of depth based on motion parallax has been studied extensively in humans, relatively little is known regarding the neural basis of this visual capability. We review recent advances in elucidating the neural mechanisms for representing depth-sign (near versus far) from motion parallax. We examine a potential neural substrate in the middle temporal visual area for depth perception based on motion parallax, and we explore the nature of the signals that provide critical inputs for disambiguating depth-sign. This article is part of the themed issue ‘Vision in our three-dimensional world’.
22

Timney, Brian. "Effects of brief monocular deprivation on binocular depth perception in the cat: A sensitive period for the loss of stereopsis." Visual Neuroscience 5, no. 3 (September 1990): 273–80. http://dx.doi.org/10.1017/s0952523800000341.

Abstract:
The period of susceptibility for binocular depth vision was studied in kittens by subjecting them to periods of monocular deprivation beginning at different ages. In an initial study, we found that normally reared kittens can learn a depth-discrimination task much more rapidly when tested binocularly than monocularly, even when testing is begun as early as 30 d. In subsequent experiments, kittens were monocularly deprived by eyelid suture, following which their monocular and binocular depth thresholds were measured using the jumping-stand procedure. We obtained the following results: (1) When monocular deprivation is applied before the time of natural eye opening but is discontinued by no later than 30 d, there is very little effect on binocular depth thresholds. (2) When deprivation is begun at 90 d, binocular depth thresholds are unaffected. (3) When deprivation is begun between these two ages, the magnitude of the deficit varies with the period of deprivation and the age at which it begins. (4) By imposing brief (5 or 10 d) periods of deprivation, beginning at different ages, we were able to demonstrate that the peak of the sensitive period is between the ages of 35 and 45 d, with a fairly rapid decline in susceptibility outside those age limits. (5) Even with as little as 5 d of deprivation, substantial permanent deficits in binocular depth vision can be induced.
23

Gui, Chen, Jun Peng, and Zuojin Li. "Oriented Planetary Exploration Robotic Vision Binocular Camera Calibration." International Journal of Cognitive Informatics and Natural Intelligence 7, no. 4 (October 2013): 83–95. http://dx.doi.org/10.4018/ijcini.2013100105.

Abstract:
One of the goals of planetary exploration is to cache rock samples for subsequent return to Earth in the future Mars Sample Return (MSR) mission. This paper presents a method of binocular camera calibration. Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these intrinsic and extrinsic imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most of them remain largely theoretical and are difficult to apply in practice. Because the authors' robot needs to accurately approach and place scientific objects, this paper presents an image rectification method that extends the two-step calibration method with an additional step for correcting distorted image coordinates. The image rectification accurately compensates for radial and tangential distortions. Finally, experiments carried out with a Matlab tool developed by the authors show that the results are accurate and practical.
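The radial and tangential distortion correction mentioned above is a standard step in stereo calibration pipelines. The sketch below is a minimal illustration of the widely used radial-plus-tangential (Brown-Conrady) distortion model with an iterative inverse, not a reproduction of the authors' two-step extension; the coefficient names and values (k1, k2, p1, p2) are assumptions chosen for the example.

```python
import numpy as np

def distort_normalized(xy, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to
    normalized image coordinates xy (N x 2), Brown-Conrady model."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.stack([x_d, y_d], axis=1)

def undistort_normalized(xy_d, k1, k2, p1, p2, iters=5):
    """Invert the distortion by fixed-point iteration (a common
    numerical approach; coefficients here are illustrative only)."""
    xy = xy_d.copy()
    for _ in range(iters):
        delta = distort_normalized(xy, k1, k2, p1, p2) - xy
        xy = xy_d - delta
    return xy

# Example: correct one observed (distorted) normalized point.
pt_d = np.array([[0.31, -0.12]])
print(undistort_normalized(pt_d, k1=-0.28, k2=0.07, p1=1e-3, p2=-5e-4))
```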
24

Zlatkute, Giedre, Vanessa Charlotte Sagnay de la Bastida, and Dhanraj Vishwanath. "Unimpaired perception of relative depth from perspective cues in strabismus." Royal Society Open Science 7, no. 12 (December 2020): 200955. http://dx.doi.org/10.1098/rsos.200955.

Abstract:
Strabismus is a relatively common ophthalmological condition where the coordination of eye muscles to binocularly fixate a single point in space is impaired. This leads to deficits in vision and particularly in three-dimensional (3D) space perception. The exact nature of the deficits in 3D perception is poorly understood, as much of our understanding has relied on anecdotal reports or conjecture. Here, we investigated, for the first time, the perception of relative depth comparing strabismic and typically developed binocular observers. Specifically, we assessed the susceptibility to the depth cue of perspective convergence as well as the capacity to use this cue to make accurate judgements of relative depth. Susceptibility was measured by examining a 3D bias in making two-dimensional (2D) interval equidistance judgements, and accuracy was measured by examining 3D interval equidistance judgements. We tested both monocular and binocular viewing of images of perspective scenes under two different psychophysical methods: two-alternative forced-choice (2AFC) and the method of adjustment. The biasing effect of perspective information on the 2D judgements (3D cue susceptibility) was highly significant and comparable for both subject groups in both the psychophysical tasks (all ps < 0.001), with no statistically significant difference found between the two groups. Both groups showed an underestimation in the 3D task with no significant difference between the groups' judgements in the 2AFC task, but a small statistically significant difference (ratio difference of approx. 10%, p = 0.016) in the method of adjustment task. A small but significant effect of viewing condition (monocular versus binocular) was revealed only in the non-strabismic group (ratio difference of approx. 6%, p = 0.002). Our results show that both the automatic susceptibility to, and accuracy in the use of, the perspective convergence cue in strabismus is largely comparable to that found in typically developed binocular vision, and have implications for the nature of the encoding of depth in the human visual system.
25

Mansour, Mostafa, Pavel Davidson, Oleg Stepanov, and Robert Piché. "Relative Importance of Binocular Disparity and Motion Parallax for Depth Estimation: A Computer Vision Approach." Remote Sensing 11, no. 17 (August 23, 2019): 1990. http://dx.doi.org/10.3390/rs11171990.

Abstract:
Binocular disparity and motion parallax are the most important cues for depth estimation in human and computer vision. Here, we present an experimental study to evaluate the accuracy of these two cues in depth estimation to stationary objects in a static environment. Depth estimation via binocular disparity is most commonly implemented using stereo vision, which uses images from two or more cameras to triangulate and estimate distances. We use a commercial stereo camera mounted on a wheeled robot to create a depth map of the environment. The sequence of images obtained by one of these two cameras as well as the camera motion parameters serve as the input to our motion parallax-based depth estimation algorithm. The measured camera motion parameters include translational and angular velocities. Reference distance to the tracked features is provided by a LiDAR. Overall, our results show that at short distances stereo vision is more accurate, but at large distances the combination of parallax and camera motion provide better depth estimation. Therefore, by combining the two cues, one obtains depth estimation with greater range than is possible using either cue individually.
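As a concrete illustration of the two cues compared in this study, the sketch below computes depth from binocular disparity with the standard rectified-pinhole triangulation relation Z = f·B/d, and from motion parallax using a known sideways camera translation between two frames. It is a simplified sketch under assumed conditions (rectified cameras, pure lateral translation, invented numeric values), not the authors' measurement pipeline.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified stereo: Z = f * B / d (depth in metres)."""
    return focal_px * baseline_m / disparity_px

def depth_from_motion_parallax(shift_px, focal_px, translation_m):
    """Two views from one camera translated sideways by `translation_m`:
    the image shift of a static feature plays the role of disparity."""
    return focal_px * translation_m / shift_px

focal_px = 700.0            # assumed focal length in pixels
stereo_baseline_m = 0.12    # assumed stereo baseline
robot_translation_m = 0.50  # assumed camera motion between frames

print(depth_from_disparity(disparity_px=21.0, focal_px=focal_px,
                           baseline_m=stereo_baseline_m))            # ~4.0 m
print(depth_from_motion_parallax(shift_px=87.5, focal_px=focal_px,
                                 translation_m=robot_translation_m))  # ~4.0 m
```

For a fixed error in the measured image shift, depth error grows roughly with the square of distance and shrinks with a longer baseline, which is consistent with the paper's finding that a short-baseline stereo rig is more accurate at close range while camera motion (an effectively longer baseline) helps at larger distances.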
26

Zerbolio, Dominic J., and James T. Walker. "Factorial Design: Binocular and Monocular Depth Perception in Vertical and Horizontal Stimuli." Teaching of Psychology 16, no. 2 (April 1989): 65–66. http://dx.doi.org/10.1207/s15328023top1602_4.

Abstract:
This article describes a factorial experiment that is useful as a laboratory exercise in a research methods course. In the Howard–Dolman depth perception apparatus, two vertical rods are adjusted, using binocular or monocular vision, so they appear equidistant from the observer. The two rods can also be oriented horizontally, which allows a factorial design combining the factors of Viewing Condition (binocular and monocular) and Rod Orientation (vertical and horizontal). The exercise illustrates the nature of an interaction and the necessity of an additional analysis of simple main effects. It also provides a basis for understanding a perceptual problem in the real world—the difficulty of localizing horizontally extended stimuli such as power lines.
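To make the notion of an interaction and of simple main effects concrete, here is a small numerical sketch of the 2 x 2 design described above. The data are simulated purely for illustration; the cell means are invented to reproduce the qualitative pattern the abstract reports (a binocular advantage for vertical rods that disappears for horizontal rods) and do not come from the article.

```python
import numpy as np

# Simulated alignment errors (cm) for a 2 x 2 factorial design:
# Viewing Condition (binocular, monocular) x Rod Orientation (vertical, horizontal).
rng = np.random.default_rng(1)
cells = {
    ("binocular", "vertical"):   rng.normal(0.5, 0.2, 20),
    ("monocular", "vertical"):   rng.normal(3.0, 0.8, 20),
    ("binocular", "horizontal"): rng.normal(3.2, 0.8, 20),
    ("monocular", "horizontal"): rng.normal(3.4, 0.8, 20),
}
means = {k: v.mean() for k, v in cells.items()}

# Simple main effects of viewing condition at each orientation.
vert_effect = means[("monocular", "vertical")] - means[("binocular", "vertical")]
horiz_effect = means[("monocular", "horizontal")] - means[("binocular", "horizontal")]

# The interaction is the difference between the simple main effects.
interaction = vert_effect - horiz_effect
print(f"binocular advantage, vertical rods:   {vert_effect:.2f} cm")
print(f"binocular advantage, horizontal rods: {horiz_effect:.2f} cm")
print(f"interaction contrast:                 {interaction:.2f} cm")
```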
27

Iehisa, Ikko, Masahiko Ayaki, Kazuo Tsubota, and Kazuno Negishi. "Factors affecting depth perception and comparison of depth perception measured by the three-rods test in monocular and binocular vision." Heliyon 6, no. 9 (September 2020): e04904. http://dx.doi.org/10.1016/j.heliyon.2020.e04904.

28

Idesawa, Masanori, and Yasuhiro Mizukoshi. "3-D Computer Graphics System for Vision Research by Binocular Viewing." Journal of Robotics and Mechatronics 4, no. 1 (February 20, 1992): 70–75. http://dx.doi.org/10.20965/jrm.1992.p0070.

Abstract:
For the artificial realization and application of the visual function of 3-D space perception, a better understanding of the human visual mechanism is strongly required. The disparity and occlusion observed with binocular viewing seem to be the most important cues for obtaining 3-D information. The authors therefore developed a simple stereoscopic display system using time-shared display of left-eye and right-eye images with a liquid crystal shutter. The system is composed of a simple, small control circuit and has major advantages: the hardware can be applied to any type of display system, and the software can easily be ported to a different type of computer system. The authors then applied this system to vision research on the 3-D perceptual function of binocular viewing.
29

Theimer, W. "Phase-Based Binocular Vergence Control and Depth Reconstruction Using Active Vision." Computer Vision and Image Understanding 60, no. 3 (November 1994): 343–58. http://dx.doi.org/10.1006/cviu.1994.1067.

30

Ferrier, Nicola J., and James J. Clark. "The Harvard Binocular Head." International Journal of Pattern Recognition and Artificial Intelligence 7, no. 1 (February 1993): 9–31. http://dx.doi.org/10.1142/s0218001493000029.

Abstract:
The ability to dynamically control imaging parameters such as camera position, focus, and aperture is provided through the use of special-purpose hardware such as a robotic head. This paper presents the design and control aspects of the Harvard Head, a binocular image acquisition system. We present three applications of the head in vision tasks, concentrating on the computation of depth from controlled camera motion.
31

Rose, David, Mark F. Bradshaw, and Paul B. Hibbard. "Attention Affects the Stereoscopic Depth Aftereffect." Perception 32, no. 5 (May 2003): 635–40. http://dx.doi.org/10.1068/p3324.

Abstract:
‘Preattentive’ vision is typically considered to include several low-level processes, including the perception of depth from binocular disparity and motion parallax. However, doubt was cast on this model when it was shown that a secondary attentional task can modulate the motion aftereffect (Chaudhuri, 1990, Nature 344, 60–62). Here we investigate whether attention can also affect the depth aftereffect (Blakemore and Julesz, 1971, Science 171, 286–288). Subjects adapted to stationary or moving random-dot patterns segmented into depth planes while attention was manipulated with a secondary task (character processing at parametrically varied rates). We found that the duration of the depth aftereffect can be affected by attentional manipulations, and both its duration and that of the motion aftereffect varied with the difficulty of the secondary task. The results are discussed in the context of dynamic feedback models of vision, and support the penetrability of low-level sensory processes by attentional mechanisms.
32

Fu, Junwei, and Jun Liang. "Virtual View Generation Based on 3D-Dense-Attentive GAN Networks." Sensors 19, no. 2 (January 16, 2019): 344. http://dx.doi.org/10.3390/s19020344.

Abstract:
A binocular vision system is a common perception component of an intelligent vehicle. Benefiting from its biomimetic structure, the system is simple and effective. However, such systems are extremely sensitive to external factors, especially missing vision signals. In this paper, a virtual view-generation algorithm based on generative adversarial networks (GAN) is proposed to enhance the robustness of binocular vision systems. The proposed model consists of two parts: a generative network and a discriminator network. To improve the quality of the virtual view, a generative network structure based on 3D convolutional neural networks (3D-CNN) and attentive mechanisms is introduced to extract time-series features from image sequences. To avoid vanishing gradients during training, a dense block structure is utilized to improve the discriminator network. Meanwhile, three kinds of image features, namely image edges, depth maps, and optical flow, are extracted to constrain the supervised training of the model. The final results on the KITTI and Cityscapes datasets demonstrate that our algorithm outperforms conventional methods, and the missing vision signal can be replaced by a generated virtual view.
33

Adams, Wendy J., Erich W. Graf, and Matt Anderson. "Disruptive coloration and binocular disparity: breaking camouflage." Proceedings of the Royal Society B: Biological Sciences 286, no. 1896 (February 13, 2019): 20182045. http://dx.doi.org/10.1098/rspb.2018.2045.

Abstract:
Many species employ camouflage to disguise their true shape and avoid detection or recognition. Disruptive coloration is a form of camouflage in which high-contrast patterns obscure internal features or break up an animal's outline. In particular, edge enhancement creates illusory, or ‘fake’ depth edges within the animal's body. Disruptive coloration often co-occurs with background matching, and together, these strategies make it difficult for an observer to visually segment an animal from its background. However, stereoscopic vision could provide a critical advantage in the arms race between perception and camouflage: the depth information provided by binocular disparities reveals the true three-dimensional layout of a scene, and might, therefore, help an observer to overcome the effects of disruptive coloration. Human observers located snake targets embedded in leafy backgrounds. We analysed performance (response time) as a function of edge enhancement, illumination conditions and the availability of binocular depth cues. We confirm that edge enhancement contributes to effective camouflage: observers were slower to find snakes whose patterning contains ‘fake’ depth edges. Importantly, however, this effect disappeared when binocular depth cues were available. Illumination also affected detection: under directional illumination, where both the leaves and snake produced strong cast shadows, snake targets were localized more quickly than in scenes rendered under ambient illumination. In summary, we show that illusory depth edges, created via disruptive coloration, help to conceal targets from human observers. However, cast shadows and binocular depth information improve detection by providing information about the true three-dimensional structure of a scene. Importantly, the strong interaction between disparity and edge enhancement suggests that stereoscopic vision has a critical role in breaking camouflage, enabling the observer to overcome the disruptive effects of edge enhancement.
34

Piano, Marianne, Ramin Nilforooshan, and Simon Evans. "Binocular Vision, Visual Function, and Pupil Dynamics in People Living With Dementia and Their Relation to the Rate of Cognitive Decline and Structural Changes Within the Brain: Protocol for an Observational Study." JMIR Research Protocols 9, no. 8 (August 10, 2020): e16089. http://dx.doi.org/10.2196/16089.

Abstract:
Background: Visual impairment is a common comorbidity in people living with dementia. Addressing sources of visual difficulties can have a significant impact on the quality of life for people living with dementia and their caregivers. Depth perception problems are purportedly common in dementia and also contribute to falls, visuomotor task difficulties, and poorer psychosocial well-being. However, depth perception and binocular vision are rarely assessed in dementia research. Sleep fragmentation is also common for people living with dementia, and binocular cooperation for depth perception can be affected by fatigue. Pupillary responses under cognitive load also have the potential to be a risk marker for cognitive decline in people living with dementia and can be combined with the above measures for a comprehensive evaluation of clinical visual changes in people living with dementia and their relation to changes in cognitive status, sleep quality, and cortical structure or function.
Objective: This study aims to characterize the nature of clinical visual changes and altered task-evoked pupillary responses that may occur in people living with dementia and evaluate whether these responses relate to changes in cognitive status (standardized Mini Mental State Examination [MMSE] score), Pittsburgh Sleep Quality Index, and cortical structure or function.
Methods: This proposed exploratory observational study will enroll ≤210 people with recently diagnosed dementia (within the last 24 months). The following parameters will be assessed on 3 occasions, 4 months apart (plus or minus 2 weeks): visual function (visual acuity and contrast sensitivity), binocular function (motor fusion and stereopsis), task-evoked pupillary responses (minimum and maximum pupil size, time to maximum dilation, and dilation velocity), cognitive status (MMSE score), and sleep quality (Pittsburgh Sleep Quality Index). A subset of patients (n=30) with Alzheimer disease will undergo structural and functional magnetic resonance imaging at the first and third visits, completing a 10-day consensus sleep diary to monitor sleep quality, verified by sleep actimetry.
Results: This research was funded in February 2018 and received National Health Service Research Ethics Committee approval in September 2018. The data collection period was from October 1, 2018, to November 30, 2019. A total of 24 participants were recruited for the study. The data analysis is complete, with results expected to be published before the end of 2020.
Conclusions: Findings will demonstrate how often people with dementia experience binocular vision problems. If frequent, diagnosing and treating them could improve quality of life by reducing the risk of falls and fine visuomotor task impairment and by relieving psychosocial anxiety. This research will also demonstrate whether changes in depth perception, pupillary responses, and quality of vision relate to changes in memory or sleep quality and brain structure or function. If related, these quick and noninvasive eye tests could help monitor dementia. This would help justify whether binocular vision and pupillary response testing should be included in dementia-friendly eye-testing guidelines.
International Registered Report Identifier (IRRID): RR1-10.2196/16089
35

Tasman, Abel-Jan, Frank Wallner, Gerold H. Kolling, and Heinz Stammberger. "Is Monocular Perception of Depth through the Rigid Endoscope a Disadvantage Compared to Binocular Vision through the Operating Microscope in Paranasal Sinus Surgery?" American Journal of Rhinology 12, no. 2 (March 1998): 87–92. http://dx.doi.org/10.2500/105065898781390343.

Abstract:
Vision through the endoscope is strictly monocular. Perception of depth (stereopsis) during ethmoid surgery through the operating microscope would be expected to be superior due to the binocular view. To investigate whether the monocularity of the endoscope is a disadvantage in paranasal sinus surgery, we compared stereoacuity in a model of the nasal cavity using a headlamp, an operating microscope, and a 0° Hopkins endoscope. Twenty volunteers were asked to touch defined points in a spatial model of the nasal cavity. Due to the configuration of the model, which allowed binocular vision of all contact points with the headlamp, performance was significantly better with the headlamp than with the optical instruments. Manipulations were performed faster with the endoscope than with the microscope. Under microscopic guidance more faults in point sequence were made than with the endoscope. Various monocular phenomena evidently allow sufficient spatial orientation through the endoscope, so that the monocularity of the endoscope appears not to be a disadvantage for quick and safe manipulations during functional endoscopic sinus surgery.
36

Verhoef, Bram-Ernst, Rufin Vogels, and Peter Janssen. "Binocular depth processing in the ventral visual pathway." Philosophical Transactions of the Royal Society B: Biological Sciences 371, no. 1697 (June 19, 2016): 20150259. http://dx.doi.org/10.1098/rstb.2015.0259.

Abstract:
One of the most powerful forms of depth perception capitalizes on the small relative displacements, or binocular disparities, in the images projected onto each eye. The brain employs these disparities to facilitate various computations, including sensori-motor transformations (reaching, grasping), scene segmentation and object recognition. In accordance with these different functions, disparity activates a large number of regions in the brain of both humans and monkeys. Here, we review how disparity processing evolves along different regions of the ventral visual pathway of macaques, emphasizing research based on both correlational and causal techniques. We will discuss the progression in the ventral pathway from a basic absolute disparity representation to a more complex three-dimensional shape code. We will show that, in the course of this evolution, the underlying neuronal activity becomes progressively more bound to the global perceptual experience. We argue that these observations most probably extend beyond disparity processing per se , and pertain to object processing in the ventral pathway in general. We conclude by posing some important unresolved questions whose answers may significantly advance the field, and broaden its scope. This article is part of the themed issue ‘Vision in our three-dimensional world’.
37

Zhang, Di, Vincent Nourrit, and Jean-Louis De Bougrenet de la Tocnaye. "Enhancing Motion-In-Depth Perception of Random-Dot Stereograms." Perception 47, no. 7 (June 18, 2018): 722–34. http://dx.doi.org/10.1177/0301006618775026.

Abstract:
Random-dot stereograms have been widely used to explore the neural mechanisms underlying binocular vision. Although they are a powerful tool to stimulate motion-in-depth (MID) perception, published results report some difficulties in the capacity to perceive MID generated by random-dot stereograms. The purpose of this study was to investigate whether the performance of MID perception could be improved using an appropriate stimulus design. Sixteen inexperienced observers participated in the experiment. A training session was carried out to improve the accuracy of MID detection before the experiment. Four aspects of stimulus design were investigated: presence of a static reference, background texture, relative disparity, and stimulus contrast. Participants’ performance in MID direction discrimination was recorded and compared to evaluate whether varying these factors helped MID perception. Results showed that only the presence of background texture had a significant effect on MID direction perception. This study provides suggestions for the design of 3D stimuli in order to facilitate MID perception.
38

Cammack, P., and J. M. Harris. "Depth perception in disparity-defined objects: finding the balance between averaging and segregation." Philosophical Transactions of the Royal Society B: Biological Sciences 371, no. 1697 (June 19, 2016): 20150258. http://dx.doi.org/10.1098/rstb.2015.0258.

Abstract:
Deciding what constitutes an object, and what background, is an essential task for the visual system. This presents a conundrum: averaging over the visual scene is required to obtain a precise signal for object segregation, but segregation is required to define the region over which averaging should take place. Depth, obtained via binocular disparity (the differences between two eyes’ views), could help with segregation by enabling identification of object and background via differences in depth. Here, we explore depth perception in disparity-defined objects. We show that a simple object segregation rule, followed by averaging over that segregated area, can account for depth estimation errors. To do this, we compared objects with smoothly varying depth edges to those with sharp depth edges, and found that perceived peak depth was reduced for the former. A computational model used a rule based on object shape to segregate and average over a central portion of the object, and was able to emulate the reduction in perceived depth. We also demonstrated that the segregated area is not predefined but is dependent on the object shape. We discuss how this segregation strategy could be employed by animals seeking to deter binocular predators. This article is part of the themed issue ‘Vision in our three-dimensional world’.
39

Chen, Gang, Haidong D. Lu, Hisashi Tanigawa, and Anna W. Roe. "Solving visual correspondence between the two eyes via domain-based population encoding in nonhuman primates." Proceedings of the National Academy of Sciences 114, no. 49 (November 27, 2017): 13024–29. http://dx.doi.org/10.1073/pnas.1614452114.

Abstract:
Stereoscopic vision depends on correct matching of corresponding features between the two eyes. It is unclear where the brain solves this binocular correspondence problem. Although our visual system is able to make correct global matches, there are many possible false matches between any two images. Here, we use optical imaging data of binocular disparity response in the visual cortex of awake and anesthetized monkeys to demonstrate that the second visual cortical area (V2) is the first cortical stage that correctly discards false matches and robustly encodes correct matches. Our findings indicate that a key transformation for achieving depth perception lies in early stages of extrastriate visual cortex and is achieved by population coding.
40

Zhang, Fengquan, Tingshen Lei, Jinhong Li, Xingquan Cai, Xuqiang Shao, Jian Chang, and Feng Tian. "Real-Time Calibration and Registration Method for Indoor Scene with Joint Depth and Color Camera." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 07 (March 14, 2018): 1854021. http://dx.doi.org/10.1142/s0218001418540216.

Abstract:
Traditional vision registration technologies require the design of precise markers or rich texture information captured from the video scenes; vision-based methods have high computational complexity, while hardware-based registration technologies lack accuracy. Therefore, in this paper, we propose a novel registration method that takes advantage of an RGB-D camera to obtain depth information in real time, and a binocular system using a Time of Flight (ToF) camera and a commercial color camera is constructed to realize the three-dimensional registration technique. First, we calibrate the binocular system to get the position relationship between the two cameras. The systematic errors are fitted and corrected by a B-spline curve method. In order to reduce anomalies and random noise, an elimination algorithm and an improved bilateral filtering algorithm are proposed to optimize the depth map. To meet the real-time requirements of the system, it is further accelerated by parallel computing with CUDA. Then, a Camshift-based tracking algorithm is applied to capture the real object registered in the video stream. In addition, the position and orientation of the object are tracked according to the correspondence between the color image and the 3D data. Finally, experiments are implemented and compared using our binocular system, and the results demonstrate the feasibility and effectiveness of our method.
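To make the registration idea concrete, here is a minimal sketch of the core geometric step such a ToF-plus-color binocular system relies on: back-projecting a depth pixel to 3D, transforming it with the calibrated rotation R and translation t between the two cameras, and projecting it into the color image. This is a generic illustration under assumed intrinsics and extrinsics, not the paper's full pipeline (no B-spline error correction, filtering, CUDA acceleration, or Camshift tracking).

```python
import numpy as np

def register_depth_pixel(u, v, depth_m, K_depth, K_color, R, t):
    """Map one depth-camera pixel (u, v) with range `depth_m` into
    color-image coordinates using calibrated extrinsics (R, t)."""
    # Back-project to a 3D point in the depth camera frame.
    xyz_depth = depth_m * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Transform into the color camera frame.
    xyz_color = R @ xyz_depth + t
    # Project with the color camera intrinsics.
    uvw = K_color @ xyz_color
    return uvw[:2] / uvw[2]

# Assumed (illustrative) intrinsics and extrinsics.
K_depth = np.array([[570.0, 0, 160.0], [0, 570.0, 120.0], [0, 0, 1]])
K_color = np.array([[1050.0, 0, 640.0], [0, 1050.0, 360.0], [0, 0, 1]])
R = np.eye(3)                   # cameras assumed parallel here
t = np.array([0.05, 0.0, 0.0])  # assumed 5 cm horizontal offset

print(register_depth_pixel(200, 130, depth_m=1.8, K_depth=K_depth,
                           K_color=K_color, R=R, t=t))
```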
41

Fujita, Ichiro, and Takahiro Doi. "Weighted parallel contributions of binocular correlation and match signals to conscious perception of depth." Philosophical Transactions of the Royal Society B: Biological Sciences 371, no. 1697 (June 19, 2016): 20150257. http://dx.doi.org/10.1098/rstb.2015.0257.

Abstract:
Binocular disparity is detected in the primary visual cortex by a process similar to calculation of local cross-correlation between left and right retinal images. As a consequence, correlation-based neural signals convey information about false disparities as well as the true disparity. The false responses in the initial disparity detectors are eliminated at later stages in order to encode only disparities of the features correctly matched between the two eyes. For a simple stimulus configuration, a feed-forward nonlinear process can transform the correlation signal into the match signal. For human observers, depth judgement is determined by a weighted sum of the correlation and match signals rather than depending solely on the latter. The relative weight changes with spatial and temporal parameters of the stimuli, allowing adaptive recruitment of the two computations under different visual circumstances. A full transformation from correlation-based to match-based representation occurs at the neuronal population level in cortical area V4 and manifests in single-neuron responses of inferior temporal and posterior parietal cortices. Neurons in area V5/MT represent disparity in a manner intermediate between the correlation and match signals. We propose that the correlation and match signals in these areas contribute to depth perception in a weighted, parallel manner. This article is part of the themed issue ‘Vision in our three-dimensional world’.
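The "local cross-correlation between left and right retinal images" that the authors describe as the first stage can be illustrated with a toy window-based disparity search: for each candidate disparity, the normalized correlation between a left-image patch and the shifted right-image patch is computed, and the disparity with the highest correlation wins. This is a minimal, assumption-laden sketch of correlation-based matching (one scan position, grayscale arrays), not a model of V1, V4, or MT responses.

```python
import numpy as np

def correlation_disparity(left, right, row, col, patch=7, max_disp=32):
    """Pick the disparity whose right-image patch correlates best with
    the left-image patch centred at (row, col). Images are 2-D arrays."""
    h = patch // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    scores = []
    for d in range(max_disp + 1):
        cand = right[row - h:row + h + 1,
                     col - d - h:col - d + h + 1].astype(float)
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        scores.append(np.mean(ref * cand))  # normalized cross-correlation
    return int(np.argmax(scores)), scores

# Toy example: a random left image and a right image shifted by 5 pixels.
rng = np.random.default_rng(0)
left = rng.random((60, 120))
right = np.roll(left, -5, axis=1)   # true disparity = 5
best_d, _ = correlation_disparity(left, right, row=30, col=80)
print(best_d)   # expected: 5
```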
APA, Harvard, Vancouver, ISO, and other styles
42

Guttmann, Josef, and Hanns-Christof Spatz. "Frequency of Fusion and of Loss of Fusion, and Binocular Depth Perception with Alternating Stimulus Presentation." Perception 14, no. 1 (February 1985): 5–12. http://dx.doi.org/10.1068/p140005.

Full text
Abstract:
Stereoscopic vision was investigated with an experimental design allowing dichoptic stimulus presentation at different frequencies of image alternation. For twenty subjects, the frequency of binocular fusion and the frequency of loss of fusion to one stereoscopic image were measured as a function of the convergence angle. In thirteen subjects no dependence of the fusion frequency on the convergence angle was found, while seven subjects showed a marked increase of the fusion frequency with increasing angle of convergence. In all cases the frequency of fusion was higher than the frequency of loss of fusion. Both frequencies, however, are lower than the flicker fusion frequency. Under conditions where no monocular cues and no references for stereoptic depth comparisons were presented, the apparent distance of the image from the observer could not be assessed, but perception of relative motion in depth was possible. All subjects assessed the direction of motion accurately down to changes of the convergence angle of 0.2 deg s⁻¹.
APA, Harvard, Vancouver, ISO, and other styles
43

Norman, J. Farley, Charles E. Crabtree, Anna Marie Clayton, and Hideko F. Norman. "The Perception of Distances and Spatial Relationships in Natural Outdoor Environments." Perception 34, no. 11 (November 2005): 1315–24. http://dx.doi.org/10.1068/p5304.

Full text
Abstract:
The ability of observers to perceive distances and spatial relationships in outdoor environments was investigated in two experiments. In experiment 1, the observers adjusted triangular configurations to appear equilateral, while in experiment 2, they adjusted the depth of triangles to match their base width. The results of both experiments revealed that there are large individual differences in how observers perceive distances in outdoor settings. The observers' judgments were greatly affected by the particular task they were asked to perform. The observers who had shown no evidence of perceptual distortions in experiment 1 (with binocular vision) demonstrated large perceptual distortions in experiment 2 when the task was changed to match distances in depth to frontal distances perpendicular to the observers' line of sight. Considered as a whole, the results indicate that there is no single relationship between physical and perceived space that is consistent with observers' judgments of distances in ordinary outdoor contexts.
APA, Harvard, Vancouver, ISO, and other styles
44

Tittle, James S., Michael W. Rouse, and Myron L. Braunstein. "Relationship of Static Stereoscopic Depth Perception to Performance with Dynamic Stereoscopic Displays." Proceedings of the Human Factors Society Annual Meeting 32, no. 19 (October 1988): 1439–42. http://dx.doi.org/10.1177/154193128803201928.

Full text
Abstract:
Although most tasks performed by human observers that require accurate stereoscopic depth perception, such as working with tools, operating machinery, and controlling vehicles, involve dynamically changing disparities, classification of observers as having normal or deficient stereoscopic vision is currently based on performance with static stereoscopic displays. The present study compares the performance of subjects classified as deficient in static stereoscopic vision to a control group with normal stereoscopic vision in two experiments: one in which the disparities were constant during motion and one in which the disparities changed continuously. In the first experiment, subjects judged orientation in depth of a dihedral angle, with the apex pointed toward or away from them. The angle translated horizontally, leaving the disparities constant. When disparity and motion parallax were placed in conflict, subjects in the normal group almost always responded in accordance with disparity, whereas subjects in the deficient group responded in accordance with disparity at chance levels. In the second experiment, subjects were asked to judge the direction of rotation of a computer-generated cylinder. When dynamic occlusion and dynamic disparity indicated conflicting directions, performance of subjects in the normal and deficient groups did not differ significantly. When only dynamic disparity information was provided, most subjects classified as stereo deficient were able to judge the direction of rotation accurately. These results indicate that measures of stereoscopic vision that do not include changing disparities may not provide a complete evaluation of the ability of a human observer to perceive depth on the basis of disparity.
APA, Harvard, Vancouver, ISO, and other styles
45

Ma, Yunpeng, Qingwu Li, Lulu Chu, Yaqin Zhou, and Chang Xu. "Real-Time Detection and Spatial Localization of Insulators for UAV Inspection Based on Binocular Stereo Vision." Remote Sensing 13, no. 2 (January 11, 2021): 230. http://dx.doi.org/10.3390/rs13020230.

Full text
Abstract:
Unmanned aerial vehicles (UAVs) have become important tools for power transmission line inspection. Cameras installed on the platforms can efficiently obtain aerial images containing information about power equipment. However, most existing inspection systems cannot perform automatic real-time detection of transmission line components. In this paper, an automatic transmission line inspection system combining UAV remote sensing with binocular visual perception technology is developed to accurately detect and locate power equipment in real time. The system consists of a UAV module, an embedded industrial computer, a binocular visual perception module, and a control and observation module. Insulators, which are key and fault-prone components of power transmission lines, are selected as the detection targets. Insulator detection and spatial localization in aerial images with cluttered backgrounds are interesting but challenging tasks for an automatic transmission line inspection system. A two-stage strategy is proposed to achieve precise identification of insulators: first, candidate insulator regions are obtained by RGB-D saliency detection; then, the skeleton structure of the candidate regions is extracted and a structure search is performed to realize the final accurate detection of insulators. On the basis of the detection results, we further propose a real-time object spatial localization method that combines binocular stereo vision with a global positioning system (GPS). The longitude, latitude, and height of the insulators are obtained through coordinate conversion based on the UAV's real-time flight data and equipment parameters. Experimental results in an actual inspection environment (a 220 kV power transmission line) show that the presented system meets the robustness and accuracy requirements of insulator detection and spatial localization in practical engineering.
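
The localization step described here, binocular triangulation followed by conversion into geodetic coordinates, can be illustrated with textbook geometry. The sketch below assumes a rectified stereo pair (focal length in pixels, baseline in metres) and a small-offset spherical-Earth conversion from a local east-north-up offset to latitude and longitude; it also assumes the camera-frame offset has already been rotated into that east-north-up frame using the UAV's attitude, which the real system would take from the flight data. All names and constants are illustrative, not the authors' implementation.

    import math

    def triangulate_rectified(u_left, u_right, v, fx, fy, cx, cy, baseline_m):
        # Classical rectified-stereo triangulation: depth Z = fx * B / disparity,
        # then back-project the pixel into the left camera frame (metres).
        disparity = u_left - u_right
        if disparity <= 0:
            raise ValueError("target must have positive disparity")
        z = fx * baseline_m / disparity
        x = (u_left - cx) * z / fx
        y = (v - cy) * z / fy
        return x, y, z

    def offset_to_geodetic(lat_deg, lon_deg, alt_m, east_m, north_m, up_m):
        # Shift a GPS fix by a small local east-north-up offset using a
        # spherical-Earth approximation (adequate for metre-scale offsets).
        earth_radius_m = 6378137.0
        d_lat = math.degrees(north_m / earth_radius_m)
        d_lon = math.degrees(east_m / (earth_radius_m * math.cos(math.radians(lat_deg))))
        return lat_deg + d_lat, lon_deg + d_lon, alt_m + up_m

For longer offsets or higher accuracy, the spherical approximation would be replaced by a proper geodetic conversion (ENU to ECEF to latitude/longitude), but the structure of the computation stays the same.
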
APA, Harvard, Vancouver, ISO, and other styles
46

Bista, Sujal, Ícaro Lins Leitão da Cunha, and Amitabh Varshney. "Kinetic depth images: flexible generation of depth perception." Visual Computer 33, no. 10 (May 6, 2016): 1357–69. http://dx.doi.org/10.1007/s00371-016-1231-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

van Ee, Raymond. "Correlation between Stereoanomaly and Perceived Depth When Disparity and Motion Interact in Binocular Matching." Perception 32, no. 1 (January 2003): 67–84. http://dx.doi.org/10.1068/p3459.

Full text
Abstract:
The aim of this study was to find out to what extent binocular matching is facilitated by motion when stereoanomalous and normal subjects estimate the perceived depth of a 3-D stimulus containing excessive matching candidates. Thirty subjects viewed stimuli that consisted of bars uniformly distributed inside a volume. They judged the perceived depth-to-width ratio of the volume by adjusting the aspect ratio of an outline rectangle (a metrical 3-D task). Although there were large inter-subject differences in the depth perceived, the experimental results yielded a good correlation with stereoanomaly (the inability to distinguish disparities of different magnitudes and/or signs in part of the disparity spectrum). The results cannot be explained solely by depth-cue combination. Since up to 30% of the population is stereo-anomalous, stereoscopic experiments would yield more informative results if subjects were first characterized with regard to their stereo capacities. Intriguingly, it was found that motion does not help to define disparities in subjects who are able to perceive depth-from-disparity in half of the disparity spectrum. These stereoanomalous subjects were found to rely completely on the motion signals. This suggests that the perception of volumetric depth in subjects with normal stereoscopic vision requires the joint processing of crossed and uncrossed disparities.
APA, Harvard, Vancouver, ISO, and other styles
48

Lange-Küttner, Chris. "Disappearance of Biased Visual Attention in Infants: Remediated Tonic Neck Reflex or Maturating Visual Asymmetry?" Perceptual and Motor Skills 125, no. 5 (July 17, 2018): 839–65. http://dx.doi.org/10.1177/0031512518786131.

Full text
Abstract:
Typically, infants younger than four months fail to attend to the left side of their spatial field, most likely due to an innate asymmetrical tonic neck reflex (ATNR). In a critical transition, by four months of age, infants begin to reach and develop depth perception; and, by five months, they tend to monitor the entire spatial field. However, this developmental transition can be delayed. Moreover, there is always a residual right-sided spatial bias under cognitive load, a phenomenon that may also occur among adult stroke patients. While causative factors of biased visual attention in both infants and brain-injured adults may vary, mechanisms of remediation may be similar. This literature review addresses whether the infant’s emergence of attention toward a full visual spatial field and the associated shift from monocular to binocular vision occur because of (a) increased left-side reaching, loosening the rarely mentioned high-muscle-tension ATNR, or (b) maturational resolution of visual asymmetry in motion perception. More research is needed to investigate the origins of the infants’ visual control system and factors involved in its development, especially because Alzheimer and dementia patients may also show primitive two-dimensional vision and deficits in perceiving objects-in-motion that seem to mirror infant visual perception.
APA, Harvard, Vancouver, ISO, and other styles
49

Simpson, William A. "Optic flow and depth perception." Spatial Vision 7, no. 1 (1993): 35–75. http://dx.doi.org/10.1163/156856893x00036.

Full text
APA, Harvard, Vancouver, ISO, and other styles