Academic literature on the topic 'Stereoscopic vision, depth'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Stereoscopic vision, depth.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Stereoscopic vision, depth"

1

Uomori, Kenya, and Mitsuho Yamada. "Special Edition. Human Vision. Stereoscopic Vision and Depth Perception." Journal of the Institute of Television Engineers of Japan 48, no. 12 (1994): 1502–8. http://dx.doi.org/10.3169/itej1978.48.1502.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ninio, J. "Curvature Biases in Stereoscopic Vision." Perception 26, no. 1_suppl (August 1997): 287. http://dx.doi.org/10.1068/v970154.

Full text
Abstract:
The reliability of in-depth curvature judgements for linear elements was studied with stereograms that contained two linear targets and a background representing a hemisphere. The targets were arcs facing to the left or to the right, like parentheses. Some formed binocular pairs with (type 1) or without (type 2) in-depth curvature. The others were monocular (type 3). The hemisphere in the background was generated by a random curve (Ninio, 1981, Perception 10: 403–410); it was either concave (hollow) or convex. The arcs had their binocular centre in the plane of the centre of the hemisphere. Each stereogram contained a type 1, and either a type 2 or a type 3 target. Subjects had to judge the hemisphere curvature, then the in-depth curvature of the targets in 32 different stereograms covering all curvature combinations. There were about 15% errors on type 1 targets, and 80% of these occurred when both the hemisphere and the target were convex, the target being perceived as concave, by transparency through the hemisphere. There were also about 15% errors on type 2 targets, but spread among all situations, the trend being to perceive them as slightly concave. The monocular stimuli (type 3) were judged to be frontoparallel in 70% of the cases. Otherwise, there was no directional bias except for monocular arcs on the nasal side, in conjunction with a concave background. Then, the perceived in-depth curvature was in the ‘generic’ direction predicted by associating the monocular arc in one image with a straight vertical segment in the other image.
APA, Harvard, Vancouver, ISO, and other styles
3

Bridge, Holly. "Effects of cortical damage on binocular depth perception." Philosophical Transactions of the Royal Society B: Biological Sciences 371, no. 1697 (June 19, 2016): 20150254. http://dx.doi.org/10.1098/rstb.2015.0254.

Full text
Abstract:
Stereoscopic depth perception requires considerable neural computation, including the initial correspondence of the two retinal images, comparison across the local regions of the visual field and integration with other cues to depth. The most common cause for loss of stereoscopic vision is amblyopia, in which one eye has failed to form an adequate input to the visual cortex, usually due to strabismus (deviating eye) or anisometropia. However, the significant cortical processing required to produce the percept of depth means that, even when the retinal input is intact from both eyes, brain damage or dysfunction can interfere with stereoscopic vision. In this review, I examine the evidence for impairment of binocular vision and depth perception that can result from insults to the brain, including both discrete damage, temporal lobectomy and more systemic diseases such as posterior cortical atrophy. This article is part of the themed issue ‘Vision in our three-dimensional world’.
APA, Harvard, Vancouver, ISO, and other styles
4

Guan, Phillip, and Martin S. Banks. "Stereoscopic depth constancy." Philosophical Transactions of the Royal Society B: Biological Sciences 371, no. 1697 (June 19, 2016): 20150253. http://dx.doi.org/10.1098/rstb.2015.0253.

Full text
Abstract:
Depth constancy is the ability to perceive a fixed depth interval in the world as constant despite changes in viewing distance and the spatial scale of depth variation. It is well known that the spatial frequency of depth variation has a large effect on threshold. In the first experiment, we determined that the visual system compensates for this differential sensitivity when the change in disparity is suprathreshold, thereby attaining constancy similar to contrast constancy in the luminance domain. In a second experiment, we examined the ability to perceive constant depth when the spatial frequency and viewing distance both changed. To attain constancy in this situation, the visual system has to estimate distance. We investigated this ability when vergence, accommodation and vertical disparity are all presented accurately and therefore provided veridical information about viewing distance. We found that constancy is nearly complete across changes in viewing distance. Depth constancy is most complete when the scale of the depth relief is constant in the world rather than when it is constant in angular units at the retina. These results bear on the efficacy of algorithms for creating stereo content. This article is part of the themed issue ‘Vision in our three-dimensional world’.
APA, Harvard, Vancouver, ISO, and other styles
5

Ludwig, Kai-Oliver, Heiko Neumann, and Bernd Neumann. "Local stereoscopic depth estimation." Image and Vision Computing 12, no. 1 (January 1994): 16–35. http://dx.doi.org/10.1016/0262-8856(94)90052-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tittle, James S., Michael W. Rouse, and Myron L. Braunstein. "Relationship of Static Stereoscopic Depth Perception to Performance with Dynamic Stereoscopic Displays." Proceedings of the Human Factors Society Annual Meeting 32, no. 19 (October 1988): 1439–42. http://dx.doi.org/10.1177/154193128803201928.

Full text
Abstract:
Although most tasks performed by human observers that require accurate stereoscopic depth perception, such as working with tools, operating machinery, and controlling vehicles, involve dynamically changing disparities, classification of observers as having normal or deficient stereoscopic vision is currently based on performance with static stereoscopic displays. The present study compares the performance of subjects classified as deficient in static stereoscopic vision to a control group with normal stereoscopic vision in two experiments: one in which the disparities were constant during motion and one in which the disparities changed continuously. In the first experiment, subjects judged orientation in depth of a dihedral angle, with the apex pointed toward or away from them. The angle translated horizontally, leaving the disparities constant. When disparity and motion parallax were placed in conflict, subjects in the normal group almost always responded in accordance with disparity, whereas subjects in the deficient group responded in accordance with disparity at chance levels. In the second experiment, subjects were asked to judge the direction of rotation of a computer-generated cylinder. When dynamic occlusion and dynamic disparity indicated conflicting directions, performance of subjects in the normal and deficient groups did not differ significantly. When only dynamic disparity information was provided, most subjects classified as stereo deficient were able to judge the direction of rotation accurately. These results indicate that measures of stereoscopic vision that do not include changing disparities may not provide a complete evaluation of the ability of a human observer to perceive depth on the basis of disparity.
APA, Harvard, Vancouver, ISO, and other styles
7

Retno Wulandari, Lely. "A COMPREHENSIVE APPROACH INTO STEREOSCOPIC VISION." MNJ (Malang Neurology Journal) 8, no. 1 (January 1, 2022): 53–57. http://dx.doi.org/10.21776/ub.mnj.2022.008.01.11.

Full text
Abstract:
Stereopsis (or stereoscopic vision) is the ability to perceive depth, created by the difference in viewing angle between the two eyes. The first process is known as simultaneous perception: objects fall on corresponding points of each retina, and the two images are fused into one. The brain then generates three-dimensional perception in the visual cortex, creating stereoscopic vision. Stereoscopic vision develops rapidly, especially at 6-8 months of life, and is important in daily activities. There are many stereoacuity tests to evaluate stereoscopic vision; the examinations are based on the principle of the haploscope, anaglyph, or polaroid vectograph. Both qualitative and quantitative examination methods exist: qualitative examinations include the horizontal Lang two-pencil test and the synoptophore, while quantitative examinations include the contour stereopsis test and the clinical random-dot stereopsis test. The inability to see stereoscopically is called stereoblindness, which can result from amblyopia, decreased visual acuity, or ocular misalignment. Failure to achieve stereoscopic vision affects an individual's ability to perform some daily activities and makes interacting with the world more difficult.
APA, Harvard, Vancouver, ISO, and other styles
8

Rose, David, Mark F. Bradshaw, and Paul B. Hibbard. "Attention Affects the Stereoscopic Depth Aftereffect." Perception 32, no. 5 (May 2003): 635–40. http://dx.doi.org/10.1068/p3324.

Full text
Abstract:
‘Preattentive’ vision is typically considered to include several low-level processes, including the perception of depth from binocular disparity and motion parallax. However, doubt was cast on this model when it was shown that a secondary attentional task can modulate the motion aftereffect (Chaudhuri, 1990, Nature 344: 60–62). Here we investigate whether attention can also affect the depth aftereffect (Blakemore and Julesz, 1971, Science 171: 286–288). Subjects adapted to stationary or moving random-dot patterns segmented into depth planes while attention was manipulated with a secondary task (character processing at parametrically varied rates). We found that the duration of the depth aftereffect can be affected by attentional manipulations, and both its duration and that of the motion aftereffect varied with the difficulty of the secondary task. The results are discussed in the context of dynamic feedback models of vision, and support the penetrability of low-level sensory processes by attentional mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
9

Takahashi, Satoshi. "Elucidation of the mechanism of stereoscopic insufficiency and mental and physical fatigue caused by near vision - research and development on recovery methods." Impact 2021, no. 5 (June 7, 2021): 78–79. http://dx.doi.org/10.21820/23987073.2021.5.78.

Full text
Abstract:
Increased exposure to video display terminal (VDT) devices is part of 21st century life, but the consequences of this are myopia and abnormal binocular single vision, which present as mental and physical fatigue. A collaborative team is investigating the mechanism underlying abnormal binocular single vision and developing a methodology for recovery. Associate Professor Satoshi Takahashi, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Japan, and the team are looking into the interaction between binocular stereoscopic cues and monocular stereoscopic cues in binocular single vision. Their goal is to explore the effects on depth judgement, and the researchers will use their findings to construct a training system that enables correct depth judgement in binocular single vision. This extensive research will involve conducting inspections on a large number of participants and developing effective methods for inspecting binocular stereoscopic function. This will lead to the development of a device that can easily diagnose the binocular stereoscopic function of the participants and enable early detection. Takahashi and the team will also explore training methods that can help individuals recover lost eye function and encourage behavioural changes that will reduce the incidence of eye problems.
APA, Harvard, Vancouver, ISO, and other styles
10

Lü, Chao Hui, Jia Ying Pan, Chen Zhang, and Hui Ren. "Design and Implementation of a Stereoscopic Video Player for a Time-Division Display." Applied Mechanics and Materials 577 (July 2014): 1008–11. http://dx.doi.org/10.4028/www.scientific.net/amm.577.1008.

Full text
Abstract:
Three-dimensional video technology is becoming more and more popular because it provides a more natural depth perception. In this paper, a stereoscopic video player for a time-division display is designed and implemented, and viewers can use 3D shutter glasses to watch stereoscopic video through the player. The paper mainly focuses on the process of designing a Direct3D application and on the special handling required by the NVIDIA 3D Vision system for stereoscopic video. In testing, the stereoscopic video player provides stereoscopic perception and a good immersive experience.
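The abstract does not specify the player's input format, so purely as an illustration of the kind of frame handling such a player performs, the sketch below (Python with NumPy, not the authors' Direct3D implementation) splits an assumed side-by-side stereo frame into the left and right views that a time-division (frame-sequential) display alternates in sync with shutter glasses.

```python
# Illustration only: split an assumed side-by-side stereo frame into the left
# and right views that a time-division display alternates with shutter glasses.
import numpy as np

def split_side_by_side(frame: np.ndarray):
    """frame: HxWx3 side-by-side stereo frame -> (left, right) half-width views."""
    w = frame.shape[1]
    return frame[:, : w // 2], frame[:, w // 2 :]

def fields_for_time_division(frame: np.ndarray):
    """Yield the per-frame field sequence a frame-sequential display presents."""
    left, right = split_side_by_side(frame)
    yield left
    yield right
```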
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Stereoscopic vision, depth"

1

Huynh, Du Quan. "Feature-based stereo vision on a mobile platform." University of Western Australia. Dept. of Computer Science, 1994. http://theses.library.uwa.edu.au/adt-WU2003.0001.

Full text
Abstract:
It is commonly known that stereopsis is the primary way for humans to perceive depth. Although, with one eye, we can still interact very well with our environment and perform highly skillful tasks by using other visual cues such as occlusion and motion, the resultant effect of the absence of stereopsis is that the relative depth information between objects is essentially lost (Frisby, 1979). While humans fuse the images seen by the left and right eyes in a seemingly easy way, the major problem - the correspondence of features - that needs to be solved in all binocular stereo systems of machine vision is not trivial. In this thesis, line segments and corners are chosen to be the features to be matched because they typically occur at object boundaries, surface discontinuities, and across surface markings. Polygonal regions are also selected since they are known to be well-configured and are, very often, associated with salient structures in the image. The use of these high level features, although helping to diminish matching ambiguities, does not completely resolve the matching problem when the scene contains repetitive structures. The spatial relationships between the feature matching pairs enforced in the stereo matching process, as proposed in this thesis, are found to provide even stronger support for correct feature matching pairs and, as a result, incorrect matching pairs can be largely eliminated. Getting global and salient 3D structures has been an important prerequisite for environmental modelling and understanding. While research on postprocessing the 3D information obtained from stereo has been attempted (Ayache and Faugeras, 1991), the strategy presented in this thesis for retrieving salient 3D descriptions is propagating the prominent information extracted from the 2D images to the 3D scene. Thus, the matching of two prominent 2D polygonal regions yields a prominent 3D region, and the inter-relation between two 2D region matching pairs is passed on and taken as a relationship between two 3D regions. Humans, when observing and interacting with the environment, do not confine themselves to the observation and then the analysis of a single image. Similarly, stereopsis can be vastly improved with the introduction of additional stereo image pairs. Eye, head, and body movements provide essential mobility for an active change of viewpoints, the disocclusion of occluded objects, the avoidance of obstacles, and the performance of any necessary tasks at hand. This thesis presents a mobile stereo vision system that has its eye movements provided by a binocular head support and stepper motors, and its body movements provided by a mobile platform, the Labmate. With a viewer-centred coordinate system proposed in this thesis, the computation of the 3D information observed at each individual viewpoint, the merging of the 3D information at consecutive viewpoints for environmental reconstruction, and strategies for movement control are discussed in detail.
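The thesis matches line segments, corners, and polygonal regions with relational constraints; the sketch below is only a generic illustration of feature-based stereo matching with a loose epipolar check (OpenCV ORB corners on an assumed, roughly rectified pair), not Huynh's algorithm. The file names and thresholds are placeholders.

```python
# Generic sketch of feature-based stereo matching with a loose epipolar check.
# Not the thesis's algorithm; "left.png"/"right.png" are placeholder file names
# for an approximately rectified stereo pair.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)             # corner-like features
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_l, des_r)

# For a rectified pair, true matches lie on (nearly) the same row and have
# positive horizontal disparity; this prunes many ambiguous pairings.
good = []
for m in matches:
    (xl, yl), (xr, yr) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
    if abs(yl - yr) < 2.0 and xl - xr > 0:
        good.append((m, xl - xr))                # keep match and its disparity

print(f"{len(good)} epipolar-consistent matches out of {len(matches)}")
```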
APA, Harvard, Vancouver, ISO, and other styles
2

Katta, Pradeep. "Integrating depth and intensity information for vision-based head tracking." abstract and full text PDF (UNR users only), 2008. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1456416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Winterbottom, Marc. "Individual Differences in the Use of Remote Vision Stereoscopic Displays." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1433453135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chang, Kam Man. "Eye fatigue when viewing stereo images presented on a binocular display: effects of matching lens focus with stereoscopic depth cues." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?IELM%202008%20CHANG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Einecke, Nils [Verfasser], Horst-Michael [Akademischer Betreuer] Groß, Julian P. [Akademischer Betreuer] Eggert, and Darius [Akademischer Betreuer] Burschka. "Stereoscopic depth estimation for online vision systems / Nils Einecke. Gutachter: Julian P. Eggert ; Darius Burschka. Betreuer: Horst-Michael Groß." Ilmenau : Universitätsbibliothek Ilmenau, 2013. http://d-nb.info/1031421920/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

McIntire, John Paul. "Investigating the Relationship between Binocular Disparity, Viewer Discomfort, and Depth Task Performance on Stereoscopic 3D Displays." Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1400790668.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gurrieri, Luis E. "The Omnidirectional Acquisition of Stereoscopic Images of Dynamic Scenes." Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/30923.

Full text
Abstract:
This thesis analyzes the problem of acquiring stereoscopic images in all gazing directions around a reference viewpoint in space with the purpose of creating stereoscopic panoramas of non-static scenes. The generation of immersive stereoscopic imagery suitable to stimulate human stereopsis requires images from two distinct viewpoints with horizontal parallax in all gazing directions, or the ability to simulate this situation in the generated imagery. The available techniques to produce omnistereoscopic imagery for human viewing are not suitable for capturing dynamic scenes stereoscopically. This is not a trivial problem when the entire scene must be acquired at once while avoiding self-occlusion between multiple cameras. In this thesis, the term omnidirectional refers to all possible gazing directions in azimuth and a limited set of directions in elevation. The acquisition of dynamic scenes restricts the problem to those techniques suitable for collecting, in one simultaneous exposure, all the necessary visual information to recreate stereoscopic imagery in arbitrary gazing directions. The analysis of the problem starts by defining an omnistereoscopic viewing model for the physical magnitude to be measured by a panoramic image sensor intended to produce stereoscopic imagery for human viewing. Based on this model, a novel acquisition model is proposed, which is suitable to describe the omnistereoscopic techniques based on horizontal stereo. From this acquisition model, an acquisition method based on multiple cameras combined with the rendering by mosaicking of partially overlapped stereoscopic images is identified as a good candidate to produce omnistereoscopic imagery of dynamic scenes. Experimental acquisition and rendering tests were performed for different multiple-camera configurations. Furthermore, a mosaicking criterion between partially overlapped stereoscopic images based on the continuity of the perceived depth and the prediction of the location and magnitude of unwanted vertical disparities in the final stereoscopic panorama are two main contributions of this thesis. In addition, two novel omnistereoscopic acquisition and rendering techniques were introduced. The main contributions to this field are to propose a general model for the acquisition of omnistereoscopic imagery, to devise novel methods to produce omnistereoscopic imagery, and, more importantly, to contribute to the awareness of the problem of acquiring dynamic scenes within the scope of omnistereoscopic research.
APA, Harvard, Vancouver, ISO, and other styles
8

Sych, Alexey, and Олексій Сергійович Сич. "Image depth evaluation system by stream video." Thesis, National Aviation University, 2021. https://er.nau.edu.ua/handle/NAU/50762.

Full text
Abstract:
1. Depth map generation for 2D-to-3D conversion by short-term motion assisted color segmentation / Yu-Lin Chang, Chih-Ying Fang, Li-Fu Ding, Shao-Yi Chen, and Liang-Gee Chen. DSP/IC Design Lab, Graduate Institute of Electronics Engineering, National Taiwan University, Taipei, Taiwan.
2. Scharstein D., Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms // Int. Journal of Computer Vision 47. April-June 2002. PP. 7–42.
3. Разработка и исследование алгоритма вычисления карты глубины стереоизображения / В.В. Воронин [Development and study of an algorithm for computing the depth map of a stereo image].
4. Метод оценки глубины сцены и текстуры невидимых частей изображения [A method for estimating scene depth and the texture of unseen parts of an image]. URL: https://neurohive.io/ru/papers/pokazat-to-chto-skryto-metod-ocenki-glubiny-i-nevidimyh-chastej-izobrazhenij/ (Last accessed: 11.01.2021).
One application of data processing is stereo vision, in which a three-dimensional scene is obtained using models that determine the depths of key points in images from a video sequence or from several images. Taking a person as an example, a two-dimensional image is formed on the retina, and yet a person perceives the depth of space, that is, has three-dimensional, stereoscopic vision. As a result, given data on the size of an object, one can estimate the distance to it or understand which of two objects is closer. When one object is in front of another and partially occludes it, the person perceives the front object as being closer. This motivated the need to teach machine devices to do the same for various tasks. Based on the processing results, spatial information becomes available for assessing relief, obstacles while driving, and so on. The algorithm is based on combining images of the same object, photographed or filmed with constant camera parameters and in the same focal plane from different angles, and it allows information about the distance to the object to be obtained from perspective distortions (discrepancies).
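The "perspective distortions (discrepancies)" mentioned above are horizontal disparities; for a rectified stereo rig, distance follows from the standard triangulation relation Z = f * B / d. A minimal sketch with illustrative, assumed calibration values (not parameters from the cited thesis):

```python
# Depth from horizontal disparity for a rectified stereo rig: Z = f * B / d.
# The focal length, baseline, and disparities below are illustrative values.
focal_px = 800.0      # focal length in pixels
baseline_m = 0.12     # distance between the two cameras, metres

def depth_from_disparity(disparity_px: float) -> float:
    """Return scene depth in metres for a given disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

for d in (8.0, 16.0, 32.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):5.2f} m")
```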
APA, Harvard, Vancouver, ISO, and other styles
9

Salvi, Joaquim. "An approach to coded structured light to obtain three dimensional information." Doctoral thesis, Universitat de Girona, 1998. http://hdl.handle.net/10803/7714.

Full text
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through living. Nowadays, modelling the behaviour of our brain is a fiction, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling the stereopsis of humans by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and will surely see a lot of work in the future. This fact allows us to affirm that this topic is one of the most interesting ones in computer vision.

The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projected points in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem entirely, as many considerations have to be taken into account. For example, we have to consider points without correspondence due to surface occlusion or simply due to projection outside the camera's field of view.
The interest of the thesis is focused on structured light, which has been considered one of the most frequently used techniques for reducing the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern, its projection onto the scene, and an image sensor. The deformations between the pattern projected into the scene and the one captured by the camera permit three-dimensional information about the illuminated scene to be obtained. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which leads us to use computationally demanding algorithms to search for the correct matches.
In recent years, another structured light technique has increased in importance. This technique is based on the codification of the light projected onto the scene so that it can be used as a tool to obtain a unique match. Each token of light is imaged by the camera, and we have to read its label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, and a survey of coded structured light, are presented and discussed. The work carried out in the frame of this thesis has made it possible to present a new coded structured light pattern which solves the correspondence problem uniquely and robustly. Uniquely, as each token of light is coded by a different word, which removes the problem of multiple matching. Robustly, since the pattern has been coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of 3D measurement of static objects, and of the more complicated measurement of moving objects. The technique can be used in both cases, as the pattern is coded by a single projection shot. It can therefore be used in several applications of robot vision.
Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the correspondence points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from (a) image acquisition; (b) further image enhancement, filtering and processing; and (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
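Once a correspondence is established, whether by the coded pattern described in the thesis or by ordinary stereo matching, the 3D point follows by triangulating with the calibrated projection matrices. The sketch below is a generic illustration of that last step using OpenCV; the intrinsics, the 10 cm baseline, and the pixel coordinates are invented for the example and are not taken from the thesis.

```python
# Once correspondences are decoded (coded pattern or stereo matching),
# 3D points follow by triangulating with calibrated projection matrices.
# K, P1, P2 and the pixel coordinates below are illustrative assumptions.
import numpy as np
import cv2

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])      # reference camera
R, t = np.eye(3), np.array([[-0.1], [0.0], [0.0]])     # second view 10 cm to the right
P2 = K @ np.hstack([R, t])

pts1 = np.array([[320.0, 240.0], [400.0, 260.0]]).T    # 2xN pixels in view 1
pts2 = np.array([[300.0, 240.0], [385.0, 260.0]]).T    # matching pixels in view 2

hom = cv2.triangulatePoints(P1, P2, pts1, pts2)        # 4xN homogeneous points
xyz = (hom[:3] / hom[3]).T                             # Euclidean 3D coordinates
print(xyz)                                             # depths of roughly 3.5 m and 4.7 m
```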
APA, Harvard, Vancouver, ISO, and other styles
10

Fahle, Manfred, and Tom Troscianko. "Computation of Texture and Stereoscopic Depth in Humans." 1989. http://hdl.handle.net/1721.1/6002.

Full text
Abstract:
The computation of texture and of stereoscopic depth is limited by a number of factors in the design of the optical front-end and subsequent processing stages in humans and machines. A number of limiting factors in the human visual system, such as resolution of the optics and opto-electronic interface, contrast, luminance, temporal resolution and eccentricity are reviewed and evaluated concerning their relevance for the recognition of texture and stereoscopic depth. The algorithms used by the human brain to discriminate between textures and to compute stereoscopic depth are very fast and efficient. Their study might be beneficial for the development of better algorithms in machine vision.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Stereoscopic vision, depth"

1

Diner, Daniel B. Stereo depth distortions in teleoperation. Pasadena, Calif: National Aeronautics and Space Administration, Jet Propulsion Laboratory, California Institute of Technology, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Parrish, Russell V. Determination of depth-viewing volumes for stereo three-dimensional graphic displays. Hampton, Va: Langley Research Center, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rogers, Brian J., and Ian P. Howard. Perceiving in Depth, Volume 2: Stereoscopic Vision. Oxford University Press, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Williams, Steven P., Langley Research Center, United States. Army Aviation Research and Development Command., and United States. Army Aviation Systems Command., eds. Determination of depth-viewing volumes for stereo three-dimensional graphic displays. Washington, D.C.: National Aeronautics and Space Administration, Office of Management, Scientific and Technical Information Division, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Determination of depth-viewing volumes for stereo three-dimensional graphic displays. Washington, D.C: National Aeronautics and Space Administration, Office of Management, Scientific and Technical Information Division, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Stereoscopic vision, depth"

1

Ludwig, Kai-Oliver, Heiko Neumann, and Bernd Neumann. "Local stereoscopic depth estimation using ocular stripe maps." In Computer Vision — ECCV'92, 373–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55426-2_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rogers, Brian J. "The Perception and Representation of Depth and Slant in Stereoscopic Surfaces." In Artificial and Biological Vision Systems, 241–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-77840-7_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Howard, Ian P., and Brian J. Rogers. "Limits of stereoscopic vision." In Seeing in Depth, 143–213. Oxford University Press, 2008. http://dx.doi.org/10.1093/acprof:oso/9780195367607.003.0005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Howard, Ian P., and Brian J. Rogers. "Depth contrast." In Perceiving in Depth, Volume 2: Stereoscopic Vision, 433–69. Oxford University Press, 2012. http://dx.doi.org/10.1093/acprof:oso/9780199764150.003.0406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Howard, Ian P., and Brian J. Rogers. "Binocular disparity and depth perception." In Perceiving in Depth, Volume 2: Stereoscopic Vision, 385–432. Oxford University Press, 2012. http://dx.doi.org/10.1093/acprof:oso/9780199764150.003.0350.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Davis, Elizabeth Thorpe, and Larry F. Hodges. "Human Stereopsis, Fusion, and Stereoscopic Virtual Environments." In Virtual Environments and Advanced Interface Design. Oxford University Press, 1995. http://dx.doi.org/10.1093/oso/9780195075557.003.0013.

Full text
Abstract:
Two fundamental purposes of human spatial perception, in either a real or virtual 3D environment, are to determine where objects are located in the environment and to distinguish one object from another. Although various sensory inputs, such as haptic and auditory inputs, can provide this spatial information, vision usually provides the most accurate, salient, and useful information (Welch and Warren, 1986). Moreover, of the visual cues available to humans, stereopsis provides an enhanced perception of depth and of three-dimensionality for a visual scene (Yeh and Silverstein, 1992). (Stereopsis or stereoscopic vision results from the fusion of the two slightly different views of the external world that our laterally displaced eyes receive (Schor, 1987; Tyler, 1983).) In fact, users often prefer using 3D stereoscopic displays (Spain and Holzhausen, 1991) and find that such displays provide more fun and excitement than do simpler monoscopic displays (Wichanski, 1991). Thus, in creating 3D virtual environments or 3D simulated displays, much attention recently has been devoted to visual 3D stereoscopic displays. Yet, given the costs and technical requirements of such displays, we should consider several issues. First, we should consider in what conditions and situations these stereoscopic displays enhance perception and performance. Second, we should consider how binocular geometry and various spatial factors can affect human stereoscopic vision and, thus, constrain the design and use of stereoscopic displays. Finally, we should consider the modeling geometry of the software, the display geometry of the hardware, and some technological limitations that constrain the design and use of stereoscopic displays by humans. In the following section we consider when 3D stereoscopic displays are useful and why they are useful in some conditions but not others. In the section after that we review some basic concepts about human stereopsis and fusion that are of interest to those who design or use 3D stereoscopic displays. Also in that section we point out some spatial factors that limit stereopsis and fusion in human vision as well as some potential problems that should be considered in designing and using 3D stereoscopic displays. Following that we discuss some software and hardware issues, such as modelling geometry and display geometry as well as geometric distortions and other artifacts that can affect human perception.
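The binocular display geometry the chapter discusses can be made concrete with the usual parallax relation: for interocular distance e, viewing distance D, and a simulated point at distance Z, the required on-screen parallax is p = e * (Z - D) / Z, positive (uncrossed) behind the screen and negative (crossed) in front of it. A small sketch with assumed viewing parameters, not values from the chapter:

```python
# Screen parallax needed to place a virtual point at distance Z when the
# viewer sits at distance D from the display: p = e * (Z - D) / Z.
# e, D, and the Z values are illustrative assumptions.
e_m = 0.063           # typical interocular distance, metres
screen_d_m = 0.70     # viewing distance to the display, metres

def screen_parallax(z_m: float) -> float:
    """Horizontal on-screen separation (metres) of the left/right images.
    Positive = uncrossed (behind the screen), negative = crossed (in front)."""
    return e_m * (z_m - screen_d_m) / z_m

for z in (0.35, 0.70, 1.40, 1e9):
    print(f"object at {z:>12.2f} m -> parallax {screen_parallax(z)*1000:6.1f} mm")
```

Note how uncrossed parallax never exceeds the interocular distance, even for points at infinity, which is one reason display designers bound the usable depth volume.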
APA, Harvard, Vancouver, ISO, and other styles
7

Barthakur, Manami, and Kandarpa Kumar Sarma. "Incorporation of Depth in Two Dimensional Video Captures." In Emerging Technologies in Intelligent Applications for Image and Video Processing, 88–109. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9685-3.ch004.

Full text
Abstract:
Stereoscopic vision in cameras is an interesting field of study. This type of vision is important for incorporating depth into video images, which is needed to measure the distance of objects from the camera properly, i.e. for converting two-dimensional video into three-dimensional video. In this chapter, some of the basic theoretical aspects of the methods for estimating depth in 2D video and the current state of research are discussed. These methods are frequently used in algorithms for estimating depth in 2D-to-3D video conversion techniques. Some recent algorithms for incorporating depth into 2D video are also discussed, and from the literature review a simple and generic system for incorporating depth into 2D video is presented.
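In its simplest form, the conversion the chapter surveys can be realized by depth-image-based rendering: shift each pixel horizontally by a disparity derived from its estimated depth to synthesize a second view. The sketch below is a bare-bones version with occlusion handling and hole filling omitted; the focal length and virtual baseline are assumptions, and it is not any specific algorithm from the chapter.

```python
# Simplified depth-image-based rendering: synthesize a second view by shifting
# each pixel horizontally by a disparity derived from its depth value.
# Hole filling and occlusion handling are omitted; parameters are assumptions.
import numpy as np

def render_right_view(frame: np.ndarray, depth: np.ndarray,
                      focal_px: float = 700.0, baseline_m: float = 0.06) -> np.ndarray:
    """frame: HxWx3 image, depth: HxW metric depth map (metres)."""
    h, w = depth.shape
    right = np.zeros_like(frame)
    disparity = np.round(focal_px * baseline_m / np.clip(depth, 0.1, None)).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]          # nearer pixels shift further
            if 0 <= xr < w:
                right[y, xr] = frame[y, x]
    return right
```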
APA, Harvard, Vancouver, ISO, and other styles
8

Mahmoudpour, Saeed, and Manbae Kim. "A study on the relationship between depth map quality and stereoscopic image quality using upsampled depth maps." In Emerging Trends in Image Processing, Computer Vision and Pattern Recognition, 149–60. Elsevier, 2015. http://dx.doi.org/10.1016/b978-0-12-802045-6.00010-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kerber, Kristen L. "Testing Stereopsis in Children." In The Pediatric Eye Exam Quick Reference Guide, 32–43. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8044-8.ch003.

Full text
Abstract:
Stereopsis develops very early in life and is thought to be present in a normally developing child by six months of age. In order to develop stereopsis, multiple components of visual development must be intact, including visual acuity and bifoveal fixation. Stereopsis is the most sensitive way to assess sensory fusion but can be unreliable in very young age groups due to difficulty understanding the test or instructions. It is best to choose an option with global stereopsis (high-level cortical stereo), as local stereopsis may overestimate ability due to available monocular cues. Global stereopsis tests are created using random dot stereograms (RDS), computer-generated patterns that create a stereoscopic form, while local tests contain line stereograms, which create horizontal retinal image disparity giving the perception of depth. Stereopsis can be affected by strabismus, amblyopia, and other binocular vision dysfunctions that interfere with visual efficiency (especially in school-age children). The chapter discusses the most commonly used clinical tests of global and local stereopsis.
APA, Harvard, Vancouver, ISO, and other styles
10

Lindsey, Rachel McBride. "Beyond the Sense Horizon." In A Communion of Shadows. University of North Carolina Press, 2017. http://dx.doi.org/10.5149/northcarolina/9781469633725.003.0006.

Full text
Abstract:
This chapter explores the communion of shadows through the optical marvel of the stereoscope. First developed in the decades before the invention of photography, stereographs began as simple drawings designed to explore binocular vision by simulating dimensional depth on a flat surface. With the invention of the daguerreotype and subsequent print photography, stereographs became immensely popular forms of nineteenth-century visual culture. The effect of dimension was accomplished by positioning two nearly exact photographs side by side and viewed through prismatic lenses fitted into a hood, a contraption known as a stereoscope. Like halftone tours and biblical photographs, stereographs of the Holy Land invited beholders to dismiss the photographic contemporary in their sights on a biblical imaginary. But through the visual sensation of the stereoscope, beholders imagined themselves transported into the biblical past in a way other photographic technologies had not enabled.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Stereoscopic vision, depth"

1

Fatah, O. Abdul, A. Aggoun, M. R. Swash, E. Alazawi, B. Li, J. C. Fernandez, D. Chen, and E. Tsekleves. "Generating stereoscopic 3D from holoscopic 3D." In 2013 3DTV Vision Beyond Depth (3DTV-CON). IEEE, 2013. http://dx.doi.org/10.1109/3dtv.2013.6676638.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Aflaki, Payman, Miska M. Hannuksela, Hamed Sarbolandi, and Moncef Gabbouj. "Rendering stereoscopic video for simultaneous 2D and 3D presentation." In 2013 3DTV Vision Beyond Depth (3DTV-CON). IEEE, 2013. http://dx.doi.org/10.1109/3dtv.2013.6676658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Grimaldi, Lucia, Matthias Wegener, Arion Neddens, and Klaas Schuur. "A comparative study of 3D transmission formats for 4K auto-stereoscopic displays." In 2013 3DTV Vision Beyond Depth (3DTV-CON). IEEE, 2013. http://dx.doi.org/10.1109/3dtv.2013.6676646.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Keselman, Leonid, John Iselin Woodfill, Anders Grunnet-Jepsen, and Achintya Bhowmik. "Intel(R) RealSense(TM) Stereoscopic Depth Cameras." In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2017. http://dx.doi.org/10.1109/cvprw.2017.167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Martin, Virginia, Julian Cabrera, and Narciso Garcia. "Depth filtering for auto-stereoscopic mobile devices." In 2014 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON 2014). IEEE, 2014. http://dx.doi.org/10.1109/3dtv.2014.6874750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kemna, Maarten, Daan M. Pool, Mark Wentink, and Max Mulder. "Manual Control Behavior in Stereoscopic Vision-Enhanced Depth Control Tasks." In AIAA Scitech 2020 Forum. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2020. http://dx.doi.org/10.2514/6.2020-2265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Islam, Md Baharul, Lai-Kuan Wong, Kok-Lim Low, and Chee Onn Wong. "Warping-Based Stereoscopic 3D Video Retargeting With Depth Remapping." In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019. http://dx.doi.org/10.1109/wacv.2019.00181.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kytö, Mikko, Mikko Nuutinen, and Pirkko Oittinen. "Method for measuring stereo camera depth accuracy based on stereoscopic vision." In IS&T/SPIE Electronic Imaging, edited by J. Angelo Beraldin, Geraldine S. Cheok, Michael B. McCarthy, Ulrich Neuschaefer-Rube, Atilla M. Baskurt, Ian E. McDowall, and Margaret Dolinsky. SPIE, 2011. http://dx.doi.org/10.1117/12.872015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jumisko-Pyykko, Satu, Tomi Haustola, Atanas Boev, and Atanas Gotchev. "Juxtaposition between compression and depth for stereoscopic image quality on portable auto-stereoscopic display." In 2011 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON 2011). IEEE, 2011. http://dx.doi.org/10.1109/3dtv.2011.5877232.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Chensheng, Xiaochun Wang, Joris S. M. Vergeest, and Tjamme Wiegers. "On the Stereoscopic Composition of Wide Baseline Stereo Pairs." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-86357.

Full text
Abstract:
Wide baseline cameras are broadly utilized in binocular vision systems, delivering depth information and stereoscopic images of the scene that are crucial both in virtual reality and in computer vision applications. However, due to the large distance between the two cameras, the stereoscopic composition of wide baseline stereo pairs can hardly fit the human eye parallax. In this paper, techniques and algorithms for the stereoscopic composition of wide baseline stereo pairs in binocular vision are investigated. By incorporating the human parallax limitation, a novel algorithm capable of adjusting the wide baseline stereo pairs to compose a high-quality stereoscopic image is formulated. The main idea behind the proposed algorithm is, by simulating eyeball rotation, to shift the wide baseline stereo pairs closer to each other to fit the human parallax limit. This makes it possible for the wide baseline stereo pairs to be composed into a recognizable stereoscopic image in terms of human parallax, with a minor cost of variation in the depth cue. In addition, the depth variations before and after the shifting of the stereo pairs are evaluated by conducting an error estimation. Examples are provided for the evaluation of the proposed algorithm, and the quality of the composed stereoscopic images shows that the proposed algorithm is both valid and effective.
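The basic adjustment the paper describes, bringing the screen disparities of a wide-baseline pair back within the human parallax limit, can be approximated by a simple horizontal image translation; the sketch below crops opposite edges of the two views, which offsets every screen disparity by the same amount. It is only a crude stand-in for the authors' eyeball-rotation-based method, and the function and parameter names are invented for illustration.

```python
# Crude horizontal image translation: cropping opposite edges of the two views
# offsets every screen disparity by the same number of pixels, pulling the
# extremes back toward a comfortable parallax range. Not the paper's algorithm.
import numpy as np

def translate_pair(left: np.ndarray, right: np.ndarray, shift_px: int):
    """Offset all disparities by shift_px by cropping opposite edges,
    returning two views of equal width."""
    if shift_px <= 0:
        return left, right
    left_adj = left[:, shift_px:]        # drop columns from the left edge
    right_adj = right[:, :-shift_px]     # drop columns from the right edge
    return left_adj, right_adj
```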
APA, Harvard, Vancouver, ISO, and other styles
