Academic literature on the topic 'Stereoscopic cameras'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Stereoscopic cameras.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Stereoscopic cameras"

1

Chiu, Chuang-Yuan, Michael Thelwell, Terry Senior, Simon Choppin, John Hart, and Jon Wheat. "Comparison of depth cameras for three-dimensional reconstruction in medicine." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 233, no. 9 (June 28, 2019): 938–47. http://dx.doi.org/10.1177/0954411919859922.

Full text
Abstract:
KinectFusion is a typical three-dimensional reconstruction technique which enables generation of individual three-dimensional human models from consumer depth cameras for understanding body shapes. The aim of this study was to compare three-dimensional reconstruction results obtained using KinectFusion from data collected with two different types of depth camera (time-of-flight and stereoscopic cameras) and compare these results with those of a commercial three-dimensional scanning system to determine which type of depth camera gives improved reconstruction. Torso mannequins and machined aluminium cylinders were used as the test objects for this study. Two depth cameras, Microsoft Kinect V2 and Intel Realsense D435, were selected as the representatives of time-of-flight and stereoscopic cameras, respectively, to capture scan data for the reconstruction of three-dimensional point clouds by KinectFusion techniques. The results showed that both time-of-flight and stereoscopic cameras, using the developed rotating camera rig, provided repeatable body scanning data with minimal operator-induced error. However, the time-of-flight camera generated more accurate three-dimensional point clouds than the stereoscopic sensor. Thus, this suggests that applications requiring the generation of accurate three-dimensional human models by KinectFusion techniques should consider using a time-of-flight camera, such as the Microsoft Kinect V2, as the image capturing sensor.
APA, Harvard, Vancouver, ISO, and other styles
2

Novac, Petr, Vladimír Mostýn, Tomáš Kot, and Václav Krys. "Stereoscopic System with the Tight Tilted Cameras." Applied Mechanics and Materials 332 (July 2013): 154–64. http://dx.doi.org/10.4028/www.scientific.net/amm.332.154.

Full text
Abstract:
This paper describes a stereoscopic camera system based on a pair of separate cameras. The cameras are stationary and tilted towards each other. The value of their tilting angle is derived with regard to the maximal possible utilization of the chip area while minimizing its unused parts. This can increase the size of the cropped left and right images as well as their resolution. The results were tested and verified on a real stereoscopic system used for the exploratory, emergency and rescue mobile robots Hercules, Ares and Hardy.
APA, Harvard, Vancouver, ISO, and other styles
3

Rovira-Más, Francisco, Qi Wang, and Qin Zhang. "Bifocal Stereoscopic Vision for Intelligent Vehicles." International Journal of Vehicular Technology 2009 (March 29, 2009): 1–9. http://dx.doi.org/10.1155/2009/123231.

Full text
Abstract:
The numerous benefits of real-time 3D awareness for autonomous vehicles have motivated the incorporation of stereo cameras into the perception units of intelligent vehicles. The availability of the distance between camera and objects is essential for such applications as automatic guidance and safeguarding; however, a poor estimation of the position of the objects in front of the vehicle can result in dangerous actions. There is an emphasis, therefore, on the design of perception engines that can make available a rich and reliable interval of ranges in front of the camera. The objective of this research is to develop a stereo head that is capable of capturing 3D information from two cameras simultaneously, sensing different, but complementary, fields of view. In order to do so, the concept of bifocal perception was defined and physically materialized in an experimental bifocal stereo camera. The assembled system was validated through field tests, and results showed that each stereo pair of the head excelled at a singular range interval. The fusion of both intervals led to a more faithful representation of reality.
APA, Harvard, Vancouver, ISO, and other styles
4

Jia, Chen Yu, Ze Hua Gao, Xun Bo Yu, Xin Zhu Sang, and Tian Qi Zhao. "Auto-Stereoscopic 3D Video Conversation System Based on an Improved Eye Tracking Method." Applied Mechanics and Materials 513-517 (February 2014): 3907–10. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.3907.

Full text
Abstract:
An auto-stereoscopic 3D video conversation system is demonstrated with an improved eye-tracking method based on a lenticular sheet and two cameras. The two cameras are used to capture stereoscopic picture pairs and to locate the viewer's position with the improved eye-tracking method. The computer combines the stereoscopic picture pairs with different masks on the graphics processing unit. Low-crosstalk, correct stereoscopic video pairs for end-to-end communication are achieved.
APA, Harvard, Vancouver, ISO, and other styles
5

Choi, Sung-In, and Soon-Yong Park. "DOF Correction of Heterogeneous Stereoscopic Cameras." Journal of the Institute of Electronics and Information Engineers 51, no. 7 (July 25, 2014): 169–79. http://dx.doi.org/10.5573/ieie.2014.51.7.169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Birnbaum, Faith A., Aaron Wang, and Christopher J. Brady. "Stereoscopic Surgical Recording Using GoPro Cameras." JAMA Ophthalmology 133, no. 12 (December 1, 2015): 1483. http://dx.doi.org/10.1001/jamaophthalmol.2015.3865.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lavrushkin, Sergey, Ivan Molodetskikh, Konstantin Kozhemyakov, and Dmitriy Vatolin. "Stereoscopic quality assessment of 1,000 VR180 videos using 8 metrics." Electronic Imaging 2021, no. 2 (January 18, 2021): 350–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.2.sda-350.

Full text
Abstract:
In this work we present a large-scale analysis of stereoscopic quality for 1,000 VR180 YouTube videos. VR180 is a new S3D format for VR devices which stores the view for only a single hemisphere. Instead of a multi-camera rig, this format requires just two cameras with fisheye lenses similar to conventional 3D-shooting, resulting in cost reduction of the final device and simplification of the shooting process. But as in the conventional stereoscopic format, VR180 videos suffer from stereoscopy-related problems specific to 3D shooting. In this paper we analyze videos to detect the most common stereoscopic artifacts using objective quality metrics, including color, sharpness and geometry mismatch between views and more. Our study depicts the current state of S3D technical quality of VR180 videos and reveals its overall poor condition, as most of the analyzed videos exhibit at least one of the stereoscopic artifacts, which shows a necessity for stereoscopic quality control in modern VR180 shooting.
APA, Harvard, Vancouver, ISO, and other styles
8

Rzeszotarski, Dariusz, and Paweł Pełczyński. "Software Application for Calibration of Stereoscopic Camera Setups." Metrology and Measurement Systems 19, no. 4 (December 1, 2012): 805–16. http://dx.doi.org/10.2478/v10178-012-0072-1.

Full text
Abstract:
The article describes an application for calibration of a stereovision camera setup constructed for the needs of an electronic travel aid for the blind. The application can be used to calibrate any stereovision system consisting of two DirectShow compatible cameras using a reference checkerboard of known dimensions. A method for experimental verification of the correctness of the calibration is also presented. The developed software is intended for calibration of mobile stereovision systems that focus mainly on obstacle detection.
APA, Harvard, Vancouver, ISO, and other styles
9

Garg, Sunir. "Stereoscopic Surgical Recording Using GoPro Cameras—Reply." JAMA Ophthalmology 133, no. 12 (December 1, 2015): 1484. http://dx.doi.org/10.1001/jamaophthalmol.2015.3879.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sommer, Bjorn. "Hybrid Stereoscopic Photography - Analogue Stereo Photography meets the Digital Age with the StereoCompass app." Electronic Imaging 2021, no. 2 (January 18, 2021): 58–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.2.sda-058.

Full text
Abstract:
Stereoscopic photography has a long history which started just a few years after the first known photo was taken: in 1849, Sir David Brewster introduced the first binocular camera. Whereas mobile photography is omnipresent because of the wide distribution of smart phones, stereoscopic photography is only used by a very small set of enthusiasts or professional (stereo) photographers. One important aspect of professional stereoscopic photography is that the required technology is usually quite expensive. Here, we present an alternative approach, uniting easily affordable vintage analogue SLR cameras with smart phone technology to measure and predict the stereo base/camera separation as well as the focal distance to zero parallax. For this purpose, the StereoCompass app was developed, which utilizes a number of smart phone sensors combined with a Google Maps-based distance measurement. Three application cases including red/cyan anaglyph stereo photographs are shown. More information and the app can be found at: http://stereocompass.i2d.uk
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Stereoscopic cameras"

1

Gurrieri, Luis E. "The Omnidirectional Acquisition of Stereoscopic Images of Dynamic Scenes." Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/30923.

Full text
Abstract:
This thesis analyzes the problem of acquiring stereoscopic images in all gazing directions around a reference viewpoint in space with the purpose of creating stereoscopic panoramas of non-static scenes. The generation of immersive stereoscopic imagery suitable to stimulate human stereopsis requires images from two distinct viewpoints with horizontal parallax in all gazing directions, or to be able to simulate this situation in the generated imagery. The available techniques to produce omnistereoscopic imagery for human viewing are not suitable to capture dynamic scenes stereoscopically. This is not a trivial problem when the entire scene must be acquired at once while avoiding self-occlusion between multiple cameras. In this thesis, the term omnidirectional refers to all possible gazing directions in azimuth and a limited set of directions in elevation. The acquisition of dynamic scenes restricts the problem to those techniques suitable for collecting in one simultaneous exposure all the necessary visual information to recreate stereoscopic imagery in arbitrary gazing directions. The analysis of the problem starts by defining an omnistereoscopic viewing model for the physical magnitude to be measured by a panoramic image sensor intended to produce stereoscopic imagery for human viewing. Based on this model, a novel acquisition model is proposed, which is suitable to describe the omnistereoscopic techniques based on horizontal stereo. From this acquisition model, an acquisition method based on multiple cameras combined with the rendering by mosaicking of partially overlapped stereoscopic images is identified as a good candidate to produce omnistereoscopic imagery of dynamic scenes. Experimental acquisition and rendering tests were performed for different multiple-camera configurations. Furthermore, a mosaicking criterion between partially overlapped stereoscopic images based on the continuity of the perceived depth and the prediction of the location and magnitude of unwanted vertical disparities in the final stereoscopic panorama are two main contributions of this thesis. In addition, two novel omnistereoscopic acquisition and rendering techniques were introduced. The main contributions to this field are to propose a general model for the acquisition of omnistereoscopic imagery, to devise novel methods to produce omnistereoscopic imagery, and more importantly, to contribute to the awareness of the problem of acquiring dynamic scenes within the scope of omnistereoscopic research.
APA, Harvard, Vancouver, ISO, and other styles
2

Prakash, Deepak. "Stereoscopic 3D viewing systems using a single sensor camera." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1196268883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Huynh, Du Quan. "Feature-based stereo vision on a mobile platform." University of Western Australia. Dept. of Computer Science, 1994. http://theses.library.uwa.edu.au/adt-WU2003.0001.

Full text
Abstract:
It is commonly known that stereopsis is the primary way for humans to perceive depth. Although, with one eye, we can still interact very well with our environment and do very highly skillful tasks by using other visual cues such as occlusion and motion, the resultant effect of the absence of stereopsis is that the relative depth information between objects is essentially lost (Frisby, 1979). While humans fuse the images seen by the left and right eyes in a seemingly easy way, the major problem - the correspondence of features - that needs to be solved in all binocular stereo systems of machine vision is not trivial. In this thesis, line segments and corners are chosen to be the features to be matched because they typically occur at object boundaries, surface discontinuities, and across surface markings. Polygonal regions are also selected since they are known to be well-configured and are, very often, associated with salient structures in the image. The use of these high level features, although helping to diminish matching ambiguities, does not completely resolve the matching problem when the scene contains repetitive structures. The spatial relationships between the feature matching pairs enforced in the stereo matching process, as proposed in this thesis, are found to provide even stronger support for correct feature matching pairs and, as a result, incorrect matching pairs can be largely eliminated. Getting global and salient 3D structures has been an important prerequisite for environmental modelling and understanding. While research on postprocessing the 3D information obtained from stereo has been attempted (Ayache and Faugeras, 1991), the strategy presented in this thesis for retrieving salient 3D descriptions is propagating the prominent information extracted from the 2D images to the 3D scene. Thus, the matching of two prominent 2D polygonal regions yields a prominent 3D region, and the inter-relation between two 2D region matching pairs is passed on and taken as a relationship between two 3D regions. Humans, when observing and interacting with the environment, do not confine themselves to the observation and then the analysis of a single image. Similarly stereopsis can be vastly improved with the introduction of additional stereo image pairs. Eye, head, and body movements provide essential mobility for an active change of viewpoints, the disocclusion of occluded objects, the avoidance of obstacles, and the performance of any necessary tasks on hand. This thesis presents a mobile stereo vision system that has its eye movements provided by a binocular head support and stepper motors, and its body movements provided by a mobile platform, the Labmate. With a viewer-centred coordinate system proposed in this thesis, the computation of the 3D information observed at each individual viewpoint, the merging of the 3D information at consecutive viewpoints for environmental reconstruction, and strategies for movement control are discussed in detail.
APA, Harvard, Vancouver, ISO, and other styles
4

O'Kennedy, Brian James. "Stereo camera calibration." Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/53063.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2002.
ENGLISH ABSTRACT: We present all the components needed for a fully-fledged stereo vision system, ranging from object detection through camera calibration to depth perception. We propose an efficient, automatic and practical method to calibrate cameras for use in 3D machine vision metrology. We develop an automated stereo calibration system that only requires a series of views of a manufactured calibration object in unknown positions. The system is tested against real and synthetic data, and we investigate the robustness of the proposed method compared to standard calibration practice. All the aspects of 3D stereo reconstruction is dealt with and we present the necessary algorithms to perform epipolar rectification on images as well as solving the correspondence and triangulation problems. It was found that the system performs well even in the presence of noise, and calibration is easy and requires no specialist knowledge.
AFRIKAANSE OPSOMMING (translated): We describe all the components of a comprehensive stereo vision system. The core of the system is an effective, automated and practical method to calibrate cameras for use in 3D computer vision. We develop an automatic stereo camera calibration system that requires only a series of images of a calibration object in unknown positions. The system is tested with real and synthetic data, and we compare the robustness of the method with the standard algorithms. All aspects of 3D stereo reconstruction are covered, and we describe the necessary algorithms for performing epipolar rectification on images as well as methods for solving the correspondence and depth problems. We show that the system delivers good results in the presence of noise and that camera calibration can be performed automatically without requiring any specialist knowledge.
APA, Harvard, Vancouver, ISO, and other styles
5

Katta, Pradeep. "Integrating depth and intensity information for vision-based head tracking." abstract and full text PDF (UNR users only), 2008. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1456416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tien, Fang-Chih. "Using neural networks for three-dimensional measurement in stereo vision systems /." free to MU campus, to others for purchase, 1996. http://wwwlib.umi.com/cr/mo/fullcit?p9720552.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nadella, Suman. "Multi camera stereo and tracking patient motion for SPECT scanning systems." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-082905-161037/.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Feature matching in multiple cameras; Multi camera stereo computation; Patient Motion Tracking; SPECT Imaging. Includes bibliographical references (p. 84-88).
APA, Harvard, Vancouver, ISO, and other styles
8

Converse, Blake L. "A real-time, single-camera, stereoscopic video device." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA294105.

Full text
Abstract:
Thesis (M.S. in Astronautical Engineering and M.S. in Applied Physics)--Naval Postgraduate School, December 1994.
Thesis advisor(s): David Cleary, Oscar Biblarz. Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
9

Prakash, Deepak. "Stereoscopic 3D viewing systems using a single sensor camera." The Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=osu1196268883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Asbery, Richard. "The design, development and evaluation of an active stereoscopic telepresence system." Thesis, University of Surrey, 1997. http://epubs.surrey.ac.uk/843020/.

Full text
Abstract:
The work presented in this thesis documents the design, development and evaluation of a high performance stereoscopic telepresence system. Such a system offers the ability to enhance the operator perception of a remote and potentially hazardous environment as an aid to performing a remote task. To achieve this sensation of presence demands the design of a highly responsive remote camera system. A high performance stereo platform has been designed which utilises state-of-the-art cameras, servo drives and gearboxes. It possesses four degrees of freedom: pan, elevation and two camera vergence motions, all of which are controlled simultaneously in real-time by an open architecture controller. This has been developed on a PC/AT bus architecture and utilises a PID control regime. The controller can be easily interfaced to a range of input devices such as electromagnetic head tracking systems which provide the trajectory data for controlling the remote mechatronic platform. Experiments have been performed to evaluate both the mechatronic system and operator oriented performance aspects of the telepresence system. The mechatronic system investigations identify the overall system latency to be 80 ms, which is considerably less than other current systems. The operator oriented evaluation demonstrates the necessity for a head tracked telepresence system with a head mounted display system. The need for a low latency period to achieve high operator performance and comfort during certain tasks is also established. This is evident during trajectory following experiments where the operator is required to track a highly dynamic target. The telepresence system has been fully evaluated and demonstrated to enhance operator spatial perception via a sensation of visual immersion in the remote environment.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Stereoscopic cameras"

1

Rozhkov, S. N. Stereoskopii︠a︡ v kino-, foto-, videotekhnike: Terminologicheskiĭ slovarʹ [Stereoscopy in film, photo and video technology: A terminological dictionary]. Moskva: Paradiz, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Active computer vision by cooperative focus and stereo. New York: Springer-Verlag, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Giancola, Silvio, Matteo Valenti, and Remo Sala. A Survey on 3D Cameras: Metrological Comparison of Time-of-Flight, Structured-Light and Active Stereoscopy Technologies. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91761-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Giancola, Silvio, Matteo Valenti, and Remo Sala. A Survey on 3D Cameras: Metrological Comparison of Time-of-Flight, Structured-Light and Active Stereoscopy Technologies. Springer, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shoot 3D Video Like a Pro: 3D Camcorder Tips, Tricks & Secrets - the 3D Movie Making Manual They Forgot to Include. Organik Media Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Stereoscopic cameras"

1

Diner, Daniel B., and Derek H. Fender. "Spatially-sampling Cameras and Monitors." In Human Engineering in Stereoscopic Viewing Devices, 73–121. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-1274-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hansard, Miles, Seungkyu Lee, Ouk Choi, and Radu Horaud. "Alignment of Time-of-Flight and Stereoscopic Data." In Time-of-Flight Cameras, 59–75. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4658-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Diner, Daniel B., and Derek H. Fender. "Reducing Depth Distortions for Converged Cameras." In Human Engineering in Stereoscopic Viewing Devices, 153–78. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-1274-9_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hansard, Miles, Seungkyu Lee, Ouk Choi, and Radu Horaud. "A Mixed Time-of-Flight and Stereoscopic Camera System." In Time-of-Flight Cameras, 77–96. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4658-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Aguiar, João, Andry Maykol Pinto, Nuno A. Cruz, and Anibal C. Matos. "The Impact of Convergence Cameras in a Stereoscopic System for AUVs." In Lecture Notes in Computer Science, 521–29. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41501-7_58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rotter, Paweł. "Virtual Cameras and Stereoscopic Imaging for the Supervision of Industrial Processes." In Artificial Intelligence and Soft Computing, 561–69. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59063-9_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Selby, Boris Peter, Georgios Sakas, Wolfgang-Dieter Groch, and Uwe Stilla. "Absolute Orientation of Stereoscopic Cameras by Aligning Contours in Pairs of Images and Reference Images." In Photogrammetric Image Analysis, 25–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24393-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Diner, Daniel B., and Derek H. Fender. "Double Camera Systems." In Human Engineering in Stereoscopic Viewing Devices, 49–65. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-1274-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Diner, Daniel B., and Derek H. Fender. "Single Camera Systems." In Human Engineering in Stereoscopic Viewing Devices, 67–72. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-1274-9_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Diner, Daniel B., and Derek H. Fender. "Setting up a Stereoscopic Camera System." In Human Engineering in Stereoscopic Viewing Devices, 179–89. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-1274-9_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Stereoscopic cameras"

1

Zhang, Buyue, Sreenivas Kothandaraman, and Aziz Umit Batur. "Auto convergence for stereoscopic 3D cameras." In IS&T/SPIE Electronic Imaging, edited by Andrew J. Woods, Nicolas S. Holliman, and Gregg E. Favalora. SPIE, 2012. http://dx.doi.org/10.1117/12.906768.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Keselman, Leonid, John Iselin Woodfill, Anders Grunnet-Jepsen, and Achintya Bhowmik. "Intel(R) RealSense(TM) Stereoscopic Depth Cameras." In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2017. http://dx.doi.org/10.1109/cvprw.2017.167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lipton, Lenny, and John Rupkalvis. "A stereoscopic lens for digital cinema cameras." In SPIE/IS&T Electronic Imaging, edited by Nicolas S. Holliman, Andrew J. Woods, Gregg E. Favalora, and Takashi Kawai. SPIE, 2015. http://dx.doi.org/10.1117/12.2081663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nguyen, J. S., T. H. Nguyen, and H. T. Nguyen. "Semi-autonomous wheelchair system using stereoscopic cameras." In 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2009. http://dx.doi.org/10.1109/iembs.2009.5334266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ilani, Raviv, Adi Reich, Moshik Schindelhaim, and Adrian Stern. "3D Events with Stereoscopy with a single Dynamic Vision System." In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/3d.2022.3f2a.4.

Full text
Abstract:
Considering the cost of a single event-based camera, classical 3D imaging approaches using two or more synchronized cameras may be cost-prohibitive. This work explores 3D imaging using a single stationary camera and a stereoscopic lens setup.
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Chensheng, Xiaochun Wang, Joris S. M. Vergeest, and Tjamme Wiegers. "On the Stereoscopic Composition of Wide Baseline Stereo Pairs." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-86357.

Full text
Abstract:
Wide baseline cameras are broadly utilized in binocular vision systems, delivering depth information and stereoscopic images of the scene that are crucial both in virtual reality and in computer vision applications. However, due to the large distance between the two cameras, the stereoscopic composition of stereo pairs with a wide baseline can hardly fit the human eye parallax. In this paper, techniques and algorithms for the stereoscopic composition of wide baseline stereo pairs in binocular vision will be investigated. By incorporating the human parallax limitation, a novel algorithm capable of adjusting the wide baseline stereo pairs to compose a high quality stereoscopic image will be formulated. The main idea behind the proposed algorithm is, by simulating the eyeball rotation, to shift the wide baseline stereo pairs closer to each other to fit the human parallax limit. This makes it possible for the wide baseline stereo pairs to be composed into a recognizable stereoscopic image in terms of human parallax with a minor cost of variation in the depth cue. In addition, the depth variations before and after the shifting of the stereo pairs are evaluated by conducting an error estimation. Examples are provided for the evaluation of the proposed algorithm, and the quality of the composed stereoscopic images proves that the proposed algorithm is both valid and effective.
APA, Harvard, Vancouver, ISO, and other styles
7

Santoro, Michael, Ghassan AlRegib, and Yucel Altunbasak. "Misalignment correction for depth estimation using stereoscopic 3-D cameras." In 2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2012. http://dx.doi.org/10.1109/mmsp.2012.6343409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Naemura, Takeshi, Kaoru Sugita, Takahide Takano, and Hiroshi Harashima. "Multiresolution stereoscopic immersive communication using a set of four cameras." In Electronic Imaging, edited by John O. Merritt, Stephen A. Benton, Andrew J. Woods, and Mark T. Bolas. SPIE, 2000. http://dx.doi.org/10.1117/12.384452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pekkucuksen, Ibrahim E., Aziz Umit Batur, and Buyue Zhang. "A real-time misalignment correction algorithm for stereoscopic 3D cameras." In IS&T/SPIE Electronic Imaging, edited by Andrew J. Woods, Nicolas S. Holliman, and Gregg E. Favalora. SPIE, 2012. http://dx.doi.org/10.1117/12.906902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Harris, Jeff R., Michael McPhail, Christine Truong, and Arnold Fontaine. "Stereoscopic Particle Shadow Velocimetry." In ASME 2018 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/imece2018-88013.

Full text
Abstract:
Stereoscopic particle image velocimetry (SPIV) is a variant of particle image velocimetry (PIV) that allows for the measurement of three components of velocity along a plane in a flow field. In PIV, particles in the flow field are tracked by reflecting laser light from tracer particles into two angled cameras, allowing for the velocity field to be determined. Particle shadow velocimetry (PSV) is an inherently less expensive velocity measurement method since the method images shadows cast by particles from an LED backlight instead of scattered light from a laser. Previous studies have shown that PSV is an adequate substitute for PIV for many two-dimensional, two-component velocimetry measurements. In this work, the viability of the two-dimensional, three-component stereoscopic particle shadow velocimetry (SPSV) is demonstrated by using SPSV to examine a simple jet flow. Results obtained using SPIV are also used to provide benchmark comparison for SPSV measurements. Results show that in-plane and out-of-plane velocities measured using SPSV are comparable to those measured using SPIV.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Stereoscopic cameras"

1

Pardini, A. F. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS). Office of Scientific and Technical Information (OSTI), January 1998. http://dx.doi.org/10.2172/10148081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pardini, A. F. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS). Office of Scientific and Technical Information (OSTI), January 1998. http://dx.doi.org/10.2172/10154319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pardini, A. F., Westinghouse Hanford. Operation and maintenance manual for the high resolution stereoscopic video camera system (HRSVS) system 6230. Office of Scientific and Technical Information (OSTI), July 1996. http://dx.doi.org/10.2172/663154.

Full text
APA, Harvard, Vancouver, ISO, and other styles