
Journal articles on the topic 'Stereoscopic cameras'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Stereoscopic cameras.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Chiu, Chuang-Yuan, Michael Thelwell, Terry Senior, Simon Choppin, John Hart, and Jon Wheat. "Comparison of depth cameras for three-dimensional reconstruction in medicine." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 233, no. 9 (June 28, 2019): 938–47. http://dx.doi.org/10.1177/0954411919859922.

Abstract:
KinectFusion is a typical three-dimensional reconstruction technique that generates individual three-dimensional human models from consumer depth cameras for understanding body shapes. The aim of this study was to compare three-dimensional reconstruction results obtained using KinectFusion from data collected with two different types of depth camera (time-of-flight and stereoscopic cameras) and to compare these results with those of a commercial three-dimensional scanning system, to determine which type of depth camera gives improved reconstruction. Torso mannequins and machined aluminium cylinders were used as the test objects. Two depth cameras, the Microsoft Kinect V2 and the Intel RealSense D435, were selected as representatives of time-of-flight and stereoscopic cameras, respectively, to capture scan data for the reconstruction of three-dimensional point clouds by KinectFusion techniques. The results showed that both cameras, used with the developed rotating camera rig, provided repeatable body scanning data with minimal operator-induced error. However, the time-of-flight camera generated more accurate three-dimensional point clouds than the stereoscopic sensor. Applications requiring the generation of accurate three-dimensional human models by KinectFusion techniques should therefore consider using a time-of-flight camera, such as the Microsoft Kinect V2, as the image-capturing sensor.
2

Novac, Petr, Vladimír Mostýn, Tomáš Kot, and Václav Krys. "Stereoscopic System with the Tight Tilted Cameras." Applied Mechanics and Materials 332 (July 2013): 154–64. http://dx.doi.org/10.4028/www.scientific.net/amm.332.154.

Abstract:
This paper describes a stereoscopic camera system based on a pair of separate cameras. The cameras are stationary and tilted toward each other. The value of their tilt angle is derived with regard to the maximal possible utilization of the chip area, minimizing the unused parts. This increases the size of the cropped left and right images as well as their resolution. The results were tested and verified on a real stereoscopic system used on the exploratory, emergency and rescue mobile robots Hercules, Ares and Hardy.
3

Rovira-Más, Francisco, Qi Wang, and Qin Zhang. "Bifocal Stereoscopic Vision for Intelligent Vehicles." International Journal of Vehicular Technology 2009 (March 29, 2009): 1–9. http://dx.doi.org/10.1155/2009/123231.

Abstract:
The numerous benefits of real-time 3D awareness for autonomous vehicles have motivated the incorporation of stereo cameras into the perception units of intelligent vehicles. The availability of the distance between camera and objects is essential for applications such as automatic guidance and safeguarding; a poor estimate of the position of the objects in front of the vehicle can result in dangerous actions. There is an emphasis, therefore, on the design of perception engines that can make available a rich and reliable interval of ranges in front of the camera. The objective of this research is to develop a stereo head that captures 3D information from two cameras simultaneously, sensing different, but complementary, fields of view. To do so, the concept of bifocal perception was defined and physically materialized in an experimental bifocal stereo camera. The assembled system was validated through field tests, and results showed that each stereo pair of the head excelled at a particular range interval. The fusion of both intervals led to a more faithful representation of reality.
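As background to the range-interval trade-off this abstract describes, the standard depth-from-disparity relation for a rectified stereo pair can be sketched as follows (a generic illustration with hypothetical numbers, not code or parameters from the cited paper):

```python
# Generic depth-from-disparity relation for a rectified stereo pair:
# Z = f * B / d, with focal length f in pixels, baseline B in metres,
# and disparity d in pixels. Illustrative only; not from the cited paper.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Distance (m) to a scene point from its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: with f = 800 px and B = 0.12 m, a 64 px disparity
# corresponds to 1.5 m and an 8 px disparity to 12 m -- stereo pairs with
# different optics naturally excel at different range intervals.
print(depth_from_disparity(64.0, 800.0, 0.12))  # 1.5
print(depth_from_disparity(8.0, 800.0, 0.12))   # 12.0
```

Because depth resolution degrades quadratically with range for a fixed baseline and focal length, fusing two stereo pairs with complementary fields of view is one way to cover both near and far intervals reliably.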
4

Jia, Chen Yu, Ze Hua Gao, Xun Bo Yu, Xin Zhu Sang, and Tian Qi Zhao. "Auto-Stereoscopic 3D Video Conversation System Based on an Improved Eye Tracking Method." Applied Mechanics and Materials 513-517 (February 2014): 3907–10. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.3907.

Abstract:
An auto-stereoscopic 3D video conversation system is demonstrated with an improved eye-tracking method based on a lenticular sheet and two cameras. The two cameras are used to capture stereoscopic picture pairs and to locate the viewer's position with the improved eye-tracking method. The computer combines the stereoscopic picture pairs with different masks on a graphics processing unit. Low-crosstalk, correct stereoscopic video pairs for end-to-end communication are achieved.
5

Choi, Sung-In, and Soon-Yong Park. "DOF Correction of Heterogeneous Stereoscopic Cameras." Journal of the Institute of Electronics and Information Engineers 51, no. 7 (July 25, 2014): 169–79. http://dx.doi.org/10.5573/ieie.2014.51.7.169.

6

Birnbaum, Faith A., Aaron Wang, and Christopher J. Brady. "Stereoscopic Surgical Recording Using GoPro Cameras." JAMA Ophthalmology 133, no. 12 (December 1, 2015): 1483. http://dx.doi.org/10.1001/jamaophthalmol.2015.3865.

7

Lavrushkin, Sergey, Ivan Molodetskikh, Konstantin Kozhemyakov, and Dmitriy Vatolin. "Stereoscopic quality assessment of 1,000 VR180 videos using 8 metrics." Electronic Imaging 2021, no. 2 (January 18, 2021): 350–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.2.sda-350.

Abstract:
In this work we present a large-scale analysis of stereoscopic quality for 1,000 VR180 YouTube videos. VR180 is a new S3D format for VR devices which stores the view for only a single hemisphere. Instead of a multi-camera rig, this format requires just two cameras with fisheye lenses similar to conventional 3D shooting, resulting in cost reduction of the final device and simplification of the shooting process. But as in the conventional stereoscopic format, VR180 videos suffer from stereoscopy-related problems specific to 3D shooting. In this paper we analyze videos to detect the most common stereoscopic artifacts using objective quality metrics, including color, sharpness and geometry mismatch between views and more. Our study depicts the current state of S3D technical quality of VR180 videos and reveals its overall poor condition, as most of the analyzed videos exhibit at least one of the stereoscopic artifacts, which shows a necessity for stereoscopic quality control in modern VR180 shooting.
8

Rzeszotarski, Dariusz, and Paweł Pełczyński. "Software Application for Calibration of Stereoscopic Camera Setups." Metrology and Measurement Systems 19, no. 4 (December 1, 2012): 805–16. http://dx.doi.org/10.2478/v10178-012-0072-1.

Abstract:
The article describes an application for calibration of a stereovision camera setup constructed for the needs of an electronic travel aid for the blind. The application can be used to calibrate any stereovision system consisting of two DirectShow compatible cameras using a reference checkerboard of known dimensions. A method for experimental verification of the correctness of the calibration is also presented. The developed software is intended for calibration of mobile stereovision systems that focus mainly on obstacle detection.
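The verification step such checkerboard-based calibration tools rely on is commonly expressed as an RMS reprojection error: corners projected through the estimated pinhole model are compared with the detected corner pixels. A minimal generic sketch (hypothetical intrinsics, no distortion; not the authors' application):

```python
import numpy as np

# Minimal sketch of the quantity a checkerboard calibration minimizes and
# reports: RMS reprojection error between observed corner pixels and corners
# projected through the estimated pinhole model. Values are hypothetical.

def project(points_3d, K):
    """Project Nx3 camera-frame points through intrinsic matrix K (no distortion)."""
    uvw = (K @ points_3d.T).T          # homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

def rms_reprojection_error(observed_px, points_3d, K):
    residuals = observed_px - project(points_3d, K)
    return float(np.sqrt(np.mean(np.sum(residuals**2, axis=1))))

# A 3x3 block of checkerboard corners, 30 mm pitch, 0.5 m in front of the camera.
grid = np.array([[x * 0.03, y * 0.03, 0.5] for y in range(3) for x in range(3)])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

observed = project(grid, K)            # perfect detections -> zero error
print(rms_reprojection_error(observed, grid, K) < 1e-9)   # True

noisy = observed + 0.5                 # a constant half-pixel detection offset
print(round(rms_reprojection_error(noisy, grid, K), 4))   # 0.7071
```

In practice, a calibration is typically accepted only when this error stays well below a pixel across all views of the checkerboard.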
9

Garg, Sunir. "Stereoscopic Surgical Recording Using GoPro Cameras—Reply." JAMA Ophthalmology 133, no. 12 (December 1, 2015): 1484. http://dx.doi.org/10.1001/jamaophthalmol.2015.3879.

10

Sommer, Bjorn. "Hybrid Stereoscopic Photography - Analogue Stereo Photography meets the Digital Age with the StereoCompass app." Electronic Imaging 2021, no. 2 (January 18, 2021): 58–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.2.sda-058.

Abstract:
Stereoscopic photography has a long history that began just a few years after the first known photograph was taken: in 1849, Sir David Brewster introduced the first binocular camera. Whereas mobile photography is omnipresent because of the wide distribution of smartphones, stereoscopic photography is used only by a very small set of enthusiasts or professional (stereo) photographers. One important aspect of professional stereoscopic photography is that the required technology is usually quite expensive. Here, we present an alternative approach, uniting easily affordable vintage analogue SLR cameras with smartphone technology to measure and predict the stereo base/camera separation as well as the focal distance to zero parallax. For this purpose, the StereoCompass app was developed, which utilizes a number of smartphone sensors combined with a Google Maps-based distance measurement. Three application cases, including red/cyan anaglyph stereo photographs, are shown. More information and the app can be found at: http://stereocompass.i2d.uk
11

Peczyński, Paweł, and Bartosz Ostrowski. "Automatic Calibration of Stereoscopic Cameras in an Electronic Travel Aid for the Blind." Metrology and Measurement Systems 20, no. 2 (June 1, 2013): 229–38. http://dx.doi.org/10.2478/mms-2013-0020.

Abstract:
The article describes a technique developed for identification of the extrinsic parameters of a stereovision camera system for the purpose of image rectification without the use of reference calibration objects. The goal of the presented algorithm is the determination of the mutual position of the cameras, under the assumption that they can be modeled as pinhole cameras, are separated by a fixed distance, and are moving through a stationary scene. The developed method was verified experimentally on image sequences of a scene with a known structure.
12

Wang, Mi, Beibei Guo, Ying Zhu, Yufeng Cheng, and Chenhui Nie. "On-Orbit Calibration Approach Based on Partial Calibration-Field Coverage for the GF-1/WFV Camera." Photogrammetric Engineering & Remote Sensing 85, no. 11 (November 1, 2019): 815–27. http://dx.doi.org/10.14358/pers.85.11.815.

Abstract:
The Gaofen-1 (GF1) optical remote sensing satellite is the first in China's series of high-resolution civilian satellites and is equipped with four wide-field-of-view cameras. The cameras work together to obtain an image 800 km wide, with a resolution of 16 m, allowing GF1 to complete a global scan in four days. To achieve high-accuracy calibration of the wide-field-of-view cameras on GF1, the calibration field should have high resolution and broad coverage based on the traditional calibration method. In this study, a GF self-calibration scheme was developed. It uses partial reference calibration data covering the selected primary charge-coupled device to achieve high-accuracy calibration of the whole image. Based on the absolute constraint of the ground control points and the relative constraint of the tie points of stereoscopic images, we present two geometric calibration models based on paired stereoscopic images and three stereoscopic images for wide-field-of-view cameras on GF1, along with corresponding stepwise internal-parameter estimation methods. Our experimental results indicate that the internal relative accuracy can be guaranteed after calibration. This article provides a new approach that enables large-field-of-view optical satellites to achieve high-accuracy calibration based on partial calibration-field coverage.
13

Lee, Cheon, Hyok Song, Byeongho Choi, and Yo-Sung Ho. "3D scene capturing using stereoscopic cameras and a time-of-flight camera." IEEE Transactions on Consumer Electronics 57, no. 3 (August 2011): 1370–76. http://dx.doi.org/10.1109/tce.2011.6018896.

14

Marzi, Christian, Andreas Wachter, and Werner Nahm. "Design of an experimental four-camera setup for enhanced 3D surface reconstruction in microsurgery." Current Directions in Biomedical Engineering 3, no. 2 (September 7, 2017): 539–42. http://dx.doi.org/10.1515/cdbme-2017-0185.

Abstract:
Future fully digital surgical visualization systems will enable a wide range of new options. Owing to optomechanical limitations, a main disadvantage of today's surgical microscopes is their inability to provide arbitrary perspectives to more than two observers. In a fully digital microscopic system, multiple arbitrary views can be generated from a 3D reconstruction. Modern surgical microscopes allow the eyepieces to be replaced by cameras in order to record stereoscopic videos. A reconstruction from these videos can only contain the amount of detail the recording camera system gathers from the scene. Therefore, covered surfaces can result in a faulty reconstruction for deviating stereoscopic perspectives. By adding cameras that record the object from different angles, additional information about the scene is acquired, allowing the reconstruction to be improved. Our approach is to use a fixed four-camera setup as a front-end system to capture enhanced 3D topography of a pseudo-surgical scene. This experimental setup provides images for the reconstruction algorithms and for the generation of multiple observing stereo perspectives. The concept of the designed setup is based on the common main objective (CMO) principle of current surgical microscopes. These systems are well established and optically mature. Furthermore, the CMO principle allows a more compact design and a lower calibration effort than cameras with separate optics. Behind the CMO, four pupils separate the four channels, each of which is recorded by one camera. The designed system captures an area of approximately 28 mm × 28 mm with four cameras, allowing images of six different stereo perspectives to be processed. To verify the setup, it is modelled in silico. It can be used in further studies to test algorithms for 3D reconstruction from up to four perspectives and to provide information about the impact of additionally recorded perspectives on the enhancement of a reconstruction.
15

Lee, ChaBum, and Xiangyu Guo. "Spatially resolved stereoscopic surface profiling by using a feature-selective segmentation and merging technique." Surface Topography: Metrology and Properties 10, no. 1 (March 1, 2022): 014002. http://dx.doi.org/10.1088/2051-672x/ac5998.

Abstract:
We present a feature-selective segmentation and merging technique to achieve spatially resolved surface profiles of parts by 3D stereoscopy and strobo-stereoscopy. A pair of vision cameras capture images of the parts at different angles, and 3D stereoscopic images can be reconstructed. Conventional filtering processes of the 3D images involve data loss and lower the spatial resolution of the image. In this study, the 3D reconstructed image was spatially resolved by automatically recognizing and segmenting the features in the raw images, locally and adaptively applying a super-resolution algorithm to the segmented images based on the classified features, and then merging the filtered segments. Here, the features are transformed into masks that selectively separate the features and background images for segmentation. The experimental results were compared with those of conventional filtering methods using Gaussian filters and bandpass filters in terms of spatial frequency and profile accuracy. As a result, the selective feature segmentation technique was capable of spatially resolved 3D stereoscopic imaging while preserving imaging features.
16

Balogh, Attila, Mark C. Preul, Mark Schornak, Michael Hickman, and Robert F. Spetzler. "Intraoperative stereoscopic QuickTime Virtual Reality." Journal of Neurosurgery 100, no. 4 (April 2004): 591–96. http://dx.doi.org/10.3171/jns.2004.100.4.0591.

Abstract:
Object. The aim of this study was to acquire intraoperative images during neurosurgical procedures for later reconstruction into a stereoscopic image system (QuickTime Virtual Reality [QTVR]) that would improve visualization of complex neurosurgical procedures. Methods. A robotic microscope and digital cameras were used to acquire left and right image pairs during cranial surgery; a grid system facilitated image acquisition with the microscope. The surgeon determined a field of interest and a target or pivot point for image acquisition. Images were processed with commercially available software and hardware. Two-dimensional (2D) or interlaced left and right 2D images were reconstructed into a standard or stereoscopic QTVR format. Standard QTVR images were produced if stereoscopy was not needed. Intraoperative image sequences of regions of interest were captured in six patients. Relatively wide and deep dissections afford an opportunity for excellent QTVR production. Narrow or restricted surgical corridors can be reconstructed into the stereoscopic QTVR mode by using a keyhole mode of image acquisition. The stereoscopic effect is unimpressive with shallow or cortical surface dissections, which can be reconstructed into standard QTVR images. Conclusions. The QTVR system depicts multiple views of the same anatomy from different angles. By tilting, panning, or rotating the reconstructed images, the user can view a virtual three-dimensional tour of a neurosurgical dissection, with images acquired intraoperatively. The stereoscopic QTVR format provides depth to the montage. The system recreates the dissection environment almost completely and provides a superior anatomical frame of reference compared with the images captured by still or video photography in the operating room.
17

Gao, Zhi-Wei, Wen-Kuo Lin, Yu-Shian Shen, Chia-Yen Lin, and Wen-Chung Kao. "Design of signal processing pipeline for stereoscopic cameras." IEEE Transactions on Consumer Electronics 56, no. 2 (May 2010): 324–31. http://dx.doi.org/10.1109/tce.2010.5505935.

18

Kim, Sung-Yeol, Eun-Kyung Lee, and Yo-Sung Ho. "Generation of ROI Enhanced Depth Maps Using Stereoscopic Cameras and a Depth Camera." IEEE Transactions on Broadcasting 54, no. 4 (December 2008): 732–40. http://dx.doi.org/10.1109/tbc.2008.2002338.

19

Davies, Ross, Ian Wilson, and Andrew Ware. "Stereoscopic Human Detection in a Natural Environment." Annals of Emerging Technologies in Computing 2, no. 2 (April 1, 2018): 15–23. http://dx.doi.org/10.33166/aetic.2018.02.002.

Abstract:
The algorithm presented in this paper is designed to detect people in real-time from 3D footage for use in Augmented Reality applications. Techniques are discussed that hold potential for a detection system when combined with stereoscopic video capture using the extra depth included in the footage. This information allows for the production of a robust and reliable system. To utilise stereoscopic imagery, two separate images are analysed, combined and the human region detected and extracted. The greatest benefit of this system is the second image, which contains additional information to which conventional systems do not have access, such as the depth perception in the overlapping field of view from the cameras. We describe the motivation behind using 3D footage and the technical complexity of human detection. The system is analysed for both indoor and outdoor usage, when detecting human regions. The developed system has further uses in the field of motion capture, computer gaming and augmented reality. Novelty comes from the camera not being fixed to a single point. Instead, the camera is subject to six degrees of freedom (DOF). In addition, the algorithm is designed to be used as a first filter to extract feature points in input video frames faster than real-time.
20

Honkavaara, E., T. Hakala, O. Nevalainen, N. Viljanen, T. Rosnell, E. Khoramshahi, R. Näsi, R. Oliveira, and A. Tommaselli. "GEOMETRIC AND REFLECTANCE SIGNATURE CHARACTERIZATION OF COMPLEX CANOPIES USING HYPERSPECTRAL STEREOSCOPIC IMAGES FROM UAV AND TERRESTRIAL PLATFORMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 17, 2016): 77–82. http://dx.doi.org/10.5194/isprs-archives-xli-b7-77-2016.

Abstract:
Light-weight hyperspectral frame cameras represent novel developments in remote sensing technology. With frame camera technology, when capturing images with stereoscopic overlaps, it is possible to derive 3D hyperspectral reflectance information and 3D geometric data of targets of interest, which enables detailed geometric and radiometric characterization of the object. These technologies are expected to provide efficient tools in various environmental remote sensing applications, such as canopy classification, canopy stress analysis, precision agriculture, and urban material classification. Furthermore, these data sets enable advanced quantitative, physically based retrieval of biophysical and biochemical parameters by model inversion technologies. The objective of this investigation was to study the aspects of capturing hyperspectral reflectance data from unmanned airborne vehicle (UAV) and terrestrial platforms with novel hyperspectral frame cameras in a complex, forested environment.
21

Honkavaara, E., T. Hakala, O. Nevalainen, N. Viljanen, T. Rosnell, E. Khoramshahi, R. Näsi, R. Oliveira, and A. Tommaselli. "GEOMETRIC AND REFLECTANCE SIGNATURE CHARACTERIZATION OF COMPLEX CANOPIES USING HYPERSPECTRAL STEREOSCOPIC IMAGES FROM UAV AND TERRESTRIAL PLATFORMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 17, 2016): 77–82. http://dx.doi.org/10.5194/isprsarchives-xli-b7-77-2016.

Abstract:
Light-weight hyperspectral frame cameras represent novel developments in remote sensing technology. With frame camera technology, when capturing images with stereoscopic overlaps, it is possible to derive 3D hyperspectral reflectance information and 3D geometric data of targets of interest, which enables detailed geometric and radiometric characterization of the object. These technologies are expected to provide efficient tools in various environmental remote sensing applications, such as canopy classification, canopy stress analysis, precision agriculture, and urban material classification. Furthermore, these data sets enable advanced quantitative, physically based retrieval of biophysical and biochemical parameters by model inversion technologies. The objective of this investigation was to study the aspects of capturing hyperspectral reflectance data from unmanned airborne vehicle (UAV) and terrestrial platforms with novel hyperspectral frame cameras in a complex, forested environment.
22

Szubert, Aleksander. "Stereoscopic Image Acquisition for Analysis and 3D Cameras Calibration." ELEKTRONIKA - KONSTRUKCJE, TECHNOLOGIE, ZASTOSOWANIA 1, no. 12 (December 5, 2016): 29–32. http://dx.doi.org/10.15199/13.2016.12.4.

23

Tubaro, Stefano. "Letter a precise stereoscopic system with two video cameras." European Transactions on Telecommunications 3, no. 3 (May 1992): 275–80. http://dx.doi.org/10.1002/ett.4460030309.

24

Kozelov, B. V., S. V. Pilgaev, L. P. Borovkov, and V. E. Yurov. "Multi-scale auroral observations in Apatity: winter 2010–2011." Geoscientific Instrumentation, Methods and Data Systems Discussions 1, no. 1 (December 9, 2011): 31–45. http://dx.doi.org/10.5194/gid-1-31-2011.

Abstract:
Routine observations of the aurora are conducted in Apatity by a set of five cameras: (i) an all-sky TV camera, Watec WAT-902K (1/2" CCD), with a Fujinon YV2.2 × 1.4A-SA2 lens; (ii) two monochromatic cameras, Guppy F-044B NIR (1/2" CCD), with Fujinon HF25HA-1B (1:1.4/25 mm) lenses for an 18° field of view and a 558 nm glass filter; (iii) two color cameras, Guppy F-044C NIR (1/2" CCD), with Fujinon DF6HA-1B (1:1.2/6 mm) lenses for a 67° field of view. The observational complex is aimed at investigating the spatial structure of the aurora, its scaling properties, and the vertical distribution in rayed forms. The cameras were installed on the main building of the Apatity division of the Polar Geophysical Institute and at the Apatity stratospheric range. The distance between these sites is nearly 4 km, so the identical monochromatic cameras can be used as a stereoscopic system. All cameras are accessible and operated remotely via the Internet. For the 2010–2011 winter season the equipment was upgraded with special blocks for GPS time triggering, temperature control and motorized pan-tilt rotation mounts. This paper presents the equipment, samples of observed events and the website with access to available data previews.
25

Kozelov, B. V., S. V. Pilgaev, L. P. Borovkov, and V. E. Yurov. "Multi-scale auroral observations in Apatity: winter 2010–2011." Geoscientific Instrumentation, Methods and Data Systems 1, no. 1 (March 6, 2012): 1–6. http://dx.doi.org/10.5194/gi-1-1-2012.

Abstract:
Routine observations of the aurora are conducted in Apatity by a set of five cameras: (i) an all-sky TV camera, Watec WAT-902K (1/2" CCD), with a Fujinon YV2.2 × 1.4A-SA2 lens; (ii) two monochromatic cameras, Guppy F-044B NIR (1/2" CCD), with Fujinon HF25HA-1B (1:1.4/25 mm) lenses for an 18° field of view and a 558 nm glass filter; (iii) two color cameras, Guppy F-044C NIR (1/2" CCD), with Fujinon DF6HA-1B (1:1.2/6 mm) lenses for a 67° field of view. The observational complex is aimed at investigating the spatial structure of the aurora, its scaling properties, and the vertical distribution in rayed forms. The cameras were installed on the main building of the Apatity division of the Polar Geophysical Institute and at the Apatity stratospheric range. The distance between these sites is nearly 4 km, so the identical monochromatic cameras can be used as a stereoscopic system. All cameras are accessible and operated remotely via the Internet. For the 2010–2011 winter season the equipment was upgraded with special blocks for GPS time triggering, temperature control and motorized pan-tilt rotation mounts. This paper presents the equipment, samples of observed events and the website with access to available data previews.
26

Kim, Hak Gu, Minho Park, Sangmin Lee, Seongyeop Kim, and Yong Man Ro. "Visual Comfort Aware-Reinforcement Learning for Depth Adjustment of Stereoscopic 3D Images." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1762–70. http://dx.doi.org/10.1609/aaai.v35i2.16270.

Abstract:
Depth adjustment aims to enhance the visual experience of stereoscopic 3D (S3D) images by improving visual comfort and depth perception. For a human expert, the depth adjustment procedure is a sequence of iterative decision making: the expert iteratively adjusts the depth until satisfied with both the level of visual comfort and the perceived depth. In this work, we present a novel deep reinforcement learning (DRL)-based approach for depth adjustment, named VCA-RL (Visual Comfort Aware Reinforcement Learning), to explicitly model human sequential decision making in depth editing operations. We formulate the depth adjustment process as a Markov decision process where actions are defined as camera movement operations that control the distance between the left and right cameras. Our agent is trained under the guidance of an objective visual comfort assessment metric to learn the optimal sequence of camera movement actions in terms of perceptual aspects of stereoscopic viewing. With extensive experiments and user studies, we show the effectiveness of our VCA-RL model on three different S3D databases.
27

Nicholson, Paul T. "Three-dimensional imaging in archaeology: its history and future." Antiquity 75, no. 288 (June 2001): 402–9. http://dx.doi.org/10.1017/s0003598x00061056.

Abstract:
Whilst digital cameras and computer graphics are starting to be used in archaeological recording, stereoscopic photography tends to be overlooked. This technique has been used successfully in three recent projects and could be beneficial as a means of 3D photographic recording.
28

Kataoka, R., Y. Miyoshi, K. Shigematsu, D. Hampton, Y. Mori, T. Kubo, A. Yamashita, et al. "Stereoscopic determination of all-sky altitude map of aurora using two ground-based Nikon DSLR cameras." Annales Geophysicae 31, no. 9 (September 6, 2013): 1543–48. http://dx.doi.org/10.5194/angeo-31-1543-2013.

Abstract:
A new stereoscopic measurement technique is developed to obtain an all-sky altitude map of the aurora using two ground-based digital single-lens reflex (DSLR) cameras. Two identical full-color all-sky cameras were set up with an 8 km separation across the Chatanika area in Alaska (Poker Flat Research Range and Aurora Borealis Lodge) to find the localized emission height from the maximum correlation of the apparent patterns in localized pixels, applying a geographic coordinate transform. It is found that a typical ray structure of discrete aurora shows a broad altitude distribution above 100 km, while a typical patchy structure of pulsating aurora shows a narrow altitude distribution below 100 km. Because of the portability and low cost of DSLR camera systems, the new technique may open a unique opportunity not only for scientists but also for night-sky photographers to contribute to auroral science in a complementary way, potentially forming a dense observation network.
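The geometry underlying such two-station altitude estimates is plain triangulation: two cameras on a shared baseline see the same feature at different elevation angles. A hedged sketch of that basic relation (not the paper's correlation-based method; numbers are hypothetical):

```python
import math

# Two-station triangulation of emission altitude (illustrative sketch, not
# the cited paper's correlation method). Stations A and B lie on a baseline;
# a feature at horizontal distance x from A, at altitude h, is seen at
# elevation alpha from A and beta from B. Since cot(alpha) = x/h and
# cot(beta) = (x - b)/h, the altitude is h = b / (cot(alpha) - cot(beta)).

def altitude_km(baseline_km: float, alpha_deg: float, beta_deg: float) -> float:
    """Altitude from elevations at the farther (alpha) and nearer (beta) stations."""
    cot_a = 1.0 / math.tan(math.radians(alpha_deg))
    cot_b = 1.0 / math.tan(math.radians(beta_deg))
    return baseline_km / (cot_a - cot_b)

# Hypothetical example with the paper's 8 km baseline: a feature at 100 km
# altitude, 12 km beyond station A and 4 km beyond station B.
h = altitude_km(8.0, math.degrees(math.atan(100 / 12)), math.degrees(math.atan(100 / 4)))
print(round(h, 3))  # 100.0
```

The short 8 km baseline relative to ~100 km emission heights means the parallax is only a few degrees, which is why the paper's pixel-level pattern correlation is needed in practice.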
29

Xi, K., and Y. Duan. "AMS-3000 LARGE FIELD VIEW AERIAL MAPPING SYSTEM: BASIC PRINCIPLES AND THE WORKFLOW." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2020 (August 6, 2020): 79–84. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2020-79-2020.

Abstract:
Three-line-array stereo aerial survey cameras are typical mapping equipment for aerial photogrammetry. As airborne equipment, they can quickly obtain a large range of basic geographic information with high precision. At present, typical three-line-array stereoscopic aerial survey cameras, such as the Leica ADS40 and ADS80, have the disadvantages of a small field of view and low resolution, which makes it difficult to meet the demands of large-scale topographic mapping for economic construction. To meet the urgent need for a domestic three-line-array aerial mapping camera in our project, we developed the AMS-3000 camera system. Its features include a large field of view, high resolution, low distortion and high environmental adaptability. The AMS-3000 system has reached the international advanced level in both software and hardware.
APA, Harvard, Vancouver, ISO, and other styles
30

Madeira, Bruno Eduardo, and Luiz Velho. "Virtual Table-Teleporter: Image Processing and Rendering for Horizontal Stereoscopic Display." International Journal of Virtual Reality 12, no. 1 (January 1, 2013): 30–43. http://dx.doi.org/10.20870/ijvr.2013.12.1.2856.

Full text
Abstract:
We describe a new architecture composed of software and hardware for displaying stereoscopic images over a horizontal surface. It works as a "Virtual Table and Teleporter", in the sense that virtual objects depicted over a table have the appearance of real objects. This system can be used for visualization and interaction. We propose two basic configurations: the Virtual Table, consisting of a single display surface, and the Virtual Teleporter, consisting of a pair of tables for image capture and display. The Virtual Table displays either 3D computer-generated images or previously captured stereoscopic video and can be used for interactive applications. The Virtual Teleporter captures and transmits stereoscopic video from one table to the other and can be used for telepresence applications. In both configurations the images are properly deformed and displayed for horizontal 3D stereo. In the Virtual Teleporter, two cameras are pointed at the first table, capturing a stereoscopic image pair. These images are shown on the second table, which is, in fact, a stereoscopic display positioned horizontally. Many applications can benefit from this technology, such as virtual reality, games, teleconferencing, and distance learning. We present some interactive applications that we developed using this architecture.
APA, Harvard, Vancouver, ISO, and other styles
31

Jama, Michal, and Dale Schinstock. "Parallel Tracking and Mapping for Controlling VTOL Airframe." Journal of Control Science and Engineering 2011 (2011): 1–10. http://dx.doi.org/10.1155/2011/413074.

Full text
Abstract:
This work presents a vision-based system for navigation on a vertical takeoff and landing unmanned aerial vehicle (UAV). This is a monocular, vision-based, simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This differs from past SLAM solutions on UAVs, which use sensors that measure depth, such as LIDAR, stereoscopic cameras, or depth cameras. The solution presented in this paper extends and significantly modifies a recent open-source algorithm that solves the SLAM problem using an approach fundamentally different from the traditional one. The proposed modifications provide the position measurements necessary for the navigation solution on a UAV. The main contributions of this work include: (1) extension of the map-building algorithm to enable it to be used realistically while controlling a UAV and simultaneously building the map; (2) improved performance of the SLAM algorithm at lower camera frame rates; and (3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible.
APA, Harvard, Vancouver, ISO, and other styles
32

Dueholm, K. S. "Geologic photogrammetry using standard small-frame cameras." Rapport Grønlands Geologiske Undersøgelse 156 (January 1, 1992): 7–17. http://dx.doi.org/10.34194/rapggu.v156.8187.

Full text
Abstract:
Multi-model photogrammetry enables precise three-dimensional measurements from strips or blocks of overlapping small-frame photographs (colour slides). The method can be used for geo-scientific terrain analysis and mapping. Of special interest is the ability to map otherwise inaccessible terrain features such as geological outcrops on steep mountain faces and canyon walls. Field photography is carried out without special photogrammetric training using ordinary small-frame cameras. Photographs can be taken at any scale and angle from terrestrial stations, helicopters, light planes, or boats. In the laboratory, strips or blocks of small-frame photographs are set up in an analytical stereo-plotter where multiple stereoscopic model pairs are simultaneously orientated. Interpretation and compilation is continuous across model boundaries. Data can be plotted in many different projections. The multi-model photogrammetric technique is explained and procedures are outlined for camera calibration, photography, and acquisition of ground-control information.
APA, Harvard, Vancouver, ISO, and other styles
33

Ogawa, Masahiko, Kazunori Shidoji, and Yuji Matsuki. "Assessment of Stereoscopic Multi-resolution Images for a Virtual Reality System." International Journal of Virtual Reality 9, no. 2 (January 1, 2010): 31–37. http://dx.doi.org/10.20870/ijvr.2010.9.2.2769.

Full text
Abstract:
A camera and monitor system that projects actual real-world images has yet to be developed due to the technical limitation that existing cameras cannot simultaneously acquire high-resolution and wide-angle images. In this research, we try to resolve this issue by superimposing images, a method that is effective because the entire wide-angle image does not need to be of high resolution, owing to the perceptual characteristics of the human visual system. First, we examined the minimum resolution required for the field of view, which indicated that a triple-resolution image, where positions more than 20 and 40 deg from the center of the visual field were decreased to 25% and approximately 11% of the resolution of the gaze point, respectively, was perceived as similar to a completely high-resolution image. Next, we investigated whether the participants could distinguish between the original completely high-resolution image and processed images, which included triple-resolution, dual-resolution, and low-resolution images. Our results suggested that the participants could not differentiate between the triple-resolution image and the original image. Finally, we developed a stereoscopic camera system based on our results.
APA, Harvard, Vancouver, ISO, and other styles
34

Ribas, Guilherme Carvalhal, Ricardo Ferreira Bento, and Aldo Junqueira Rodrigues. "Anaglyphic three-dimensional stereoscopic printing: revival of an old method for anatomical and surgical teaching and reporting." Journal of Neurosurgery 95, no. 6 (December 2001): 1057–66. http://dx.doi.org/10.3171/jns.2001.95.6.1057.

Full text
Abstract:
✓ The authors describe how to use the three-dimensional (3D) anaglyphic method to produce stereoscopic prints for anatomical and surgical teaching and report preparation, using currently available nonprofessional photographic and computer methods. As with any other method of producing stereoscopic images, the anaglyphic procedure is based on the superimposition of two slightly different images of the object to be reproduced, one seen more from a left-sided point of view and the other seen more from a right-sided point of view. The pictures are obtained using a single camera, which following the first shot can be slid along a special bar for the second shot, or by using two cameras affixed to a surgical microscope. After the images have been distinguished from each other by applying different complementary color dyes, the images are scanned and superimposed on each other with the aid of nonprofessional imaging-manipulation software used on a standard personal computer (PC), and are printed using a standard printer. To be seen stereoscopically, glasses with colored lenses, normally one red and one blue, have to be used. Stereoscopic 3D anaglyphic prints can be produced using standard photographic and PC equipment; after some training, the prints can be easily reproduced without significant cost and are particularly helpful in disclosing the 3D character of anatomical structures.
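The channel-mixing step at the heart of a red-cyan (or red-blue) anaglyph is straightforward to reproduce with any array library. A minimal sketch, assuming 8-bit RGB arrays; the authors' dye-and-scan workflow is analogous but manual:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine a stereo pair into a red-cyan anaglyph.
    The red channel is taken from the left view and the green and blue
    channels from the right, so colored lenses route one view to each eye."""
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]  # red from the left view
    return anaglyph
```

Viewed through glasses with a red left lens and a cyan (or blue) right lens, each eye receives predominantly its own view, producing the stereoscopic effect.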
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, Cheon, Hyok Song, Byeong-Ho Choi, and Yo-Sung Ho. "Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera." Journal of Korea Information and Communications Society 37, no. 4A (April 30, 2012): 239–49. http://dx.doi.org/10.7840/kics.2012.37a.4.239.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Steinbuck, Jonah V., Paul L. D. Roberts, Cary D. Troy, Alexander R. Horner-Devine, Fernando Simonet, Alfred H. Uhlman, Jules S. Jaffe, Stephen G. Monismith, and Peter J. S. Franks. "An Autonomous Open-Ocean Stereoscopic PIV Profiler." Journal of Atmospheric and Oceanic Technology 27, no. 8 (August 1, 2010): 1362–80. http://dx.doi.org/10.1175/2010jtecho694.1.

Full text
Abstract:
Abstract Over the past decade, a novel free-fall imaging profiler has been under development at the Scripps Institution of Oceanography to observe and quantify biological and physical structure in the upper 100 m of the ocean. The profiler provided the first detailed view of microscale phytoplankton distributions using in situ planar laser-induced fluorescence. The present study examines a recent incarnation of the profiler that features microscale turbulent flow measurement capabilities using stereoscopic particle image velocimetry (PIV). As the profiler descends through the water column, a vertical sheet of laser light illuminates natural particles below the profiler. Two sensitive charge-coupled device (CCD) cameras image a 25 cm × 25 cm × 0.6 cm region at a nominal frame rate of 8 Hz. The stereoscopic camera configuration allows all three components of velocity to be measured in the vertical plane with an average spatial resolution of approximately 3 mm. The performance of the PIV system is evaluated for deployments offshore of the southern California coast. The in situ image characteristics, including natural particle seeding density and imaged particle size, are found to be suitable for PIV. Ensemble-averaged velocity and dissipation of turbulent kinetic energy estimates from the stereoscopic PIV system are consistent with observations from an acoustic Doppler velocimeter and acoustic Doppler current profiler, though it is revealed that the present instrument configuration influences the observed flow field. The salient challenges in adapting stereoscopic PIV for in situ, open-ocean turbulence measurements are identified, including cross-plane particle motion, instrument intrusiveness, and measurement uncertainty limitations. These challenges are discussed and recommendations are provided for future development: improved alignment with the dominant flow direction, mitigation of instrument intrusiveness, and improvements in illumination and imaging resolution.
APA, Harvard, Vancouver, ISO, and other styles
37

Cai, Chengtao, Bing Fan, Xin Liang, and Qidan Zhu. "Automatic Rectification of the Hybrid Stereo Vision System." Sensors 18, no. 10 (October 8, 2018): 3355. http://dx.doi.org/10.3390/s18103355.

Full text
Abstract:
By combining the advantages of 360-degree field of view cameras and the high resolution of conventional cameras, the hybrid stereo vision system could be widely used in surveillance. As the relative position of the two cameras is not constant over time, its automatic rectification is highly desirable when adopting a hybrid stereo vision system for practical use. In this work, we provide a method for rectifying the dynamic hybrid stereo vision system automatically. A perspective projection model is proposed to reduce the computation complexity of the hybrid stereoscopic 3D reconstruction. The rectification transformation is calculated by solving a nonlinear constrained optimization problem for a given set of corresponding point pairs. The experimental results demonstrate the accuracy and effectiveness of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
38

Ciullo, Vito, Lucile Rossi, and Antoine Pieri. "Experimental Fire Measurement with UAV Multimodal Stereovision." Remote Sensing 12, no. 21 (October 29, 2020): 3546. http://dx.doi.org/10.3390/rs12213546.

Full text
Abstract:
In wildfire research, systems that are able to estimate the geometric characteristics of fire, in order to understand and model the behavior of this spreading and dangerous phenomenon, are required. Over the past decade, there has been a growing interest in the use of computer vision and image processing technologies. The majority of these works have considered multiple mono-camera systems, merging the information obtained from each camera. Recent studies have introduced the use of stereovision in this field; for example, a framework with multiple ground stereo pairs of cameras has been developed to measure fires spreading for about 10 meters. This work proposes an unmanned aerial vehicle multimodal stereovision framework which allows for estimation of the geometric characteristics of fires propagating over long distances. The vision system is composed of two cameras operating simultaneously in the visible and infrared spectral bands. The main result of this work is the development of a portable drone system which is able to obtain georeferenced stereoscopic multimodal images associated with a method for the estimation of fire geometric characteristics. The performance of the proposed system is tested through various experiments, which reveal its efficiency and potential for use in monitoring wildfires.
APA, Harvard, Vancouver, ISO, and other styles
39

Midulla, P. "BIFOCAL PAIRS OF IMAGES FOR LOW-COST SURVEY IN CLOSE RANGE PHOTOGRAMMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17 (November 29, 2019): 203–7. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w17-203-2019.

Full text
Abstract:
Abstract. This paper presents a method for close-range photogrammetry based on a camera positioning scheme in which two cameras capture an equal portion of an object at the same scale but have different focal lengths and camera-to-object distances. This scheme is an alternative to the stereoscopic scheme and is associated with a system of equations that permits one to calculate first the relief displacement of points on a photograph and then their relief relative to a reference plane. The obtained relief and relief-displacement values can be used to produce low-cost orthophotographs using image-processing software, which does not need to be dedicated but has to provide measurement and calculation functions. Moreover, this method also allows one to obtain three-dimensional coordinates through further calculations.
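For context, the classic single-photo relief-displacement relation that such schemes build on is d = r·h/H (displacement d at radial distance r from the nadir point, object height h above the reference plane, camera-to-object distance H). A minimal sketch of inverting it; this is the textbook relation, not the paper's bifocal system of equations:

```python
def height_from_relief(displacement, radial, camera_distance):
    """Invert d = r * h / H to recover height: h = d * H / r.
    displacement and radial must share units (e.g. mm on the photo);
    the result is in the units of camera_distance."""
    return displacement * camera_distance / radial

# A 0.5 mm displacement at r = 50 mm with H = 100 m gives h = 1 m.
print(height_from_relief(0.5, 50.0, 100.0))
```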
APA, Harvard, Vancouver, ISO, and other styles
40

Porreca, Luca, Anestis I. Kalfas, Reza S. Abhari, Yong Il Yun, and Seung Jin Song. "Stereoscopic PIV measurements in a two-stage axial turbine." E3S Web of Conferences 345 (2022): 01013. http://dx.doi.org/10.1051/e3sconf/202234501013.

Full text
Abstract:
In the present work, the three-dimensional flow field in the interstage region of a two-stage axial turbine has been measured by a stereoscopic PIV system. The stereoscopic method is used to compensate for perspective as well as to observe the highly three-dimensional flows. The digital images are recorded with a set of two cameras positioned perpendicular to the measurement plane and inclined by an angle varying between 22° and 30° to allow stereoscopic measurements. The laser beam is delivered to a laser endoscope able to access the measurement areas. By traversing radially, several blade-to-blade planes can be illuminated with the laser endoscope from 66 to 96% blade span. To compensate for the perspective distortion of the field of view due to the tilt angle of camera B, as well as the optical distortion through the double-curvature windows, a three-dimensional calibration method is used. In the current investigation, a Monte Carlo simulation has been conducted to evaluate the measurement errors of PIV. Results of these measurements are compared with velocities derived from time-resolved pressure measurements using a fast response aerodynamic probe (FRAP). A good agreement is found at the exit of the second rotor. The present work presents a unique set of steady and unsteady data measured in a two-stage axial turbine. The data, measured in a volume, can be used for numerical tool validation as well as to improve the existing kinematic model of vortex transport and dissipation.
APA, Harvard, Vancouver, ISO, and other styles
41

Sugita, Kenichiro, Yohji Ohohigashi, and Shigeaki Kobayashi. "Stereoscopic television system for use with the operating microscope." Journal of Neurosurgery 62, no. 4 (April 1985): 610–11. http://dx.doi.org/10.3171/jns.1985.62.4.0610.

Full text
Abstract:
✓ A new and simple method of stereoscopic television imaging of surgical procedures performed under an operating microscope has been developed. Two television cameras of the same type, two television monitors of the same size, and a mirror box for fusion of the two visual objects on the two television monitors are used. No significant modifications of available components are necessary. The method can be applied to all operating microscopes with a beam splitter.
APA, Harvard, Vancouver, ISO, and other styles
42

Pavlov, V. I. "Aerial photography of the water area." Geodesy and Cartography 956, no. 2 (March 20, 2020): 18–24. http://dx.doi.org/10.22389/0016-7126-2020-956-2-18-24.

Full text
Abstract:
During the development of water resources, the characteristics of waves, the direction and velocity of flow, the depth, the bottom elevations, and the temperature and chemical composition of the water are to be taken into account. Some of these indicators are determined from measurements of single aerial photographs and their stereoscopic pairs. Performing aerial photography (APS) of the water surface with the technology used for topographic land survey yields only single overlapping aerial photographs, as the water surface is in constant motion. Stereoscopic pairs of aerial photographs can be obtained if photographing is performed simultaneously by two aerial cameras (AFA) with close elements of internal orientation. The author considers two technological schemes of using two AFA in aerial photography of the water area.
APA, Harvard, Vancouver, ISO, and other styles
43

Panday, Sanjeeb Prasad. "Stereoscopic Correspondence of Particles for 3-Dimensional Particle Tracking Velocimetry by using Genetic Algorithm." Journal of the Institute of Engineering 12, no. 1 (March 6, 2017): 10–26. http://dx.doi.org/10.3126/jie.v12i1.16706.

Full text
Abstract:
A genetic algorithm (GA)-based stereo particle-pairing algorithm has been developed and applied to the spatial particle-pairing problem of the stereoscopic three-dimensional (3-D) PTV system. In this 3-D PTV system, particles viewed by two (or more) stereoscopic cameras with a parallax have to be correctly paired at every synchronized time step. This is important because the 3-D coordinates of individual particles cannot be computed without knowledge of the correct stereo correspondence of the particles. In the present study, the GA is applied to the epipolar-line proximity analysis for establishing the correspondence of particle pairs between two co-instantaneous stereoscopic particle images, in order to compute the 3-D coordinates of every individual particle. The results are tested with various standard images, and it is found that the new strategy using the GA works better than conventional particle-pairing methods of 3-D particle tracking velocimetry for stereoscopic PTV.
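The epipolar-proximity cost that such a matcher (GA or otherwise) minimizes is the distance from a candidate point in one image to the epipolar line induced by its partner in the other. A minimal sketch, assuming a known fundamental matrix F and pixel coordinates; the paper's full GA search is not reproduced here:

```python
import numpy as np

def epipolar_distance(F, x_left, x_right):
    """Perpendicular distance (in pixels) from x_right to the epipolar
    line F @ x_left. Small distances mark plausible stereo pairings,
    which a matcher can use as its pairing cost."""
    xl = np.array([x_left[0], x_left[1], 1.0])
    xr = np.array([x_right[0], x_right[1], 1.0])
    line = F @ xl                       # epipolar line a*u + b*v + c = 0
    return abs(xr @ line) / np.hypot(line[0], line[1])
```

A correct pairing drives this distance toward zero (up to calibration and localization error), while a wrong pairing typically leaves a residual of many pixels.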
APA, Harvard, Vancouver, ISO, and other styles
44

Sun, Hai, David W. Roberts, Hany Farid, Ziji Wu, Alex Hartov, and Keith D. Paulsen. "Cortical Surface Tracking Using a Stereoscopic Operating Microscope." Operative Neurosurgery 56, suppl_1 (January 1, 2005): ONS-86-ONS-97. http://dx.doi.org/10.1227/01.neu.0000146263.98583.cc.

Full text
Abstract:
Abstract OBJECTIVE: To measure and compensate for soft tissue deformation during image-guided neurosurgery, we have developed a novel approach to estimate the three-dimensional (3-D) topology of the cortical surface and track its motion over time. METHODS: We use stereopsis to estimate the 3-D cortical topology during neurosurgical procedures. To facilitate this process, two charge-coupled device cameras have been attached to the binocular optics of a stereoscopic operating microscope. Before surgery, this stereo imaging system is calibrated to obtain the extrinsic and intrinsic camera parameters. During surgery, the 3-D shape of the cortical surface is automatically estimated from a stereo pair of images and registered to the preoperative image volume to provide navigational guidance. This estimation requires robust matching of features between the images, which, when combined with the camera calibration, yields the desired 3-D coordinates. After the 3-D cortical surface has been estimated from stereo pairs, its motion is tracked by comparing the current surface with its previous locations. RESULTS: We are able to estimate the 3-D topology of the cortical surface with an average error of less than 1.2 mm. Executing on a 1.1-GHz Pentium machine, the 3-D estimation from a stereo pair of 1024 × 768 resolution images requires approximately 60 seconds of computation. By applying stereopsis over time, we are able to track the motion of the cortical surface, including the pulsatile movement of the cortical surface, gravitational sag, tissue bulge as a result of increased intracranial pressure, and the parenchymal shape changes associated with tissue resection. The results from 10 surgical patients are reported. CONCLUSION: We have demonstrated that a stereo vision system coupled to the operating microscope can be used to efficiently estimate the dynamic topology of the cortical surface during surgery. The 3-D surface can be coregistered to the preoperative image volume. 
This unique intraoperative imaging technique expands the capability of the current navigational system in the operating room and increases the accuracy of anatomic correspondence with preoperative images through compensation for brain deformation.
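The feature-to-3-D step described above is standard two-view triangulation: once the stereo cameras are calibrated, each matched pixel pair yields one 3-D point. A minimal linear (DLT) sketch under ideal matches; the authors' pipeline adds robust matching and registration on top of this:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from a calibrated
    stereo pair. P1, P2 are 3x4 projection matrices; x1, x2 are the
    matched pixel coordinates (u, v). Returns the 3-D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]
```

Repeating this over a dense set of matched cortical features gives the surface topology, whose frame-to-frame comparison then yields the motion estimate.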
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Hongru, Nicolas Rambaux, and Jérémie Vaubaillon. "Accuracy of meteor positioning from space- and ground-based observations." Astronomy & Astrophysics 642 (October 2020): L11. http://dx.doi.org/10.1051/0004-6361/202039014.

Full text
Abstract:
Aims. The knowledge of the orbits and origins derived from meteors is important for the study of meteoroids and of the early solar system. With an increase in nano-satellite projects dedicated to Earth observations or directly to meteor observations (e.g., the Meteorix CubeSat), we investigate the stereoscopic measurement of meteor positions using a pair of cameras, one deployed in space and one on the ground, and aim to understand the accuracy and the main driving factors. This study will reveal the requirements for system setups and the geometry favorable for meteor triangulation. Methods. This Letter presents the principle of the stereoscopic measurement from space and the ground, and an error analysis. Specifically, the impacts of the resolutions of the cameras, the attitude and orbit determination accuracy of the satellite, and the geometry formed by the moving target and observers are investigated. Results. To reach a desirable positioning accuracy of 1 km it is necessary to equip the satellite with high-accuracy sensors (e.g., star tracker and GPS receiver) to perform fine attitude and orbit determination. The best accuracy can occur when the target is at an elevation of 30° with respect to the ground station.
APA, Harvard, Vancouver, ISO, and other styles
46

Romps, David M., and Ruşen Öktem. "Observing Clouds in 4D with Multiview Stereophotogrammetry." Bulletin of the American Meteorological Society 99, no. 12 (December 2018): 2575–86. http://dx.doi.org/10.1175/bams-d-18-0029.1.

Full text
Abstract:
Newly installed stereo cameras ringing the Southern Great Plains (SGP) Atmospheric Radiation Measurement (ARM) site in Oklahoma are providing a 4D gridded view of shallow clouds. Six digital cameras have been installed in pairs at a distance of 6 km from the site and with a spacing of 500 m between the cameras in a pair. These pairs of cameras provide stereoscopic views of shallow clouds from all sides; when these data are combined, they allow for a complete stereo reconstruction. The result, the Clouds Optically Gridded by Stereo (COGS) product, is a 4D grid of cloudiness covering a 6 km × 6 km × 6 km cube at a spatial resolution of 50 m and a temporal resolution of 20 s. This provides a unique set of data on the sizes, lifetimes, and life cycles of shallow clouds. This type of information is critical for developing cloud macrophysical schemes for the next generation of weather and climate models.
APA, Harvard, Vancouver, ISO, and other styles
47

Kataoka, R., Y. Fukuda, H. A. Uchida, H. Yamada, Y. Miyoshi, Y. Ebihara, H. Dahlgren, and D. Hampton. "High-speed stereoscopy of aurora." Annales Geophysicae 34, no. 1 (January 18, 2016): 41–44. http://dx.doi.org/10.5194/angeo-34-41-2016.

Full text
Abstract:
Abstract. We performed 100 fps stereoscopic imaging of aurora for the first time. Two identical sCMOS cameras equipped with narrow field-of-view lenses (15° by 15°) were directed at magnetic zenith with the north–south base distance of 8.1 km. Here we show the best example that a rapidly pulsating diffuse patch and a streaming discrete arc were observed at the same time with different parallaxes, and the emission altitudes were estimated as 85–95 km and > 100 km, respectively. The estimated emission altitudes are consistent with those estimated in previous studies, and it is suggested that high-speed stereoscopy is useful to directly measure the emission altitudes of various types of rapidly varying aurora. It is also found that variation of emission altitude is gradual (e.g., 10 km increase over 5 s) for pulsating patches and is fast (e.g., 10 km increase within 0.5 s) for streaming arcs.
APA, Harvard, Vancouver, ISO, and other styles
48

Woods, A. J., J. D. Penrose, A. J. Duncan, R. Koch, and D. Clark. "IMPROVING THE OPERABILITY OF REMOTELY OPERATED VEHICLES." APPEA Journal 38, no. 1 (1998): 849. http://dx.doi.org/10.1071/aj97057.

Full text
Abstract:
Underwater remotely operated vehicles (ROVs) have a significant support role to play in offshore petroleum production facilities. The extent to which ROVs can replace diver-based operations depends significantly on ROV capacity and the relative costs of mobilising and implementing the two modes of underwater operation. This paper presents work directed at two aspects of ROV operability: the quality of visual information presented to the ROV pilots and the degree of station-keeping control exhibited by the vehicle.

Significant improvement in pilot performance of selected maintenance-type tasks has been achieved by the use of a purpose-built underwater stereoscopic video camera and an associated ship-based stereoscopic display unit. Two generations of cameras have now been built and used on a Perry Triton vehicle in use at the North Rankin A platform on the North West Shelf.

In a related program, stereoscopic images of the platform structure are processed to determine the relative position of the ROV. Changes in position are used as inputs to thruster control algorithms, with a view to enabling the vehicle to hold position in fluctuating current fields. The position data from the processed 3D images are linked to output from an on-board inertial system to enable position to be maintained despite periodic loss of visual information.

First trials of the combined vision-inertial system indicated some success, notably using the vision system, but indicated difficulties with the inertial package and its integration into the control process. An extension of this project is now being supported by the Australian Maritime Engineering Cooperative Research Centre (AMECRC).
APA, Harvard, Vancouver, ISO, and other styles
49

Martins, Henrique, Ian Oakley, and Rodrigo Ventura. "Design and evaluation of a head-mounted display for immersive 3D teleoperation of field robots." Robotica 33, no. 10 (May 29, 2014): 2166–85. http://dx.doi.org/10.1017/s026357471400126x.

Full text
Abstract:
This paper describes and evaluates the use of a head-mounted display (HMD) for the teleoperation of a field robot. The HMD presents a pair of video streams to the operator (one to each eye) originating from a pair of stereo cameras located on the front of the robot, thus providing him/her with a sense of depth (stereopsis). A tracker on the HMD captures 3-DOF head orientation data which is then used for adjusting the camera orientation by moving the robot and/or the camera position accordingly, and rotating the displayed images to compensate for the operator's head rotation. This approach was implemented in a search and rescue robot (RAPOSA), and it was empirically validated in a series of short user studies. This evaluation involved four experiments covering two-dimensional perception, depth perception, scene perception, and performing a search and rescue task in a controlled scenario. The stereoscopic display and head tracking are shown to afford a number of performance benefits. However, one experiment also revealed that controlling robot orientation with yaw input from the head tracker negatively influenced task completion time. A possible explanation is a mismatch between the abilities of the robot and the human operator. This aside, the studies indicated that the use of an HMD to create a stereoscopic visualization of the camera feeds from a mobile robot enhanced the perception of cues in a static three-dimensional environment and also that such benefits transferred to simulated field scenarios in the form of enhanced task completion times.
APA, Harvard, Vancouver, ISO, and other styles
50

Repola, L., R. Memmolo, and D. Signoretti. "INSTRUMENTS AND METHODOLOGIES FOR THE UNDERWATER TRIDIMENSIONAL DIGITIZATION AND DATA MUSEALIZATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5/W5 (April 9, 2015): 183–90. http://dx.doi.org/10.5194/isprsarchives-xl-5-w5-183-2015.

Full text
Abstract:
In research started within the SINAPSIS project of the Università degli Studi Suor Orsola Benincasa, an underwater stereoscopic scanning system has been developed, aimed at the survey of submerged archaeological sites and integrable with standard systems for geomorphological survey of the coast. The project involves the construction of hardware consisting of an aluminum frame supporting a pair of GoPro Hero Black Edition cameras, and software for the production of point clouds and the initial processing of data.

The software has features for stereoscopic vision system calibration, reduction of the noise and distortion of underwater captured images, searching for corresponding points of stereoscopic images using stereo-matching algorithms (dense and sparse), and point-cloud generation and filtering.

Only after various calibration and survey tests carried out during the excavations envisaged in the project was mastery of the methods for efficient data acquisition achieved. The current development of the system has allowed the generation of portions of digital models of real submerged scenes. A semi-automatic procedure for global registration of the partial models is under development as a useful aid for the study and musealization of sites.
APA, Harvard, Vancouver, ISO, and other styles