Journal articles on the topic "Photometric image"

To see the other types of publications on this topic, follow the link: Photometric image.

Consult the top 50 journal articles for your research on the topic "Photometric image".

Next to every source in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when the corresponding data are available in the publication's metadata.

Browse journal articles in a wide variety of disciplines and compile your bibliography correctly.

1

Miyazaki, Daisuke, and Kazuya Uegomori. "Example-Based Multispectral Photometric Stereo for Multi-Colored Surfaces." Journal of Imaging 8, no. 4 (April 11, 2022): 107. http://dx.doi.org/10.3390/jimaging8040107.

Abstract:
A photometric stereo needs three images taken under three different light directions lit one by one, while a color photometric stereo needs only one image taken under three different lights lit at the same time with different light directions and different colors. As a result, a color photometric stereo can obtain the surface normal of a dynamically moving object from a single image. However, the conventional color photometric stereo cannot estimate a multicolored object due to the colored illumination. This paper uses an example-based photometric stereo to solve the problem of the color photometric stereo. The example-based photometric stereo searches the surface normal from the database of the images of known shapes. Color photometric stereos suffer from mathematical difficulty, and they add many assumptions and constraints; however, the example-based photometric stereo is free from such mathematical problems. The process of our method is pixelwise; thus, the estimated surface normal is not oversmoothed, unlike existing methods that use smoothness constraints. To demonstrate the effectiveness of this study, a measurement device that can realize the multispectral photometric stereo method with sixteen colors is employed instead of the classic color photometric stereo method with three colors.
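
For context, the classic calibrated three-light Lambertian photometric stereo that the abstract contrasts with (not the authors' example-based or multispectral method) can be sketched as a per-pixel least-squares solve; array shapes and names below are illustrative.

```python
# Minimal sketch of classic Lambertian photometric stereo with k >= 3 known
# light directions (the baseline the abstract contrasts with, not the
# example-based method of the paper).
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (k, h, w) grayscale images; light_dirs: (k, 3) unit light directions."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                 # (k, h*w) intensity matrix
    L = np.asarray(light_dirs, dtype=float)   # (k, 3)
    # Solve L @ g = I per pixel in the least-squares sense; g = albedo * normal
    g, *_ = np.linalg.lstsq(L, I, rcond=None)       # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)          # unit surface normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```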
2

Song, Euijeong, Seokjung Kim, Seok Chung, and Minho Chang. "SRPS–deep-learning-based photometric stereo using superresolution images." Journal of Computational Design and Engineering 8, no. 4 (June 14, 2021): 995–1012. http://dx.doi.org/10.1093/jcde/qwab025.

Abstract:
Abstract This paper introduces a novel deep-learning-based photometric stereo method that uses superresolution (SR) images: SR photometric stereo. Recent deep-learning-based SR algorithms have yielded great results in terms of enlarging images without mosaic effects. Supposing that the SR algorithms successfully enhance the feature and colour information of original images, implementing SR images using the photometric stereo method facilitates the use of considerably more information on the object than existing photometric stereo methods. We built a novel deep-learning-based network for the photometric stereo technique to optimize the input–output of SR image inputs and normal map outputs. We tested our network using the most widely used benchmark dataset and obtained better results than existing photometric stereo methods.
3

Baslamisli, Anil S., Partha Das, Hoang-An Le, Sezer Karaoglu, and Theo Gevers. "ShadingNet: Image Intrinsics by Fine-Grained Shading Decomposition." International Journal of Computer Vision 129, no. 8 (May 27, 2021): 2445–73. http://dx.doi.org/10.1007/s11263-021-01477-5.

Abstract:
In general, intrinsic image decomposition algorithms interpret shading as one unified component including all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail in distinguishing strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct (illumination) and indirect shading (ambient light and shadows) subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects to analyze the disentanglement capability. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground-truths. Large scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms utilizing unified shading on NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS and SRD datasets.
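
For reference, a minimal way to write the fine-grained model implied by the abstract is the usual intrinsic decomposition with the shading term split into direct and indirect parts (the paper's exact formulation may differ in detail):

```latex
% Intrinsic image model with the shading split described in the abstract:
% reflectance R times shading S, where S separates direct illumination from
% ambient light and shadows. Products are taken pixelwise.
I(x) = R(x)\,S(x), \qquad S(x) = S_{\mathrm{dir}}(x) + S_{\mathrm{ind}}(x)
```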
4

Yu, Yong Yan. "Dense 3D Reconstruction Based on Photometric Stereo with Unknown Light Source via Energy Minimization Framework." Applied Mechanics and Materials 427-429 (September 2013): 1776–80. http://dx.doi.org/10.4028/www.scientific.net/amm.427-429.1776.

Abstract:
In this paper, a novel method is proposed to achieve a dense 3D reconstruction of objects using photometric stereo without any prior knowledge of the light source. The photometric image matrix I is constructed with its columns equal to the number of photometric images captured and its rows equal to the number of pixels in a photometric image. A per-pixel initial surface normal estimate is computed based upon the SVD of the image matrix I. An effective regularization technique is then applied to the initial normal estimates within an energy minimization framework solved via graph cuts, which regularizes them while better preserving the underlying discontinuities. Finally, the regularized surface normals are integrated to recover the surface of the object. The algorithm has been tested on synthetic as well as real datasets, and very encouraging results have been obtained.
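
A minimal sketch of the SVD-based initial estimate described above, with the image matrix I holding one row per pixel and one column per image; the graph-cut regularization and integration steps are not shown, and the rank-3 factor is only defined up to a linear ambiguity.

```python
# Per-pixel initial normal estimate from the SVD of the image matrix
# I (rows = pixels, columns = images): keep the rank-3 part and normalize.
import numpy as np

def initial_normals_svd(I, image_shape):
    """I: (n_pixels, n_images) intensity matrix; image_shape: (h, w)."""
    U, s, Vt = np.linalg.svd(I, full_matrices=False)
    G = U[:, :3] * np.sqrt(s[:3])                 # (n_pixels, 3) scaled pseudo-normals
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    N = G / np.maximum(norms, 1e-8)               # unit-length initial normals
    return N.reshape(*image_shape, 3)
```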
5

Peng, Man, Kaichang Di, Yexin Wang, Wenhui Wan, Zhaoqin Liu, Jia Wang, and Lichun Li. "A Photogrammetric-Photometric Stereo Method for High-Resolution Lunar Topographic Mapping Using Yutu-2 Rover Images." Remote Sensing 13, no. 15 (July 28, 2021): 2975. http://dx.doi.org/10.3390/rs13152975.

Abstract:
Topographic products are important for mission operations and scientific research in lunar exploration. In a lunar rover mission, high-resolution digital elevation models are typically generated at waypoints by photogrammetry methods based on rover stereo images acquired by stereo cameras. In case stereo images are not available, the stereo-photogrammetric method will not be applicable. Alternatively, photometric stereo method can recover topographic information with pixel-level resolution from three or more images, which are acquired by one camera under the same viewing geometry with different illumination conditions. In this research, we extend the concept of photometric stereo to photogrammetric-photometric stereo by incorporating collinearity equations into imaging irradiance model. The proposed photogrammetric-photometric stereo algorithm for surface construction involves three steps. First, the terrain normal vector in object space is derived from collinearity equations, and image irradiance equation for close-range topographic mapping is determined. Second, based on image irradiance equations of multiple images, the height gradients in image space can be solved. Finally, the height map is reconstructed through global least-squares surface reconstruction with spectral regularization. Experiments were carried out using simulated lunar rover images and actual lunar rover images acquired by Yutu-2 rover of Chang’e-4 mission. The results indicate that the proposed method achieves high-resolution and high-precision surface reconstruction, and outperforms the traditional photometric stereo methods. The proposed method is valuable for ground-based lunar surface reconstruction and can be applicable to surface reconstruction of Earth and other planets.
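
The final step, integrating per-pixel height gradients into a height map by a global least-squares fit in the frequency domain, can be sketched as below (a Frankot–Chellappa-style integrator; the exact spectral regularization used by the authors may differ).

```python
# Global least-squares integration of height gradients p = dz/dx, q = dz/dy
# in the Fourier domain (one common choice for this reconstruction step).
import numpy as np

def integrate_gradients(p, q):
    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                         # avoid division by zero at DC
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                             # absolute height offset is unknown
    return np.real(np.fft.ifft2(Z))
```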
6

Tinbergen, J. "Array Polarimetry and Optical-Differencing Photometry." Symposium - International Astronomical Union 167 (1995): 197–205. http://dx.doi.org/10.1017/s0074180900056448.

Abstract:
Array detectors have improved the efficiency of optical polarimetry sufficiently for this technique to become part of the standard arsenal of observational facilities. However, we could gain even more: spatially-differentiating photometry can be implemented as an option of array Polarimeters and low-noise, high-frame-rate array detectors will allow extremely high precision both in polarimetry and in such differentiating photometry. The latter would be valuable for analyzing many kinds of optical or infrared images of very low contrast; the essence of the technique is to use optical (and extremely stable) means to produce the spatial derivative of the flux image, in the form of a polarization image which is then presented to a “standard” array polarimeter. The polarimeter should incorporate a polarization modulator of sufficient quality for the photometric application in mind. If developed properly, using a state-of-the-art array detector and the most sensitive type of polarization modulator (stress-birefringence), optical differencing will allow levels of relative photometric precision not otherwise obtainable. With the optical differencing option taken out of the beam, the same instrument can be used for high-quality polarimetry.
7

Davies, L. J. M., J. E. Thorne, A. S. G. Robotham, S. Bellstedt, S. P. Driver, N. J. Adams, M. Bilicki, et al. "Deep Extragalactic VIsible Legacy Survey (DEVILS): consistent multiwavelength photometry for the DEVILS regions (COSMOS, XMMLSS, and ECDFS)." Monthly Notices of the Royal Astronomical Society 506, no. 1 (June 5, 2021): 256–87. http://dx.doi.org/10.1093/mnras/stab1601.

Abstract:
The Deep Extragalactic VIsible Legacy Survey (DEVILS) is an ongoing high-completeness, deep spectroscopic survey of ∼60 000 galaxies to Y < 21.2 mag, over ∼6 deg² in three well-studied deep extragalactic fields: D10 (COSMOS), D02 (XMMLSS), and D03 (ECDFS). Numerous DEVILS projects all require consistent, uniformly derived and state-of-the-art photometric data with which to measure galaxy properties. Existing photometric catalogues in these regions either use varied photometric measurement techniques for different facilities/wavelengths leading to inconsistencies, older imaging data and/or rely on source detection and photometry techniques with known problems. Here, we use the ProFound image analysis package and state-of-the-art imaging data sets (including Subaru-HSC, VST-VOICE, VISTA-VIDEO, and UltraVISTA-DR4) to derive matched-source photometry in 22 bands from the FUV to 500 μm. This photometry is found to be consistent, or better, in colour analysis to previous approaches using fixed-size apertures (which are specifically tuned to derive colours), but produces superior total source photometry, essential for the derivation of stellar masses, star formation rates, star formation histories, etc. Our photometric catalogue is described in detail and, after internal DEVILS team projects, will be publicly released for use by the broader scientific community.
8

Diaz, Mauricio, and Peter Sturm. "Estimating Photometric Properties from Image Collections." Journal of Mathematical Imaging and Vision 47, no. 1-2 (May 4, 2013): 93–107. http://dx.doi.org/10.1007/s10851-013-0442-7.

9

Hadj-Abdelkader, Hicham, Omar Tahri, and Houssem-Eddine Benseddik. "Rotation Estimation: A Closed-Form Solution Using Spherical Moments." Sensors 19, no. 22 (November 14, 2019): 4958. http://dx.doi.org/10.3390/s19224958.

Abstract:
Photometric moments are global descriptors of an image that can be used to recover motion information. This paper uses spherical photometric moments for a closed-form estimation of 3D rotations from images. Since the descriptors used are global and not of the geometrical kind, they make it possible to avoid image processing steps such as feature extraction, matching, and tracking. The proposed scheme, based on spherical projection, can be used for the different vision sensors obeying the central unified model: conventional, fisheye, and catadioptric. Experimental results using both synthetic data and real images in different scenarios are provided to show the efficiency of the proposed method.
10

Lu, Liang, Hongbao Zhu, Junyu Dong, Yakun Ju, and Huiyu Zhou. "Three-Dimensional Reconstruction with a Laser Line Based on Image In-Painting and Multi-Spectral Photometric Stereo." Sensors 21, no. 6 (March 18, 2021): 2131. http://dx.doi.org/10.3390/s21062131.

Abstract:
This paper presents a multi-spectral photometric stereo (MPS) method based on image in-painting, which can reconstruct the shape using a multi-spectral image with a laser line. One of the difficulties in multi-spectral photometric stereo is to extract the laser line because the required illumination for MPS, e.g., red, green, and blue light, may pollute the laser color. Unlike previous methods, through the improvement of the network proposed by Isola, a Generative Adversarial Network based on image in-painting was proposed, to separate a multi-spectral image with a laser line into a clean laser image and an uncorrupted multi-spectral image without the laser line. Then these results were substituted into the method proposed by Fan to obtain high-precision 3D reconstruction results. To make the proposed method applicable to real-world objects, a rendered image dataset obtained using the rendering models in ShapeNet has been used for training the network. Evaluation using the rendered images and real-world images shows the superiority of the proposed approach over several previous methods.
11

Yin, Jun E., Daniel J. Eisenstein, Douglas P. Finkbeiner, and Pavlos Protopapas. "A Conditional Autoencoder for Galaxy Photometric Parameter Estimation." Publications of the Astronomical Society of the Pacific 134, no. 1034 (April 1, 2022): 044502. http://dx.doi.org/10.1088/1538-3873/ac5847.

Abstract:
Astronomical photometric surveys routinely image billions of galaxies, and traditionally infer the parameters of a parametric model for each galaxy. This approach has served us well, but the computational expense of deriving a full posterior probability distribution function is a challenge for increasingly ambitious surveys. In this paper, we use deep learning methods to characterize galaxy images, training a conditional autoencoder on mock data. The autoencoder can reconstruct and denoise galaxy images via a latent space engineered to include semantically meaningful parameters, such as brightness, location, size, and shape. Our model recovers galaxy fluxes and shapes on mock data with a lower variance than the Hyper Suprime-Cam photometry pipeline, and returns reasonable answers even for inputs outside the range of its training data. When applied to data in the training range, the regression errors on all extracted parameters are nearly unbiased with a variance near the Cramér-Rao bound.
14

Colman, Isabel L., Timothy R. Bedding, Daniel Huber, and Hans Kjeldsen. "The Kepler IRIS Catalog: Image Subtraction Light Curves for 9150 Stars in and around the Open Clusters NGC 6791 and NGC 6819." Astrophysical Journal Supplement Series 258, no. 2 (February 1, 2022): 39. http://dx.doi.org/10.3847/1538-4365/ac3a11.

Abstract:
Abstract The four-year Kepler mission collected long-cadence images of the open clusters NGC 6791 and NGC 6819, known as “superstamps”. Each superstamp region is a 200 pixel square that captures thousands of cluster members, plus foreground and background stars, of which only the brightest were targeted for long- or short-cadence photometry during the Kepler mission. Using image subtraction photometry, we have produced light curves for every object in the Kepler Input Catalog that falls on the superstamps. The Increased Resolution Image Subtraction (IRIS) catalog includes light curves for 9150 stars, and contains a wealth of new data: 8427 of these stars were not targeted at all by Kepler, and we have increased the number of available quarters of long-cadence data for 382 stars. The catalog is available as a high-level science product on MAST, with both raw photometric data for each quarter and corrected light curves for all available quarters for each star. We also present an introduction to our implementation of image subtraction photometry and the open-source IRIS pipeline, alongside an overview of the data products, systematics, and catalog statistics.
15

Polák, P., T. Sakowski, E. N. Blanco Roa, J. Huba, E. Krupa, J. Tomka, D. Peškovičová, M. Oravcová, and P. Strapák. "Use of computer image analysis for in vivo estimates of the carcass quality of bulls." Czech Journal of Animal Science 52, no. 12 (January 7, 2008): 430–36. http://dx.doi.org/10.17221/2333-cjas.

Abstract:
The aims of the paper were to construct models for the estimation of carcass quality by means of computer image analysis and to verify computer photometry as an in vivo method of carcass quality prediction. Results of photometric measurements and carcass quality of 118 Slovak Pied bulls slaughtered at the age of 15 to 18 months were analysed. Nine length dimensions and four area dimensions were measured on the images of the top, left and rear view of each animal. Hot carcass weight (HCW), weight of meat in carcass (WMC) and weight of meat in valuable cuts (WMVC) were obtained after slaughter treatment and carcass dissection. HCW, WMC and WMVC revealed a maximum correlation with the top-view body area (r = 0.54–0.60) and thurl width (r = 0.58–0.60). Stepwise regression was applied to construct linear regression equations for HCW, WMC and WMVC in two alternatives using photometrical dimensions with and without weight before slaughter (WBS). R² in the alternative without WBS were lower (R² = 0.47–0.55); however, R² in the alternative with weight before slaughter were higher and highly significant (R² = 0.83–0.92). In both alternatives, the equation for HCW had the highest R² and the equation for WMVC had the lowest R². Equations using photometric dimensions and WBS are suitable to estimate HCW, WMC and WMVC without detailed dissection.
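
The kind of linear prediction equation described above can be illustrated with a toy fit; the column names and numbers below are hypothetical, not the paper's data.

```python
# Toy linear regression of hot carcass weight (HCW) on photometric body
# dimensions plus weight before slaughter (WBS); data are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[2.10, 0.55, 610.0],     # top-view body area (m^2), thurl width (m), WBS (kg)
              [1.92, 0.52, 575.0],
              [2.28, 0.58, 640.0],
              [2.05, 0.54, 598.0],
              [2.35, 0.60, 655.0]])
y = np.array([345.0, 322.0, 361.0, 338.0, 370.0])   # HCW (kg)

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_, model.score(X, y))   # coefficients and R^2
```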
16

Hauagge, Daniel, Scott Wehrwein, Kavita Bala, and Noah Snavely. "Photometric Ambient Occlusion for Intrinsic Image Decomposition." IEEE Transactions on Pattern Analysis and Machine Intelligence 38, no. 4 (April 1, 2016): 639–51. http://dx.doi.org/10.1109/tpami.2015.2453959.

17

Bartoli, A. "Groupwise Geometric and Photometric Direct Image Registration." IEEE Transactions on Pattern Analysis and Machine Intelligence 30, no. 12 (December 2008): 2098–108. http://dx.doi.org/10.1109/tpami.2008.22.

18

Julià, Carme, Rodrigo Moreno, Domenec Puig, and Miguel Angel Garcia. "Shape-based image segmentation through photometric stereo." Computer Vision and Image Understanding 115, no. 1 (January 2011): 91–104. http://dx.doi.org/10.1016/j.cviu.2010.09.009.

19

Paul, Grégory, Janick Cardinale, and Ivo F. Sbalzarini. "Coupling Image Restoration and Segmentation: A Generalized Linear Model/Bregman Perspective." International Journal of Computer Vision 104, no. 1 (March 8, 2013): 69–93. http://dx.doi.org/10.1007/s11263-013-0615-2.

Abstract:
Abstract We introduce a new class of data-fitting energies that couple image segmentation with image restoration. These functionals model the image intensity using the statistical framework of generalized linear models. By duality, we establish an information-theoretic interpretation using Bregman divergences. We demonstrate how this formulation couples in a principled way image restoration tasks such as denoising, deblurring (deconvolution), and inpainting with segmentation. We present an alternating minimization algorithm to solve the resulting composite photometric/geometric inverse problem. We use Fisher scoring to solve the photometric problem and to provide asymptotic uncertainty estimates. We derive the shape gradient of our data-fitting energy and investigate convex relaxation for the geometric problem. We introduce a new alternating split-Bregman strategy to solve the resulting convex problem and present experiments and comparisons on both synthetic and real-world images.
20

Karami, A., F. Menna, and F. Remondino. "INVESTIGATING 3D RECONSTRUCTION OF NON-COLLABORATIVE SURFACES THROUGH PHOTOGRAMMETRY AND PHOTOMETRIC STEREO." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 519–26. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-519-2021.

Abstract:
Abstract. 3D digital reconstruction techniques are extensively used for quality control purposes. Among them, photogrammetry and photometric stereo methods have been for a long time used with success in several application fields. However, generating highly-detailed and reliable micro-measurements of non-collaborative surfaces is still an open issue. In these cases, photogrammetry can provide accurate low-frequency 3D information, whereas it struggles to extract reliable high-frequency details. Conversely, photometric stereo can recover a very detailed surface topography, although global surface deformation is often present. In this paper, we present the preliminary results of an ongoing project aiming to combine photogrammetry and photometric stereo in a synergetic fusion of the two techniques. Particularly, hereafter, we introduce the main concept design behind an image acquisition system we developed to capture images from different positions and under different lighting conditions as required by photogrammetry and photometric stereo techniques. We show the benefit of such a combination through some experimental tests. The experiments showed that the proposed method recovers the surface topography at the same high-resolution achievable with photometric stereo while preserving the photogrammetric accuracy. Furthermore, we exploit light directionality and multiple light sources to improve the quality of dense image matching in poorly textured surfaces.
21

Varela, Jesús, David Cristóbal-Hornillos, Javier Cenarro, Alessandro Ederoclite, David Muniesa, Héctor Vázquez Ramió, Nicolas Gruel, and Mariano Moles. "Statistical Challenges in the Photometric Calibration for 21st Century Cosmology: The J-PAS case." Proceedings of the International Astronomical Union 10, S306 (May 2014): 359–61. http://dx.doi.org/10.1017/s1743921314010746.

Abstract:
The success of many cosmological surveys in the near future is highly grounded on the quality of their photometry. The Javalambre-PAU Astrophysical Survey (J-PAS) will image more than 8500 deg² of the Northern Sky Hemisphere in 54 narrow + 2 medium/broad optical bands plus Sloan u, g and r bands. The main goal of J-PAS is to provide the best constraints on the cosmological parameters before the arrival of projects like Euclid or LSST. To achieve this goal the uncertainty in photo-z cannot be larger than 0.3% for several millions of galaxies, and this is highly dependent on the photometric accuracy. The photometric calibration of J-PAS will involve the intensive use of huge amounts of data, and the use of statistical tools is unavoidable. Here, we present some of the key steps in the photometric calibration of J-PAS that will demand a suitable statistical approach.
22

Deline, A., D. Queloz, B. Chazelas, M. Sordet, F. Wildi, A. Fortier, C. Broeg, D. Futyan, and W. Benz. "Expected performances of the Characterising Exoplanet Satellite (CHEOPS)." Astronomy & Astrophysics 635 (March 2020): A22. http://dx.doi.org/10.1051/0004-6361/201935977.

Abstract:
Context. The characterisation of Earth-size exoplanets through transit photometry has stimulated new generations of high-precision instruments. In that respect, the Characterising Exoplanet Satellite (CHEOPS) is designed to perform photometric observations of bright stars to obtain precise radii measurements of transiting planets. The CHEOPS instrument will have the capability to follow up bright hosts provided by radial-velocity facilities. With the recent launch of the Transiting Exoplanet Survey Satellite (TESS), CHEOPS may also be able to confirm some of the long-period TESS candidates and to improve the radii precision of confirmed exoplanets. Aims. The high-precision photometry of CHEOPS relies on careful on-ground calibration of its payload. For that purpose, intensive pre-launch campaigns of measurements were carried out to calibrate the instrument and characterise its photometric performances. This work reports on the main results of these campaigns. It provides a complete analysis of data sets and estimates in-flight photometric performance by means of an end-to-end simulation. Instrumental systematics were measured by carrying out long-term calibration sequences. Using an end-to-end model, we simulated transit observations to evaluate the impact of in-orbit behaviour of the satellite and to determine the achievable precision on the planetary radii measurement. Methods. After introducing key results from the payload calibration, we focussed on the data analysis of a series of long-term measurements of uniformly illuminated images. The recorded frames were corrected for instrumental effects and a mean photometric signal was computed on each image. The resulting light curve was corrected for systematics related to laboratory temperature fluctuations. Transit observations were simulated, considering the payload performance parameters. The data were corrected using calibration results and estimates of the background level and position of the stellar image. The light curve was extracted using aperture photometry and analysed with a transit model using a Markov chain Monte Carlo algorithm. Results. In our analysis, we show that the calibration test set-up induces thermally correlated features in the data that can be corrected in post-processing to improve the quality of the light curves. We find that the on-ground photometric performance of the instrument measured after this correction is of the order of 15 parts per million over five hours. Using our end-to-end simulation, we determine that measurements of planet-to-star radii ratio with a precision of 2% for a Neptune-size planet transiting a K-dwarf star and 5% for an Earth-size planet orbiting a Sun-like star are possible with CHEOPS. These values correspond to transit depths obtained with signal-to-noise ratios of 25 and 10, respectively, allowing the characterisation and detection of these planets. The pre-launch CHEOPS performances are shown to be compliant with the mission requirements.
23

Zhou, Xingchen, Yan Gong, Xian-Min Meng, Ye Cao, Xuelei Chen, Zhu Chen, Wei Du, Liping Fu, and Zhijian Luo. "Extracting photometric redshift from galaxy flux and image data using neural networks in the CSST survey." Monthly Notices of the Royal Astronomical Society 512, no. 3 (March 23, 2022): 4593–603. http://dx.doi.org/10.1093/mnras/stac786.

Abstract:
The accuracy of galaxy photometric redshift (photo-z) can significantly affect the analysis of weak gravitational lensing measurements, especially for future high-precision surveys. In this work, we try to extract photo-z information from both galaxy flux and image data expected to be obtained by the China Space Station Telescope (CSST) using neural networks. We generate mock galaxy images based on the observational images from the Advanced Camera for Surveys of the Hubble Space Telescope (HST-ACS) and COSMOS catalogues, considering the CSST instrumental effects. Galaxy flux data are then measured directly from these images by aperture photometry. The multilayer perceptron (MLP) and convolutional neural network (CNN) are constructed to predict photo-z from fluxes and images, respectively. We also propose to use an efficient hybrid network, which combines the MLP and CNN, by employing transfer learning techniques to investigate the improvement of the result with both flux and image data included. We find that the photo-z accuracy and outlier fraction can achieve σNMAD = 0.023 and η = 1.43 per cent for the MLP using flux data only, and σNMAD = 0.025 and η = 1.21 per cent for the CNN using image data only. The result can be further improved in high efficiency as σNMAD = 0.020 and η = 0.90 per cent for the hybrid transfer network. These approaches result in similar galaxy median and mean redshifts 0.8 and 0.9, respectively, for the redshift range from 0 to 4. This indicates that our networks can effectively and properly extract photo-z information from the CSST galaxy flux and image data.
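
A minimal sketch of the flux-only branch described above, an MLP regressing photo-z from band fluxes and scored with σNMAD, is shown below; the architecture, toy data, and band count are illustrative rather than the paper's.

```python
# Toy MLP photo-z regressor from broadband fluxes, scored with the
# normalized median absolute deviation (sigma_NMAD). Data are random.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
fluxes = rng.lognormal(size=(1000, 7))            # toy fluxes in 7 bands
z_true = rng.uniform(0.0, 4.0, size=1000)         # toy redshifts

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
mlp.fit(np.log10(fluxes), z_true)
dz = mlp.predict(np.log10(fluxes)) - z_true
sigma_nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)) / (1 + z_true))
print(sigma_nmad)
```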
24

Usami, Hiroyasu, Yuji Iwahori, Yuki Hanai, Boonserm Kijsirikul, and Kunio Kasugai. "Recovering Polyp Shape from an Endoscope Image Using Two Light Sources." International Journal of Software Innovation 5, no. 2 (April 2017): 33–54. http://dx.doi.org/10.4018/ijsi.2017040103.

Abstract:
This paper proposes a new approach to recover the polyp shape from an endoscope image using a photometric constraint equation that considers two light sources. The procedure is as follows. First, obtain the initial depth distributions by optimizing the photometric equation obtained from the two light sources. Next, obtain the surface normal vector from depth using numerical differences at each point. Then, the mapping between the obtained normal vector and the true normal vector is learned using a Radial Basis Function Neural Network for a Lambertian sphere, and the learning is generalized to an actual polyp image. Finally, optimize the depth using the obtained surface normals to recover the final 3D shape. The validity of this method is confirmed in comparison with previous methods via computer simulation and experiments using actual endoscope images.
25

Zhong, Libo, Lanqiang Zhang, Zhendong Shi, Yu Tian, Youming Guo, Lin Kong, Xuejun Rao, Hua Bao, Lei Zhu, and Changhui Rao. "Wide field-of-view, high-resolution Solar observation in combination with ground layer adaptive optics and speckle imaging." Astronomy & Astrophysics 637 (May 2020): A99. http://dx.doi.org/10.1051/0004-6361/201935109.

Abstract:
Context. High angular resolution images at a wide field of view are required for investigating Solar physics and predicting space weather. Ground-based observations are often subject to adaptive optics (AO) correction and post-facto reconstruction techniques to improve the spatial resolution. The combination of ground layer adaptive optics (GLAO) and speckle imaging is appealing with regard to a simplification of the correction and the high resolution of the reconstruction. The speckle transfer functions (STFs) used in the speckle image reconstruction mainly determine the photometric accuracy of the recovered result. The STF model proposed by Friedrich Wöger and Oskar von der Lühe in the classical AO condition is generic enough to accommodate the GLAO condition if correct inputs are given. Thus, the precisely calculated inputs to the model STF are essential for the final results. The necessary input for the model STF is the correction efficiency which can be calculated simply with the assumption of one layer turbulence. The method for calculating the correction efficiency for the classical AO condition should also be improved to suit the GLAO condition. The generic average height of the turbulence layer used by Friedrich Wöger and Oskar von der Lühe in the classic AO correction may lead to reduced accuracy and should be revised to improve photometric accuracy. Aims. This study is aimed at obtaining quantitative photometric reconstructed images in the GLAO condition. We propose methods for extracting the appropriate inputs for the STF model. Methods. In this paper, the telemetry data of the GLAO system was used to extract the correction efficiency and the equivalent height of the turbulence. To analyze the photometric accuracy of the method, the influence resulting from the distribution of the atmospheric turbulence profile and the extension of the guide stars are investigated by simulations. At those simulations, we computed the STF from the wavefront phases and convolved it with the high-resolution numerical simulations of the solar photosphere. We then deconvolved them with the model STF calculated from the correction efficiency and the equivalent height to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. We reconstructed the solar images taken by the GLAO prototype system at the New Vacuum Solar Telescope of the Yunnan Astronomical Observatory using this method and analyzed the results. Results. These simulations and ensuing analysis demonstrate that high photometric precision can be obtained for speckle amplitude reconstruction using the inputs for the model STF derived from the telemetry data of the GLAO system.
26

Bramich, D. M., E. Bachelet, K. A. Alsubai, D. Mislis, and N. Parley. "Difference image analysis: The interplay between the photometric scale factor and systematic photometric errors." Astronomy & Astrophysics 577 (May 2015): A108. http://dx.doi.org/10.1051/0004-6361/201526025.

27

Liu, W. C., and B. Wu. "PHOTOMETRIC STEREO SHAPE-AND-ALBEDO-FROM-SHADING FOR PIXEL-LEVEL RESOLUTION LUNAR SURFACE RECONSTRUCTION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W1 (July 25, 2017): 91–97. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w1-91-2017.

Abstract:
Shape and Albedo from Shading (SAfS) techniques recover pixel-wise surface details based on the relationship between terrain slopes, illumination and imaging geometry, and the energy response (i.e., image intensity) captured by the sensing system. Multiple images with different illumination geometries (i.e., photometric stereo) can provide better SAfS surface reconstruction due to the increase in observations. Photometric stereo SAfS is suitable for detailed surface reconstruction of the Moon and other extra-terrestrial bodies due to the availability of photometric stereo and the less complex surface reflecting properties (i.e., albedo) of the target bodies as compared to the Earth. Considering only one photometric stereo pair (i.e., two images), pixel-variant albedo is still a major obstacle to satisfactory reconstruction and it needs to be regulated by the SAfS algorithm. The illumination directional difference between the two images also becomes an important factor affecting the reconstruction quality. This paper presents a photometric stereo SAfS algorithm for pixel-level resolution lunar surface reconstruction. The algorithm includes a hierarchical optimization architecture for handling pixel-variant albedo and improving performance. With the use of Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) photometric stereo images, the reconstructed topography (i.e., the DEM) is compared with the DEM produced independently by photogrammetric methods. This paper also addresses the effect of the illumination directional difference within one photometric stereo pair on the reconstruction quality of the proposed algorithm by both mathematical and experimental analysis. In this case, LROC NAC images under multiple illumination directions are utilized by the proposed algorithm for experimental comparison. The mathematical derivation suggests that an illumination azimuthal difference of 90 degrees between two images is recommended to achieve minimal error in SAfS reconstruction, while results using real data present a similar pattern. Although the algorithm is designed for lunar surface reconstruction, it is likely to be applicable to other extra-terrestrial bodies such as Mars. The results and findings from this research are of significance for the practical use of photometric stereo and SAfS in the domain of planetary remote sensing and mapping.
28

Trentacoste, Matthew, Wolfgang Heidrich, Lorne Whitehead, Helge Seetzen, and Greg Ward. "Photometric image processing for high dynamic range displays." Journal of Visual Communication and Image Representation 18, no. 5 (October 2007): 439–51. http://dx.doi.org/10.1016/j.jvcir.2007.06.006.

29

Argueta-Diaz, Victor, and Augusto García-Valenzuela. "3D image profiler based on passive photometric measurements." Journal of Optics A: Pure and Applied Optics 10, no. 10 (August 28, 2008): 104015. http://dx.doi.org/10.1088/1464-4258/10/10/104015.

30

Harrison, Adam P., and Dileepan Joseph. "Translational photometric alignment of single-view image sequences." Computer Vision and Image Understanding 116, no. 6 (June 2012): 765–76. http://dx.doi.org/10.1016/j.cviu.2012.01.005.

31

Li, Boren, and Tomonari Furukawa. "Microtexture Road Profiling System Using Photometric Stereo." Tire Science and Technology 43, no. 2 (April 1, 2015): 117–43. http://dx.doi.org/10.2346/tire.15.430204.

Abstract:
ABSTRACT This paper presents the design and development of a stationary microtexture road profiling system using the photometric stereo (PS) technique. The structure of the developed system is simple, mainly consisting of a digital single-lens reflex (DSLR) camera with a macro lens and multiple light-emitting diodes (LEDs). The camera with the lens is oriented perpendicularly to the pavement texture and takes images each with a different LED turned on at a time. With the pavement texture images with diverse shadings, the PS technique is applied by inverting the image-forming process locally (pixel-wise) to associate the measured image intensities with the known lighting directions to estimate the gradients for each pixel-corresponding surface patch of the pavement texture. Surface normal integration (SNI) is then employed to reconstruct the three-dimensional (3D) road surface in the microtexture scale. The PS-based system has several intrinsic advantages. First, it could achieve high accuracy for surfaces with most diffuse reflection. Second, the measurement speed is fast because of its area-scanning nature. Third, the spatial resolution is high because of the usage of a high-resolution complementary metal-oxide semiconductor DSLR camera. In addition, it can be less sensitive to effects from specularities and shadows compared with most optical-based methods, since images captured under diverse lighting directions in PS provide more cues for detection purposes. Last but not least, the hardware of the system can be made compact at low cost because of its simple structure and can be adapted for direct measurement on the pavement. Parametric studies for the Lambertian-based PS technique were first investigated analytically and numerically, and these investigations yielded the design of the system having eight LEDs with the same zenith angle of 30 degrees and uniformly distributed azimuth angles in 360 degrees. Several experimental results on various types of surfaces have demonstrated that the developed system could achieve the accuracy in the order of 10 microns.
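
The lighting geometry described above (eight LEDs at a 30-degree zenith angle, azimuths spread uniformly over 360 degrees) translates directly into unit light-direction vectors that could feed a Lambertian photometric stereo solve; the array layout below is illustrative.

```python
# Unit light-direction vectors for eight LEDs at a 30-degree zenith angle
# with azimuths uniformly distributed over 360 degrees.
import numpy as np

zenith = np.deg2rad(30.0)
azimuths = np.deg2rad(np.arange(8) * 360.0 / 8)
light_dirs = np.stack([np.sin(zenith) * np.cos(azimuths),
                       np.sin(zenith) * np.sin(azimuths),
                       np.full(8, np.cos(zenith))], axis=1)   # shape (8, 3), unit norm
```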
32

Alard, C. "New Powerful Methods for Photometry of CCD Images in Crowded Fields." International Astronomical Union Colloquium 176 (2000): 50–55. http://dx.doi.org/10.1017/s0252921100057055.

Abstract:
Image subtraction is an interesting new alternative to the classical profile fitting method (DAOPHOT or DoPHOT) for finding variable stars and producing their light curves. In crowded fields this new method can lead to large improvements in the photometric accuracy. The method is based on finding the best kernel solution in order to match two images as closely as possible. This approach leads to simple mathematical equations. It is possible to find a general solution to these equations which require very reasonable computing times. It is also shown that even in the case of a spatially variable kernel an optimal solution can be found with minimum computing time. Constant flux scaling can be imposed in this case without changing the basis of the algorithm. The method is illustrated using a set of images of the central region of the globular cluster M5. Only 26 variables were found by processing this data set with DoPHOT, while 61 were found with image subtraction. A large photometric improvement was also found for the 26 variables in common. The maximum improvement achieved by using image subtraction was a factor of 20 with respect to DoPHOT. The accuracy achieved with image subtraction is comparable to what was achieved with HST in a small region around the M5 center. One consequence of this photometric improvement was the discovery of an RR Lyrae star pulsating in a nonradial mode in M5. Finally, it is concluded that image subtraction is a technique of choice when dealing with variability, and that it is important to use it when the field is crowded. It is also important to note that image subtraction may open new possibilities in the investigation of very crowded fields, even with relatively small telescopes from the ground.
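
A toy version of the core idea, fitting the convolution kernel that best maps a reference image onto a target frame by linear least squares and then subtracting, might look like the following; the actual method expands the kernel on a Gaussian basis and allows it to vary spatially across the field.

```python
# Toy difference-imaging kernel fit: build a design matrix of shifted copies
# of the reference image, solve for the kernel coefficients in the
# least-squares sense, and subtract the matched model from the target.
import numpy as np
from scipy.ndimage import shift as nd_shift

def fit_kernel_and_subtract(ref, target, half=2):
    size = 2 * half + 1
    cols = []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            cols.append(nd_shift(ref, (dy, dx), order=0, mode='nearest').ravel())
    A = np.stack(cols, axis=1)                        # (n_pixels, size*size)
    k, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    model = (A @ k).reshape(ref.shape)                # reference matched to target
    return k.reshape(size, size), target - model      # kernel, difference image
```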
33

Tanaka, Masayuki, Hiroyuki Ikeda, Kazumi Murata, Satoshi Takita, Sogo Mineo, Michitaro Koike, Yuki Okura, and Sumiko Harasawa. "Hyper Suprime-Cam Legacy Archive." Publications of the Astronomical Society of Japan 73, no. 3 (May 8, 2021): 735–46. http://dx.doi.org/10.1093/pasj/psab034.

Abstract:
Abstract We present the launch of the Hyper Suprime-Cam Legacy Archive (HSCLA), a public archive of processed, science-ready data from the Hyper Suprime-Cam (HSC). HSC is an optical wide-field imager installed at the prime focus of the Subaru Telescope and has been in operation since 2014. While ∼1/3 of the total observing time of HSC has been used for the Subaru Strategic Program (SSP), the remainder of the time is used for Principal Investigator (PI)-based programs. We have processed the data from these PI-based programs and make the processed, high-quality data available to the community through HSCLA. The current version of HSCLA includes data taken in the first year of science operation, 2014. We provide both individual and coadd images as well as photometric catalogs. The photometric catalog from the coadd is loaded to the database, which offers a fast access to the large catalog. There are other online tools such as an image browser and an image cutout tool and they will be useful for science analyses. The coadd images reach 24–27th magnitudes at 5σ for point sources and cover approximately 580 square degrees in at least one filter with 150 million objects in total. We perform extensive quality assurance tests and verify that the photometric and astrometric quality of the data is good enough for most scientific explorations. However, the data are not without problems and users are referred to the list of known issues before exploiting the data for science. All the data and documentations can be found at the data release site, 〈https://hscla.mtk.nao.ac.jp/〉.
34

Geda, Robel, Steven M. Crawford, Lucas Hunt, Matthew Bershady, Erik Tollerud, and Solohery Randriamampandry. "PetroFit: A Python Package for Computing Petrosian Radii and Fitting Galaxy Light Profiles." Astronomical Journal 163, no. 5 (April 8, 2022): 202. http://dx.doi.org/10.3847/1538-3881/ac5908.

Abstract:
PetroFit is an open-source Python package based on Astropy and Photutils that can calculate Petrosian profiles and fit galaxy images. It offers end-to-end tools for making accurate photometric measurements, estimating morphological properties, and fitting 2D models to galaxy images. Petrosian metric radii can be used for model parameter estimation and aperture photometry to provide accurate total fluxes. Correction tools are provided for improving Petrosian radii estimates affected by galaxy morphology. PetroFit also provides tools for sampling Astropy-based models (including custom profiles and multicomponent models) onto image grids and enables point-spread function convolution to account for the effects of seeing. These capabilities provide a robust means of modeling and fitting galaxy light profiles. We have made the PetroFit package publicly available on GitHub (PetroFit/petrofit) and PyPI (pip install petrofit).
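
For orientation, the Petrosian radius that such packages automate can be computed from a circular curve of growth with a few lines of NumPy; this is a generic illustration of the definition, not PetroFit's API.

```python
# Petrosian radius from a curve of growth: the radius where the local surface
# brightness drops to eta (default 0.2) times the mean surface brightness
# enclosed within that radius.
import numpy as np

def petrosian_radius(r, L, eta=0.2):
    """r: increasing aperture radii; L: cumulative flux inside each radius."""
    area = np.pi * r**2
    mean_sb = L / area                              # mean surface brightness within r
    ann_sb = np.diff(L) / np.diff(area)             # local surface brightness near r[1:]
    ratio = ann_sb / mean_sb[1:]                    # Petrosian ratio at r[1:]
    below = np.where(ratio < eta)[0]
    return r[1:][below[0]] if below.size else np.nan   # first crossing of eta
```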
35

Mendez-Vázquez, Heydi, Josef Kittler, Chi Ho Chan, and Edel García-Reyes. "Photometric Normalization for Face Recognition Using Local Discrete Cosine Transform." International Journal of Pattern Recognition and Artificial Intelligence 27, no. 03 (May 2013): 1360005. http://dx.doi.org/10.1142/s0218001413600057.

Abstract:
Variation in illumination is one of the major limiting factors of face recognition system performance. The effect of changes in the incident light on face images is analyzed, as well as its influence on the low-frequency components of the image. Starting from this analysis, a new photometric normalization method for illumination-invariant face recognition is presented. Low-frequency Discrete Cosine Transform coefficients in the logarithmic domain are used in a local way to reconstruct a slowly varying component of the face image which is caused by illumination. After smoothing, this component is subtracted from the original logarithmic image to compensate for illumination variations. Compared to other preprocessing algorithms, our method achieved a very good performance, with a total error rate very similar to that produced by the best performing state-of-the-art algorithm. An in-depth analysis of the two preprocessing methods revealed notable differences in their behavior, which is exploited in a multiple classifier fusion framework to achieve further performance improvement. The superiority of the proposal is demonstrated in both face verification and identification experiments.
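
A simplified, global (rather than local) version of the normalization described above might look like this: take the log image, reconstruct the slowly varying illumination from a few low-frequency DCT coefficients, and subtract it; the coefficient count and the absence of the paper's smoothing step are assumptions of this sketch.

```python
# Global DCT-based illumination normalization of a grayscale face image:
# estimate the low-frequency (illumination) component in the log domain
# and subtract it from the log image.
import numpy as np
from scipy.fft import dctn, idctn

def dct_illumination_normalize(img, n_low=8):
    log_img = np.log1p(img.astype(float))            # log domain (log1p avoids log(0))
    coeffs = dctn(log_img, norm='ortho')
    low = np.zeros_like(coeffs)
    low[:n_low, :n_low] = coeffs[:n_low, :n_low]     # keep low frequencies only
    illumination = idctn(low, norm='ortho')
    return log_img - illumination                    # illumination-compensated image
```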
36

Saiz, Fátima A., Iñigo Barandiaran, Ander Arbelaiz, and Manuel Graña. "Photometric Stereo-Based Defect Detection System for Steel Components Manufacturing Using a Deep Segmentation Network." Sensors 22, no. 3 (January 24, 2022): 882. http://dx.doi.org/10.3390/s22030882.

Abstract:
This paper presents an automatic system for the quality control of metallic components using a photometric stereo-based sensor and a customized semantic segmentation network. The system is designed from interoperable modules and allows the knowledge of the operators to be captured and applied later in automatic defect detection. A salient contribution is the compact representation of the surface information achieved by combining photometric stereo images into an RGB image that is fed to a convolutional segmentation network trained for surface defect detection. We demonstrate the advantage of this compact surface imaging representation over the use of each photometric imaging source of information in isolation. An empirical analysis of the performance of the segmentation network on imaging samples of materials with diverse surface reflectance properties is carried out, achieving Dice performance index values above 0.83 in all cases. The results support the potential of photometric stereo in conjunction with our semantic segmentation network.
37

Vogt, S. S., and A. P. Hatzes. "Doppler images of HR 1099 from 1981–1993." Symposium - International Astronomical Union 176 (1996): 245–59. http://dx.doi.org/10.1017/s0074180900083273.

Abstract:
We present 23 Doppler images of the spotted RS CVn star HR 1099 (V711 Tau) from 1981 to 1993. All of the images show a large variable-shape cool spot straddling the pole. Many of the images also show a number of isolated low-latitude spots. This image set has been used to track the emergence and evolution of spots on this star over the 12-year interval. The Doppler images are compared to previous ‘few-circular-spot’ photometric model solutions when available. These photometric solutions do not well-represent the spot distribution. We discuss, among other things: the location on the stellar surface where spots first appear, their subsequent movement with time, differential rotation from spot motions, and spot cycles and dynamos within HR 1099.
38

Sokolovsky, K. V., A. Z. Bonanos, P. Gavras, M. Yang, D. Hatzidimitriou, M. I. Moretti, A. Karampelas, et al. "The Hubble Catalog of Variables (HCV)." Proceedings of the International Astronomical Union 14, S339 (November 2017): 91–94. http://dx.doi.org/10.1017/s1743921318002296.

Abstract:
The Hubble Source Catalog (HSC) combines lists of sources detected on images obtained with the WFPC2, ACS and WFC3 instruments aboard the Hubble Space Telescope (HST) and now available in the Hubble Legacy Archive. The catalogue contains time-domain information for about two million of its sources detected using the same instrument and filter on at least five HST visits. The Hubble Catalog of Variables (HCV) aims to identify HSC sources showing significant brightness variations. A magnitude-dependent threshold in the median absolute deviation of photometric measurements (an outlier-resistant measure of light-curve scatter) is adopted as the variability detection statistic. It is supplemented with a cut in reduced χ² that removes sources with large photometric errors. A pre-processing procedure involving bad image identification, outlier rejection and computation of local magnitude zero-point corrections is applied to the HSC light-curves before computing the variability detection statistics. About 52 000 HSC sources have been identified as candidate variables, among which 7800 show variability in more than one filter. Visual inspection suggests that ∼70% of the candidates detected in multiple filters are true variables, while the remaining ∼30% are sources with aperture photometry corrupted by blending, imaging artefacts or image processing anomalies. The candidate variables have AB magnitudes in the range 15–27, with a median of 22. Among them are the stars in our own and nearby galaxies, and active galactic nuclei.
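
The two per-source statistics described, the median absolute deviation of the light curve and a reduced χ² against a constant-brightness model, are straightforward to compute as below; the magnitude-dependent thresholds used by the HCV are not reproduced here.

```python
# Per-source variability statistics for a light curve of magnitudes with errors:
# MAD (outlier-resistant scatter) and reduced chi-squared against a constant model.
import numpy as np

def variability_stats(mag, mag_err):
    mad = np.median(np.abs(mag - np.median(mag)))
    chi2_red = np.sum(((mag - np.mean(mag)) / mag_err) ** 2) / (mag.size - 1)
    return mad, chi2_red
```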
39

Xiao, Yao, Xiaogang Ruan, and Xiaoqing Zhu. "PC-VINS-Mono: A Robust Mono Visual-Inertial Odometry with Photometric Calibration." Journal of Autonomous Intelligence 1, no. 2 (January 21, 2019): 29. http://dx.doi.org/10.32629/jai.v1i2.33.

Abstract:
Feature detection and tracking, which rely heavily on the gray-value information of images, are very important procedures for Visual-Inertial Odometry (VIO), and the tracking results significantly affect the accuracy of the estimation and the robustness of the VIO. In environments with high-contrast lighting, the images captured by an auto-exposure camera change frequently with the exposure time. As a result, the gray value of the same feature varies from frame to frame, which poses a large challenge to the feature detection and tracking procedure. Moreover, this problem is further aggravated by the nonlinear camera response function and lens attenuation. However, very few VIO methods take full advantage of photometric camera calibration or discuss the influence of photometric calibration on VIO. In this paper, we propose a robust monocular visual-inertial odometry, PC-VINS-Mono, which can be understood as an extension of the open-source VIO pipeline VINS-Mono with the capability of photometric calibration. We evaluate the proposed algorithm on a public dataset. Experimental results show that, with photometric calibration, our algorithm achieves better performance compared to VINS-Mono.
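
Photometric calibration in this context usually means inverting an image-formation model of the form I(x) = G(t·V(x)·B(x)), with camera response G, exposure time t, vignetting (lens attenuation) V, and scene irradiance B. A hedged sketch of correcting a frame with a previously calibrated inverse response and vignetting map is shown below; the function names and the 8-bit lookup table are assumptions of this sketch, not the paper's code.

```python
# Correct a raw 8-bit frame for nonlinear camera response, vignetting and
# exposure time, given a 256-entry lookup table for the inverse response
# G^{-1} and a per-pixel vignetting map V (both from a prior calibration).
import numpy as np

def photometrically_correct(frame_u8, exposure_time, inv_response, vignette):
    energy = inv_response[frame_u8]              # undo the nonlinear response
    return energy / (vignette * exposure_time)   # undo vignetting and exposure
```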
40

Greco, G., G. Beskin, S. Karpov, S. Bondar, C. Bartolini, A. Guarnieri, and A. Piccioni. "High-Speed and Wide-Field Photometry with TORTORA." Advances in Astronomy 2010 (2010): 1–8. http://dx.doi.org/10.1155/2010/268501.

Full text of the source
Abstract:
We present the photometric analysis of the extended sky fields observed by the TORTORA optical monitoring system. The TORTORA camera is based on a fast TV-CCD matrix coupled to an image intensifier. This approach both significantly reduces the readout noise and shortens the focal length, making it possible to monitor relatively large sky regions with high temporal resolution and an adequate detection limit. The performance of the system has been tested using the relative magnitudes of standard stars in long image sequences collected at different airmasses and at various levels of moonlight. As expected from previous laboratory measurements, artifact sources are negligible and do not affect the photometric results. The analysis is based on a large sample of images acquired by the TORTORA instrument since July 2006.
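The kind of test described above, tracking the relative magnitudes of standard stars across long image sequences, can be sketched with a few lines of generic differential photometry (an illustration under simple assumptions, not the TORTORA pipeline): instrumental magnitudes are formed from background-subtracted aperture fluxes and referenced to a comparison star measured on the same frames, so that airmass and moonlight variations largely cancel.

```python
import numpy as np

def instrumental_mag(flux_counts):
    """Instrumental magnitude from background-subtracted aperture counts."""
    return -2.5 * np.log10(np.asarray(flux_counts))

def relative_light_curve(target_flux, comparison_flux):
    """Differential magnitudes of a target star relative to a comparison star.

    Both arguments are arrays of aperture fluxes measured on the same frames,
    so common effects (airmass, transparency, moonlight) largely cancel.
    """
    return instrumental_mag(target_flux) - instrumental_mag(comparison_flux)
```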
APA, Harvard, Vancouver, ISO and other citation styles
41

Abzal, A., M. Saadatseresht, and M. Varshosaz. "PHOTOMETRIC STEREO ASSISTED DRAWING OF ARCHITECTURAL RELIEFS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 7–12. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-7-2019.

Full text of the source
Abstract:
Geometric documentation is one of the most important parts of a documentary report. Despite advances in the line drawing of ancient relief surfaces, in most cases human operator interaction is unavoidable. In this paper, an algorithm for the semi-automatic line drawing of relief surfaces is developed. In the proposed method, photometric stereo normals are used as high-resolution, low-noise data for the automatic extraction of surface edges. The normals are computed in 2D image space, and a fringe-projection scanner is used for geometric correction of the image-based drawings, so the drawings are converted into a metric map suitable for geometric documentation reports. The results demonstrate the efficiency of the proposed method, which correctly drew more than 99% of the edge lines of an ancient relief; all of the drawn lines coincide with the relief edges on its orthophoto image.
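The photometric-stereo normals that drive the edge extraction can be computed, in the classical Lambertian formulation, by solving a small least-squares problem per pixel from images taken under known light directions. The sketch below shows that standard estimator; it is a generic illustration under the Lambertian assumption, not necessarily the exact variant used by the authors.

```python
import numpy as np

def photometric_stereo_normals(images, light_dirs):
    """Per-pixel surface normals from >= 3 images under known illumination.

    images     -- array of shape (k, h, w), one image per light source
    light_dirs -- array of shape (k, 3), unit vectors towards each light

    Assumes a Lambertian surface: I = albedo * (N . L). Solving the k x 3
    linear system per pixel gives g = albedo * N; normalizing g yields N.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, h*w)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```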
APA, Harvard, Vancouver, ISO and other citation styles
42

Mimica, P., and K. Pavlovski. "Reconstruction of an Accretion Disk Image in AU Mon from CoRoT Photometry." Proceedings of the International Astronomical Union 7, S282 (July 2011): 63–64. http://dx.doi.org/10.1017/s1743921311026901.

Full text of the source
Abstract:
The long-period binary system AU Mon was observed photometrically on board the CoRoT satellite in a continuous run of almost 60 days, covering almost five complete orbital cycles. The unprecedented sub-millimag precision of CoRoT photometry reveals the full complexity of the light variations in this still-active mass-transfer binary system. We present images of the accretion disk reconstructed by eclipse mapping and an optimization of the intensity distribution over the disk surface. The time resolution and accuracy of the CoRoT photometric measurements allow the spatial distribution of 'hot' spots on the disk to be located precisely and temporal changes in their activity to be traced. The clumpy disk structure is similar to the one we detected earlier in another W Serpentis binary, W Cru (Pavlovski, Burki & Mimica, 2006, A&A, 454, 855).
APA, Harvard, Vancouver, ISO and other citation styles
43

Takagishi, K., M. Matsuoka, and T. Omodaka. "CCD camera for the 60cm telescope at Kagoshima Space Center." Symposium - International Astronomical Union 118 (1986): 111–12. http://dx.doi.org/10.1017/s0074180900151216.

Full text of the source
Abstract:
A CCD detector has been developed for photometry and image detection with the 60 cm reflector at Kagoshima Space Center. The structure and operation of the instrument were simplified by the use of Peltier devices for cooling the CCD in order to suppress thermal noise. A microcomputer system is used to control the instrument and process the data. A test observing run demonstrated that a photometric sensitivity of 20th magnitude in the W band (3800–7000 Å) can be achieved in a 3600 s exposure.
APA, Harvard, Vancouver, ISO and other citation styles
44

Bezuglyi, M. A., N. V. Bezuglaya, A. V. Ventsuryk, and K. P. Vonsevych. "Angular Photometry of Biological Tissue by Ellipsoidal Reflector Method." Devices and Methods of Measurements 10, no. 2 (June 24, 2019): 160–68. http://dx.doi.org/10.21122/2220-9506-2019-10-2-160-168.

Full text of the source
Abstract:
Angular measurements in biological tissue optics are used in various applied spectroscopic tasks: surface-roughness control, determination of the refractive index, and investigation of optical properties. The purpose of this research is to investigate the reflectance of biological tissues by the ellipsoidal reflector method under a variable angle of incident radiation. The work examines the functional features of an improved photometry method using ellipsoidal reflectors. A photometric setup with a mirror ellipsoid of revolution operating in reflected light was developed. The theoretical foundations of the design of an ellipsoidal reflector with a dedicated slot for coupling laser radiation into the object area are presented. An analytical solution for calculating the range of incidence angles as a function of the eccentricity and focal parameter of the ellipsoid is obtained, and a scheme for processing the images obtained in angular photometry with an ellipsoidal reflector is also developed. Results of an experimental series on muscle tissue samples at wavelengths of 405 nm, 532 nm and 650 nm are reported. The photometric images were acquired with the following parameters: laser beam incidence angles in the range 12.5–62.5°, ellipsoidal reflector eccentricity 0.6, focal parameter 18 mm, slot width 8 mm. The nature of light scattering by muscle tissue at different wavelengths is represented by graphs for the collimated reflection area. The method allows a qualitative estimate, from the shape of the incident-light zone, of how the optical properties of the internal or surface layers of biological tissue influence light scattering under variable angles of incidence.
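To make the reflector geometry concrete, the short calculation below recovers the ellipse semi-axes and focal distance from the two quantities quoted in the abstract. It assumes the "focal parameter" denotes the semi-latus rectum p = b²/a; it is a sketch of standard conic-section relations, not the authors' derivation of the incidence-angle range.

```python
import math

def ellipse_geometry(eccentricity, focal_parameter_mm):
    """Semi-axes and focal distance of an ellipse of revolution.

    Assumes focal_parameter_mm is the semi-latus rectum p = b^2 / a, so that
    a = p / (1 - e^2), b = a * sqrt(1 - e^2), c = a * e.
    """
    e, p = eccentricity, focal_parameter_mm
    a = p / (1.0 - e ** 2)            # semi-major axis
    b = a * math.sqrt(1.0 - e ** 2)   # semi-minor axis
    c = a * e                         # distance from centre to each focus
    return a, b, c

# For the reflector used in the experiments (e = 0.6, p = 18 mm)
# this gives a = 28.125 mm, b = 22.5 mm, c = 16.875 mm.
print(ellipse_geometry(0.6, 18.0))
```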
APA, Harvard, Vancouver, ISO and other citation styles
45

Silva, André M., Sérgio G. Sousa, Nuno Santos, Olivier D. S. Demangeon, Pedro Silva, S. Hoyer, P. Guterman, Magali Deleuil, and David Ehrenreich. "archi: pipeline for light curve extraction of CHEOPS background stars." Monthly Notices of the Royal Astronomical Society 496, no. 1 (May 23, 2020): 282–94. http://dx.doi.org/10.1093/mnras/staa1443.

Full text of the source
Abstract:
High-precision time-series photometry from space is being used for a number of science cases. In this context, the recently launched ESA CHaracterizing ExOPlanet Satellite (CHEOPS) mission promises to deliver 20 ppm precision over an exposure time of 6 h when targeting nearby bright stars, with a view to the detailed characterization of exoplanetary systems through transit measurements. However, the official CHEOPS mission pipeline only provides photometry for the main target (the central star in the field). In order to explore the potential of CHEOPS photometry for all stars in the field, in this paper we present archi, an additional open-source pipeline module to analyse the background stars present in the image. As archi uses the output of the official data reduction pipeline as input, it is not meant to be used as an independent tool to process raw CHEOPS data but, instead, as an add-on to the official pipeline. We test archi using simulated CHEOPS images and show that photometry of background stars in CHEOPS images is only slightly degraded (by a factor of 2–3) with respect to the main target. This opens up the potential to use CHEOPS to produce photometric time series of several nearby targets at once, as well as to use different stars in the image to calibrate systematic errors. We also show one clear scientific application in which the study of a companion's light curve can be important for understanding the contamination of the main target.
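As a simplified illustration of the task archi addresses, extracting a light curve for a star away from the image centre, the following generic aperture-photometry fragment sums the background-subtracted flux in a circular mask around a chosen background star in each image of a sequence. This is a sketch only; the function names and aperture strategy are assumptions, not the archi API.

```python
import numpy as np

def aperture_flux(image, x0, y0, radius, sky_annulus=(1.5, 2.5)):
    """Background-subtracted flux in a circular aperture centred at (x0, y0).

    sky_annulus gives the inner/outer radii of the sky annulus in units of
    the aperture radius; the median sky level per pixel is subtracted.
    """
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= radius
    sky = (r >= sky_annulus[0] * radius) & (r <= sky_annulus[1] * radius)
    sky_level = np.median(image[sky])
    return image[aperture].sum() - sky_level * aperture.sum()

def light_curve(images, x0, y0, radius=5.0):
    """Aperture light curve of one background star over a sequence of images."""
    return np.array([aperture_flux(im, x0, y0, radius) for im in images])
```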
APA, Harvard, Vancouver, ISO and other citation styles
46

Guhathakurta, P., G. Piotto, and E. Vesperini. "Mass Functions & Stellar Populations of Globular Clusters." Highlights of Astronomy 11, no. 2 (1998): 603–8. http://dx.doi.org/10.1017/s1539299600018256.

Full text of the source
Abstract:
I present a summary of results from various HST photometric studies of the dense central regions of Galactic globular clusters that my collaborators and I have carried out over the last six years. The dataset includes short exposures of 47 Tuc, M15, M3 and M13 obtained with the aberrated Planetary Camera-I (PC-I) and the F555W ("V") and F785LP ("I") filters, as well as post-refurbishment Wide Field Planetary Camera 2 (WFPC2) snapshots of the post-core-collapse clusters M15, M30 and NGC 6624 in F336W ("U"), F439W ("B") and V. Recently, a very deep, doubly oversampled PC-I U image of the core of 47 Tuc, and accompanying B and V images, have also been analyzed. In addition, we have carried out extensive checks of incompleteness and photometric error with the help of multiband image simulations that mimic the relevant characteristics of the HST PC-I and WFPC2 images: empirical point spread function, crowding effects based on a realistic density profile and stellar luminosity function (LF), noise, undersampling, A/D saturation, etc.
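The completeness checks mentioned at the end of this abstract are commonly performed by injecting artificial stars into the science frames and measuring their recovery rate as a function of magnitude. The fragment below is a bare-bones, hypothetical version of such a test; the PSF model, detection callback and magnitude grid are placeholders, not the authors' actual simulation code.

```python
import numpy as np

def completeness_curve(image, psf_stamp, detect, mag_grid,
                       n_per_mag=100, zero_point=25.0):
    """Recovered fraction of injected artificial stars versus magnitude.

    image     -- science frame (2D array)
    psf_stamp -- small normalized PSF image used to paint artificial stars
    detect    -- callable (frame, x, y) -> bool, the detection/photometry check
    mag_grid  -- magnitudes at which artificial stars are injected
    """
    h, w = image.shape
    ph, pw = psf_stamp.shape
    rng = np.random.default_rng(0)
    fractions = []
    for mag in mag_grid:
        flux = 10 ** (-0.4 * (mag - zero_point))
        recovered = 0
        for _ in range(n_per_mag):
            x = rng.integers(0, w - pw)
            y = rng.integers(0, h - ph)
            frame = image.copy()
            frame[y:y + ph, x:x + pw] += flux * psf_stamp   # inject one star
            if detect(frame, x + pw // 2, y + ph // 2):
                recovered += 1
        fractions.append(recovered / n_per_mag)
    return np.array(fractions)
```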
APA, Harvard, Vancouver, ISO and other citation styles
47

Zhu, Tingting, Lin Qi, Hao Fan, and Junyu Dong. "Photometric Stereo on Large Flattened Surfaces Using Image Stitching." Procedia Computer Science 147 (2019): 247–53. http://dx.doi.org/10.1016/j.procs.2019.01.245.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
48

Saad, Elhusain, and Keigo Hirakawa. "Improved photometric acceptance testing in image feature extraction tasks." Journal of Electronic Imaging 29, no. 04 (July 24, 2020): 1. http://dx.doi.org/10.1117/1.jei.29.4.043012.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
49

Gibson, Simon, Toby Howard, and Roger Hubbold. "Flexible Image-Based Photometric Reconstruction using Virtual Light Sources." Computer Graphics Forum 20, no. 3 (September 2001): 203–14. http://dx.doi.org/10.1111/1467-8659.00513.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
50

Onn, Ruth, and Alfred Bruckstein. "Integrability disambiguates surface recovery in two-image photometric stereo." International Journal of Computer Vision 5, no. 1 (August 1990): 105–13. http://dx.doi.org/10.1007/bf00056773.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles