Journal articles on the topic "Estimation de la qualité des images"

To see other types of publications on this topic, follow the link: Estimation de la qualité des images.

Format your source in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Estimation de la qualité des images".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, if the corresponding data are available in the metadata.

Browse journal articles from many disciplines and compile your bibliography correctly.

1

Li, Chao, Mingyang Li, Jie Liu, Yingchang Li, and Qianshi Dai. "Comparative Analysis of Seasonal Landsat 8 Images for Forest Aboveground Biomass Estimation in a Subtropical Forest." Forests 11, no. 1 (December 31, 2019): 45. http://dx.doi.org/10.3390/f11010045.

Full text of the source
Abstract:
To effectively further research the regional carbon sink, it is important to estimate forest aboveground biomass (AGB). Based on optical images, the AGB can be estimated and mapped on a regional scale. The Landsat 8 Operational Land Imager (OLI) has, therefore, been widely used for regional scale AGB estimation; however, most studies have been based solely on peak season images without performance comparison of other seasons; this may ultimately affect the accuracy of AGB estimation. To explore the effects of utilizing various seasonal images for AGB estimation, we analyzed seasonal images collected using Landsat 8 OLI for a subtropical forest in northern Hunan, China. We then performed stepwise regression to estimate AGB of different forest types (coniferous forest, broadleaf forest, mixed forest and total vegetation). The model performances using seasonal images of different forest types were then compared. The results showed that textural information played an important role in AGB estimation of each forest type. Stratification based on forest types resulted in better AGB estimation model performances than those of total vegetation. The most accurate AGB estimations were achieved using the autumn (October) image, and the least accurate AGB estimations were achieved using the peak season (August) image. In addition, the uncertainties associated with the peak season image were largest in terms of AGB values < 25 Mg/ha and >75 Mg/ha, and the quality of the AGB map depicting the peak season was poorer than the maps depicting other seasons. This study suggests that the acquisition time of forest images can affect AGB estimations in subtropical forest. Therefore, future research should consider and incorporate seasonal time-series images to improve AGB estimation.
APA, Harvard, Vancouver, ISO, and other styles
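Entry 1 builds its AGB models with stepwise regression over spectral and texture predictors. A minimal, illustrative sketch of forward stepwise selection in Python with scikit-learn follows; the synthetic data and feature names are placeholders, not the study's Landsat 8 variables.

```python
# Illustrative forward stepwise regression; data and feature names are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_stepwise(X, y, names, max_features=5):
    """Greedily add the predictor that most improves cross-validated R^2."""
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining and len(selected) < max_features:
        score, j = max((cross_val_score(LinearRegression(), X[:, selected + [k]], y,
                                        cv=5, scoring="r2").mean(), k)
                       for k in remaining)
        if score <= best:          # stop when no candidate improves the fit
            break
        best, selected = score, selected + [j]
        remaining.remove(j)
    return [names[j] for j in selected], best

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))      # stand-ins for band reflectances / texture metrics
y = 30 + 12 * X[:, 0] - 7 * X[:, 3] + rng.normal(scale=5, size=120)
print(forward_stepwise(X, y, [f"feature_{i}" for i in range(8)]))
```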
2

Meng, Hui, Jinsong Chong, Yuhang Wang, Yan Li, and Zhuofan Yan. "Local Azimuth Ambiguity-to-Signal Ratio Estimation Method Based on the Doppler Power Spectrum in SAR Images." Remote Sensing 11, no. 7 (April 9, 2019): 857. http://dx.doi.org/10.3390/rs11070857.

Full text of the source
Abstract:
In synthetic aperture radar (SAR) images, azimuth ambiguity is one of the important factors that affect image quality. Generally, the azimuth ambiguity-to-signal ratio (AASR) is a measure of the azimuth ambiguity of SAR images. For the low signal-to-noise ratio (SNR) ocean areas, it is difficult to accurately estimate the local AASR using traditional estimation algorithms. In order to solve this problem, a local AASR estimation method based on the Doppler power spectrum in SAR images is proposed in this paper by analyzing the composition of the local Doppler spectrum of SAR images. The method not only has higher estimation accuracy under low SNR, but also overcomes the limitations of traditional algorithms on SAR images when estimating AASR. The feasibility and accuracy of the proposed method are verified by simulation experiments and spaceborne SAR data.
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Ci, Haoyuan Dong, Zhikai Wu, and Yap-Peng Tan. "Example-based quality estimation for compressed images." IEEE Multimedia 17, no. 3 (2010): 54–61. http://dx.doi.org/10.1109/mmul.2010.5692183.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Chaturvedi, Pawan, and Michael F. Insana. "Autoregressive Spectral Estimation in Ultrasonic Scatterer Size Imaging." Ultrasonic Imaging 18, no. 1 (January 1996): 10–24. http://dx.doi.org/10.1177/016173469601800102.

Full text of the source
Abstract:
An autoregressive (AR) spectral estimation method was considered for the purpose of estimating scatterer size images. The variance and bias of the resulting estimates were compared with those using classical FFT periodograms for a range of input signal-to-noise ratios and echo-signal durations corresponding to various C-scan image slice thicknesses. The AR approach was found to produce images of significantly higher quality for noisy data and when thin slices were required. Several images reconstructed with the two techniques are presented to demonstrate the difference in visual quality. Task-specific guidelines for empirical selection of the AR model order are also proposed.
APA, Harvard, Vancouver, ISO, and other styles
5

Kamaev, A. N., I. P. Urmanov, A. A. Sorokin, D. A. Karmanov, and S. P. Korolev. "IMAGES ANALYSIS FOR AUTOMATIC VOLCANO VISIBILITY ESTIMATION." Computer Optics 42, no. 1 (March 30, 2018): 128–40. http://dx.doi.org/10.18287/2412-6179-2018-42-1-128-140.

Full text of the source
Abstract:
In this paper, a method for estimating the volcano visibility in the images is presented. This method includes algorithms for analyzing parametric edges of objects under observation and frequency characteristics of the images. Procedures for constructing parametric edges of a volcano and their comparison are considered. An algorithm is proposed for identifying the most persistent edges for a group of several reference images. The visibility of a volcano is estimated by comparing these edges to those of the image under analysis. The visibility estimation is maximized with respect to a planar shift and rotation of the camera to eliminate their influence on the estimation. If the image quality is low, making it hardly suitable for further visibility analysis, the estimation is corrected using an algorithm for analyzing the image frequency response represented as a vector of the octave frequency contribution to the image luminance. A comparison of the reference frequency characteristics and the characteristics of the analyzed image allows us to estimate the contribution of different frequencies to the formation of volcano images. We discuss results of the verification of the proposed algorithms performed using the archive of a video observation system of Kamchatka volcanoes. The estimates obtained corroborate the effectiveness of the proposed methods, enabling the non-informative imagery to be automatically filtered out while monitoring the volcanic activity.
APA, Harvard, Vancouver, ISO, and other styles
6

Chin, Sin Chee, Chee-Onn Chow, Jeevan Kanesan, and Joon Huang Chuah. "A Study on Distortion Estimation Based on Image Gradients." Sensors 22, no. 2 (January 14, 2022): 639. http://dx.doi.org/10.3390/s22020639.

Full text of the source
Abstract:
Image noise is a variation of uneven pixel values that occurs randomly. A good estimation of image noise parameters is crucial in image noise modeling, image denoising, and image quality assessment. To the best of our knowledge, there is no single estimator that can predict all noise parameters for multiple noise types. The first contribution of our research was to design a noise data feature extractor that can effectively extract noise information from the image pair. The second contribution of our work leveraged other noise parameter estimation algorithms that can only predict one type of noise. Our proposed method, DE-G, can estimate additive noise, multiplicative noise, and impulsive noise from single-source images accurately. We also show the capability of the proposed method in estimating multiple corruptions.
APA, Harvard, Vancouver, ISO, and other styles
7

Galvíncio, Josiclêda Domiciano, and Carine Rosa Naue. "Estimation of NDVI with visible images (RGB) obtained with drones." Journal of Hyperspectral Remote Sensing 9, no. 6 (April 21, 2020): 407. http://dx.doi.org/10.29150/jhrs.v9.6.p407-420.

Full text of the source
Abstract:
The NDVI (Normalized Difference Vegetation Index) is a vegetation index widely used to evaluate the health conditions of vegetation, whether preserved or affected by anthropic actions such as agriculture. NDVI estimation with drones is still quite precarious, since different studies are needed to assess its accuracy. The aim of this study is to evaluate the NDVI estimate obtained with images of the visible spectrum (RGB), paying attention to radiometric calibrations. Radiometric calibration equations widely disseminated for the Landsat 5 satellite were used to calibrate the drone images. The results showed that the calibrations raised the level of accuracy of NDVI estimates from drone images. It is concluded that the radiometric calibration of images obtained with drones is of paramount importance so that they allow more accurate estimates, such as NDVI. The use of drone products to estimate NDVI is quite promising, but more robust radiometric calibration procedures need to be studied, increasing the quality of drone data products and making them more comparable between sites, sensors, and acquisition times. Keywords: radiometric calibration, environmental conditions, monitoring.
APA, Harvard, Vancouver, ISO, and other styles
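Entry 7 calibrates drone imagery with Landsat-5-style linear equations before estimating NDVI. A minimal sketch of the two ingredients is below; the gain/offset values are sensor-specific assumptions, and true NDVI requires a NIR band, which the study approximates from the calibrated visible bands.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Linear radiometric calibration (Landsat-5 style): L = gain * DN + offset."""
    return gain * dn.astype(np.float64) + offset

def ndvi(nir, red, eps=1e-9):
    """Target index of the study: NDVI = (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)
```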
8

Madhuanand, L., F. Nex, and M. Y. Yang. "DEEP LEARNING FOR MONOCULAR DEPTH ESTIMATION FROM UAV IMAGES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 451–58. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-451-2020.

Full text of the source
Abstract:
Depth is an essential component for various scene understanding tasks and for reconstructing the 3D geometry of the scene. Estimating depth from stereo images requires multiple views of the same scene to be captured, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest with the recent advancements in computer vision and deep learning techniques. This research has been widely focused on indoor scenes or outdoor scenes captured at ground level. Single image depth estimation from aerial images has been limited due to additional complexities arising from increased camera distance and wider area coverage with many occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features and points of view. The single image depth estimation is based on image reconstruction techniques which use stereo images for learning to estimate depth from single images. Among the various available models for ground-level single image depth estimation, two models, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial Network (GAN), are used to learn depth from aerial images from UAVs. These models generate pixel-wise disparity images which can be converted into depth information. The generated disparity maps from these models are evaluated for their internal quality using various error metrics. The results show higher disparity ranges with smoother images generated by the CNN model, and sharper images with a smaller disparity range generated by the GAN model. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. It is found that the CNN model performs better than the GAN and produces depth similar to that of Pix4D. This comparison helps in streamlining the efforts to produce depth from a single aerial image.
APA, Harvard, Vancouver, ISO, and other styles
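Entry 8's networks output pixel-wise disparity, which is then converted into depth. For a rectified stereo setup the conversion is depth = focal length x baseline / disparity; a minimal sketch, assuming the focal length is given in pixels and the baseline in metres:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, min_disp=1e-3):
    """Convert a disparity map (pixels) to metric depth: Z = f * B / d."""
    d = np.maximum(disparity.astype(np.float64), min_disp)  # avoid division by zero
    return focal_px * baseline_m / d
```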
9

Hashim, Ahmed, Hazim Daway, and Hana Kareem. "No reference Image Quality Measure for Hazy Images." International Journal of Intelligent Engineering and Systems 13, no. 6 (December 31, 2020): 460–71. http://dx.doi.org/10.22266/ijies2020.1231.41.

Full text of the source
Abstract:
Haze causes the degradation of image quality; thus, the quality of hazy images must be estimated. In this paper, we introduce a new method for measuring the quality of hazy images using a no-reference scale depending on color saturation. We calculate the probability for a saturation component. This work also includes a subjective study for measuring image quality using human perception. The proposed method is compared with other methods such as entropy, Naturalness Image Quality Evaluator (NIQE), Haze Distribution Map based Haze Assessment (HDMHA), and no-reference image quality assessment using Transmission Component Estimation (TCE). This is done by calculating the correlation coefficient between the no-reference measures and the subjective measure; the results show that the proposed method has high correlation coefficient values for the Pearson (0.8923), Kendall (0.7170), and Spearman (0.8960) correlation coefficients. The image database used in this work consists of 70 hazy images captured by a special device designed to capture hazy images. The experiment on the haze database is consistent with the subjective experiment.
APA, Harvard, Vancouver, ISO, and other styles
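Entry 9 scores hazy images from the saturation component and validates the score against subjective ratings with Pearson, Kendall and Spearman coefficients. A rough sketch of that pipeline follows; the statistic used here (mean HSV saturation) is only a stand-in for the paper's probability-based measure.

```python
import cv2
import numpy as np
from scipy import stats

def saturation_score(bgr_image):
    """Placeholder no-reference statistic: haze tends to lower color saturation."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    return hsv[:, :, 1].astype(np.float64).mean() / 255.0

def agreement_with_subjective(objective_scores, subjective_scores):
    """Correlation coefficients used in the paper's evaluation."""
    return {
        "pearson": stats.pearsonr(objective_scores, subjective_scores)[0],
        "kendall": stats.kendalltau(objective_scores, subjective_scores)[0],
        "spearman": stats.spearmanr(objective_scores, subjective_scores)[0],
    }
```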
10

Takezawa, Megumi, Hirofumi Sanada, and Miki Haseyama. "[Paper] Quality Estimation Method for Fractal Compressed Images." ITE Transactions on Media Technology and Applications 1, no. 2 (2013): 178–83. http://dx.doi.org/10.3169/mta.1.178.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
11

Pei, Zhao, Deqiang Wen, Yanning Zhang, Miao Ma, Min Guo, Xiuwei Zhang, and Yee-Hong Yang. "MDEAN: Multi-View Disparity Estimation with an Asymmetric Network." Electronics 9, no. 6 (June 2, 2020): 924. http://dx.doi.org/10.3390/electronics9060924.

Full text of the source
Abstract:
In recent years, disparity estimation of a scene based on deep learning methods has been extensively studied and significant progress has been made. In contrast, a traditional image disparity estimation method requires considerable resources and consumes much time in processes such as stereo matching and 3D reconstruction. At present, most deep learning based disparity estimation methods focus on estimating disparity based on monocular images. Motivated by the results of traditional methods that multi-view methods are more accurate than monocular methods, especially for scenes that are textureless and have thin structures, in this paper, we present MDEAN, a new deep convolutional neural network to estimate disparity using multi-view images with an asymmetric encoder–decoder network structure. First, our method takes an arbitrary number of multi-view images as input. Next, we use these images to produce a set of plane-sweep cost volumes, which are combined to compute a high quality disparity map using an end-to-end asymmetric network. The results show that our method performs better than state-of-the-art methods, in particular, for outdoor scenes with the sky, flat surfaces and buildings.
APA, Harvard, Vancouver, ISO, and other styles
12

Anikeeva, I., and A. Chibunichev. "RANDOM NOISE ASSESSMENT IN AERIAL AND SATELLITE IMAGES." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 771–75. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-771-2021.

Full text of the source
Abstract:
Random noise in aerial and satellite images is one of the factors decreasing their quality, yet noise level assessment in images receives too little attention. A method for the numerical estimation of random image noise is considered. The object of the study is an image noise estimation method based on harmonic analysis, and its suitability for aerial and satellite image quality assessment is examined. Results of testing the algorithm on model data and on real satellite images of different terrain surfaces are presented, along with accuracy estimates for calculating the root-mean-square deviation (RMS) of random image noise by the harmonic analysis method.
APA, Harvard, Vancouver, ISO, and other styles
13

Ercolini, Leonardo, Nicola Grossi, and Nicola Silvestri. "A Simple Method to Estimate Weed Control Threshold by Using RGB Images from Drones." Applied Sciences 12, no. 23 (November 23, 2022): 11935. http://dx.doi.org/10.3390/app122311935.

Full text of the source
Abstract:
The estimation of the infestation level in a field and the consequent determination of the economic threshold is a basic requisite to rationalize post-emergence weeding. In this study, a simple and inexpensive procedure to determine the economic threshold based on weed cover is proposed. By using high-resolution RGB images captured by a low-cost drone, a free downloadable app for image processing and common spreadsheet software to perform the model parametrization, two different methods have been tested. The first method was based on the joint estimation of the two parameters involved in weed cover calculation, whereas the second method required the availability of further images for the separate estimation of the first parameter. The reliability of the two methods has been evaluated through the comparison with observed data and the goodness of fit in parameter calibration has been verified by calculating appropriate quality indices. The results showed an acceptable estimation of the weed cover value for the second method with respect to observed data (0.24 vs. 0.17 m2 and 0.17 vs. 0.14 m2, by processing images captured at 10 and 20 m, respectively), whereas the estimations obtained with the first method were disappointing (0.35 vs. 0.17 m2 and 0.33 vs. 0.14 m2, by processing images captured at 10 and 20 m, respectively).
APA, Harvard, Vancouver, ISO, and other styles
14

Xu, Ningshan, Dongao Ma, Guoqiang Ren, and Yongmei Huang. "BM-IQE: An Image Quality Evaluator with Block-Matching for Both Real-Life Scenes and Remote Sensing Scenes." Sensors 20, no. 12 (June 19, 2020): 3472. http://dx.doi.org/10.3390/s20123472.

Full text of the source
Abstract:
Like natural images, remote sensing scene images, whose quality represents the imaging performance of the remote sensor, also suffer from degradation caused by the imaging system. However, current methods for measuring imaging performance in engineering applications require particular image patterns and lack generality. Therefore, a more universal approach is needed to assess the imaging performance of a remote sensor without constraints on land cover. Because existing general-purpose blind image quality assessment (BIQA) methods cannot obtain satisfying results on remote sensing scene images, in this work we propose a BIQA model with improved performance for natural images as well as remote sensing scene images, namely BM-IQE. We employ a novel block-matching strategy called Structural Similarity Block-Matching (SSIM-BM) to match and group similar image patches. In this way, the potential local information among different patches can be expressed; thus, the validity of natural scene statistics (NSS) feature modeling is enhanced. At the same time, we introduce several features to better characterize and express remote sensing images. The NSS features are extracted from each group and the feature vectors are then fitted to a multivariate Gaussian (MVG) model. This MVG model is then used against a reference MVG model learned from a corpus of high-quality natural images to produce a basic quality estimation of each patch (centroid of each group). The further quality estimation of each patch is obtained by weighted averaging of its similar patches' basic quality estimations. The overall quality score of the test image is then computed through average pooling of the patch estimations. Extensive experiments demonstrate that the proposed BM-IQE method not only outperforms other BIQA methods on remote sensing scene image datasets but also achieves competitive performance on general-purpose natural image datasets as compared to existing state-of-the-art FR/NR-IQA methods.
APA, Harvard, Vancouver, ISO, and other styles
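Entry 14 fits NSS features of patch groups to multivariate Gaussian (MVG) models and scores quality against a reference MVG learned from pristine images. A short sketch of the NIQE-style distance between two MVG models, which this family of methods builds on:

```python
import numpy as np

def fit_mvg(features):
    """features: (n_patches, n_features) NSS feature matrix -> (mean, covariance)."""
    return features.mean(axis=0), np.cov(features, rowvar=False)

def mvg_distance(mu1, cov1, mu2, cov2):
    """NIQE-style distance between two multivariate Gaussian feature models."""
    diff = mu1 - mu2
    pooled = (cov1 + cov2) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```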
15

Maulana, Luthfi, Yusuf Gladiensyah Bihanda, and Yuita Arum Sari. "Color space and color channel selection on image segmentation of food images." Register: Jurnal Ilmiah Teknologi Sistem Informasi 6, no. 2 (September 1, 2020): 141. http://dx.doi.org/10.26594/register.v6i2.2061.

Full text of the source
Abstract:
Image segmentation is a predefined image-processing step for determining a specific object. One of the problems in food recognition and food estimation is the poor quality of image segmentation results. This paper presents a comparative study of different color space and color channel selections in the segmentation of food images. Based on previous research on image segmentation used in food leftover estimation, this paper proposes a different approach to selecting the color space and color channel based on the Intersection over Union (IoU) and Dice scores over the whole dataset. A color transformation is required, and five color spaces were used: CIELAB, HSV, YUV, YCbCr, and HLS. The results show that A in LAB and H in HLS produce better segmentations than the other color channels, with both achieving a Dice score of 5 (the highest score). It is concluded that this color channel selection is suitable for embedding in the Automatic Food Leftover Estimation (AFLE) algorithm.
APA, Harvard, Vancouver, ISO, and other styles
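Entry 15 compares color channels (e.g., A of CIELAB, H of HLS) by segmenting on a single channel and scoring the result with IoU and Dice against ground truth. A small OpenCV sketch of that comparison; the Otsu thresholding used here is an assumption, not necessarily the paper's segmentation step.

```python
import cv2
import numpy as np

def channel(bgr, space="LAB", index=1):
    """Extract one channel after color space conversion (e.g., A of LAB, H of HLS)."""
    conv = {"LAB": cv2.COLOR_BGR2LAB, "HLS": cv2.COLOR_BGR2HLS, "HSV": cv2.COLOR_BGR2HSV,
            "YUV": cv2.COLOR_BGR2YUV, "YCrCb": cv2.COLOR_BGR2YCrCb}[space]
    return cv2.cvtColor(bgr, conv)[:, :, index]

def segment(bgr, space="LAB", index=1):
    """Otsu threshold on the chosen channel (thresholding choice is an assumption)."""
    _, mask = cv2.threshold(channel(bgr, space, index), 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask > 0

def iou_and_dice(pred, truth):
    """Standard overlap scores between a predicted and a ground-truth binary mask."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return (inter / union if union else 0.0), (2 * inter / total if total else 0.0)
```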
16

Anikeeva, I. A. "Radiometric resolution and dynamic range of aerial and space images, obtained for monitoring and mapping purposes." Geodesy and Cartography 964, no. 10 (November 20, 2020): 40–48. http://dx.doi.org/10.22389/0016-7126-2020-964-10-40-48.

Full text of the source
Abstract:
The dynamic range and radiometric resolution are among the most important indicators of the quality of aerial and space images. Gradation properties are of particular importance for aerial and space images obtained for monitoring and mapping purposes, because the completeness and quality of the information on the earth's surface objects, as well as the accuracy with which their brightness features are reproduced, depend on them. The author discusses various approaches to defining the concepts of dynamic range and radiometric resolution; the most appropriate definitions of these terms are given in the context of estimating an image's gradation properties. The expediency of separating the concepts of nominal, actual and useful (effective) radiometric resolution is shown, and their definitions are given. Methods for the numerical estimation of dynamic range and radiometric resolution based on a histogram are presented, absolute and relative indicators are considered, and the advantages of using relative indicators are shown. Examples of dynamic range and radiometric resolution evaluation are given based on images obtained by the "Canopus-B" spacecraft.
APA, Harvard, Vancouver, ISO, and other styles
17

Millan, Borja, Santiago Velasco-Forero, Arturo Aquino, and Javier Tardaguila. "On-the-Go Grapevine Yield Estimation Using Image Analysis and Boolean Model." Journal of Sensors 2018 (December 16, 2018): 1–14. http://dx.doi.org/10.1155/2018/9634752.

Full text of the source
Abstract:
This paper describes a new methodology for noninvasive, objective, and automated assessment of yield in vineyards using image analysis and the Boolean model. Image analysis, as an inexpensive and noninvasive procedure, has been studied for this purpose, but the effect of occlusions from the cluster or other organs of the vine has an impact that diminishes the quality of the results. To reduce the influence of the occlusions in the estimation, the number of berries was assessed using the Boolean model. To evaluate the methodology, three different datasets were studied: cluster images, manually acquired vine images, and vine images captured on-the-go using a quad. The proposed algorithm estimated the number of berries in cluster images with a root mean square error (RMSE) of 20 and a coefficient of determination (R2) of 0.80. Manually taken vine images were evaluated, providing a mean error of 310 grams and R2 = 0.81. Finally, images captured using a quad equipped with artificial light and automatic camera triggering were also analysed. The estimation obtained by applying the Boolean model had a mean error of 610 grams per segment (three vines) and R2 = 0.78. The reliability against occlusions and segmentation errors of the Boolean model makes it ideal for vineyard yield estimation. Its application greatly improved the results when compared to a simpler estimator based on the relationship between cluster area and weight.
APA, Harvard, Vancouver, ISO, and other styles
18

Bhattacharya, Abhishek, and Tanusree Chatterjee. "An Estimation Method of Measuring Image Quality for Compressed Images of Human Face." International Journal of Computer Trends and Technology 7, no. 3 (January 25, 2014): 154–59. http://dx.doi.org/10.14445/22312803/ijctt-v7p144.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
19

Oliveira, Miguel, Gi-Hyun Lim, Tiago Madeira, Paulo Dias, and Vítor Santos. "Robust Texture Mapping Using RGB-D Cameras." Sensors 21, no. 9 (May 7, 2021): 3248. http://dx.doi.org/10.3390/s21093248.

Full text of the source
Abstract:
The creation of a textured 3D mesh from a set of RGB-D images often results in textured meshes that yield unappealing visual artifacts. The main cause is the misalignments between the RGB-D images due to inaccurate camera pose estimations. While there are many works that focus on improving those estimates, the fact is that this is a cumbersome problem, in particular due to the accumulation of pose estimation errors. In this work, we conjecture that camera pose estimation methodologies will always display non-negligible errors. Hence the need for more robust texture mapping methodologies, capable of producing quality textures even in scenarios with considerable camera misalignment. To this end, we argue that the use of the depth data from RGB-D images can be an invaluable help to confer such robustness to the texture mapping process. Results show that the complete texture mapping procedure proposed in this paper is able to significantly improve the quality of the produced textured 3D meshes.
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Qiu Yun. "Depth Estimation Based Underwater Image Enhancement." Advanced Materials Research 926-930 (May 2014): 1704–7. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.1704.

Full text of the source
Abstract:
According to the image formation model and the nature of underwater images, we find that haze and color distortion seriously pollute underwater image data, lowering the visibility and quality of the underwater images. Hence, aiming to reduce the noise and haze present in the underwater image and to compensate for the color distortion, the dark channel prior model is used to enhance the underwater image. We compare the dark channel prior based image enhancement method to the contrast stretching based method. The experimental results prove that the dark channel prior model is well suited to processing underwater images, and the superior performance of the proposed method is demonstrated as well.
APA, Harvard, Vancouver, ISO, and other styles
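Entry 20 enhances underwater images with the dark channel prior. Below is a compact sketch of the standard dark-channel-prior pipeline (He et al.); the patch size, omega, and the atmospheric-light heuristic are the usual defaults, given here as assumptions rather than the paper's exact settings.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a min filter over a patch."""
    dc = img.min(axis=2).astype(np.float32)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(dc, kernel)

def dehaze_dcp(img, omega=0.95, t0=0.1, patch=15):
    """Standard dark-channel-prior dehazing; img is float RGB in [0, 1]."""
    dc = dark_channel(img, patch)
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels
    n = max(int(dc.size * 0.001), 1)
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate and scene radiance recovery
    t = np.maximum(1.0 - omega * dark_channel(img / A, patch), t0)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```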
21

Huang, Penghe, Dongyan Li, and Huimin Zhao. "An Improved Robust Fractal Image Compression Based on M-Estimator." Applied Sciences 12, no. 15 (July 27, 2022): 7533. http://dx.doi.org/10.3390/app12157533.

Full text of the source
Abstract:
In this paper, a robust fractal image compression method based on M-estimator is presented. The proposed method applies the M-estimator to the parameter estimation in the fractal encoding procedure using Huber and Tukey’s robust statistics. The M-estimation reduces the influence of the outliers and makes the fractal encoding algorithm robust to the noisy image. Meanwhile, the quadtree partitioning approach has been used in the proposed methods to improve the efficiency of the encoding algorithm, and some unnecessary computations are eliminated in the parameter estimation procedures. The experimental results demonstrate that the proposed method is insensitive to the outliers in the noisy corrupted image. The comparative data shows that the proposed method is superior in both the encoding time and the quality of retrieved images over other robust fractal compression algorithms. The proposed algorithm is useful for multimedia and image archiving, low-cost consumption applications and progressive image transmission of live images, and in reducing computing time for fractal image compression.
APA, Harvard, Vancouver, ISO, and other styles
22

Kim, J., T. Kim, D. Shin, and S. H. Kim. "ROBUST MOSAICKING OF UAV IMAGES WITH NARROW OVERLAPS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 879–83. http://dx.doi.org/10.5194/isprs-archives-xli-b1-879-2016.

Full text of the source
Abstract:
This paper considers fast and robust mosaicking of UAV images under the circumstance that adjacent UAV images have very narrow overlaps. Image transformation for image mosaicking consists of two estimations: relative transformations and global transformations. For estimating relative transformations between adjacent images, projective transformation is widely considered; for estimating global transformations, the panoramic constraint is widely used. While perspective transformation is a general model for 2D-2D transformation, it may not be optimal with weak stereo geometry, such as images with narrow overlaps. While the panoramic constraint works for reliable conversion of global transformations for panoramic image generation, it is not applicable to UAV images in linear motion. For these reasons, a robust approach is investigated to generate a high quality mosaicked image from narrowly overlapping UAV images. For relative transformations, several transformation models were considered to ensure robust estimation of the relative transformation relationship: perspective transformation, affine transformation, coplanar relative orientation, and relative orientation with reduced adjustment parameters. A performance evaluation for each transformation model was carried out. The experimental results showed that affine transformation and adjusted coplanar relative orientation were superior to the others in terms of stability and accuracy. For the global transformation, we set an initial approximation by converting each relative transformation to a common transformation with respect to a reference image. In future work, we will investigate constrained relative orientation for enhancing the geometric accuracy of image mosaicking and bundle adjustment of each relative transformation model for optimal global transformation.
APA, Harvard, Vancouver, ISO, and other styles
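Entry 22 (repeated as entry 23) compares relative transformation models, among them projective and affine, for narrowly overlapping UAV images. A minimal OpenCV sketch of that comparison from pre-matched keypoints, using RANSAC inlier counts as a rough stability indicator; the papers' coplanar relative orientation models are not shown.

```python
import cv2
import numpy as np

def compare_relative_models(pts_src, pts_dst):
    """Projective vs. affine relative transformation from matched points (RANSAC).
    pts_src, pts_dst: (n, 2) float32 arrays of corresponding image coordinates."""
    H, mask_h = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 3.0)
    A, mask_a = cv2.estimateAffine2D(pts_src, pts_dst, method=cv2.RANSAC,
                                     ransacReprojThreshold=3.0)
    return {"homography": (H, int(mask_h.sum())),   # 3x3 projective model
            "affine": (A, int(mask_a.sum()))}       # 2x3 affine model
```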
23

Kim, J., T. Kim, D. Shin, and S. H. Kim. "ROBUST MOSAICKING OF UAV IMAGES WITH NARROW OVERLAPS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 879–83. http://dx.doi.org/10.5194/isprsarchives-xli-b1-879-2016.

Full text of the source
Abstract:
This paper considers fast and robust mosaicking of UAV images under the circumstance that adjacent UAV images have very narrow overlaps. Image transformation for image mosaicking consists of two estimations: relative transformations and global transformations. For estimating relative transformations between adjacent images, projective transformation is widely considered; for estimating global transformations, the panoramic constraint is widely used. While perspective transformation is a general model for 2D-2D transformation, it may not be optimal with weak stereo geometry, such as images with narrow overlaps. While the panoramic constraint works for reliable conversion of global transformations for panoramic image generation, it is not applicable to UAV images in linear motion. For these reasons, a robust approach is investigated to generate a high quality mosaicked image from narrowly overlapping UAV images. For relative transformations, several transformation models were considered to ensure robust estimation of the relative transformation relationship: perspective transformation, affine transformation, coplanar relative orientation, and relative orientation with reduced adjustment parameters. A performance evaluation for each transformation model was carried out. The experimental results showed that affine transformation and adjusted coplanar relative orientation were superior to the others in terms of stability and accuracy. For the global transformation, we set an initial approximation by converting each relative transformation to a common transformation with respect to a reference image. In future work, we will investigate constrained relative orientation for enhancing the geometric accuracy of image mosaicking and bundle adjustment of each relative transformation model for optimal global transformation.
APA, Harvard, Vancouver, ISO, and other styles
24

Sun, Wenbo, Zhi Gao, Jinqiang Cui, Bharath Ramesh, Bin Zhang, and Ziyao Li. "Semantic Segmentation Leveraging Simultaneous Depth Estimation." Sensors 21, no. 3 (January 20, 2021): 690. http://dx.doi.org/10.3390/s21030690.

Full text of the source
Abstract:
Semantic segmentation is one of the most widely studied problems in computer vision communities, which makes a great contribution to a variety of applications. A lot of learning-based approaches, such as Convolutional Neural Network (CNN), have made a vast contribution to this problem. While rich context information of the input images can be learned from multi-scale receptive fields by convolutions with deep layers, traditional CNNs have great difficulty in learning the geometrical relationship and distribution of objects in the RGB image due to the lack of depth information, which may lead to an inferior segmentation quality. To solve this problem, we propose a method that improves segmentation quality with depth estimation on RGB images. Specifically, we estimate depth information on RGB images via a depth estimation network, and then feed the depth map into the CNN which is able to guide the semantic segmentation. Furthermore, in order to parse the depth map and RGB images simultaneously, we construct a multi-branch encoder–decoder network and fuse the RGB and depth features step by step. Extensive experimental evaluation on four baseline networks demonstrates that our proposed method can enhance the segmentation quality considerably and obtain better performance compared to other segmentation networks.
APA, Harvard, Vancouver, ISO, and other styles
25

Nunes, Jose, Martín Piquerez, Leonardo Pujadas, Eileen Armstrong, Alicia Fernández, and Federico Lecumberry. "Beef quality parameters estimation using ultrasound and color images." BMC Bioinformatics 16, Suppl 4 (2015): S6. http://dx.doi.org/10.1186/1471-2105-16-s4-s6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
26

Salokhiddinov, Sherzod, and Seungkyu Lee. "Iterative Refinement of Uniformly Focused Image Set for Accurate Depth from Focus." Applied Sciences 10, no. 23 (November 28, 2020): 8522. http://dx.doi.org/10.3390/app10238522.

Full text of the source
Abstract:
Estimating the 3D shape of a scene from a differently focused set of images has been a practical approach for 3D reconstruction with color cameras. However, depth reconstructed with existing depth from focus (DFF) methods still suffers from poor quality in textureless and object boundary regions. In this paper, we propose an improved depth-from-focus estimation that iteratively refines the 3D shape from a uniformly focused image set (UFIS). We investigated the appearance changes in the spatial and frequency domains in an iterative manner. In order to achieve sub-frame accuracy in depth estimation, the optimal location of the focused frame in DFF is estimated by fitting a polynomial curve to the dissimilarity measurements. In order to avoid wrong depth values in texture-less regions, we propose to build a confidence map and use it to identify erroneous depth estimations. We evaluated our method on public datasets and our own datasets obtained from different types of devices, such as smartphone, medical, and normal color cameras. Quantitative and qualitative evaluations on various test image sets show the promising performance of the proposed method in depth estimation.
APA, Harvard, Vancouver, ISO, and other styles
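Entry 26 reaches sub-frame accuracy in depth from focus by fitting a polynomial around the best-focused frame. The sketch below uses a Laplacian-based focus measure and a three-point parabola fit as stand-ins for the paper's dissimilarity measurements.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack_depth(stack):
    """stack: (n_frames, H, W) grayscale focal stack -> sub-frame depth indices."""
    # Focus measure: locally averaged squared Laplacian (a common stand-in)
    fm = np.stack([uniform_filter(laplace(f.astype(np.float64)) ** 2, size=9)
                   for f in stack])
    k = fm.argmax(axis=0)                              # best-focused frame per pixel
    k = np.clip(k, 1, len(stack) - 2)                  # keep room for both neighbors
    rows, cols = np.indices(k.shape)
    f_m, f_0, f_p = fm[k - 1, rows, cols], fm[k, rows, cols], fm[k + 1, rows, cols]
    denom = f_m - 2.0 * f_0 + f_p
    delta = np.where(np.abs(denom) > 1e-12, 0.5 * (f_m - f_p) / denom, 0.0)
    return k + np.clip(delta, -0.5, 0.5)               # parabola-refined peak location
```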
27

Zhu, Zhiqin, Yaqin Luo, Hongyan Wei, Yong Li, Guanqiu Qi, Neal Mazur, Yuanyuan Li, and Penglong Li. "Atmospheric Light Estimation Based Remote Sensing Image Dehazing." Remote Sensing 13, no. 13 (June 22, 2021): 2432. http://dx.doi.org/10.3390/rs13132432.

Full text of the source
Abstract:
Remote sensing images are widely used in object detection and tracking, military security, and other computer vision tasks. However, remote sensing images are often degraded by suspended aerosol in the air, especially under poor weather conditions, such as fog, haze, and mist. The quality of remote sensing images directly affects the normal operation of computer vision systems. As such, haze removal is a crucial and indispensable pre-processing step in remote sensing image processing. Additionally, most of the existing image dehazing methods are not applicable to all scenes, so the corresponding dehazed images may have varying degrees of color distortion. This paper proposes a novel atmospheric light estimation based dehazing algorithm to obtain high visual-quality remote sensing images. First, a differentiable function is used to train the parameters of a linear scene depth model for the scene depth map generation of remote sensing images. Second, the atmospheric light of each hazy remote sensing image is estimated by the corresponding scene depth map. Then, the corresponding transmission map is estimated on the basis of the estimated atmospheric light by a haze-lines model. Finally, according to the estimated atmospheric light and transmission map, an atmospheric scattering model is applied to remove haze from remote sensing images. The colors of the images dehazed by the proposed method are in line with the perception of human eyes in different scenes. A dataset with 100 remote sensing images from hazy scenes was built for testing. The performance of the proposed image dehazing method is confirmed by theoretical analysis and comparative experiments.
APA, Harvard, Vancouver, ISO, and other styles
28

Das, Amlan Jyoti, Anjan Kumar Talukdar, and Kandarpa Kumar Sarma. "An Adaptive Rayleigh-Laplacian Based MAP Estimation Technique for Despeckling SAR Images using Stationary Wavelet Transform." International Journal of Applied Evolutionary Computation 4, no. 4 (October 2013): 88–102. http://dx.doi.org/10.4018/ijaec.2013100106.

Full text of the source
Abstract:
Removal of speckle noise from Synthetic Aperture Radar (SAR) images is an important step before performing any image processing operations on these images. This paper presents a novel Stationary Wavelet Transform (SWT) based technique for removing the speckle noise from SAR returns. A maximum a posteriori probability (MAP) criterion, which uses prior knowledge, is applied to estimate the noise-free wavelet coefficients. The proposed MAP estimator is designed for this purpose and uses a Rayleigh distribution to model the speckle noise and a Laplacian distribution to model the statistics of the noise-free wavelet coefficients. The parameters required for the MAP estimator are determined by a parameter estimation technique applied after the SWT. Moreover, a Laplacian-Gaussian based MAP estimator is also applied, with its parameters estimated using the same method as for the proposed algorithm. To enhance the visual quality and restore more edge information, a wavelet based resolution enhancement technique using interpolation is also applied after the Inverse Stationary Wavelet Transform (ISWT). The experimental results show that the proposed despeckling algorithm efficiently removes speckle noise from SAR images and restores the edge information as well.
APA, Harvard, Vancouver, ISO, and other styles
29

Morrison, H. Boyd. "Depth and Image Quality of Three-Dimensional, Lenticular-Sheet Images." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 2 (October 1997): 1338–42. http://dx.doi.org/10.1177/1071181397041002135.

Full text of the source
Abstract:
This study investigated the inherent tradeoff between depth and image quality in lenticular-sheet (LS) imaging. Four different scenes were generated as experimental stimuli to represent a range of typical LS images. The overall amount of depth in each image, as well as the degree of foreground and background disparity, were varied, and the images were rated by subjects using the free-modulus magnitude estimation procedure. Generally, subjects preferred images which had smaller amounts of overall depth and tended to dislike excessive amounts of foreground or background disparity. The most preferred image was also determined for each scene by selecting the image with the highest mean rating. In a second experiment, these most preferred LS images for each scene were shown to subjects along with the analogous two-dimensional (2D) photographic versions. Results indicate that observers from the general population looked at the LS images longer than they did at the 2D versions and rated them higher on the attributes of quality of depth and attention-getting ability, although the LS images were rated lower on sharpness. No difference was found in overall quality or likeability.
APA, Harvard, Vancouver, ISO, and other styles
30

Park, Hyeseung, and Seungchul Park. "An Unsupervised Depth-Estimation Model for Monocular Images Based on Perceptual Image Error Assessment." Applied Sciences 12, no. 17 (September 2, 2022): 8829. http://dx.doi.org/10.3390/app12178829.

Full text of the source
Abstract:
In this paper, we propose a novel unsupervised learning-based model for estimating the depth of monocular images by integrating a simple ResNet-based auto-encoder and some special loss functions. We use only stereo images obtained from binocular cameras as training data without using depth ground-truth data. Our model basically outputs a disparity map that is necessary to warp an input image to an image corresponding to a different viewpoint. When the input image is warped using the output-disparity map, distortions of various patterns inevitably occur in the reconstructed image. During the training process, the occurrence frequency and size of these distortions gradually decrease, while the similarity between the reconstructed and target images increases, which proves that the accuracy of the predicted disparity maps also increases. Therefore, one of the important factors in this type of training is an efficient loss function that accurately measures how much the difference in quality between the reconstructed and target images is and guides the gap to be properly and quickly closed as the training progresses. In recent related studies, the photometric difference was calculated through simple methods such as L1 and L2 loss or by combining one of these with a traditional computer vision-based hand-coded image-quality assessment algorithm such as SSIM. However, these methods have limitations in modeling various patterns at the level of the human visual system. Therefore, the proposed model uses a pre-trained perceptual image-quality assessment model that effectively mimics human-perception mechanisms to measure the quality of distorted images as image-reconstruction loss. In order to highlight the performance of the proposed loss functions, a simple ResNet50-based network is adopted in our model. We trained our model using stereo images of the KITTI 2015 driving dataset to measure the pixel-level depth for 768 × 384 images. Despite the simplicity of the network structure, thanks to the effectiveness of the proposed image-reconstruction loss, our model outperformed other state-of-the-art studies that have been trained in unsupervised methods on a variety of evaluation indicators.
APA, Harvard, Vancouver, ISO, and other styles
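Entry 30 replaces the common SSIM-plus-L1 photometric loss with a learned perceptual IQA loss. For context, a sketch of the baseline reconstruction loss that such unsupervised depth models typically start from; alpha = 0.85 is a conventional choice, not the paper's setting.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def photometric_loss(reconstructed, target, alpha=0.85):
    """Baseline loss: alpha * (1 - SSIM) / 2 + (1 - alpha) * L1.
    Expects float RGB images in [0, 1]; channel_axis needs scikit-image >= 0.19."""
    s = ssim(target, reconstructed, channel_axis=-1, data_range=1.0)
    l1 = np.abs(target - reconstructed).mean()
    return alpha * (1.0 - s) / 2.0 + (1.0 - alpha) * l1
```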
31

Onyango, F. A., F. Nex, M. S. Peter, and P. Jende. "ACCURATE ESTIMATION OF ORIENTATION PARAMETERS OF UAV IMAGES THROUGH IMAGE REGISTRATION WITH AERIAL OBLIQUE IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1 (May 31, 2017): 599–605. http://dx.doi.org/10.5194/isprs-archives-xlii-1-w1-599-2017.

Full text of the source
Abstract:
Unmanned Aerial Vehicles (UAVs) have gained popularity in acquiring geotagged, low cost and high resolution images. However, the images acquired by UAV-borne cameras often have poor georeferencing information, because of the low quality on-board Global Navigation Satellite System (GNSS) receiver. In addition, lightweight UAVs have a limited payload capacity to host a high quality on-board Inertial Measurement Unit (IMU). Thus, orientation parameters of images acquired by UAV-borne cameras may not be very accurate. Poorly georeferenced UAV images can be correctly oriented using accurately oriented airborne images capturing a similar scene by finding correspondences between the images. This is not a trivial task considering the image pairs have huge variations in scale, perspective and illumination conditions. This paper presents a procedure to successfully register UAV and aerial oblique imagery. The proposed procedure implements the use of the AKAZE interest operator for feature extraction in both images. Brute force is implemented to find putative correspondences and later on Lowe’s ratio test (Lowe, 2004) is used to discard a significant number of wrong matches. In order to filter out the remaining mismatches, the putative correspondences are used in the computation of multiple homographies, which aid in the reduction of outliers significantly. In order to increase the number and improve the quality of correspondences, the impact of pre-processing the images using the Wallis filter (Wallis, 1974) is investigated. This paper presents the test results of different scenarios and the respective accuracies compared to a manual registration of the finally computed fundamental and essential matrices that encode the orientation parameters of the UAV images with respect to the aerial images.
APA, Harvard, Vancouver, ISO, and other styles
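Entry 31 registers UAV and aerial oblique images with AKAZE features, brute-force matching and Lowe's ratio test. A minimal OpenCV sketch of that matching stage; the Wallis filtering and multi-homography outlier rejection described in the abstract are omitted.

```python
import cv2
import numpy as np

def akaze_ratio_matches(img_uav, img_aerial, ratio=0.8):
    """AKAZE keypoints, brute-force matching, and Lowe's ratio test."""
    akaze = cv2.AKAZE_create()
    k1, d1 = akaze.detectAndCompute(img_uav, None)
    k2, d2 = akaze.detectAndCompute(img_aerial, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)   # AKAZE descriptors are binary by default
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]
    pts_uav = np.float32([k1[m.queryIdx].pt for m in good])
    pts_aerial = np.float32([k2[m.trainIdx].pt for m in good])
    return pts_uav, pts_aerial

# The fundamental matrix mentioned in the abstract can then be estimated, e.g.:
# F, mask = cv2.findFundamentalMat(pts_uav, pts_aerial, cv2.FM_RANSAC, 3.0, 0.99)
```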
32

Zhao, Hongshan, Bingcong Liu, and Lingjie Wang. "Blur Kernel Estimation and Non-Blind Super-Resolution for Power Equipment Infrared Images by Compressed Sensing and Adaptive Regularization." Sensors 21, no. 14 (July 14, 2021): 4820. http://dx.doi.org/10.3390/s21144820.

Full text of the source
Abstract:
Infrared sensing technology is more and more widely used in the construction of power Internet of Things. However, due to cost constraints, it is difficult to achieve the large-scale installation of high-precision infrared sensors. Therefore, we propose a blind super-resolution method for infrared images of power equipment to improve the imaging quality of low-cost infrared sensors. If the blur kernel estimation and non-blind super-resolution are performed at the same time, it is easy to produce sub-optimal results, so we chose to divide the blind super-resolution into two parts. First, we propose a blur kernel estimation method based on compressed sensing theory, which accurately estimates the blur kernel through low-resolution images. After estimating the blur kernel, we propose an adaptive regularization non-blind super-resolution method to achieve the high-quality reconstruction of high-resolution infrared images. According to the final experimental demonstration, the blind super-resolution method we proposed can effectively reconstruct low-resolution infrared images of power equipment. The reconstructed image has richer details and better visual effects, which can provide better conditions for the infrared diagnosis of the power system.
APA, Harvard, Vancouver, ISO, and other styles
33

Tsagkatakis, G., S. Roumpakis, S. Nikolidakis, E. Petra, A. Mantes, A. Kapantagakis, K. Grigorakis, G. Katselis, N. Vlahos, and P. Tsakalides. "Knowledge distillation from multispectral images for fish freshness estimation." Electronic Imaging 2021, no. 12 (January 18, 2021): 27–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.12.fais-027.

Full text of the source
Abstract:
Fish quality is primarily affected by the number of days elapsed since harvesting, while bad storage conditions can also lead to quality degradation similar to the impact of time. Existing approaches require laboratory testing, a laborious and time-consuming process. In this work, we investigate technologies for quantifying fish quality through the development of deep learning models for analyzing imagery of fish. We first demonstrate that such quantification is possible, to a certain degree, from multispectral images, provided a sufficient number of training examples is available. Given that, we explore how knowledge distillation can be utilized to achieve similar fish quality estimation accuracy using off-the-shelf RGB cameras instead of high-end multispectral imaging systems. Experimental evaluation on individuals from the Mullus Marbatus family demonstrates that the proposed methodology constitutes a valid approach.
APA, Harvard, Vancouver, ISO, and other styles
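Entry 33 distills a multispectral "teacher" model into an RGB "student". A generic sketch of the standard Hinton-style distillation objective, temperature-softened KL divergence plus hard-label cross-entropy, is shown below as an assumption; the paper's exact loss is not specified in the abstract.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target KL term (scaled by T^2) plus cross-entropy on the hard labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * (T ** 2) * kl + (1.0 - alpha) * ce
```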
34

Mallesh, S. "Haze Removal Method for Efficient Visualization of Remotely Sensed Satellite Images." International Journal for Research in Applied Science and Engineering Technology 10, no. 11 (November 30, 2022): 1173–76. http://dx.doi.org/10.22214/ijraset.2022.47556.

Full text of the source
Abstract:
Images captured in foggy weather conditions often suffer from poor visibility, which greatly affects outdoor computer vision systems such as video surveillance, intelligent transportation assistance systems, remote sensing space cameras and so on. In such situations, traditional visibility restoration approaches usually cannot adequately restore images due to poor estimation of haze thickness and the persistence of color cast problems. In our work, we propose a visibility restoration approach that effectively addresses inadequate haze thickness estimation and alleviates color cast problems. In this way, a high-quality image with clear visibility and vivid color can be generated, improving the visibility and the details of a single input image (with fog or haze). Our approach stems from two important statistical observations about haze-free images and the haze itself. First, wavelet decomposition is applied, and a LUM filter is applied to the decomposed image; finally, we obtain the dehazed output.
APA, Harvard, Vancouver, ISO, and other styles
35

LEHTIHET, RAJA, WAEL EL ORAIBY, and MOHAMMED BENMOHAMMED. "RIDGE FREQUENCY ESTIMATION FOR LOW-QUALITY FINGERPRINT IMAGES ENHANCEMENT USING DELAUNAY TRIANGULATION." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 01 (February 2014): 1456002. http://dx.doi.org/10.1142/s0218001414560023.

Full text of the source
Abstract:
In this paper, we propose a hybrid computational geometry/grayscale algorithm that greatly enhances fingerprint images. The algorithm extracts the local minima points positioned on the ridges of a fingerprint and then generates a Delaunay triangulation using these points of interest. This triangulation, along with the local orientations, gives an accurate distance- and orientation-based ridge frequency. Finally, a tuned anisotropic filter is locally applied and the enhanced output fingerprint image is obtained. When the algorithm is applied to fingerprint images from the FVC2004 DB2 database that were rejected by the VeriFinger application, these images pass, and experimental results show that we obtain a low false and missed minutiae rate with an almost uniform distribution over the database. Moreover, the application of the proposed algorithm enables the extraction of features from all low-quality fingerprint images, and the equal error rate of verification is decreased from 6.50% to 5% using non-damaged low-quality images in the database.
APA, Harvard, Vancouver, ISO, and other styles
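Entry 35 derives ridge frequency from a Delaunay triangulation of ridge minima points. A rough SciPy sketch that reduces the idea to the reciprocal of the mean triangulation edge length; the paper's orientation-aware distance measure is simplified away here.

```python
import numpy as np
from scipy.spatial import Delaunay

def ridge_frequency(points):
    """points: (n, 2) ridge-minima coordinates -> rough ridge frequency (1 / spacing)."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:                   # collect unique triangle edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            edges.add(tuple(sorted((simplex[a], simplex[b]))))
    lengths = [np.linalg.norm(points[i] - points[j]) for i, j in edges]
    return 1.0 / np.mean(lengths)
```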
36

Demirtas, Ali Murat, Amy R. Reibman, and Hamid Jafarkhani. "Full-Reference Quality Estimation for Images With Different Spatial Resolutions." IEEE Transactions on Image Processing 23, no. 5 (May 2014): 2069–80. http://dx.doi.org/10.1109/tip.2014.2310991.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
37

Kamble, V. M., and K. Bhurchandi. "Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images." IOP Conference Series: Materials Science and Engineering 331 (March 2018): 012019. http://dx.doi.org/10.1088/1757-899x/331/1/012019.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
38

Noda, Hideki, and Michiharu Niimi. "Local MAP estimation for quality improvement of compressed color images." Pattern Recognition 44, no. 4 (April 2011): 788–93. http://dx.doi.org/10.1016/j.patcog.2010.10.022.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
39

Retraint, Florent, and Cathel Zitzmann. "Quality factor estimation of JPEG images using a statistical model." Digital Signal Processing 103 (August 2020): 102759. http://dx.doi.org/10.1016/j.dsp.2020.102759.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
40

Galvan, Fausto, Giovanni Puglisi, Arcangelo Ranieri Bruna, and Sebastiano Battiato. "First Quantization Matrix Estimation From Double Compressed JPEG Images." IEEE Transactions on Information Forensics and Security 9, no. 8 (August 2014): 1299–310. http://dx.doi.org/10.1109/tifs.2014.2330312.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Dan, Douglas Steiert, Joshua Morris, Anna Squicciarini, and Jianping Fan. "REMIND: Risk Estimation Mechanism for Images in Network Distribution." IEEE Transactions on Information Forensics and Security 15 (2020): 539–52. http://dx.doi.org/10.1109/tifs.2019.2924853.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
42

Du, Juan. "AIVMAF: Automatic Image Quality Estimation Based on Improved VMAF and YOLOv4." Journal of Physics: Conference Series 2289, no. 1 (June 1, 2022): 012020. http://dx.doi.org/10.1088/1742-6596/2289/1/012020.

Full text of the source
Abstract:
The most widely used way of estimating image quality currently relies heavily on subjective assessment, while the majority of past objective estimation methods are not satisfactory in accuracy. To address this and realize unsupervised image quality estimation with high precision, this paper creates a linear "Proportional Partition" scheme, controlled by horizontal and vertical rates of extracted pixels, to obtain the best patch-based representations of the image, balance the uneven distribution of image quality within each source image, and offer dynamic compatibility with high-resolution images. It then estimates image quality automatically with a model trained using YOLOv4, a leading artificial intelligence (AI) algorithm for target detection, on 1000 images randomly selected from the ImageNet2013 database. The proposal also follows the spirit of joint indices from the widely used Video Multimethod Assessment Fusion (VMAF) method, but replaces its Visual Information Fidelity (VIF) with the Visual Saliency-induced Index (VSI) and adds VSI to the target function, because of VIF's dependence on subjective assessment and VSI's better performance, which surpasses most recent IQA estimators. Contrast masking is also included in the objective function through the KL-divergence to better simulate human visual perception. A "Batch Learning" scheme is used to process patches with less computation and higher speed. All source images are pre-processed with colour space transformation and normalization to improve the descriptiveness of the images and reduce redundant points, and a threshold is devised to formulate suppression mechanisms. The proposed solution is shown to be a good image quality assessor in many aspects, such as correctness, consistency, linearity, monotonicity and speed, and performs well even on HD images.
43

Zhou, Jun. "Epipolar Geometry Estimation Using Improved LO-RANSAC." Advanced Materials Research 213 (February 2011): 255–59. http://dx.doi.org/10.4028/www.scientific.net/amr.213.255.

Abstract:
Estimation of the epipolar geometry is of great interest for many computer vision and robotics tasks. It is especially difficult when the putative correspondences contain a low percentage of inliers, or when a large subset of the inliers is consistent with a degenerate configuration of the epipolar geometry that is entirely incorrect. The Random Sample Consensus (RANSAC) algorithm is a popular tool for robust estimation, primarily because it tolerates a very large fraction of outliers. In this paper, we propose an improvement of locally optimized RANSAC (LO-RANSAC) that yields both fast and accurate estimation. When tested on real images with and without degenerate configurations, the resulting algorithm produces high-quality estimates and achieves significant speedups compared to standard LO-RANSAC.
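For readers who want to experiment with the underlying robust estimation, a minimal sketch using OpenCV's standard RANSAC-based fundamental matrix routine is shown below; it does not reproduce the paper's improved LO-RANSAC, and the point arrays are assumed to be precomputed putative matches.

```python
import cv2
import numpy as np

def estimate_epipolar_geometry(pts1, pts2, thresh=1.0, confidence=0.999):
    """Robustly estimate the fundamental matrix from putative matches.

    pts1, pts2: (N, 2) arrays of corresponding image points (N >= 8).
    Returns the 3x3 fundamental matrix and a boolean inlier mask,
    or (None, None) if estimation fails.
    """
    F, mask = cv2.findFundamentalMat(np.float32(pts1), np.float32(pts2),
                                     cv2.FM_RANSAC, thresh, confidence)
    if F is None:
        return None, None
    return F, mask.ravel().astype(bool)
```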
44

D'Ambrosio, D., G. Fiacchi, M. Marengo, S. Boschi, S. Fanti, and A. E. Spinelli. "Reconstruction of Dynamic PET Images Using Accurate System Point Spread Function Modeling: Effects on Parametric Images." Journal of Mechanics in Medicine and Biology 10, no. 01 (March 2010): 73–94. http://dx.doi.org/10.1142/s021951941000323x.

Abstract:
Quantitative analysis of dynamic positron emission tomography (PET) images allows physiological parameters such as the glucose metabolic rate (GMR), perfusion, and cardiac output (CO) to be estimated. However, physical effects such as photon attenuation, scatter, and partial volume reduce the accuracy of parameter estimation. The main goal of this work was to improve small-animal PET image quality by introducing the system point spread function (PSF) into the reconstruction scheme and to evaluate the effect of partial volume correction (PVC) on physiological parameter estimation. Images reconstructed using a constant PSF, a spatially variant (SV) PSF, and no PSF modeling were compared. The proposed algorithms were tested on simulated and real phantoms and on mouse images. The results show that SV-PSF-based reconstruction provides a significant contrast improvement in small-animal cardiac PET images, and the effects of PVC on physiological parameters were therefore evaluated with this algorithm. Simulations show that the proposed PVC method reduces the errors in parametric images of GMR and perfusion with respect to the true values, and a reduction of the CO percentage error relative to the reference value was also obtained with the SV-PSF approach. In conclusion, the SV-PSF reconstruction method provides a more accurate estimation of several physiological parameters obtained from a dynamic PET scan.
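A minimal, image-space sketch of folding a PSF into an MLEM/Richardson–Lucy-style update is given below; it assumes a spatially invariant Gaussian PSF and operates on a 2D image, whereas the paper models the scanner's spatially variant PSF inside the tomographic reconstruction itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mlem_psf_deconvolve(measured, psf_sigma=1.5, n_iter=30, eps=1e-8):
    """Richardson-Lucy / MLEM-style update with a Gaussian PSF model.

    measured: non-negative 2D image degraded by the PSF.
    Returns an estimate with partially restored resolution.
    """
    estimate = np.full_like(measured, measured.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = gaussian_filter(estimate, psf_sigma)       # forward model: PSF blur
        ratio = measured / np.maximum(blurred, eps)          # data fidelity ratio
        estimate *= gaussian_filter(ratio, psf_sigma)        # back-project (symmetric PSF)
        estimate = np.clip(estimate, 0, None)
    return estimate
```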
45

Van Nguyen, Manh, Chao-Hung Lin, Hone-Jay Chu, Lalu Muhamad Jaelani, and Muhammad Aldila Syariz. "Spectral Feature Selection Optimization for Water Quality Estimation." International Journal of Environmental Research and Public Health 17, no. 1 (December 30, 2019): 272. http://dx.doi.org/10.3390/ijerph17010272.

Abstract:
The spatial heterogeneity and nonlinearity exhibited by bio-optical relationships in turbid inland waters complicate the retrieval of chlorophyll-a (Chl-a) concentration from multispectral satellite images. Most studies that achieved satisfactory Chl-a estimation focused solely on the spectral region from the red to the near-infrared (NIR) bands. However, the optical complexity of turbid waters varies with location and season, which makes the selection of spectral bands challenging. Accordingly, this study proposes an optimization process that utilizes available spectral models to achieve optimal Chl-a retrieval. The method begins with the generation of a set of feature candidates, followed by candidate selection and optimization. Each candidate links to a Chl-a estimation model, including two-band, three-band, and normalized difference chlorophyll index models, so a set of selected candidates built from the available spectral bands implies an optimal composition of estimation models and therefore an optimal Chl-a estimation. Remote sensing images and in situ Chl-a measurements in Lake Kasumigaura, Japan, are analyzed quantitatively and qualitatively to evaluate the proposed method. The results indicate that the model outperforms related Chl-a estimation models: the root-mean-squared error of the Chl-a concentration obtained by the resulting model (OptiM-3) improves from 11.95 mg·m⁻³ to 6.37 mg·m⁻³, and the Pearson correlation coefficient between predicted and in situ Chl-a improves from 0.56 to 0.89.
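The three model families mentioned in the abstract reduce to simple band arithmetic; a sketch follows, where the band assignments (red ≈ 665 nm, red-edge ≈ 708 nm, NIR ≈ 753 nm) are the usual choices for these models and are an assumption here, not values taken from the paper.

```python
def two_band(r_rededge, r_red):
    """Two-band model: Chl-a proxy ~ R(708) / R(665)."""
    return r_rededge / r_red

def three_band(r_red, r_rededge, r_nir):
    """Three-band model: (1/R(665) - 1/R(708)) * R(753)."""
    return (1.0 / r_red - 1.0 / r_rededge) * r_nir

def ndci(r_rededge, r_red):
    """Normalized Difference Chlorophyll Index: (R(708) - R(665)) / (R(708) + R(665))."""
    return (r_rededge - r_red) / (r_rededge + r_red)
```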
46

Franjcic, Z., and J. Bondeson. "Quality Assessment of Self-Calibration with Distortion Estimation for Grid Point Images." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3 (August 11, 2014): 95–99. http://dx.doi.org/10.5194/isprsarchives-xl-3-95-2014.

Abstract:
Recently, a camera self-calibration algorithm was reported that solves for pose, focal length, and radial distortion from a minimal set of four 2D-to-3D point correspondences. In this paper, we present an empirical analysis of the algorithm's accuracy using high-fidelity point correspondences. In particular, we use images of circular markers arranged in a regular planar grid, obtain the centroids of the marker images, and pass these as input correspondences to the algorithm. We compare the resulting reprojection errors against those obtained from a benchmark calibration based on the same data. Our experiments show that, for low-noise point images, the self-calibration technique performs at least as well as the benchmark with a simplified distortion model.
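A sketch of the RMS reprojection error used for such comparisons is given below; it assumes a pinhole model with a single-parameter polynomial radial distortion and the principal point at the origin, which is a simplification for illustration rather than the calibration model of the paper.

```python
import numpy as np

def reproject(points_3d, R, t, f, k1):
    """Project 3D grid points with rotation R (3x3), translation t (3,),
    focal length f, and one-parameter radial distortion k1."""
    cam = (R @ points_3d.T).T + t              # world -> camera coordinates
    xy = cam[:, :2] / cam[:, 2:3]              # perspective division
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    return f * xy * (1.0 + k1 * r2)            # distorted pixel coordinates

def rms_reprojection_error(observed_2d, points_3d, R, t, f, k1):
    """Root-mean-square distance between observed and reprojected points."""
    proj = reproject(points_3d, R, t, f, k1)
    return np.sqrt(np.mean(np.sum((observed_2d - proj) ** 2, axis=1)))
```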
47

Li, Ran, Lin Luo, and Yu Zhang. "Multiframe Astronomical Image Registration Based on Block Homography Estimation." Journal of Sensors 2020 (December 9, 2020): 1–19. http://dx.doi.org/10.1155/2020/8849552.

Abstract:
Due to atmospheric turbulence, video of an object observed through an astronomical telescope drifts randomly over time, and a series of images is obtained by taking snapshots from this video. In this paper, a method is proposed that, for the first time, improves the quality of astronomical images using only multiframe image registration and superimposition. To overcome the effects of anisoplanatism, a dedicated image registration algorithm based on multiple local homography transformations is proposed. Superimposing the registered images yields an image with high definition; as a result, the signal-to-noise ratio, contrast-to-noise ratio, and definition are improved significantly.
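A simplified sketch of registration-and-stacking with OpenCV is shown below; it estimates one global homography per frame from ORB feature matches and averages the aligned grayscale frames, whereas the paper uses multiple local (block-wise) homographies.

```python
import cv2
import numpy as np

def register_and_stack(frames):
    """Align each grayscale frame to the first one with a feature-based
    homography, then average the aligned stack to raise the SNR."""
    ref = frames[0]
    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    stack = [ref.astype(np.float64)]
    for frame in frames[1:]:
        kp, des = orb.detectAndCompute(frame, None)
        matches = matcher.match(des, des_ref)
        src = np.float32([kp[m.queryIdx].pt for m in matches])
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches])
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        warped = cv2.warpPerspective(frame, H, (ref.shape[1], ref.shape[0]))
        stack.append(warped.astype(np.float64))
    return np.mean(stack, axis=0)
```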
48

Adithya Pothan Raj, V., and P. Mohan Kumar. "Machine Learning to Perform Segmentation and 3D Projection of Abnormal Tissues by Endoscopy Images." Journal of Computational and Theoretical Nanoscience 17, no. 5 (May 1, 2020): 2296–303. http://dx.doi.org/10.1166/jctn.2020.8887.

Abstract:
Images obtained by endoscopy provide the normal direction of the tissue contour, an important anatomical cue that can be used by segmentation algorithms. Because tissue image sizes vary, the intensity values of the tissue are typically non-uniform and inherently noisy, so identifying the normal direction in a single iteration is unreliable. A multi-iteration algorithm has been developed for estimating the direction normal to the edge of defective tissue. Experimental results show that estimation reliability improves across iterations, and the estimate after the final iteration corrects the normal direction. A balance is obtained at all points during normal direction estimation, and this information is used by the edge detector. The implementation results show that the proposed algorithm reduces the number of spurious boundaries and gaps in the extracted contours, thereby improving the quality of segmentation and 3D projection. The corrected output can also be used to remove false edges in post-processing. The performance of the proposed algorithm is measured over multiple iterations and the results are tabulated.
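As a loose, generic stand-in for multi-iteration normal estimation, the sketch below approximates the contour normal at given points by accumulating smoothed image-gradient directions over several iterations; this is an assumption-based illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def estimate_normals(image, points, n_iter=3, sigma=1.0):
    """Estimate unit normals at the given (row, col) contour points from
    image gradients, smoothing more strongly at each iteration and
    accumulating the directions to suppress noise."""
    acc = np.zeros((len(points), 2))
    for it in range(1, n_iter + 1):
        smoothed = gaussian_filter(image.astype(float), sigma * it)
        gy = sobel(smoothed, axis=0)   # gradient along rows
        gx = sobel(smoothed, axis=1)   # gradient along columns
        for i, (r, c) in enumerate(points):
            v = np.array([gy[r, c], gx[r, c]])
            norm = np.linalg.norm(v)
            if norm > 1e-8:
                acc[i] += v / norm     # unit gradient ~ normal to the intensity edge
    lengths = np.linalg.norm(acc, axis=1, keepdims=True)
    return acc / np.maximum(lengths, 1e-8)
```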
49

Alshamary, Haider Ali Jasim, Ahmad Sulaiman Abdullah, and Sadeq Adnan Hbeeb. "Nearest-neighbor field algorithm based on patchMatch for myocardial perfusion motion estimation/correction." Bulletin of Electrical Engineering and Informatics 12, no. 2 (April 1, 2023): 843–50. http://dx.doi.org/10.11591/eei.v12i2.4216.

Abstract:
Deformation correction and recovery of dynamic magnetic resonance images (DMRI) with low-complexity algorithms, without compromising image quality, is a challenging problem. We propose a motion-estimating deformation-correction compressive sensing (DC-CS) scheme to recover dynamic images from undersampled measurements, simplifying the complex optimization problem into three sub-problems. The contributions of this research are: introducing a global search strategy in place of the DC registration step, guaranteeing implicit motion estimation that avoids any spatial alignment or registration of the images, and reducing the computational cost to a minimum by using PatchMatch (PM). Simulation results show that the PM algorithm accelerates recovery without loss of quality compared with the DC algorithm.
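A compact sketch of the core PatchMatch idea (random initialization, propagation, random search) for computing a nearest-neighbor field between two frames is given below; it is a simplified, slow reference version for float grayscale images, not the authors' motion-estimation pipeline.

```python
import numpy as np

def patch_cost(a, b, ay, ax, by, bx, p):
    """Sum of squared differences between the p x p patches at (ay, ax) in a
    and (by, bx) in b (float grayscale images assumed)."""
    return float(np.sum((a[ay:ay + p, ax:ax + p] - b[by:by + p, bx:bx + p]) ** 2))

def patchmatch_nnf(a, b, p=7, n_iter=4, rng=None):
    """Approximate nearest-neighbor field from image a to image b."""
    rng = rng or np.random.default_rng(0)
    h, w = a.shape[0] - p + 1, a.shape[1] - p + 1      # valid patch origins in a
    hb, wb = b.shape[0] - p + 1, b.shape[1] - p + 1    # valid patch origins in b
    nnf = np.stack([rng.integers(0, hb, (h, w)),
                    rng.integers(0, wb, (h, w))], axis=-1)
    cost = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            cost[y, x] = patch_cost(a, b, y, x, nnf[y, x, 0], nnf[y, x, 1], p)

    def try_improve(y, x, by, bx):
        by = int(np.clip(by, 0, hb - 1))
        bx = int(np.clip(bx, 0, wb - 1))
        c = patch_cost(a, b, y, x, by, bx, p)
        if c < cost[y, x]:
            cost[y, x] = c
            nnf[y, x] = by, bx

    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                # Propagation: reuse the shifted matches of the left/upper neighbors.
                if x > 0:
                    try_improve(y, x, nnf[y, x - 1, 0], nnf[y, x - 1, 1] + 1)
                if y > 0:
                    try_improve(y, x, nnf[y - 1, x, 0] + 1, nnf[y - 1, x, 1])
                # Random search: sample around the current match with shrinking radius.
                radius = max(hb, wb)
                while radius >= 1:
                    try_improve(y, x,
                                nnf[y, x, 0] + rng.integers(-radius, radius + 1),
                                nnf[y, x, 1] + rng.integers(-radius, radius + 1))
                    radius //= 2
    return nnf
```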
50

Buma, Willibroad Gabila, and Sang-Il Lee. "Evaluation of Sentinel-2 and Landsat 8 Images for Estimating Chlorophyll-a Concentrations in Lake Chad, Africa." Remote Sensing 12, no. 15 (July 29, 2020): 2437. http://dx.doi.org/10.3390/rs12152437.

Abstract:
Much effort has been devoted to estimating the concentration of chlorophyll-a (Chl a) in lakes, but optical complexity and a lack of in situ data complicate the estimation in such water bodies. We compared four established satellite reflectance algorithms—the two-band and three-band algorithms (2BDA, 3BDA), fluorescence line height (FLH), and the normalized difference chlorophyll index (NDCI)—for estimating Chl a concentration in Lake Chad, and evaluated the performance and applicability of Landsat-8 (L8) and Sentinel-2 (S2) images with these four algorithms. For accuracy, we compared the concentration levels from the four algorithms with those derived from Worldview-3 (WV3) images. We identified two promising algorithms that can be used alongside L8 and S2 imagery to monitor Chl a concentrations in Lake Chad: with an average R² of 0.8, 3BDA and NDCI performed accurately with both S2 and L8 images, and 3BDA showed the highest agreement with the WV3 estimates. This demonstrates the usefulness of such sensor imagery for improving water quality information in areas that are difficult to access or where conventional data are limited.
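Two of the quantities compared here reduce to short band arithmetic; the sketch below computes FLH and a simple R² score against in situ data, with the Sentinel-2 band assignments (B4 ≈ 665 nm, B5 ≈ 705 nm, B6 ≈ 740 nm) assumed for illustration rather than taken from the paper.

```python
import numpy as np

def flh(b4_red, b5_rededge, b6, lam4=665.0, lam5=705.0, lam6=740.0):
    """Fluorescence Line Height: red-edge peak minus the baseline interpolated
    between the flanking red and NIR bands at the peak wavelength."""
    baseline = b4_red + (b6 - b4_red) * (lam5 - lam4) / (lam6 - lam4)
    return b5_rededge - baseline

def r_squared(predicted_chla, in_situ_chla):
    """Coefficient of determination of a linear fit, as used to rank algorithms."""
    slope, intercept = np.polyfit(predicted_chla, in_situ_chla, 1)
    fitted = slope * np.asarray(predicted_chla) + intercept
    ss_res = np.sum((np.asarray(in_situ_chla) - fitted) ** 2)
    ss_tot = np.sum((np.asarray(in_situ_chla) - np.mean(in_situ_chla)) ** 2)
    return 1.0 - ss_res / ss_tot
```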