To view other types of publications on this topic, follow the link: Image quality estimation.

Journal articles on the topic "Image quality estimation"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "Image quality estimation".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen source in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when such data are available in the source's metadata.

Browse journal articles from a wide range of disciplines and organize your bibliography correctly.

1

Chin, Sin Chee, Chee-Onn Chow, Jeevan Kanesan, and Joon Huang Chuah. "A Study on Distortion Estimation Based on Image Gradients." Sensors 22, no. 2 (January 14, 2022): 639. http://dx.doi.org/10.3390/s22020639.

Abstract:
Image noise is a random, uneven variation of pixel values. A good estimation of image noise parameters is crucial in image noise modeling, image denoising, and image quality assessment. To the best of our knowledge, there is no single estimator that can predict all noise parameters for multiple noise types. The first contribution of our research was to design a noise data feature extractor that can effectively extract noise information from an image pair. The second contribution of our work leveraged other noise parameter estimation algorithms that can each predict only one type of noise. Our proposed method, DE-G, can accurately estimate additive noise, multiplicative noise, and impulsive noise from single-source images. We also show the capability of the proposed method in estimating multiple corruptions.
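The three noise families the abstract names have standard parametric forms. As a hedged illustration (this is not the paper's DE-G estimator), the sketch below synthesizes each type, assuming zero-mean Gaussian additive noise, Gaussian speckle for the multiplicative case, and salt-and-pepper impulses:

```python
import numpy as np

def add_noise(img, kind, rng=None, sigma=0.1, density=0.05):
    """Corrupt a float image in [0, 1] with one of three classic noise models."""
    rng = rng or np.random.default_rng(0)
    if kind == "additive":          # zero-mean Gaussian noise added to every pixel
        out = img + rng.normal(0.0, sigma, img.shape)
    elif kind == "multiplicative":  # speckle: each pixel scaled by (1 + noise)
        out = img * (1.0 + rng.normal(0.0, sigma, img.shape))
    elif kind == "impulsive":       # salt-and-pepper: a fraction of pixels forced to 0 or 1
        out = img.copy()
        mask = rng.random(img.shape) < density
        out[mask] = rng.integers(0, 2, img.shape)[mask].astype(float)
    else:
        raise ValueError(kind)
    return np.clip(out, 0.0, 1.0)
```

An estimator such as the one described would try to recover `sigma` or `density` from a corrupted image, or from a clean/corrupted pair.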
2

Temel, Dogancan, Mohit Prabhushankar, and Ghassan AlRegib. "UNIQUE: Unsupervised Image Quality Estimation." IEEE Signal Processing Letters 23, no. 10 (October 2016): 1414–18. http://dx.doi.org/10.1109/lsp.2016.2601119.

3

Wang, Qiu Yun. "Depth Estimation Based Underwater Image Enhancement." Advanced Materials Research 926-930 (May 2014): 1704–7. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.1704.

Abstract:
According to the image formation model and the nature of underwater images, we find that haze and color distortion seriously degrade underwater image data, lowering the visibility and overall quality of the images. Hence, aiming to reduce the noise and haze present in underwater images and to compensate for the color distortion, the dark channel prior model is used to enhance the underwater image. We compare the dark channel prior based enhancement method with a contrast-stretching based method. The experimental results show that the dark channel prior model handles underwater images well and demonstrate the superior performance of the proposed method.
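The core of the dark channel prior can be sketched directly: the per-pixel channel minimum followed by a local minimum filter, plus the usual brightest-pixels heuristic for atmospheric light. This is a generic illustration of the prior, not the paper's underwater pipeline; the patch size and top fraction are conventional assumptions:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB float image (H, W, 3): per-pixel channel minimum,
    then a local minimum filter over a patch x patch window."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.full((h, w), np.inf)
    # sliding-window minimum (simple O(patch^2) version for clarity)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

def estimate_atmospheric_light(img, patch=15, top=0.001):
    """Average the image colors at the brightest dark-channel pixels."""
    dc = dark_channel(img, patch)
    n = max(1, int(top * dc.size))
    idx = np.argsort(dc.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)
```

In haze-free regions at least one color channel is dark somewhere in the patch, so a large dark-channel value signals haze (or, underwater, scattering).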
4

Li, Chao, Mingyang Li, Jie Liu, Yingchang Li, and Qianshi Dai. "Comparative Analysis of Seasonal Landsat 8 Images for Forest Aboveground Biomass Estimation in a Subtropical Forest." Forests 11, no. 1 (December 31, 2019): 45. http://dx.doi.org/10.3390/f11010045.

Abstract:
To support further research on regional carbon sinks, it is important to estimate forest aboveground biomass (AGB). Based on optical images, AGB can be estimated and mapped at a regional scale. The Landsat 8 Operational Land Imager (OLI) has therefore been widely used for regional-scale AGB estimation; however, most studies have relied solely on peak-season images without comparing the performance of other seasons, which may ultimately affect the accuracy of AGB estimation. To explore the effects of using images from various seasons for AGB estimation, we analyzed seasonal Landsat 8 OLI images of a subtropical forest in northern Hunan, China. We then performed stepwise regression to estimate the AGB of different forest types (coniferous forest, broadleaf forest, mixed forest, and total vegetation) and compared the model performances across seasons and forest types. The results showed that textural information played an important role in AGB estimation for each forest type. Stratification by forest type yielded better AGB estimation models than those for total vegetation. The most accurate AGB estimates were achieved using the autumn (October) image, and the least accurate using the peak-season (August) image. In addition, the uncertainties associated with the peak-season image were largest for AGB values < 25 Mg/ha and > 75 Mg/ha, and the quality of the AGB map for the peak season was poorer than the maps for other seasons. This study suggests that the acquisition time of forest images can affect AGB estimation in subtropical forests. Therefore, future research should consider seasonal time-series images to improve AGB estimation.
5

Xu, Ningshan, Dongao Ma, Guoqiang Ren, and Yongmei Huang. "BM-IQE: An Image Quality Evaluator with Block-Matching for Both Real-Life Scenes and Remote Sensing Scenes." Sensors 20, no. 12 (June 19, 2020): 3472. http://dx.doi.org/10.3390/s20123472.

Abstract:
Like natural images, remote sensing scene images, whose quality reflects the imaging performance of the remote sensor, suffer from degradation caused by the imaging system. However, current methods for measuring imaging performance in engineering applications require particular image patterns and lack generality, so a more universal approach is needed to assess the imaging performance of a remote sensor without constraints on land cover. Because existing general-purpose blind image quality assessment (BIQA) methods cannot obtain satisfying results on remote sensing scene images, in this work we propose a BIQA model, named BM-IQE, with improved performance on natural images as well as remote sensing scene images. We employ a novel block-matching strategy called Structural Similarity Block-Matching (SSIM-BM) to match and group similar image patches. In this way, the potential local information among different patches can be expressed, enhancing the validity of natural scene statistics (NSS) feature modeling. At the same time, we introduce several features to better characterize and express remote sensing images. The NSS features are extracted from each group, and the feature vectors are then fitted to a multivariate Gaussian (MVG) model. This MVG model is compared against a reference MVG model learned from a corpus of high-quality natural images to produce a basic quality estimate for each patch (the centroid of each group). A refined quality estimate for each patch is obtained by a weighted average of its similar patches' basic quality estimates, and the overall quality score of the test image is then computed by average pooling of the patch estimates. Extensive experiments demonstrate that the proposed BM-IQE method not only outperforms other BIQA methods on remote sensing scene image datasets but also achieves competitive performance on general-purpose natural image datasets compared to existing state-of-the-art FR/NR-IQA methods.
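The comparison of a test MVG model against a reference MVG model is typically done with a Mahalanobis-like distance that pools the two covariances (this is the NIQE-style formulation; the exact distance used by BM-IQE is not reproduced here, so treat this as a hedged sketch):

```python
import numpy as np

def mvg_distance(mu1, cov1, mu2, cov2):
    """Distance between two multivariate Gaussian feature models:
    sqrt of (mu1 - mu2)^T [(cov1 + cov2) / 2]^{-1} (mu1 - mu2),
    using a pseudo-inverse for numerical safety."""
    d = mu1 - mu2
    pooled = (cov1 + cov2) / 2.0
    return float(np.sqrt(d @ np.linalg.pinv(pooled) @ d))
```

A larger distance from the pristine-image model signals lower predicted quality.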
6

Huang, Penghe, Dongyan Li, and Huimin Zhao. "An Improved Robust Fractal Image Compression Based on M-Estimator." Applied Sciences 12, no. 15 (July 27, 2022): 7533. http://dx.doi.org/10.3390/app12157533.

Abstract:
In this paper, a robust fractal image compression method based on M-estimator is presented. The proposed method applies the M-estimator to the parameter estimation in the fractal encoding procedure using Huber and Tukey’s robust statistics. The M-estimation reduces the influence of the outliers and makes the fractal encoding algorithm robust to the noisy image. Meanwhile, the quadtree partitioning approach has been used in the proposed methods to improve the efficiency of the encoding algorithm, and some unnecessary computations are eliminated in the parameter estimation procedures. The experimental results demonstrate that the proposed method is insensitive to the outliers in the noisy corrupted image. The comparative data shows that the proposed method is superior in both the encoding time and the quality of retrieved images over other robust fractal compression algorithms. The proposed algorithm is useful for multimedia and image archiving, low-cost consumption applications and progressive image transmission of live images, and in reducing computing time for fractal image compression.
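The Huber M-estimation at the heart of the method can be illustrated with a generic IRLS (iteratively reweighted least squares) fit of the scale-and-offset mapping that fractal coding estimates between domain and range blocks. This is a hedged sketch of the statistical idea, not the authors' encoder; the MAD-based scale and the cutoff k = 1.345 are conventional choices assumed here:

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber weight function: 1 for small residuals, k/|r| beyond the cutoff."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def robust_scale_offset(x, y, iters=20):
    """IRLS estimate of (s, o) minimizing sum rho(y - s*x - o) with Huber rho.
    In fractal coding, s and o are the contractive scale and offset parameters."""
    w = np.ones_like(x)
    s, o = 1.0, 0.0
    for _ in range(iters):
        sw = w.sum()
        mx, my = (w * x).sum() / sw, (w * y).sum() / sw
        s = (w * (x - mx) * (y - my)).sum() / max((w * (x - mx) ** 2).sum(), 1e-12)
        o = my - s * mx
        r = y - s * x - o
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust residual scale (MAD)
        w = huber_weights(r / scale)
    return s, o
```

Unlike plain least squares, the fit is barely moved by a single grossly corrupted (noisy) sample.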
7

Anikeeva, I., and A. Chibunichev. "RANDOM NOISE ASSESSMENT IN AERIAL AND SATELLITE IMAGES." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 771–75. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-771-2021.

Abstract:
Random noise in aerial and satellite images is one of the factors that decrease their quality, yet noise level assessment in images receives too little attention. The object of this study is a method for numerically estimating random image noise based on harmonic analysis, and its suitability for aerial and satellite image quality assessment is considered. Results of testing the algorithm on model data and on real satellite images of different terrain surfaces are presented, along with accuracy estimates for the root-mean-square deviation (RMS) of random image noise calculated by the harmonic analysis method.
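The paper's harmonic-analysis estimator is not reproduced here, but a standard fast estimator of the same quantity (the RMS of random image noise) is Immerkaer's Laplacian-difference method; a sketch assuming a single-channel float image:

```python
import numpy as np

def estimate_noise_sigma(img):
    """Fast noise-sigma estimate (Immerkaer's method): apply the 3x3 kernel
    [[1,-2,1],[-2,4,-2],[1,-2,1]], which cancels locally smooth image
    structure, then convert the mean absolute response to a Gaussian sigma."""
    h, w = img.shape
    # kernel response at interior pixels, written out as shifted slices
    r = (img[:-2, :-2] - 2 * img[:-2, 1:-1] + img[:-2, 2:]
         - 2 * img[1:-1, :-2] + 4 * img[1:-1, 1:-1] - 2 * img[1:-1, 2:]
         + img[2:, :-2] - 2 * img[2:, 1:-1] + img[2:, 2:])
    return np.sqrt(np.pi / 2.0) * np.abs(r).sum() / (6.0 * (h - 2) * (w - 2))
```

For i.i.d. Gaussian noise the kernel response has standard deviation 6σ, which the sqrt(π/2)/6 factor converts back to σ.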
8

Kim, Sangmin, Daekwan Kim, Kilwoo Chung, and JoonSeo Yim. "Estimation of any fields of lens PSFs for image simulation." Electronic Imaging 2021, no. 7 (January 18, 2021): 72–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.7.iss-072.

Abstract:
In a mobile smartphone camera, image quality is increasingly degraded towards the edges of the image sensor due to the high CRA (Chief Ray Angle). It is critical to model this effect, since image quality at the periphery suffers from attenuated illuminance and a broadened PSF (point spread function). In order to predict image quality from the center to the edge of the camera output, we propose a method to estimate lens PSFs at any particular image field. The method adopts Zernike polynomials to model lens aberrations while allowing arbitrary spatial sampling, and estimates the pupil shape as a function of the optical field. The proposed method has two steps: 1) estimation of the pupil shape and Zernike polynomial coefficients, and 2) generation of a PSF with the estimated parameters. The method was tested on a typical mobile lens to evaluate the performance of the PSF estimation at 0.0F and 0.8F. In addition, Siemens star images were generated with the estimated PSFs to compare resolution at the center and the edge of an image. The results show that the edge image is worse than the center image in terms of MTF (Modulation Transfer Function), underscoring the importance of assessing edge image quality when pre-evaluating a mobile camera.
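The pupil-to-PSF step can be illustrated with standard Fourier optics: the incoherent PSF is the squared magnitude of the Fourier transform of the complex pupil function. This is a hedged sketch (not the paper's estimator) with a circular pupil and a single Zernike defocus term 2ρ² − 1 as the example aberration; the grid size and aperture fraction are arbitrary assumptions:

```python
import numpy as np

def psf_from_pupil(n=128, aperture=0.5, defocus_waves=0.0):
    """Diffraction PSF from a circular pupil, PSF = |FFT(pupil)|^2, with an
    optional defocus aberration expressed in waves."""
    coords = (np.arange(n) - n // 2) / (n // 2)      # normalized grid in [-1, 1)
    x, y = np.meshgrid(coords, coords)
    rho2 = (x ** 2 + y ** 2) / aperture ** 2         # rho2 = 1 at the pupil edge
    pupil = np.where(rho2 <= 1.0, 1.0 + 0j, 0.0)
    # Zernike defocus polynomial 2*rho^2 - 1, scaled to the requested waves
    pupil *= np.exp(1j * 2.0 * np.pi * defocus_waves * (2.0 * rho2 - 1.0))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()
```

Adding defocus lowers and broadens the peak, which is exactly the edge-of-field degradation the abstract describes.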
9

Du, Juan. "AIVMAF: Automatic Image Quality Estimation Based on Improved VMAF and YOLOv4." Journal of Physics: Conference Series 2289, no. 1 (June 1, 2022): 012020. http://dx.doi.org/10.1088/1742-6596/2289/1/012020.

Abstract:
The most widely used approach to image quality estimation still relies heavily on subjective assessment, while the majority of past objective estimation methods are unsatisfactory in accuracy. To address this and realize unsupervised image quality estimation with high precision, this paper introduces a linear "proportional partition" patching scheme, controlled by horizontal and vertical pixel-extraction rates, to obtain good patch representations of an image, balance the uneven distribution of image quality within each source image, and remain compatible with high-resolution images. Image quality is then estimated automatically with a model trained using the YOLOv4 target detection algorithm on 1000 images randomly selected from the ImageNet2013 database. The proposal also adopts the joint-index spirit of the widely used Video Multimethod Assessment Fusion (VMAF) method, but replaces its Visual Information Fidelity (VIF) component, which depends on subjective assessment, with the Visual Saliency-induced Index (VSI), a recent top-performing IQA model, in the target function. Contrast masking is also included in the objective function via a KL-divergence term to better simulate human visual perception, and a "batch learning" scheme processes patches with less computation and faster speed. All source images are pre-processed with color space transformation and normalization to improve their descriptiveness and reduce redundant points, and a threshold is devised as a suppression mechanism. The proposed solution is shown to be a good image quality assessor in terms of correctness, consistency, linearity, monotonicity, and speed, and performs well even on HD images.
10

Salokhiddinov, Sherzod, and Seungkyu Lee. "Iterative Refinement of Uniformly Focused Image Set for Accurate Depth from Focus." Applied Sciences 10, no. 23 (November 28, 2020): 8522. http://dx.doi.org/10.3390/app10238522.

Abstract:
Estimating the 3D shape of a scene from a set of differently focused images is a practical approach to 3D reconstruction with color cameras. However, depth reconstructed with existing depth from focus (DFF) methods still suffers from poor quality in textureless and object-boundary regions. In this paper, we propose an improved DFF-based depth estimation that iteratively refines the 3D shape from a uniformly focused image set (UFIS), investigating appearance changes in the spatial and frequency domains in an iterative manner. To achieve sub-frame accuracy in depth estimation, the optimal location of the focused frame in DFF is estimated by fitting a polynomial curve to the dissimilarity measurements. To avoid wrong depth values in textureless regions, we build a confidence map and use it to identify erroneous depth estimates. We evaluated our method on public and our own datasets obtained from different types of devices, such as smartphones, medical cameras, and normal color cameras. Quantitative and qualitative evaluations on various test image sets show the promising performance of the proposed method.
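Two minimal DFF building blocks can be sketched, hedged as a generic illustration rather than the authors' pipeline: a per-frame sharpness score, and a local parabola fit that refines the best-focus frame index to sub-frame accuracy (a common stand-in for the paper's polynomial fitting):

```python
import numpy as np

def focus_measure(img):
    """Sharpness score: variance of a 4-neighbour Laplacian response
    (higher = sharper, i.e. closer to best focus)."""
    lap = (img[1:-1, :-2] + img[1:-1, 2:] + img[:-2, 1:-1] + img[2:, 1:-1]
           - 4.0 * img[1:-1, 1:-1])
    return lap.var()

def subframe_peak(scores):
    """Refine the best-focus frame index to sub-frame accuracy by fitting a
    parabola through the maximum score and its two neighbours."""
    i = int(np.argmax(scores))
    if i == 0 or i == len(scores) - 1:
        return float(i)          # peak at the stack boundary: no refinement
    a, b, c = scores[i - 1], scores[i], scores[i + 1]
    return i + 0.5 * (a - c) / (a - 2.0 * b + c)
```

Mapping the refined frame index through the known focus distances of the stack yields the sub-frame depth estimate.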
11

Min, Xiongkuo, Guangtao Zhai, Ke Gu, Yutao Liu, and Xiaokang Yang. "Blind Image Quality Estimation via Distortion Aggravation." IEEE Transactions on Broadcasting 64, no. 2 (June 2018): 508–17. http://dx.doi.org/10.1109/tbc.2018.2816783.

12

Loftus, John, David Laurí, and Barry Lennox. "Product Quality Estimation Using Multivariate Image Analysis." IFAC Proceedings Volumes 47, no. 3 (2014): 10610–15. http://dx.doi.org/10.3182/20140824-6-za-1003.00614.

13

Karam, Ghada Sabah. "Blurred Image Restoration with Unknown Point Spread Function." Al-Mustansiriyah Journal of Science 29, no. 1 (October 31, 2018): 189. http://dx.doi.org/10.23851/mjs.v29i1.335.

Abstract:
Image blur is caused by a number of factors, such as defocus, motion, and limited sensor resolution. Most existing blind deconvolution research concentrates on recovering a single blur kernel for the entire image. We propose an adaptive, blind, no-reference image quality assessment method for estimating the blur function (i.e., the point spread function, PSF) of images acquired under low-light conditions and defocused images, using Bayesian blind deconvolution. It is based on predicting a sharp version of a blurry input image and using the two images to solve for the PSF. The estimation proceeds by trial-and-error experimentation until an acceptable restored image quality is obtained. The quality of the images is assessed by applying a set of quality metrics. Our method is fast and produces accurate results.
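Two standard ingredients mentioned above can be sketched (a hedged illustration, assuming float images in [0, 1]): a parametric Gaussian PSF, a common defocus blur model, and PSNR, one of the usual metrics for judging restored image quality:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized isotropic Gaussian PSF, a common parametric blur model."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between reference and test images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

A trial-and-error loop of the kind described would try candidate PSF parameters, deconvolve, and keep the candidate whose restoration scores best.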
14

Kamaev, A. N., I. P. Urmanov, A. A. Sorokin, D. A. Karmanov, and S. P. Korolev. "IMAGES ANALYSIS FOR AUTOMATIC VOLCANO VISIBILITY ESTIMATION." Computer Optics 42, no. 1 (March 30, 2018): 128–40. http://dx.doi.org/10.18287/2412-6179-2018-42-1-128-140.

Abstract:
In this paper, a method for estimating volcano visibility in images is presented. The method includes algorithms for analyzing parametric edges of the observed objects and the frequency characteristics of the images. Procedures for constructing parametric edges of a volcano and comparing them are considered. An algorithm is proposed for identifying the most persistent edges across a group of several reference images. The visibility of a volcano is estimated by comparing these edges with those of the image under analysis; the visibility estimate is maximized with respect to a planar shift and rotation of the camera to eliminate their influence. If the image quality is low, making it hardly suitable for further visibility analysis, the estimate is corrected using an algorithm that analyzes the image frequency response, represented as a vector of octave-frequency contributions to the image luminance. Comparing the reference frequency characteristics with those of the analyzed image allows us to estimate the contribution of different frequencies to the formation of volcano images. We discuss verification results for the proposed algorithms obtained using the archive of a video observation system for Kamchatka volcanoes. The estimates obtained corroborate the effectiveness of the proposed methods, enabling non-informative imagery to be automatically filtered out while monitoring volcanic activity.
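The octave-frequency vector mentioned above can be approximated by radially binning the 2D power spectrum; a hedged sketch (band edges at successive halvings of the Nyquist frequency are an assumption, not the paper's exact definition):

```python
import numpy as np

def octave_energy(img, n_bands=6):
    """Fraction of spectral energy in octave frequency bands, each band
    spanning a factor-of-two range of radial frequency below Nyquist."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    r = np.hypot(y - h / 2, x - w / 2)      # radial frequency index
    nyquist = min(h, w) / 2
    bands = np.zeros(n_bands)
    for b in range(n_bands):                # band 0 = highest octave
        lo, hi = nyquist / 2 ** (b + 1), nyquist / 2 ** b
        bands[b] = power[(r >= lo) & (r < hi)].sum()
    return bands / max(bands.sum(), 1e-12)
```

A low-quality (hazy or defocused) frame shows a depleted top octave relative to a reference frame, which is the kind of signature the correction step exploits.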
15

Chaturvedi, Pawan, and Michael F. Insana. "Autoregressive Spectral Estimation in Ultrasonic Scatterer Size Imaging." Ultrasonic Imaging 18, no. 1 (January 1996): 10–24. http://dx.doi.org/10.1177/016173469601800102.

Abstract:
An autoregressive (AR) spectral estimation method was considered for the purpose of estimating scatterer size images. The variance and bias of the resulting estimates were compared with those of classical FFT periodograms for a range of input signal-to-noise ratios and echo-signal durations corresponding to various C-scan image slice thicknesses. The AR approach was found to produce images of significantly higher quality for noisy data and when thin slices were required. Several images reconstructed with the two techniques are presented to demonstrate the difference in visual quality. Task-specific guidelines for empirical selection of the AR model order are also proposed.
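A textbook AR spectral estimator of the kind described - fit AR coefficients via the Yule-Walker equations, then evaluate the model spectrum - can be sketched as follows (a generic illustration, not the authors' implementation):

```python
import numpy as np

def yule_walker(x, order):
    """AR coefficients and innovation variance from the Yule-Walker equations,
    using biased sample autocovariances."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - np.dot(a, r[1:])
    return a, sigma2

def ar_psd(a, sigma2, freqs):
    """AR model spectrum S(f) = sigma2 / |1 - sum_k a_k e^{-i 2 pi f k}|^2
    for normalized frequencies f in [0, 0.5]."""
    k = np.arange(1, len(a) + 1)
    denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a) ** 2
    return sigma2 / denom
```

Unlike the periodogram, the AR spectrum stays smooth for short, noisy records, which is the advantage the abstract reports for thin C-scan slices.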
16

Madhuanand, L., F. Nex, and M. Y. Yang. "DEEP LEARNING FOR MONOCULAR DEPTH ESTIMATION FROM UAV IMAGES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 451–58. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-451-2020.

Abstract:
Depth is an essential component of various scene understanding tasks and of reconstructing the 3D geometry of a scene. Estimating depth from stereo images requires multiple views of the same scene, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest with recent advancements in computer vision and deep learning, though research has largely focused on indoor scenes or outdoor scenes captured at ground level. Single-image depth estimation from aerial images has been limited by the additional complexities of increased camera distance and wider area coverage with many occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features, and points of view. The single-image depth estimation is based on image reconstruction techniques that use stereo images to learn to estimate depth from single images. Among the various available models for ground-level single-image depth estimation, two models, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial Network (GAN), are used to learn depth from UAV aerial images. These models generate pixel-wise disparity images, which can be converted into depth information. The generated disparity maps are evaluated for internal quality using various error metrics. The results show higher disparity ranges with smoother images generated by the CNN model, and sharper images with a smaller disparity range generated by the GAN model. The produced disparity images are converted to depth and compared with point clouds obtained using Pix4D. We find that the CNN model performs better than the GAN and produces depth similar to that of Pix4D. This comparison helps streamline efforts to produce depth from a single aerial image.
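The disparity-to-depth conversion the authors rely on is the standard stereo relation depth = f·B/d; a hedged sketch (the focal length and baseline in the usage below are placeholder values, not from the paper):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a disparity map (pixels) to depth (meters) via depth = f*B/d.
    Disparities near zero are clamped so far-range pixels stay finite."""
    d = np.maximum(np.asarray(disparity, float), eps)
    return focal_px * baseline_m / d
```

For example, with a 1000 px focal length and a 0.5 m baseline, a 10 px disparity maps to 50 m of depth; larger disparities always map to nearer points.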
17

Omura, Hajime, and Teruya Minamoto. "Image quality degradation assessment based on the dual-tree complex discrete wavelet transform for evaluating watermarked images." International Journal of Wavelets, Multiresolution and Information Processing 15, no. 05 (August 28, 2017): 1750046. http://dx.doi.org/10.1142/s0219691317500461.

Abstract:
We propose a new image quality degradation assessment method based on the dual-tree complex discrete wavelet transform (DT-CDWT) for evaluating the image quality of watermarked images. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are widely used to evaluate the image quality degradation resulting from embedding a digital watermark. However, although the majority of digital image watermarking methods embed the watermark in either the spatial or the frequency domain of the original image, the degradation is usually evaluated using only the spatial domain, so the evaluation is not always fair. Therefore, our method evaluates the image quality degradation of watermarked images using features in both the spatial and frequency domains. To extract the features, we define three indices: a 1-norm estimate using bit-planes in the spatial domain, the sharpness, and a 1-norm estimate based on the DT-CDWT domains. We describe our image quality assessment method in detail and present experimental results demonstrating a strong positive correlation between the results of our method and a subjective evaluation, in comparison with PSNR and SSIM.
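For reference, the SSIM baseline mentioned above can be sketched in its simplest single-window form; real implementations (and the one the authors compare against) use local sliding windows, so treat this whole-image version as an illustrative simplification:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Single-window SSIM over the whole image: luminance, contrast, and
    structure terms with the standard stabilizing constants c1, c2."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

SSIM is 1 for identical images and falls as watermark embedding perturbs the structure.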
18

Zhu, Zhiqin, Yaqin Luo, Hongyan Wei, Yong Li, Guanqiu Qi, Neal Mazur, Yuanyuan Li, and Penglong Li. "Atmospheric Light Estimation Based Remote Sensing Image Dehazing." Remote Sensing 13, no. 13 (June 22, 2021): 2432. http://dx.doi.org/10.3390/rs13132432.

Abstract:
Remote sensing images are widely used in object detection and tracking, military security, and other computer vision tasks. However, remote sensing images are often degraded by aerosols suspended in the air, especially under poor weather conditions such as fog, haze, and mist. The quality of remote sensing images directly affects the normal operation of computer vision systems, so haze removal is a crucial and indispensable pre-processing step in remote sensing image processing. Additionally, most existing image dehazing methods are not applicable to all scenes, so the corresponding dehazed images may exhibit varying degrees of color distortion. This paper proposes a novel atmospheric light estimation based dehazing algorithm to obtain high visual-quality remote sensing images. First, a differentiable function is used to train the parameters of a linear scene depth model for generating the scene depth map of a remote sensing image. Second, the atmospheric light of each hazy remote sensing image is estimated from the corresponding scene depth map. Then, the corresponding transmission map is estimated on the basis of the estimated atmospheric light by a haze-lines model. Finally, according to the estimated atmospheric light and transmission map, an atmospheric scattering model is applied to remove haze from remote sensing images. The colors of the images dehazed by the proposed method are in line with human visual perception across different scenes. A dataset with 100 remote sensing images from hazy scenes was built for testing. The performance of the proposed image dehazing method is confirmed by theoretical analysis and comparative experiments.
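The final step - inverting the atmospheric scattering model I = J·t + A·(1 − t) to recover the scene radiance J - can be sketched directly. The estimation of A and t is the paper's actual contribution and is not reproduced here; the transmission floor is a conventional safeguard:

```python
import numpy as np

def dehaze(hazy, A, t, t_min=0.1):
    """Invert I = J*t + A*(1-t): J = (I - A)/t + A, with the transmission
    clamped from below so near-zero t does not amplify noise."""
    t = np.maximum(t, t_min)
    if hazy.ndim == 3:          # broadcast a (H, W) transmission over RGB
        t = t[..., None]
    return (hazy - A) / t + A
```

With the true A and t, this inversion recovers the haze-free radiance exactly, which makes it easy to verify on synthetic data.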
19

Hashim, Ahmed, Hazim Daway, and Hana Kareem. "No reference Image Quality Measure for Hazy Images." International Journal of Intelligent Engineering and Systems 13, no. 6 (December 31, 2020): 460–71. http://dx.doi.org/10.22266/ijies2020.1231.41.

Abstract:
Haze degrades image quality, so the quality of hazy images must be estimated. In this paper, we introduce a new no-reference method for measuring the quality of hazy images based on color saturation, computing the probability distribution of the saturation component. This work also includes a subjective study measuring image quality by human perception. The proposed method is compared with other measures: entropy, the Naturalness Image Quality Evaluator (NIQE), Haze Distribution Map based Haze Assessment (HDMHA), and a no-reference assessment based on Transmission Component Estimation (TCE). This is done by calculating the correlation between each no-reference measure and the subjective scores; the results show that the proposed method attains high correlation coefficients: Pearson (0.8923), Kendall (0.7170), and Spearman (0.8960). The image database used in this work consists of 70 hazy images captured with a special device designed to capture hazy images. The experiments on the haze database are consistent with the subjective experiment.
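The evaluation protocol above hinges on correlating objective scores with subjective ones; the linear and rank-based coefficients can be sketched as follows (ties in the data would require average ranks, which this minimal version omits):

```python
import numpy as np

def pearson(x, y):
    """Pearson linear correlation coefficient of two equal-length vectors."""
    x, y = x - x.mean(), y - y.mean()
    return (x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum())

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors
    (double argsort yields 0-based ranks for distinct values)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))
```

Spearman rewards any monotone agreement between objective and subjective scores, while Pearson additionally demands linearity, which is why papers usually report both.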
20

Alvi, Hafiz Muhammad Usama Hassan, Muhammad Shahid Farid, Muhammad Hassan Khan, and Marcin Grzegorzek. "Quality Assessment of 3D Synthesized Images Based on Textural and Structural Distortion Estimation." Applied Sciences 11, no. 6 (March 17, 2021): 2666. http://dx.doi.org/10.3390/app11062666.

Abstract:
Emerging 3D-related technologies such as augmented reality, virtual reality, mixed reality, and stereoscopy have seen remarkable growth due to their numerous applications in the entertainment, gaming, and electromedical industries. In particular, 3D television (3DTV) and free-viewpoint television (FTV) enhance the viewing experience by providing immersion. They would need an infinite number of views to provide full parallax to the viewer, which is not practical due to various financial and technological constraints. Therefore, novel 3D views are generated from a set of available views and their depth maps using depth-image-based rendering (DIBR) techniques. The quality of a DIBR-synthesized image may be compromised for several reasons, e.g., inaccurate depth estimation. Since depth is central to this application, inaccuracies in depth maps lead to various textural and structural distortions that degrade the quality of the generated image and result in a poor quality of experience (QoE). Therefore, quality assessment of DIBR-generated images is essential to guarantee a satisfactory QoE. This paper aims at estimating the quality of DIBR-synthesized images and proposes a novel 3D objective image quality metric. The proposed algorithm measures textural and structural distortions in the DIBR image by exploiting contrast sensitivity and the Hausdorff distance, respectively, and combines the two measures into an overall quality score. The experimental evaluations performed on the benchmark MCL-3D dataset show that the proposed metric is reliable and accurate, and performs better than existing 2D and 3D quality assessment metrics.
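The structural term above uses the Hausdorff distance between point sets (e.g., edge maps of the reference and synthesized views). A direct, hedged sketch of the symmetric Hausdorff distance (brute-force pairwise distances, fine for modest point counts):

```python
import numpy as np

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between point sets P (N, 2) and Q (M, 2):
    the largest distance from any point in one set to its nearest neighbour
    in the other set."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # (N, M) pairwise
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

A large value indicates that some structure (edge) in one image has no nearby counterpart in the other, i.e., a structural distortion.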
21

Mohd Ashraf, Niket, Devender, and Vinod Kumar. "Air Quality Index Estimation Based on Image Analysis." International Journal for Modern Trends in Science and Technology 7, no. 05 (May 27, 2021): 172–77. http://dx.doi.org/10.46501/ijmtst0705029.

Abstract:
Air pollution is an issue that is beyond the control of the average citizen. Controlling air pollution requires large-scale preventive and control measures implemented by the government. What an individual can do, however, is protect themselves from the harmful effects of pollution by taking precautions, such as staying indoors during severe pollution or wearing a mask when travelling outside, and it is very helpful if a person can find out the pollution level around them. The government provides pollution measurements in terms of the Air Quality Index (AQI), but only at certain central locations, and the AQI may change drastically between these centres. In this report, an effort was made to solve this problem by enabling individuals to estimate the Air Quality Index near them with their smartphone, even without an Internet connection, simply by taking a picture of their surroundings. Using this information, a person can take preventive measures to protect their health. This will not only spread awareness about air pollution but also protect people from its harmful effects. We used machine learning to achieve this goal: we prepared a dataset of sky images, trained models using several algorithms, compared them, and used the best model to estimate the approximate AQI of the surroundings.
22

ZHAO, Yu-lan, Zeng-guang WU, Xiang-ping MENG, and Shu LIU. "Research and application of fingerprint image quality estimation." Journal of Computer Applications 28, no. 11 (June 5, 2009): 2904–7. http://dx.doi.org/10.3724/sp.j.1087.2008.02904.

23

Adrien, C., C. Le Loirec, J. C. Garcia-Hernandez, J. Plagnard, S. Dreuil, B. Habib-Geryes, D. Grevent, L. Berteloot, F. Raimondi, and J. M. Bordy. "Dose and image quality estimation in computed tomography." Physica Medica 31 (November 2015): e47. http://dx.doi.org/10.1016/j.ejmp.2015.10.066.

24

Yamada, Akira, and Kazuo Kurahashi. "Experimental Image Quality Estimation of Ultrasonic Diffraction Tomography." Japanese Journal of Applied Physics 32, Part 1, No. 5B (May 30, 1993): 2507–9. http://dx.doi.org/10.1143/jjap.32.2507.

25

Kim, J., T. Kim, D. Shin, and S. H. Kim. "ROBUST MOSAICKING OF UAV IMAGES WITH NARROW OVERLAPS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 879–83. http://dx.doi.org/10.5194/isprs-archives-xli-b1-879-2016.

Abstract:
This paper considers fast and robust mosaicking of UAV images under the circumstance that the UAV images have very narrow overlaps between them. Image transformation for image mosaicking consists of two estimations: relative transformations and global transformations. For estimating relative transformations between adjacent images, projective transformation is widely considered. For estimating global transformations, the panoramic constraint is widely used. While perspective transformation is a general model for 2D-2D transformation, it may not be optimal with weak stereo geometry such as images with narrow overlaps. While the panoramic constraint works for reliable conversion of global transformations for panoramic image generation, it is not applicable to UAV images in linear motion. For these reasons, a robust approach is investigated to generate a high-quality mosaicked image from narrowly overlapped UAV images. For relative transformations, several transformation models were considered to ensure robust estimation of the relative transformation relationship: perspective transformation, affine transformation, coplanar relative orientation, and relative orientation with reduced adjustment parameters. Performance evaluation for each transformation model was carried out. The experiment results showed that affine transformation and adjusted coplanar relative orientation were superior to the others in terms of stability and accuracy. For global transformation, we set an initial approximation by converting each relative transformation to a common transformation with respect to a reference image. In future work, we will investigate constrained relative orientation for enhancing the geometric accuracy of image mosaicking and bundle adjustment of each relative transformation model for optimal global transformation.
27

Varga, Domonkos. "Saliency-Guided Local Full-Reference Image Quality Assessment." Signals 3, no. 3 (July 11, 2022): 483–96. http://dx.doi.org/10.3390/signals3030028.

Abstract:
Research and development of image quality assessment (IQA) algorithms have been a focus of the computer vision and image processing community for decades. The intent of IQA methods is to estimate the perceptual quality of digital images, correlating as highly as possible with human judgements. Full-reference image quality assessment algorithms, which have full access to the distortion-free images, usually contain two phases: local image quality estimation and pooling. Previous works have utilized visual saliency in the final pooling stage; in particular, visual saliency was used as weights in the weighted averaging of local image quality scores, emphasizing image regions that are salient to human observers. In contrast to this common practice, visual saliency is applied in the computation of local image quality in this study, based on the observation that local image quality is determined by local image degradation and visual saliency simultaneously. Experimental results on KADID-10k, TID2013, TID2008, and CSIQ have shown that the proposed method was able to improve on the performance of the state of the art at low computational cost.
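As an illustration of the saliency-weighted pooling that this abstract contrasts with, the idea can be sketched in a few lines (a hypothetical illustration, not the paper's code): a local quality map is averaged with a saliency map as the weights.

```python
import numpy as np

def saliency_weighted_pooling(local_quality, saliency, eps=1e-12):
    """Pool a local image-quality map into one global score, weighting
    each position by its visual saliency (hypothetical sketch)."""
    local_quality = np.asarray(local_quality, dtype=float)
    saliency = np.asarray(saliency, dtype=float)
    # Weighted average: salient regions dominate the pooled score.
    return float((saliency * local_quality).sum() / (saliency.sum() + eps))

# Toy example: one low-quality pixel that happens to be highly salient.
q = np.array([[0.9, 0.9], [0.9, 0.2]])   # local quality map
s = np.array([[0.1, 0.1], [0.1, 0.7]])   # saliency map (weights)
```

With these toy values the salient low-quality pixel pulls the pooled score well below the plain mean (about 0.41 versus 0.725), which is exactly the behaviour saliency-based pooling is meant to produce.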
28

He, Kangjian, Jian Gong, and Dan Xu. "Focus-pixel estimation and optimization for multi-focus image fusion." Multimedia Tools and Applications 81, no. 6 (January 28, 2022): 7711–31. http://dx.doi.org/10.1007/s11042-022-12031-x.

Abstract:
To integrate the effective information and improve the quality of multi-source images, many spatial or transform domain-based image fusion methods have been proposed in the field of information fusion. The key purpose of multi-focus image fusion is to integrate the focused pixels and remove redundant information from each source image. Theoretically, if the focused pixels and complementary information of different images are detected completely, the fusion image with the best quality can be obtained. For this goal, we propose a focus-pixel estimation and optimization based multi-focus image fusion framework in this paper. Because the focused pixels of an image are in the same depth of field (DOF), we first propose a multi-scale focus-measure algorithm for focused-pixel matting to integrate the focused region. Then, the boundaries of focused and defocused regions are obtained accurately by the proposed optimizing strategy. The boundaries are also fused to reduce the influence of insufficient boundary precision. The experimental results demonstrate that the proposed method outperforms some previous typical methods in both objective evaluations and visual perception.
29

Xie, Qiwei, Xi Chen, Lin Li, Kaifeng Rao, Luo Tao, and Chao Ma. "Image Fusion Based on Kernel Estimation and Data Envelopment Analysis." International Journal of Information Technology & Decision Making 18, no. 02 (March 2019): 487–515. http://dx.doi.org/10.1142/s0219622019500032.

Abstract:
This paper reports the improvement of the image quality during the fusion of remote sensing images by minimizing a novel energy function. First, by introducing a gradient constraint term in the energy function, the spatial information of the panchromatic image is transferred to the fused results. Second, the spectral information of the multispectral image is preserved by importing a kernel function to the data fitting term in the energy function. Finally, an objective parameter selection method based on data envelopment analysis (DEA) is proposed to integrate state-of-the-art image quality metrics. Visual perception measurement and selected fusion metrics are employed to evaluate the fusion performance. Experimental results show that the proposed method outperforms other established image fusion techniques.
30

Li, Ran, Lin Luo, and Yu Zhang. "Multiframe Astronomical Image Registration Based on Block Homography Estimation." Journal of Sensors 2020 (December 9, 2020): 1–19. http://dx.doi.org/10.1155/2020/8849552.

Abstract:
Due to the influence of atmospheric turbulence, video of an object observed through an astronomical telescope drifts randomly over time. A series of images is then obtained by snapshotting from the video. In this paper, a method is proposed to improve the quality of astronomical images solely through multiframe image registration and superimposition for the first time. In order to overcome the influence of anisoplanatism, a specific image registration algorithm based on multiple local homography transformations is proposed. Superimposing the registered images yields an image with high definition. As a result, signal-to-noise ratio, contrast-to-noise ratio, and definition are improved significantly.
31

Okarma, Krzysztof. "Current Trends and Advances in Image Quality Assessment." Elektronika ir Elektrotechnika 25, no. 3 (June 25, 2019): 77–84. http://dx.doi.org/10.5755/j01.eie.25.3.23681.

Abstract:
Image quality assessment (IQA) is one of the constantly active areas of research in computer vision. Starting from the idea of Universal Image Quality Index (UIQI), followed by well-known Structural Similarity (SSIM) and its numerous extensions and modifications, through Feature Similarity (FSIM) towards combined metrics using the multi-metric fusion approach, the development of image quality assessment is still in progress. Nevertheless, regardless of new databases and the potential use of deep learning methods, some challenges remain still up to date. Some of the IQA metrics can also be used efficiently for alternative purposes, such as texture similarity estimation, quality evaluation of 3D images and 3D printed surfaces as well as video quality assessment.
32

Polanco, Jonatan D., Carlos Jacanamejoy-Jamioy, Claudia L. Mambuscay, Jeferson F. Piamba, and Manuel G. Forero. "Automatic Method for Vickers Hardness Estimation by Image Processing." Journal of Imaging 9, no. 1 (December 30, 2022): 8. http://dx.doi.org/10.3390/jimaging9010008.

Abstract:
Hardness is one of the most important mechanical properties of materials, since it is used to estimate their quality and to determine their suitability for a particular application. One method of determining quality is the Vickers hardness test, in which the resistance to plastic deformation at the surface of the material is measured after applying force with an indenter. The hardness is measured from the sample image, which is a tedious, time-consuming procedure that is prone to human error. Therefore, in this work, a new automatic method based on image processing techniques is proposed, allowing results to be obtained quickly and more accurately even with high irregularities in the indentation mark. For the development and validation of the method, a set of microscopy images was used, with samples indented at applied forces of 5 N and 10 N on AISI D2 steel with and without quenching and tempering heat treatment, as well as samples coated with titanium niobium nitride (TiNbN). The proposed method was implemented as a plugin for the ImageJ program, yielding reproducible Vickers hardness results in an average time of 2.05 seconds with an accuracy of 98.3% and a maximum error of 4.5% with respect to the values obtained manually, which were used as a gold standard.
33

Chao, Chih-Feng, Ming-Huwi Horng, and Yu-Chan Chen. "Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue." Computational and Mathematical Methods in Medicine 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/343217.

Abstract:
Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio, which makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches only a few candidate points to obtain the optimal motion vector, and is then compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can estimate the motion vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.
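For reference, the full-search baseline this abstract compares against amounts to exhaustive block matching; a minimal sketch (hypothetical code, assuming a sum-of-absolute-differences cost, not the paper's implementation) is:

```python
import numpy as np

def full_search_motion(ref_block, target, top, left, radius=2):
    """Exhaustive (full-search) block matching: slide ref_block over a
    (2*radius+1)^2 window of candidate positions in `target` and return
    the displacement (dy, dx) with the smallest sum of absolute
    differences (SAD)."""
    h, w = ref_block.shape
    best, best_dv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            # Skip candidates that fall outside the target frame.
            if y < 0 or x < 0 or y + h > target.shape[0] or x + w > target.shape[1]:
                continue
            sad = np.abs(target[y:y + h, x:x + w] - ref_block).sum()
            if sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv

# Toy frame: a 3x3 block cut from position (3, 4) should be re-found
# even when the search starts from a displaced position (2, 3).
target = np.arange(100.0).reshape(10, 10)
ref = target[3:6, 4:7].copy()
```

Evolutionary search methods such as the firefly algorithm evaluate only a subset of these candidate displacements, which is where the claimed efficiency gain over full search comes from.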
34

Nguyen, T. G., M. Pierrot-Deseilligny, J. M. Muller, and C. Thom. "SECOND ITERATION OF PHOTOGRAMMETRIC PIPELINE TO ENHANCE THE ACCURACY OF IMAGE POSE ESTIMATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1 (May 31, 2017): 225–30. http://dx.doi.org/10.5194/isprs-archives-xlii-1-w1-225-2017.

Abstract:
In the classical photogrammetric processing pipeline, automatic tie-point extraction plays a key role in the quality of the achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters; therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie-point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors, such as multiplicity, measurement precision, and distribution in the 2D images as well as in the 3D scene. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie-point extraction is limited because only image information can be exploited. Hence, we propose a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of a first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated with multiple case studies, the proposed method shows its validity and its high potential for precision improvement.
35

Sarode, M. V., and P. R. Deshmukh. "Image Sequence Denoising with Motion Estimation in Color Image Sequences." Engineering, Technology & Applied Science Research 1, no. 6 (December 10, 2011): 139–43. http://dx.doi.org/10.48084/etasr.54.

Abstract:
In this paper, we investigate the denoising of image sequences, i.e., video, corrupted with Gaussian noise and impulse noise. In relation to single-image denoising techniques, denoising of sequences aims to utilize the temporal dimension. This approach gives faster algorithms and better output quality. This paper focuses on the removal of different types of noise introduced into image sequences during transfer through network systems and video acquisition. The approach introduced consists of motion estimation, motion compensation, and filtering of image sequences. Most of the estimation approaches proposed deal mainly with monochrome video; the most usual way to apply them to color image sequences is to process each color channel separately. In this paper, we also propose a simple accompanying method to extract the moving objects. Our experimental results on synthetic and natural images verify our arguments. The proposed algorithm's performance is experimentally compared with a previous method, demonstrating comparable results.
36

Sazzad, Z. M. Parvez, Roushain Akhter, J. Baltes, and Y. Horita. "Objective No-Reference Stereoscopic Image Quality Prediction Based on 2D Image Features and Relative Disparity." Advances in Multimedia 2012 (2012): 1–16. http://dx.doi.org/10.1155/2012/256130.

Abstract:
Stereoscopic images are widely used to enhance the viewing experience of three-dimensional (3D) imaging and communication systems. In this paper, we propose an image-feature- and disparity-dependent quality evaluation metric that incorporates characteristics of the human visual system. We believe the perceived distortions and disparity of any stereoscopic image are strongly dependent on local features, such as edge (i.e., non-plane) and non-edge (i.e., plane) areas within the image. Therefore, a no-reference perceptual quality assessment method is developed for JPEG-coded stereoscopic images based on segmented local features of distortions and disparity. Local feature information, such as edge and non-edge area based relative disparity estimation, as well as the blockiness and the edge distortion within the blocks of the images, is evaluated in this method. A subjective stereo image database is used for evaluation of the metric. The subjective experiment results indicate that our metric has sufficient prediction performance.
37

Li, Xiao Guang. "A New Defogging Algorithm Based on Atmospheric Degradation Physical Model." Advanced Materials Research 989-994 (July 2014): 2484–87. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.2484.

Abstract:
Aiming at the degradation of images taken in mist, and according to the features of such degraded images, this paper proposes a novel defogging algorithm based on an improved atmospheric scattering model. The problem of restoring fog-degraded images is transformed into the problem of optimally estimating the original un-degraded image by maximizing a global-contrast objective function; in this way, the proposed algorithm can restore the object image as completely as possible in the probabilistic sense. The experimental results show that the method can effectively reduce the fog degradation and improve image clarity, significantly improving the visual quality of degraded images.
38

Tummala, Sudhakar, Venkata Sainath Gupta Thadikemalla, Seifedine Kadry, Mohamed Sharaf, and Hafiz Tayyab Rauf. "EfficientNetV2 Based Ensemble Model for Quality Estimation of Diabetic Retinopathy Images from DeepDRiD." Diagnostics 13, no. 4 (February 8, 2023): 622. http://dx.doi.org/10.3390/diagnostics13040622.

Abstract:
Diabetic retinopathy (DR) is one of the major complications caused by diabetes and is usually identified from retinal fundus images. Screening of DR from digital fundus images could be time-consuming and error-prone for ophthalmologists. For efficient DR screening, good quality of the fundus image is essential and thereby reduces diagnostic errors. Hence, in this work, an automated method for quality estimation (QE) of digital fundus images using an ensemble of recent state-of-the-art EfficientNetV2 deep neural network models is proposed. The ensemble method was cross-validated and tested on one of the largest openly available datasets, the Deep Diabetic Retinopathy Image Dataset (DeepDRiD). We obtained a test accuracy of 75% for the QE, outperforming the existing methods on the DeepDRiD. Hence, the proposed ensemble method may be a potential tool for automated QE of fundus images and could be handy to ophthalmologists.
39

Diao, Wei-he, Xia Mao, and Le Chang. "Quality Estimation of Image Sequence for Automatic Target Recognition." Journal of Electronics & Information Technology 32, no. 8 (August 27, 2010): 1779–85. http://dx.doi.org/10.3724/sp.j.1146.2009.01194.

40

Гавриш, Богдана Михайлівна, and Олег Володимирович Ющик. "Image reproduction quality estimation for output raster scanning devices." Technology audit and production reserves 5, no. 6(25) (September 22, 2015): 39. http://dx.doi.org/10.15587/2312-8372.2015.51149.

41

Alonso-Fernandez, Fernando, Julian Fierrez, Javier Ortega-Garcia, Joaquin Gonzalez-Rodriguez, Hartwig Fronthaler, Klaus Kollreider, and Josef Bigun. "A Comparative Study of Fingerprint Image-Quality Estimation Methods." IEEE Transactions on Information Forensics and Security 2, no. 4 (December 2007): 734–43. http://dx.doi.org/10.1109/tifs.2007.908228.

42

Myasnikov, V. V., A. A. Ivanov, M. V. Gashnikov, and E. V. Myasnikov. "Computer program for automatic estimation of digital image quality." Pattern Recognition and Image Analysis 21, no. 3 (September 2011): 415–18. http://dx.doi.org/10.1134/s1054661811020829.

43

Ling, Zhigang, Guoliang Fan, Jianwei Gong, Yaonan Wang, and Xiao Lu. "Perception oriented transmission estimation for high quality image dehazing." Neurocomputing 224 (February 2017): 82–95. http://dx.doi.org/10.1016/j.neucom.2016.10.050.

44

Wang, Sha, Dong Zheng, Jiying Zhao, Wa James Tam, and Filippo Speranza. "Adaptive Watermarking and Tree Structure Based Image Quality Estimation." IEEE Transactions on Multimedia 16, no. 2 (February 2014): 311–25. http://dx.doi.org/10.1109/tmm.2013.2291658.

45

Rehman, A., and Zhou Wang. "Reduced-Reference Image Quality Assessment by Structural Similarity Estimation." IEEE Transactions on Image Processing 21, no. 8 (August 2012): 3378–89. http://dx.doi.org/10.1109/tip.2012.2197011.

46

Liao, Tianli, Jing Chen, and Yifang Xu. "Quality evaluation-based iterative seam estimation for image stitching." Signal, Image and Video Processing 13, no. 6 (March 27, 2019): 1199–206. http://dx.doi.org/10.1007/s11760-019-01466-9.

47

Park, Hyeseung, and Seungchul Park. "An Unsupervised Depth-Estimation Model for Monocular Images Based on Perceptual Image Error Assessment." Applied Sciences 12, no. 17 (September 2, 2022): 8829. http://dx.doi.org/10.3390/app12178829.

Abstract:
In this paper, we propose a novel unsupervised learning-based model for estimating the depth of monocular images by integrating a simple ResNet-based auto-encoder and some special loss functions. We use only stereo images obtained from binocular cameras as training data, without using depth ground-truth data. Our model basically outputs a disparity map that is necessary to warp an input image to an image corresponding to a different viewpoint. When the input image is warped using the output disparity map, distortions of various patterns inevitably occur in the reconstructed image. During the training process, the occurrence frequency and size of these distortions gradually decrease, while the similarity between the reconstructed and target images increases, which proves that the accuracy of the predicted disparity maps also increases. Therefore, one of the important factors in this type of training is an efficient loss function that accurately measures the difference in quality between the reconstructed and target images and guides the gap to be closed properly and quickly as the training progresses. In recent related studies, the photometric difference was calculated through simple methods such as L1 and L2 loss, or by combining one of these with a traditional computer-vision-based hand-coded image-quality assessment algorithm such as SSIM. However, these methods have limitations in modeling various patterns at the level of the human visual system. Therefore, the proposed model uses a pre-trained perceptual image-quality assessment model that effectively mimics human-perception mechanisms to measure the quality of distorted images as the image-reconstruction loss. In order to highlight the performance of the proposed loss functions, a simple ResNet50-based network is adopted in our model. We trained our model using stereo images of the KITTI 2015 driving dataset to measure pixel-level depth for 768 × 384 images. Despite the simplicity of the network structure, thanks to the effectiveness of the proposed image-reconstruction loss, our model outperformed other state-of-the-art studies trained with unsupervised methods on a variety of evaluation indicators.
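The L1-plus-SSIM photometric loss that this abstract contrasts with can be sketched as follows (an illustrative stand-in that uses global SSIM statistics rather than a windowed SSIM, and the weight alpha = 0.85 is a commonly used value, not taken from this paper):

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified single-window SSIM computed from global statistics
    of two arrays with values in [0, 1]."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def photometric_loss(recon, target, alpha=0.85):
    """Alpha-weighted mix of a (1 - SSIM)/2 term and an L1 term, the
    common form of photometric reconstruction loss in unsupervised
    depth estimation (hypothetical sketch)."""
    recon = np.asarray(recon, dtype=float)
    target = np.asarray(target, dtype=float)
    l1 = np.abs(recon - target).mean()
    return alpha * (1 - ssim_global(recon, target)) / 2 + (1 - alpha) * l1
```

For identical images the loss is zero, and any photometric discrepancy between the warped reconstruction and the target increases it, which is what drives the disparity map toward correct values during training.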
48

Gurumurthy, Sasikumar. "Age Estimation and Gender Classification based on Face detection and feature extraction." INTERNATIONAL JOURNAL OF MANAGEMENT & INFORMATION TECHNOLOGY 4, no. 1 (June 30, 2013): 134–40. http://dx.doi.org/10.24297/ijmit.v4i1.809.

Abstract:
Nowadays, computer systems have created various types of automated applications for personal identification, such as biometrics and face recognition techniques. Face verification has turned into an area of dynamic research, and its applications are important in law enforcement because it can be performed without involving the subject. Still, the influence of age on face verification remains a challenge when deciding the similarity of image pairs of individual faces, given the very limited availability of databases. We focus on the development of image processing and face detection in a face verification system by improving image quality. The main objective of the system is to compare an image with the reference images stored as templates in the database and to determine the age and gender.
49

Dang, Hong She, Na Zhang, and Chu Jia Guo. "Study of Image Inpainting Method Based on Bayesian Compressive Sensing." Applied Mechanics and Materials 644-650 (September 2014): 4447–51. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.4447.

Abstract:
Traditional image inpainting methods depend on the structural characteristics of the image. To address this, the proposed inpainting method, based on Bayesian compressive sensing, first applies a sparsifying transform to the damaged image, then obtains the posterior distribution function of the sparse coefficients through Bayesian compressive sensing. Finally, the mean and the variance of the distribution function are obtained: the mean serves as the estimate of the sparse coefficients of the image, and the variance as the estimate of the noise. The simulation results prove that this method can improve the inpainting quality of images.
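The posterior mean/variance estimation described above reduces, for a linear-Gaussian observation model, to closed-form expressions; a minimal sketch (hypothetical code, with a fixed Gaussian prior standing in for the hierarchical sparsity prior used in Bayesian compressive sensing) is:

```python
import numpy as np

def gaussian_posterior(Phi, y, noise_var=0.01, prior_var=1.0):
    """Posterior mean and covariance of coefficients w in
    y = Phi @ w + noise, under a Gaussian prior w ~ N(0, prior_var * I).
    The mean estimates the sparse coefficients; the diagonal of the
    covariance estimates the remaining uncertainty (noise)."""
    Phi = np.asarray(Phi, dtype=float)
    n = Phi.shape[1]
    # Posterior precision combines the data term and the prior term.
    precision = Phi.T @ Phi / noise_var + np.eye(n) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ Phi.T @ np.asarray(y, dtype=float) / noise_var
    return mean, cov
```

With a well-conditioned measurement matrix and low noise, the posterior mean recovers the underlying coefficients almost exactly, while the posterior covariance shrinks toward zero.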
50

Morrison, H. Boyd. "Depth and Image Quality of Three-Dimensional, Lenticular-Sheet Images." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 2 (October 1997): 1338–42. http://dx.doi.org/10.1177/1071181397041002135.

Abstract:
This study investigated the inherent tradeoff between depth and image quality in lenticular-sheet (LS) imaging. Four different scenes were generated as experimental stimuli to represent a range of typical LS images. The overall amount of depth in each image, as well as the degree of foreground and background disparity, were varied, and the images were rated by subjects using the free-modulus magnitude estimation procedure. Generally, subjects preferred images which had smaller amounts of overall depth and tended to dislike excessive amounts of foreground or background disparity. The most preferred image was also determined for each scene by selecting the image with the highest mean rating. In a second experiment, these most preferred LS images for each scene were shown to subjects along with the analogous two-dimensional (2D) photographic versions. Results indicate that observers from the general population looked at the LS images longer than they did at the 2D versions and rated them higher on the attributes of quality of depth and attention-getting ability, although the LS images were rated lower on sharpness. No difference was found in overall quality or likeability.