Journal articles on the topic 'Assessment of image'

To see the other types of publications on this topic, follow the link: Assessment of image.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Assessment of image.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Golub, Y. I. "Image quality assessment." System Analysis and Applied Information Science, no. 4 (January 5, 2022): 4–15. http://dx.doi.org/10.21122/2309-4923-2021-4-4-15.

Abstract:
Quality assessment is an integral stage in the processing and analysis of digital images in various automated systems. With the increase in the number and variety of devices that allow receiving data in various digital formats, as well as the expansion of human activities in which information technology (IT) is used, the need to assess the quality of the data obtained is growing, and the bar for quality requirements is rising as well. The article describes the factors that degrade the quality of digital images, areas of application of image quality assessment functions, a method for normalizing proximity measures, classes of digital images and their possible distortions, and image databases available on the Internet for conducting image quality assessment experiments with visual assessments by experts.
2

Han, Z., X. Tang, X. Gao, and F. Hu. "IMAGE FUSION AND IMAGE QUALITY ASSESSMENT OF FUSED IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W1 (July 12, 2013): 33–36. http://dx.doi.org/10.5194/isprsarchives-xl-7-w1-33-2013.

3

Starovoitov, V. V., Y. I. Golub, and M. M. Lukashevich. "Digital fundus image quality assessment." System Analysis and Applied Information Science, no. 4 (January 5, 2022): 25–38. http://dx.doi.org/10.21122/2309-4923-2021-4-25-38.

Abstract:
Diabetic retinopathy (DR) is a disease caused by complications of diabetes. It starts asymptomatically and can end in blindness. To detect it, doctors use special fundus cameras that allow them to register images of the retina in the visible range of the spectrum. In these images one can see features that determine the presence of DR and its grade. Researchers around the world are developing systems for the automated analysis of fundus images. At present, the accuracy with which machine-learning systems classify diseases caused by DR is comparable to that of qualified medical doctors. The article shows how the retina is represented in digital images by different cameras. We define the task of developing a universal approach for assessing the quality of a retinal image obtained by an arbitrary fundus camera. It is solved in the first block of any automated retinal image analysis system. The quality assessment procedure is carried out in several stages. At the first stage, it is necessary to binarize the original image and build a retinal mask. Such a mask is individual for each image, even among images recorded by one camera. For this, a new universal retinal image binarization algorithm is proposed. By analyzing the result of the binarization, it is possible to identify and remove image outliers, which show not the retina but other objects. Further, the problem of no-reference image quality assessment is solved and images are classified into two classes: satisfactory and unsatisfactory for analysis. Contrast, sharpness and the possibility of segmenting the vascular system in the retinal image are evaluated step by step. It is shown that the problem of no-reference quality assessment of an arbitrary fundus image can be solved. Experiments were performed on a variety of images from the available retinal image databases.
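As a rough illustration of the binarization stage mentioned in this abstract (not the authors' algorithm; the file name, Otsu thresholding and the coverage heuristic are assumptions made here), a coarse retinal mask and a simple outlier check could be sketched as:

```python
import numpy as np
from skimage import io, color, filters

rgb = io.imread("fundus.png")                 # placeholder file name
gray = color.rgb2gray(rgb)

# A global Otsu threshold separates the bright retinal disc from the dark border.
mask = gray > filters.threshold_otsu(gray)

# Heuristic outlier check: a genuine retina should cover a sizeable part of the frame.
coverage = mask.mean()
if coverage < 0.2:
    print("possible outlier: image does not look like a fundus photograph")
print(f"retinal mask covers {coverage:.1%} of the frame")
```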
4

Golub, Yu I., F. V. Starovoitov, and V. V. Starovoitov. "Impact of image size reducing for image quality assessment." System Analysis and Applied Information Science, no. 2 (August 18, 2020): 35–45. http://dx.doi.org/10.21122/2309-4923-2020-2-35-45.

Abstract:
The article describes studies of the effect of image reduction on the quantitative assessment of image quality. Image reduction refers to the proportional reduction of horizontal and vertical image resolutions in pixels. Within the framework of these studies, correlation analysis between quantitative assessments of image quality and subjective assessments of experts was performed. For the experiments, we used images from the public TID2013 database with a resolution of 512 × 384 pixels and expert estimates of their quality, as well as photographs taken with a Nikon D5000 digital camera with a resolution of 4288 × 2848 pixels. All images were reduced by factors of 2, 4 and 8 using two methods: bilinear interpolation and nearest-neighbor interpolation. Twenty-two measures were selected to evaluate image quality. The quantitative assessment of image quality was calculated in two stages. At the first stage, an array of local estimates was obtained in the vicinity of each pixel using the selected measures. At the second stage, a global quality assessment was calculated from the obtained local ones. To summarize the local quality estimates, the parameters of 16 distributions of random variables were considered. According to the results of the experiments, it was concluded that the accuracy of the quality assessment for some measures decreases with image reduction (for example, the FISH, GORD, HELM and LOEN measures). The BREN and SHAR measures are recommended as the best. To reduce images, it is better to use the nearest-neighbor interpolation method. At the same time, the computation time of the estimates is reduced on average by 4 times when images are reduced by a factor of 2. When images are reduced by a factor of 8, the calculation time decreases on average by 80 times, and the amount of memory required to store the reduced images is 25 times less.
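To make the experimental setup concrete, here is a minimal sketch of one step of such a study, assuming lists `images` (grayscale arrays) and `mos` (matching expert scores) are already loaded; the Brenner-style sharpness function is only an illustrative stand-in for the 22 measures actually studied:

```python
import numpy as np
import cv2
from scipy.stats import spearmanr

def brenner(gray):
    # Brenner sharpness: mean squared difference between pixels two columns apart.
    g = gray.astype(np.float64)
    return np.mean((g[:, 2:] - g[:, :-2]) ** 2)

def score_reduced(gray, factor=2):
    h, w = gray.shape
    small = cv2.resize(gray, (w // factor, h // factor), interpolation=cv2.INTER_NEAREST)
    return brenner(small)

# images, mos assumed available, e.g. from TID2013
# scores = [score_reduced(img, factor=2) for img in images]
# rho, _ = spearmanr(scores, mos)
# print("Spearman correlation with MOS:", rho)
```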
5

Li, Zhipeng, Li Shen, and Linmei Wu. "IMAGE QUALITY ASSESSMENT FOR VHR REMOTE SENSING IMAGE CLASSIFICATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 17, 2016): 11–16. http://dx.doi.org/10.5194/isprs-archives-xli-b7-11-2016.

Abstract:
The data from remote sensing images are widely used for characterizing land use and land cover at present. With the increasing availability of very high resolution (VHR) remote sensing images, remote sensing image classification becomes more and more important for information extraction. VHR remote sensing images are rich in details, but high within-class variance as well as low between-class variance make the classification of ground cover a difficult task. Moreover, some related studies show that the quality of VHR remote sensing images also has a great influence on the ability of automatic image classification. Therefore, research on how to select appropriate VHR remote sensing images for classification applications is of great significance. In this context, the factors affecting VHR remote sensing image classification ability are discussed and some indices are selected for describing the image quality and the image classification ability objectively. Then, we explore the relationship between the indices of image quality and image classification ability under a specific classification framework. The results of the experiments show that these image quality indices are not effective for indicating the image classification ability directly. However, according to the image quality metrics, we can still propose some suggestions for the application of classification.
6

Li, Zhipeng, Li Shen, and Linmei Wu. "IMAGE QUALITY ASSESSMENT FOR VHR REMOTE SENSING IMAGE CLASSIFICATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 17, 2016): 11–16. http://dx.doi.org/10.5194/isprsarchives-xli-b7-11-2016.

Abstract:
The data from remote sensing images are widely used for characterizing land use and land cover at present. With the increasing availability of very high resolution (VHR) remote sensing images, remote sensing image classification becomes more and more important for information extraction. VHR remote sensing images are rich in details, but high within-class variance as well as low between-class variance make the classification of ground cover a difficult task. Moreover, some related studies show that the quality of VHR remote sensing images also has a great influence on the ability of automatic image classification. Therefore, research on how to select appropriate VHR remote sensing images for classification applications is of great significance. In this context, the factors affecting VHR remote sensing image classification ability are discussed and some indices are selected for describing the image quality and the image classification ability objectively. Then, we explore the relationship between the indices of image quality and image classification ability under a specific classification framework. The results of the experiments show that these image quality indices are not effective for indicating the image classification ability directly. However, according to the image quality metrics, we can still propose some suggestions for the application of classification.
7

Swarnkar, Santosh Kumar, and Avinash Sharma. "Content-Based Image Retrieval: An Assessment." International Journal of Trend in Scientific Research and Development Volume-3, Issue-3 (April 30, 2019): 154–56. http://dx.doi.org/10.31142/ijtsrd21708.

8

Boushey, C. J., M. Spoden, F. M. Zhu, E. J. Delp, and D. A. Kerr. "New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods." Proceedings of the Nutrition Society 76, no. 3 (December 12, 2016): 283–94. http://dx.doi.org/10.1017/s0029665116002913.

Abstract:
For nutrition practitioners and researchers, assessing dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods, their benefits and challenges; followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regards to age of user, degree of error and cost.
9

Fernandez-Maloigne, Christine, Jaime Moreno, Alessandro Rizzi, and Cristian Bonanomi. "QUALITAS: Image Quality Assessment for Stereoscopic Images." Color and Imaging Conference 2016, no. 1 (November 7, 2016): 7–19. http://dx.doi.org/10.2352/issn.2169-2629.2017.32.7.

10

Fernandez-Maloigne, Christine, Jaime Moreno, Alessandro Rizzi, and Cristian Bonanomi. "QUALITAS: Image Quality Assessment for Stereoscopic Images." Journal of Imaging Science and Technology 60, no. 5 (September 1, 2016): 504051–5040513. http://dx.doi.org/10.2352/j.imagingsci.technol.2016.60.5.050405.

11

Lee, Jino, and Rae-Hong Park. "Image Quality Assessment of Tone Mapped Images." International Journal of Computer Graphics & Animation 5, no. 2 (April 30, 2015): 9–20. http://dx.doi.org/10.5121/ijcga.2015.5202.

12

Zhang, Lin, Xilin Yang, Lijun Zhang, Xiao Liu, Shengjie Zhao, and Yong Ma. "Towards Automatic Image Exposure Level Assessment." Mathematical Problems in Engineering 2020 (November 23, 2020): 1–14. http://dx.doi.org/10.1155/2020/2789854.

Abstract:
The quality of acquired images can certainly be reduced by improper exposures. Thus, in many vision-related industries, such as imaging sensor manufacturing and video surveillance, an approach that can routinely and accurately evaluate the exposure levels of images is urgently needed. Taking an image as input, such a method is expected to output a scalar value, which can represent the overall perceptual exposure level of the examined image, ranging from extremely underexposed to extremely overexposed. However, studies focusing on image exposure level assessment (IELA) are quite sporadic. It should be noted that blind NR-IQA (no-reference image quality assessment) algorithms or metrics used to measure the quality of contrast-distorted images cannot be used for IELA. The root reason is that though these algorithms can quantify quality distortion of images, they do not know whether the distortion is due to underexposure or overexposure. This paper aims to resolve the issue of IELA to some extent and contributes in two aspects. Firstly, an Image Exposure Database (IEpsD) is constructed to facilitate the study of IELA. IEpsD comprises 24,500 images with various exposure levels, and for each image a subjective exposure score is provided, which represents its perceptual exposure level. Secondly, as IELA can be naturally formulated as a regression problem, we thoroughly evaluate the performance of modern deep CNN architectures for solving this specific task. Our evaluation results can serve as a baseline when other researchers develop even more sophisticated IELA approaches. To facilitate other researchers in reproducing our results, we have released the dataset and the relevant source code at https://cslinzhang.github.io/imgExpo/.
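Since the abstract formulates IELA as a regression task for deep CNNs, a minimal sketch of such a baseline (a ResNet-18 backbone with a one-unit regression head trained with MSE loss; this is a generic setup assumed here, not the authors' released code) might look like:

```python
import torch
import torch.nn as nn
from torchvision import models

net = models.resnet18(weights=None)          # backbone; pretrained weights are optional
net.fc = nn.Linear(net.fc.in_features, 1)    # single scalar: perceptual exposure level

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

def train_step(batch_images, batch_scores):
    """batch_images: (N, 3, H, W) tensor; batch_scores: (N,) subjective exposure scores."""
    optimizer.zero_grad()
    pred = net(batch_images).squeeze(1)
    loss = criterion(pred, batch_scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```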
13

Cruz, Domingos, Carla Valentí, Aureliano Dias, Mário Seixas, and Fernando Schmitt. "Digital Image Documentation for Quality Assessment." Archives of Pathology & Laboratory Medicine 125, no. 11 (November 1, 2001): 1430–35. http://dx.doi.org/10.5858/2001-125-1430-didfqa.

Abstract:
Abstract Objective.—To demonstrate the feasibility of the use of digital images to document routine cases and to perform diagnostic quality assessment. Methods.—Pathologists documented cases by acquiring up to 12 digital images per case. The images were sampled at 25:1, 50:1, 100:1, 200:1, or 400:1 magnifications, according to adequacy in aiding diagnosis. After each acquisition, the referral pathologist marked a region of interest within each acquired image in order to evaluate intrinsic redundancy. The extrinsic redundancy was determined by counting the unnecessary images. Cases were randomly selected and reviewed by one pathologist. The quality of each image, the possibility of accomplishing a diagnosis based on images, and the degree of agreement was evaluated. Results.—During routine practice, 1469 cases were documented using 3902 images. Most of the images were acquired at higher power magnifications. From all acquired cases, 143 cases and their 373 related images were randomly selected for review. In 88.1% (126/143) of reviewed cases, it was possible to accomplish the diagnosis based on images. In 30.2% (38/126) of these cases, the reviewer considered that the diagnosis could be accomplished with fewer images. The referral pathologist and the reviewer found intrinsic redundancy in 57.8% and 54.5% of images, respectively. Conclusions.—Our results showed that digital image documentation to perform diagnostic quality assessment is a feasible solution. However, owing to the impact on routine practice, guidelines for acquisition and documentation of cases may be needed.
14

Wang, Mei, E. Ye Wang, and Guo Hua Pan. "Image Quality Assessment Based on Invariant Moments Similarity." Advanced Materials Research 546-547 (July 2012): 565–69. http://dx.doi.org/10.4028/www.scientific.net/amr.546-547.565.

Abstract:
To address the image quality assessment problem and the adaptability of algorithms to different image sizes and deformations, this paper proposes an image quality assessment algorithm based on invariant-moment similarity. Firstly, the Hu invariant moments of the original image and the evaluated image are computed. Secondly, the invariant-moment distance between the original image and the evaluated image is calculated. Finally, the method assesses the restored image quality based on the invariant-moment distance. The experimental results show that the algorithm performs better than MSE, PSNR and SSIM for same-size images. Moreover, the algorithm based on invariant-moment similarity can evaluate images of different sizes and deformations with low computational complexity. Assessment experiments on various real images confirm the effectiveness of the algorithm.
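A minimal sketch of the core computation described here, using OpenCV's Hu moments (the log-scaling and Euclidean distance are common conventions assumed for illustration, not necessarily the exact distance used in the paper):

```python
import cv2
import numpy as np

def hu_distance(original, evaluated):
    """Distance between the seven Hu invariant moments of two grayscale images."""
    hu_o = cv2.HuMoments(cv2.moments(original)).flatten()
    hu_e = cv2.HuMoments(cv2.moments(evaluated)).flatten()
    # Log-scale the moments so they have comparable magnitudes.
    lo = -np.sign(hu_o) * np.log10(np.abs(hu_o) + 1e-30)
    le = -np.sign(hu_e) * np.log10(np.abs(hu_e) + 1e-30)
    return float(np.linalg.norm(lo - le))

# ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)     # placeholder file names
# restored = cv2.imread("restored.png", cv2.IMREAD_GRAYSCALE)
# print(hu_distance(ref, restored))  # smaller distance -> higher assessed quality
```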
15

CM, Sushmitha, and Meharunnisa SP. "An Image Quality Assessment of Multi-Exposure Image Fusion by Improving SSIM." International Journal of Trend in Scientific Research and Development Volume-2, Issue-4 (June 30, 2018): 2780–84. http://dx.doi.org/10.31142/ijtsrd15634.

16

Thomas, Deepa Maria, and S. John Livingston. "A Novel Hybrid Image Quality Assessment Algorithm." Indian Journal of Applied Research 4, no. 4 (October 1, 2011): 107–8. http://dx.doi.org/10.15373/2249555x/apr2014/31.

17

Le, Quyet-Tien, Patricia Ladret, Huu-Tuan Nguyen, and Alice Caplier. "Image Aesthetic Assessment Based on Image Classification and Region Segmentation." Journal of Imaging 7, no. 1 (December 27, 2020): 3. http://dx.doi.org/10.3390/jimaging7010003.

Abstract:
The main goal of this paper is to study Image Aesthetic Assessment (IAA) indicating images as high or low aesthetic. The main contributions concern three points. Firstly, following the idea that photos in different categories (human, flower, animal, landscape, …) are taken with different photographic rules, image aesthetic should be evaluated in a different way for each image category. Large field images and close-up images are two typical categories of images with opposite photographic rules so we want to investigate the intuition that prior Large field/Close-up Image Classification (LCIC) might improve the performance of IAA. Secondly, when a viewer looks at a photo, some regions receive more attention than other regions. Those regions are defined as Regions Of Interest (ROI) and it might be worthy to identify those regions before IAA. The question “Is it worthy to extract some ROIs before IAA?” is considered by studying Region Of Interest Extraction (ROIE) before investigating IAA based on each feature set (global image features, ROI features and background features). Based on the answers, a new IAA model is proposed. The last point is about a comparison between the efficiency of handcrafted and learned features for the purpose of IAA.
18

Saifeldeen, Abdalmajeed, Shu Hong Jiao, and Wei Liu. "Entirely Blind Image Quality Assessment Estimator." Applied Mechanics and Materials 543-547 (March 2014): 2496–99. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.2496.

Abstract:
Prior knowledge about anticipated distortions and their corresponding human opinion scores is needed in most general-purpose no-reference image quality assessment algorithms. When the model is created, not all distortion types may exist. Practical no-reference image quality assessment algorithms must therefore predict the quality of distorted images without prior knowledge about the images or their distortions. In this study, a blind/no-reference, opinion- and distortion-unaware image quality assessment algorithm based on natural scenes is developed. The proposed approach uses a set of novel features to measure image quality in the spatial domain. The extracted features, drawn from the scene gist, are formed using Weibull distribution statistics. When the proposed algorithm is tested on the LIVE database, experiments show that it correlates well with subjective opinion scores. They also show that the proposed algorithm significantly outperforms the popular full-reference peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) methods. The results not only compete reasonably well with the recently developed natural image quality evaluator (NIQE) model, but also outperform it.
19

WANG, YUQING, MING ZHU, HAOCHEN PANG, and YONG WANG. "QUATERNION BASED COLOR IMAGE QUALITY ASSESSMENT INDEX." International Journal of Image and Graphics 11, no. 02 (April 2011): 195–206. http://dx.doi.org/10.1142/s0219467811004111.

Abstract:
A quaternion model for describing a color image is proposed in order to evaluate its quality. The local variance distribution of the luminance layer is calculated, and color information is taken into account by using a quaternion matrix, so the description combines the luminance layer and color information. The angle between the singular-value feature vectors of the quaternion matrices corresponding to the reference image and the distorted image is used to measure the structural similarity of the two color images. When the reference image and the distorted images are of unequal size, the method can still assess their quality. Experimental results show that the proposed method is more consistent with human visual characteristics than MSE, PSNR and MSSIM. Resized distorted images can also be assessed rationally by this method.
20

De, Kanjar, and Masilamani V. "NO-REFERENCE IMAGE QUALITY MEASURE FOR IMAGES WITH MULTIPLE DISTORTIONS USING RANDOM FORESTS FOR MULTI METHOD FUSION." Image Analysis & Stereology 37, no. 2 (July 9, 2018): 105. http://dx.doi.org/10.5566/ias.1534.

Abstract:
Over the years, image quality assessment has been one of the active areas of research in image processing. Distortion in images can be caused by various sources such as noise, blur, transmission channel errors and compression artifacts. Image distortions can occur during the image acquisition process (blur/noise), image compression (ringing and blocking artifacts) or during the transmission process. A single image can be distorted by multiple sources, and assessing the quality of such images is an extremely challenging task. The human visual system can easily identify image quality in such cases, but for a computer algorithm the task of quality assessment is very difficult. In this paper, we propose a new no-reference image quality assessment method for images corrupted by more than one type of distortion. The proposed technique is compared with the best-known framework for image quality assessment of multiply distorted images and with standard state-of-the-art full-reference and no-reference image quality assessment techniques.
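The "multi method fusion" idea can be illustrated with a small sketch: scores from several existing NR-IQA measures are stacked per image and a random forest regressor maps them to subjective scores (the file names and the specific feature set are placeholders, not the authors' exact choices):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# X: (n_images, n_metrics) matrix of NR-IQA scores (e.g. blur, noise, blockiness measures)
# y: (n_images,) subjective MOS values; both assumed to be prepared beforehand.
X = np.load("metric_scores.npy")   # placeholder file
y = np.load("mos.npy")             # placeholder file

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print("predicted quality of first test image:", rf.predict(X_te[:1])[0])
```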
21

Song, Zengjie, Jiangshe Zhang, and Junmin Liu. "No-Reference Image Quality Assessment Using Image Saliency for JPEG Compressed Images." Journal of Imaging Science and Technology 60, no. 6 (November 1, 2016): 605031–38. http://dx.doi.org/10.2352/j.imagingsci.technol.2016.60.6.060503.

22

Jiao, Jichao, Wenyi Li, Zhongliang Deng, and Qasim Ali Arain. "A structural similarity-inspired performance assessment model for multisensor image registration algorithms." International Journal of Advanced Robotic Systems 14, no. 4 (July 1, 2017): 172988141771705. http://dx.doi.org/10.1177/1729881417717059.

Abstract:
In order to assess the performance of multisensor image registration algorithms that are used in multirobot information fusion, we propose a model based on structural similarity called the vision registration assessment model. First of all, this article introduces a new image concept, the superimposed image, for testing subjective and objective assessment methods. We therefore assess the superimposed image rather than the registered image, which differs from previous image registration assessment methods that usually use reference and sensed images. Then, we calculate eight assessment indicators from different aspects for superimposed images. After that, the vision registration assessment model fuses the eight indicators using canonical correlation analysis, which is used for evaluating the quality of image registration results in different aspects. Finally, three kinds of images, including optical images, infrared images and SAR images, are used to test the vision registration assessment model. After evaluating three state-of-the-art image registration methods, the experiments indicate that the proposed structural similarity-motivated model achieved almost the same evaluation results as human observers, with a consistency rate of 98.3%, which shows that the vision registration assessment model is efficient and robust for evaluating multisensor image registration algorithms. Moreover, unlike human observers, the vision registration assessment model is independent of emotional factors and the outside environment.
23

Ruikar, Jayesh, Ashoke Sinha, and Saurabh Chaudhury. "Image Quality Assessment Using Edge Correlation." International Journal of Electronics and Telecommunications 63, no. 1 (March 1, 2017): 99–107. http://dx.doi.org/10.1515/eletel-2017-0014.

Abstract:
In the literature, oriented filters are used for low-level vision tasks. In this paper, we propose the use of a steerable Gaussian filter in image quality assessment. The human visual system is more sensitive to the multidirectional edges present in natural images, and degradation of these edges accounts for most of the perceived loss in image quality. In this work, an edge-based metric termed the steerable Gaussian filtering (SGF) quality index is proposed as an objective measure for image quality assessment. The performance of the proposed technique is evaluated over multiple databases. The experimental results show that the proposed method is more reliable and outperforms conventional image quality assessment methods.
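A rough sketch of an edge-correlation index built from steered first-order Gaussian derivatives (the steering identity for first derivatives is standard; the pooling by plain correlation is a simplification for illustration, not the SGF index itself):

```python
import numpy as np
from scipy import ndimage

def oriented_edge_energy(gray, sigma=1.5, n_orient=4):
    """Sum of squared responses of Gaussian first-derivative filters steered to n_orient angles."""
    gx = ndimage.gaussian_filter(gray, sigma, order=(0, 1))  # derivative along x
    gy = ndimage.gaussian_filter(gray, sigma, order=(1, 0))  # derivative along y
    energy = np.zeros_like(gray, dtype=np.float64)
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        # Steering: the response at angle theta is a linear combination of the two bases.
        energy += (np.cos(theta) * gx + np.sin(theta) * gy) ** 2
    return energy

def edge_correlation_index(reference, distorted):
    a = oriented_edge_energy(reference.astype(np.float64))
    b = oriented_edge_energy(distorted.astype(np.float64))
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```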
24

Jiang, W., S. Chen, X. Wang, Q. Huang, H. Shi, and Y. Man. "REMOTE SENSING IMAGE QUALITY ASSESSMENT EXPERIMENT WITH POST-PROCESSING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 665–68. http://dx.doi.org/10.5194/isprs-archives-xlii-3-665-2018.

Abstract:
This paper briefly describes a post-processing influence assessment experiment that includes three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image processing input are produced by this imaging system with the same parameters. The gathered optically sampled images, acquired with the measured imaging parameters, are processed by three digital image processes: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just noticeable difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment experimental datasets can be validated against each other. The main conclusions include: image post-processing can improve image quality; it can improve image quality even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.
25

Zhai, Guangtao, Wei Sun, Xiongkuo Min, and Jiantao Zhou. "Perceptual Quality Assessment of Low-light Image Enhancement." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 4 (November 30, 2021): 1–24. http://dx.doi.org/10.1145/3457905.

Abstract:
Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions such as structure damage, color shift, and noise into the enhanced images. Despite various LIEAs proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, which are generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and stack-based high dynamic range (HDR) image as a reference and evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that distortions introduced in low-light enhancement are significantly different from distortions considered in traditional image IQA databases that are well-studied, and the current state-of-the-art FR IQA models are also not suitable for evaluating their quality. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index by evaluating the image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preserving, which have captured the most key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms the state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems. To the best of our knowledge, this article is the first of its kind comprehensive low-light image enhancement quality assessment study.
26

Zhang, Siyuan, Yifan Wang, Jiayao Jiang, Jingxian Dong, Weiwei Yi, and Wenguang Hou. "CNN-Based Medical Ultrasound Image Quality Assessment." Complexity 2021 (July 1, 2021): 1–9. http://dx.doi.org/10.1155/2021/9938367.

Abstract:
The quality of an ultrasound image is key information in medical applications. It is also an important index in evaluating the performance of ultrasonic imaging equipment and image processing algorithms. Yet, there is still no recognized quantitative standard for medical image quality assessment (IQA), because IQA is traditionally regarded as a subjective issue, especially in the case of ultrasound medical images. As such, medical ultrasound IQA based on convolutional neural networks (CNN) is quantitatively studied in this paper. Firstly, a dataset with 1063 ultrasound images is established by degrading a certain number of original high-quality images. Subsequently, some operations are performed on the dataset, including scoring and abnormal value screening. Then, 478 ultrasound images are selected as the training and testing examples. The label of each example is obtained by averaging the scores of different doctors. Afterwards, a deep CNN and a residual network are used to establish the IQA models. Meanwhile, a transfer learning strategy is introduced to accelerate the training and improve the robustness of the model, considering that ultrasound image samples are not abundant. Finally, tests are performed to evaluate the IQA models; they show that CNN-based IQA is feasible and effective.
27

Yemul, Kiran S., Adam M. Zysk, Andrea L. Richardson, Krishnarao V. Tangella, and Lisa K. Jacobs. "Interpretation of Optical Coherence Tomography Images for Breast Tissue Assessment." Surgical Innovation 26, no. 1 (October 7, 2018): 50–56. http://dx.doi.org/10.1177/1553350618803245.

Abstract:
Purpose. Initial studies have shown that optical coherence tomography (OCT) is an effective margin-evaluation tool for breast-conserving surgery, but methods for the interpretation of breast OCT images have not been directly studied. In this work, breast pathologies were assessed with a handheld OCT probe. OCT images and corresponding histology were used to develop guidelines for the identification of breast tissue features in OCT images. Methods. Mastectomy and breast-conserving surgery specimens from 26 women were imaged with a handheld OCT probe. During standard pathology specimen dissection, representative 1-cm × 1-cm tissue regions were grossly identified, assessed with OCT, inked for orientation and image-matching purposes, and processed. Histology slides corresponding to the OCT image region were digitally photographed. OCT and histology images from the same region were paired by selecting the best structural matches. Results. In total, 2880 OCT images were acquired from 26 breast specimens (from 26 patients) and 48 matching OCT-histology image pairs were identified. These matched image pairs illustrate tissue types including adipose tissue, dense fibrosis, fibroadipose tissue, blood vessels, regular and hyperplastic ducts and lobules, cysts, fibroadenoma, invasive ductal carcinoma, invasive lobular carcinoma, ductal carcinoma in situ, calcifications, and biopsy cavities. Differentiation between pathologies was achieved by considering feature boundaries, interior appearance, posterior shadowing or enhancement, and overall morphologic patterns. Conclusions. This is the first work to systematically catalog the critical features of breast OCT images. The results indicate that OCT can be used to identify and distinguish between benign and malignant features in human breast tissue.
28

Golub, Y. I., and F. V. Starovoitov. "Digital image contrast assessment based on the Weibull distribution parameters." System Analysis and Applied Information Science, no. 2 (August 19, 2021): 4–13. http://dx.doi.org/10.21122/2309-4923-2021-2-4-13.

Abstract:
The goal of the studies described in the paper is to find a quantitative assessment that correlates maximally with the subjective assessment of contrast image quality in the absence of a reference image. As a result of the literature analysis, 16 functions used for no-reference image quality assessment were selected: BEGH, BISH, BREN, CMO, CURV, FUS, HELM, EBCM, KURT, LAPD, LAPL, LAPM, LOCC, LOEN, SHAR, WAVS. They all use the arithmetic mean of the local contrast quality values. As an alternative to averaging local estimates (since the mean is one of the two parameters of the normal distribution), it is proposed to use one of the two parameters of the Weibull distribution of the same data, either the scale or the shape. For the experiments, digital images with nonlinear contrast distortion from the available CCID2014 database were used. It contains 15 original images with a size of 768 × 512 pixels and 655 versions with modified contrast. This image database contains the mean visual quality assessment (Mean Opinion Score, MOS) of each image. Spearman's rank correlation coefficient was used to determine the correspondence between the visual MOS scores and the studied quantitative measures. As a result of the research, a new quality assessment measure for contrast images in the absence of a reference is presented. To obtain the estimate, local quality values are calculated with the BREN measure, their set is described by the Weibull distribution, and the scale parameter of the distribution serves as the best numerical estimate of the quality of contrast images. This conclusion is confirmed experimentally, and the proposed measure correlates better than the other variants with the subjective assessments of experts.
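A minimal sketch of the estimate described in this abstract: local contrast values are computed with a Brenner-style (BREN) measure on small blocks, a Weibull distribution is fitted to them, and the scale parameter is returned. The block size and the exact local-contrast formula are assumptions made for illustration:

```python
import numpy as np
from scipy.stats import weibull_min
from skimage.util import view_as_windows

def bren_local(gray, win=8):
    """Local Brenner contrast on non-overlapping win x win blocks."""
    g = gray.astype(np.float64)
    blocks = view_as_windows(g, (win, win), step=win)
    diff = blocks[..., :, 2:] - blocks[..., :, :-2]
    return (diff ** 2).mean(axis=(-1, -2)).ravel()

def weibull_scale_quality(gray):
    local = bren_local(gray)
    local = local[local > 0]
    # Two-parameter Weibull fit (location fixed at zero); return the scale parameter.
    shape, loc, scale = weibull_min.fit(local, floc=0)
    return scale
```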
29

Liu, Xiaoyu, Shirley J. Dyke, Chul Min Yeum, Ilias Bilionis, Ali Lenjani, and Jongseong Choi. "Automated Indoor Image Localization to Support a Post-Event Building Assessment." Sensors 20, no. 6 (March 13, 2020): 1610. http://dx.doi.org/10.3390/s20061610.

Abstract:
Image data remains an important tool for post-event building assessment and documentation. After each natural hazard event, significant efforts are made by teams of engineers to visit the affected regions and collect useful image data. In general, a global positioning system (GPS) can provide useful spatial information for localizing image data. However, it is challenging to collect such information when images are captured in places where GPS signals are weak or interrupted, such as the indoor spaces of buildings. The inability to document the images’ locations hinders the analysis, organization, and documentation of these images as they lack sufficient spatial context. In this work, we develop a methodology to localize images and link them to locations on a structural drawing. A stream of images can readily be gathered along the path taken through a building using a compact camera. These images may be used to compute a relative location of each image in a 3D point cloud model, which is reconstructed using a visual odometry algorithm. The images may also be used to create local 3D textured models for building-components-of-interest using a structure-from-motion algorithm. A parallel set of images that are collected for building assessment is linked to the image stream using time information. By projecting the point cloud model to the structural drawing, the images can be overlaid onto the drawing, providing clear context information necessary to make use of those images. Additionally, components- or damage-of-interest captured in these images can be reconstructed in 3D, enabling detailed assessments having sufficient geospatial context. The technique is demonstrated by emulating post-event building assessment and data collection in a real building.
30

Agudelo-Medina, Oscar A., Hernan Dario Benitez-Restrepo, Gemine Vivone, and Alan Bovik. "Perceptual Quality Assessment of Pan-Sharpened Images." Remote Sensing 11, no. 7 (April 11, 2019): 877. http://dx.doi.org/10.3390/rs11070877.

Abstract:
Pan-sharpening (PS) is a method of fusing the spatial details of a high-resolution panchromatic (PAN) image with the spectral information of a low-resolution multi-spectral (MS) image. Visual inspection is a crucial step in the evaluation of fused products whose subjectivity renders the assessment of pansharpened data a challenging problem. Most previous research on the development of PS algorithms has only superficially addressed the issue of qualitative evaluation, generally by depicting visual representations of the fused images. Hence, it is highly desirable to be able to predict pan-sharpened image quality automatically and accurately, as it would be perceived and reported by human viewers. Such a method is indispensable for the correct evaluation of PS techniques that produce images for visual applications such as Google Earth and Microsoft Bing. Here, we propose a new image quality assessment (IQA) measure that supports the visual qualitative analysis of pansharpened outcomes by using the statistics of natural images, commonly referred to as natural scene statistics (NSS), to extract statistical regularities from PS images. Importantly, NSS are measurably modified by the presence of distortions. We analyze six PS methods in the presence of two common distortions, blur and white noise, on PAN images. Furthermore, we conducted a human study on the subjective quality of pristine and degraded PS images and created a completely blind (opinion-unaware) fused image quality analyzer. In addition, we propose an opinion-aware fused image quality analyzer, whose predictions with respect to human perceptual evaluations of pansharpened images are highly correlated.
31

Ayyanna, K. "Image Quality Assessment for Multi Exposure Fused Images." International Journal for Research in Applied Science and Engineering Technology V, no. III (March 28, 2017): 990–96. http://dx.doi.org/10.22214/ijraset.2017.3182.

32

Zhang, Chen, Wu Cheng, and Keigo Hirakawa. "Corrupted Reference Image Quality Assessment of Denoised Images." IEEE Transactions on Image Processing 28, no. 4 (April 2019): 1732–47. http://dx.doi.org/10.1109/tip.2018.2878326.

33

Attia, Salim J. "Assessment of Some Enhancement Methods of Renal X-ray Image." NeuroQuantology 18, no. 12 (December 31, 2020): 01–05. http://dx.doi.org/10.14704/nq.2020.18.12.nq20231.

Abstract:
The study focuses on assessing the quality of several image enhancement methods applied to renal X-ray images. The enhancement methods included Imadjust, Histogram Equalization (HE) and Contrast Limited Adaptive Histogram Equalization (CLAHE). Image quality measures were calculated to compare the input images with the output images of these three enhancement techniques. Eight renal X-ray images were collected to apply these methods. Generally, X-ray images lack contrast and are acquired at low radiation dosage; this lack of image quality can be remedied by an enhancement process. Three image quality measures were used to assess the resulting images: the Naturalness Image Quality Evaluator (NIQE), the Perception-based Image Quality Evaluator (PIQE) and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE). The quality of the images was improved by these methods to support the goals of diagnosis. For the collected images, the chosen enhancement methods produced images of higher quality than the originals. According to the results of the quality measures and the assessment of radiology experts, the CLAHE method was the best enhancement method.
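For reference, the three enhancement methods named in this abstract map roughly onto standard library calls as follows (a sketch; the file name, percentile limits and CLAHE parameters are placeholder assumptions):

```python
import cv2
import numpy as np
from skimage import exposure

img = cv2.imread("renal_xray.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Imadjust-style contrast stretch between the 1st and 99th intensity percentiles.
p1, p99 = np.percentile(img, (1, 99))
adjusted = exposure.rescale_intensity(img, in_range=(p1, p99))

# Global histogram equalization (HE).
he = cv2.equalizeHist(img)

# Contrast Limited Adaptive Histogram Equalization (CLAHE).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)
```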
34

Besprozvannaya, I. I., and A. V. Zhegallo. "The structure of ideas about yourself and others (according to a photographic image and a schematic image)." Experimental Psychology (Russia) 12, no. 3 (2019): 19–27. http://dx.doi.org/10.17759/exppsy.2019120302.

Abstract:
The participants in the study assessed themselves according to the 'personal differential' questionnaire and also evaluated another person using photo images or graphic schemes. When performing a self-assessment and when evaluating another person from a photo image, the three-factor structure described by the authors of the methodology is largely reproduced: 'assessment', 'strength', 'activity'. The structure of assessments of another person based on the schematic image differs substantially from the classical one, which indicates fundamental differences in how individual personal characteristics are perceived from a schematic image.
35

Zhang, Yin, Xuehan Bai, Junhua Yan, Yongqi Xiao, Wanyi Zhang, C. R. Chatwin, and R. C. D. Young. "A Full-Reference Image Quality Assessment for Multiply Distorted Image based on Visual Mutual Information." Journal of Imaging Science and Technology 63, no. 6 (November 1, 2019): 60504–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2019.63.6.060504.

Abstract:
A full-reference image quality assessment (FR-IQA) method for multiply distorted images based on visual mutual information (MD-IQA) is proposed to address the problem that existing FR-IQA methods are mostly applicable to singly distorted images, while their assessment results for multiply distorted images are not ideal. First, the reference image and the distorted image are preprocessed by steerable pyramid decomposition and a contrast sensitivity function (CSF). Next, a Gaussian scale mixture (GSM) model and an image distortion model are constructed for the reference images and the distorted images, respectively. Then, visual distortion models are constructed for both the reference images and the distorted images. Finally, the mutual information between the processed reference image and the distorted image is calculated to obtain the full-reference quality assessment index for multiply distorted images. The experimental results show that the proposed method has higher accuracy and better performance for multiply distorted images.
36

Mayank and Naveen Kumar Gondhi. "Comparative Assessment of Image Captioning Models." Journal of Computational and Theoretical Nanoscience 17, no. 1 (January 1, 2020): 473–78. http://dx.doi.org/10.1166/jctn.2020.8693.

Abstract:
Image captioning is the combination of Computer Vision and Natural Language Processing (NLP), in which simple sentences describing the content of an image are generated automatically. This paper presents a comparative analysis of different models used for the generation of descriptive English captions for a given image. Feature extraction from the images is done using Convolutional Neural Networks (CNN). These features are then passed to Recurrent Neural Networks (RNN) or Long Short-Term Memory (LSTM) networks to generate captions in English. The evaluation metrics used to appraise the performance of the models are the BLEU score, CIDEr and METEOR.
37

Jermain, Peter R., Tyler W. Iorizzo, Mary Maloney, Bassel Mahmoud, and Anna N. Yaroslavsky. "Design and Validation of a Handheld Optical Polarization Imager for Preoperative Delineation of Basal Cell Carcinoma." Cancers 14, no. 16 (August 22, 2022): 4049. http://dx.doi.org/10.3390/cancers14164049.

Abstract:
Background: Accurate removal of basal cell carcinoma (BCC) is challenging due to the subtle contrast between cancerous and normal skin. A method aiding with preoperative delineation of BCC margins would be valuable. The aim of this study was to implement and clinically validate a novel handheld optical polarization imaging (OPI) device for rapid, noninvasive, in vivo assessment of skin cancer margins. Methods: The handheld imager was designed, built, and tested. For clinical validation, 10 subjects with biopsy-confirmed BCC were imaged. Presumable cancer margins were marked by the study surgeon. The optical images were spectrally encoded to mitigate the impact of endogenous skin chromophores. The results of OPI and of the surgeon’s preoperative visual assessment were compared to clinical intraoperative histopathology. Results: As compared to the previous prototype, the handheld imager incorporates automated image processing and has 10-times shorter acquisition times. It is twice as light and provides twice as large a field of view. Clinical validation demonstrated that margin assessments using OPI were more accurate than visual assessment by the surgeon. The images were in good correlation with histology in 9 out of 10 cases. Conclusions: Handheld OPI could improve the outcomes of skin cancer treatments without impairing clinical workflows.
38

Brostrøm, Anders, and Kristian Mølhave. "Spatial Image Resolution Assessment by Fourier Analysis (SIRAF)." Microscopy and Microanalysis 28, no. 2 (March 3, 2022): 469–77. http://dx.doi.org/10.1017/s1431927622000228.

Abstract:
Determining spatial resolution from images is crucial when optimizing focus, determining smallest resolvable object, and assessing size measurement uncertainties. However, no standard algorithm exists to measure resolution from electron microscopy (EM) images, though several have been proposed, where most require user decisions. We present the Spatial Image Resolution Assessment by Fourier analysis (SIRAF) algorithm that uses fast Fourier transform analysis to estimate resolution directly from a single image without user inputs. The method is derived from the underlying assumption that objects display intensity transitions, resembling a step function blurred by a Gaussian point spread function. This hypothesis is tested and verified on simulated EM images with known resolution. To identify potential pitfalls, the algorithm is also tested on simulated images with a variety of settings, and on real SEM images acquired at different magnification and defocus settings. Finally, the versatility of the method is investigated by assessing resolution in images from several microscopy techniques. It is concluded that the algorithm can assess resolution from a large selection of image types, thereby providing a measure of this fundamental image parameter. It may also improve autofocus methods and guide the optimization of magnification settings when balancing spatial resolution and field of view.
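The raw ingredient of such FFT-based resolution estimators is the radially averaged power spectrum of the image; a small sketch of that computation follows (the subsequent fit of the Gaussian-blurred step model is specific to SIRAF and not reproduced here):

```python
import numpy as np

def radial_power_spectrum(gray):
    """Radially averaged power spectrum of a 2D image."""
    f = np.fft.fftshift(np.fft.fft2(gray - gray.mean()))
    power = np.abs(f) ** 2
    h, w = gray.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    radial_sum = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return radial_sum / np.maximum(counts, 1)   # index = spatial-frequency radius in pixels
```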
39

Varga, Domonkos. "Saliency-Guided Local Full-Reference Image Quality Assessment." Signals 3, no. 3 (July 11, 2022): 483–96. http://dx.doi.org/10.3390/signals3030028.

Abstract:
Research and development of image quality assessment (IQA) algorithms have been a focus of the computer vision and image processing community for decades. The intent of IQA methods is to estimate the perceptual quality of digital images, correlating as highly as possible with human judgements. Full-reference image quality assessment algorithms, which have full access to the distortion-free images, usually contain two phases: local image quality estimation and pooling. Previous works have utilized visual saliency in the final pooling stage. In addition, visual saliency has been utilized as weights in the weighted averaging of local image quality scores, emphasizing image regions that are salient to human observers. In contrast to this common practice, visual saliency is applied in the computation of local image quality in this study, based on the observation that local image quality is determined by local image degradation and visual saliency simultaneously. Experimental results on KADID-10k, TID2013, TID2008, and CSIQ have shown that the proposed method was able to improve on the state-of-the-art's performance at low computational costs.
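A toy version of the idea of letting saliency enter the local-quality stage (not the paper's exact formulation): modulate the local SSIM map by a spectral-residual saliency map before averaging. The saliency module requires the opencv-contrib package, and the 0.5/0.5 mixing weights are arbitrary assumptions:

```python
import numpy as np
import cv2
from skimage.metrics import structural_similarity

def saliency_weighted_ssim(reference, distorted):
    """reference, distorted: 8-bit grayscale images of the same size."""
    _, ssim_map = structural_similarity(reference, distorted, full=True, data_range=255)
    ok, sal = cv2.saliency.StaticSaliencySpectralResidual_create().computeSaliency(reference)
    sal = sal.astype(np.float64)
    weight = 0.5 + 0.5 * sal / (sal.max() + 1e-12)   # salient regions count more
    return float((ssim_map * weight).mean())
```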
40

Guo, Ming Wei, Chen Bin Zhang, and Zong Hai Chen. "A Novel Method of Image Quality Assessment." Applied Mechanics and Materials 556-562 (May 2014): 5064–67. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.5064.

Abstract:
Image quality assessment (IQA) is one of the hot research areas in the field of image processing. Since human beings are the final receivers of images, image quality assessment should match the characteristics of the human visual system. In this paper, we propose a novel method of image quality assessment that uses the visual selective attention of the human visual system. For an image of a certain category, our method first detects the object in it and then calculates the saliency of the object. Lastly, we use the combination of the detector's score and the saliency as the image quality assessment. Experiments on images from the Pascal VOC dataset and the INRIA dataset show that our method performs well in image quality assessment.
41

Dugonik, Bogdan, Aleksandra Dugonik, Maruška Marovt, and Marjan Golob. "Image Quality Assessment of Digital Image Capturing Devices for Melanoma Detection." Applied Sciences 10, no. 8 (April 21, 2020): 2876. http://dx.doi.org/10.3390/app10082876.

Abstract:
The fast-growing incidence of skin cancer, especially melanoma, is driving the intense development of various digital image-capturing devices that make it easier for dermatologists to recognize melanoma. Handheld and digital dermoscopy, the tracking of mole changes with smartphones, and the digital analysis of mole images are based on the evaluation of the colours, shape and deep structures of skin moles. Incorrect colour information in an image, under- or overexposed images, lack of sharpness and low image resolution can lead to melanoma misdiagnosis. The purpose of our study was to determine the colour error in the image for given lighting conditions and different camera settings. We focused on measuring the image quality parameters of smartphones and high-resolution cameras in order to compare them with the results of state-of-the-art dermoscopy device systems. We applied standardised measuring methods. The spatial frequency response method was applied for measuring the sharpness and resolution of the tested camera systems. Colour images with known reference values were captured from the test target to evaluate colour error as a CIELAB (Commission Internationale de l'Eclairage) ΔE*ab colour difference as seen by a human observer. The results of our measurements yielded two significant findings. First, all tested cameras produced inaccurate colours when operating in automatic mode, and second, the amount of sharpening was too intensive. These deficiencies can be eliminated by adjusting the camera parameters manually or by image post-production. The presented two-step camera calibration procedure significantly improves the colour accuracy of captured clinical and dermoscopy images.
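The colour-error measurement mentioned here (ΔE*ab in CIELAB) reduces to a Euclidean distance in Lab space; a brief sketch, assuming the captured and reference chart patches are available as floating-point RGB arrays in [0, 1]:

```python
import numpy as np
from skimage import color

def mean_delta_e_ab(captured_rgb, reference_rgb):
    """Mean CIELAB colour difference (Euclidean dE*ab) between two RGB images or patch sets."""
    lab_c = color.rgb2lab(captured_rgb)
    lab_r = color.rgb2lab(reference_rgb)
    return float(np.linalg.norm(lab_c - lab_r, axis=-1).mean())
```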
42

Wang, Dingxian. "Edge Detection technique based on HDR image quality assessment." Journal of Physics: Conference Series 2078, no. 1 (November 1, 2021): 012029. http://dx.doi.org/10.1088/1742-6596/2078/1/012029.

Abstract:
Image edge detection is one of the major study areas in the current computer image processing field. The quality of input images is uneven: some have large fuzzy areas, some are underexposed, the edges of objects in the images are difficult to detect, and the application scenarios of image edge detection are limited. In view of the above problems, this paper proposes applying High Dynamic Range (HDR) image quality assessment technology to combine multiple images with different exposures into one HDR image with detailed edge information. This effectively addresses the problem of low edge-information richness, improves the effectiveness of edge detection algorithms, and contributes to the development of edge detection technology.
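A compact sketch of the kind of pipeline the abstract describes, using OpenCV's Mertens exposure fusion as a stand-in for the HDR combination step, followed by Canny edge detection (the file names, fusion method and Canny thresholds are placeholder choices, not necessarily those of the paper):

```python
import cv2
import numpy as np

# Aligned 8-bit BGR photographs of the same scene at different exposures.
exposures = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]  # placeholders

fused = cv2.createMergeMertens().process(exposures)      # float32 result, roughly in [0, 1]
fused_8u = np.clip(fused * 255, 0, 255).astype(np.uint8)

edges = cv2.Canny(cv2.cvtColor(fused_8u, cv2.COLOR_BGR2GRAY), 50, 150)
cv2.imwrite("fused_edges.png", edges)
```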
43

Anikeeva, I., and A. Chibunichev. "RANDOM NOISE ASSESSMENT IN AERIAL AND SATELLITE IMAGES." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 771–75. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-771-2021.

Full text
Abstract:
Random noise in aerial and satellite images is one of the factors decreasing their quality, yet noise-level assessment in images receives insufficient attention. A method for the numerical estimation of random image noise is considered. The object of the study is a noise-estimating method based on harmonic analysis, and the capability of using this method for aerial and satellite image quality assessment is examined. The algorithm is tested on model data and on real satellite images of different terrain surfaces. Accuracy estimates for the root-mean-square (RMS) deviation of random image noise calculated by the harmonic analysis method are presented.
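The following is a minimal sketch, not the authors' exact algorithm, of estimating the RMS of additive white noise from the high-frequency part of the image spectrum: for white noise the expected spectral power per bin is N·σ², while the scene content is assumed to be concentrated at low spatial frequencies.

```python
import numpy as np

def noise_rms_estimate(image, high_freq_fraction=0.25):
    """Estimate the RMS of additive white noise from the outer part of the spectrum."""
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    # Mask that keeps only the highest spatial frequencies (outer radial band).
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    high = r > (1.0 - high_freq_fraction) * r.max()
    # For white noise, E[|F|^2] = h*w*sigma^2 per bin.
    return float(np.sqrt(spectrum[high].mean() / (h * w)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0, 255, 256), (256, 1))   # smooth synthetic "terrain"
    noisy = clean + rng.normal(0.0, 5.0, clean.shape)      # true noise RMS = 5
    print(f"estimated noise RMS ~ {noise_rms_estimate(noisy):.2f}")
```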
APA, Harvard, Vancouver, ISO, and other styles
44

Fante, Kinde Anlay, Fetulhak Abdurahman, and Mulugeta Tegegn Gemeda. "An Ingenious Application-Specific Quality Assessment Methods for Compressed Wireless Capsule Endoscopy Images." Transactions on Environment and Electrical Engineering 4, no. 1 (October 24, 2020): 18. http://dx.doi.org/10.22149/teee.v4i1.139.

Full text
Abstract:
Image quality assessment methods are used in many image processing applications; among them, image compression and image super-resolution can be mentioned in wireless capsule endoscopy (WCE) applications. Existing image compression algorithms for WCE employ general-purpose image quality assessment (IQA) methods to evaluate the quality of the compressed image. Due to the specific nature of the images captured by WCE, the general-purpose IQA methods are not optimal and give results less correlated with subjective IQA (visual perception). This paper presents improved image quality assessment techniques for wireless capsule endoscopy applications. The proposed objective IQA methods are obtained by modifying existing full-reference image quality assessment techniques. The modification is done by excluding the non-informative regions of endoscopic images from the computation of IQA metrics. The experimental results demonstrate that the proposed IQA method gives improved peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values. The proposed image quality assessment methods are more reliable for compressed endoscopic capsule images.
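A hedged sketch of the kind of modification the abstract describes: PSNR computed only over the informative circular field of view of a capsule-endoscopy frame, ignoring the dark surround. The mask construction here is an assumption, not the authors' segmentation of non-informative regions.

```python
import numpy as np

def informative_mask(shape, margin=0.02):
    """Circular field-of-view mask typical of endoscopic frames."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    return r <= (0.5 - margin) * min(h, w)

def masked_psnr(reference, distorted, mask, peak=255.0):
    """PSNR restricted to the informative (masked) pixels."""
    ref = np.asarray(reference, dtype=np.float64)
    dis = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((ref[mask] - dis[mask]) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse)) if mse > 0 else float("inf")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, (512, 512)).astype(np.float64)
    dist = ref + rng.normal(0, 3, ref.shape)
    mask = informative_mask(ref.shape)
    print(f"masked PSNR = {masked_psnr(ref, dist, mask):.2f} dB")
```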
APA, Harvard, Vancouver, ISO, and other styles
45

Zhang, Jiexin, Jianjiang Zhou, Minglei Li, Huiyu Zhou, and Tianzhu Yu. "Quality Assessment of SAR-to-Optical Image Translation." Remote Sensing 12, no. 21 (October 22, 2020): 3472. http://dx.doi.org/10.3390/rs12213472.

Full text
Abstract:
Synthetic aperture radar (SAR) images contain severe speckle noise and weak texture, which make them unsuitable for visual interpretation. Many studies have explored SAR-to-optical image translation to obtain near-optical representations; however, how to evaluate the translation quality remains a challenge. In this paper, we combine image quality assessment (IQA) with SAR-to-optical image translation to pursue a suitable evaluation approach. First, several machine-learning baselines for SAR-to-optical image translation are established and evaluated. Then, extensive comparisons of perceptual IQA models are performed in terms of their use as objective functions for the optimization of image restoration. To study feature extraction from images translated from SAR to optical modes, an application in scene classification is presented. Finally, the attributes of the translated image representations are evaluated using visual inspection and the proposed IQA methods.
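Where co-registered optical images are available, a simple way to begin such an evaluation is with standard full-reference measures; the sketch below, with placeholder file names, scores a translated image against its paired optical reference using PSNR and SSIM.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder file names: a SAR-to-optical translation and its co-registered optical image.
translated = cv2.imread("sar_translated.png", cv2.IMREAD_GRAYSCALE)
optical = cv2.imread("optical_reference.png", cv2.IMREAD_GRAYSCALE)

print("PSNR:", peak_signal_noise_ratio(optical, translated, data_range=255))
print("SSIM:", structural_similarity(optical, translated, data_range=255))
```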
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Fan, Jia Chen, Haonan Zhong, Yibo Ai, and Weidong Zhang. "No-Reference Image Quality Assessment Based on Image Multi-Scale Contour Prediction." Applied Sciences 12, no. 6 (March 10, 2022): 2833. http://dx.doi.org/10.3390/app12062833.

Full text
Abstract:
Accurately assessing image quality is a challenging task, especially without a reference image. Most current no-reference image quality assessment methods still require reference images in the training stage, but reference images are usually not available in real scenes. In this paper, we propose a model named MSIQA, inspired by biological vision and convolutional neural networks (CNNs), which does not require reference images in either the training or the testing phase. The model contains two modules: a multi-scale contour prediction network that simulates the contour response of the human optic nerve to images at different distances, and a central-attention peripheral-inhibition module inspired by the receptive field mechanism of retinal ganglion cells. Training proceeds in two steps. In the first step, the multi-scale contour prediction network learns to predict the contour features of images at different scales; in the second step, the model combines the central-attention peripheral-inhibition module and learns to predict the quality score of the image. In the experiments, our method achieved excellent performance: the Pearson linear correlation coefficient of the MSIQA model tested on the LIVE database reached 0.988.
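For context, the number reported above follows the usual evaluation protocol of correlating predicted scores with subjective ratings; the snippet below, with illustrative placeholder scores rather than LIVE data, computes the Pearson (PLCC) and Spearman (SROCC) correlations.

```python
from scipy import stats

predicted = [32.1, 45.7, 58.3, 61.0, 74.8, 88.2]   # model outputs (placeholders)
subjective = [30.0, 48.0, 55.0, 63.0, 76.0, 90.0]  # MOS / DMOS values (placeholders)

plcc, _ = stats.pearsonr(predicted, subjective)
srocc, _ = stats.spearmanr(predicted, subjective)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```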
APA, Harvard, Vancouver, ISO, and other styles
47

Feng, C., D. Yu, Y. Liang, D. Guo, Q. Wang, and X. Cui. "ASSESSMENT OF INFLUENCE OF IMAGE PROCESSING ON FULLY AUTOMATIC UAV PHOTOGRAMMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 4, 2019): 269–75. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-269-2019.

Full text
Abstract:
Nowadays, UAVs are widely used for large-scale surveying and mapping. Compared with traditional surveying techniques, UAV photogrammetry is more convenient, cost-effective, and responsive. Aerial images, Position and Orientation System (POS) observations and coordinates of ground control points are usually acquired during a surveying campaign. Aerial images are the data source for feature point extraction, dense matching and ortho-rectification, so their quality is one of the most important factors influencing the accuracy and efficiency of UAV photogrammetry. Image processing techniques, including image enhancement, image downsampling and image compression, are usually used to improve image quality as well as the efficiency and effectiveness of photogrammetric data processing. However, all of these techniques bring uncertainties into UAV photogrammetry. In this work, the influences of the aforementioned image processing techniques on the accuracy of automatic UAV photogrammetry are investigated. The automatic photogrammetric data processing mainly consists of image matching, relative orientation, absolute orientation, dense matching, DSM interpolation and orthomosaicing. The results of the experiments show that the influences of the image processing techniques on the accuracy of automatic UAV photogrammetry are insignificant: the image orientation and surface reconstruction accuracies of the original and the enhanced images are comparable. The feature point extraction and image matching procedures are greatly influenced by image downsampling, whereas the accuracy of image orientation is not affected by image downsampling or image compression at all.
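To make the two image-processing steps concrete, the sketch below, with placeholder file names and parameters, downsamples an aerial frame and re-encodes it with stronger JPEG compression; it illustrates the kind of preprocessing whose influence the study measures, not the authors' pipeline.

```python
from PIL import Image

img = Image.open("uav_frame.jpg")  # placeholder aerial frame

# Downsample to half resolution with a high-quality resampling filter.
half = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
half.save("uav_frame_half.jpg", quality=95)

# Re-encode the original frame with stronger lossy JPEG compression.
img.save("uav_frame_q60.jpg", quality=60)
```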
APA, Harvard, Vancouver, ISO, and other styles
48

Varga, Domonkos. "Full-Reference Image Quality Assessment Based on Grünwald–Letnikov Derivative, Image Gradients, and Visual Saliency." Electronics 11, no. 4 (February 12, 2022): 559. http://dx.doi.org/10.3390/electronics11040559.

Full text
Abstract:
The purpose of image quality assessment is to estimate the perceptual quality of digital images in a way that is coherent with human judgement. Over the years, many structural features have been utilized or proposed to quantify the degradation of an image in the presence of various noise types. The image gradient is an obvious and very popular tool in the literature to quantify these changes; however, the gradient characterizes an image only locally. On the other hand, results from previous studies indicate that the global content of a scene is analyzed by the human visual system before its local features. Relying on these properties of the human visual system, we propose a full-reference image quality assessment metric that characterizes the global changes of an image by Grünwald–Letnikov derivatives and the local changes by image gradients. Moreover, visual saliency is utilized to weight the changes in the images and emphasize those areas that are salient to the human visual system. To prove the efficiency of the proposed method, extensive experiments were carried out on publicly available benchmark image quality assessment databases.
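A hedged numerical sketch of the discrete Grünwald–Letnikov (GL) fractional derivative that the metric uses as a global descriptor: the GL coefficients follow the recursion w_0 = 1, w_k = w_{k-1}(1 - (α + 1)/k) and are applied along image rows. It illustrates the operator only, not the authors' full FR-IQA metric.

```python
import numpy as np

def gl_coefficients(alpha, n_terms):
    """Truncated Grünwald-Letnikov coefficients w_k."""
    w = np.empty(n_terms)
    w[0] = 1.0
    for k in range(1, n_terms):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative_rows(image, alpha=0.5, n_terms=15):
    """Apply the truncated GL fractional derivative along each image row."""
    img = np.asarray(image, dtype=np.float64)
    w = gl_coefficients(alpha, n_terms)
    out = np.zeros_like(img)
    for k, wk in enumerate(w):
        shifted = np.roll(img, k, axis=1)
        if k:
            shifted[:, :k] = img[:, :1]   # replicate-pad the left border
        out += wk * shifted
    return out

if __name__ == "__main__":
    test = np.tile(np.linspace(0, 1, 64), (64, 1))
    print(gl_derivative_rows(test, alpha=0.5).shape)
```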
APA, Harvard, Vancouver, ISO, and other styles
49

Luo, Wang, and Tian Bing Zhang. "Blind Image Quality Assessment Using Latent Dirichlet Allocation Model." Applied Mechanics and Materials 483 (December 2013): 594–98. http://dx.doi.org/10.4028/www.scientific.net/amm.483.594.

Full text
Abstract:
In this paper, we propose a blind image quality assessment (IQA) method based on a latent Dirichlet allocation (LDA) model. To assess image quality, we first learn topic-specific word distributions by training on a set of pristine and distorted images without human subjective scores. Second, the LDA model is used to estimate the probability distribution over topics for regions of the test images. Finally, we calculate the perceptual quality score of a test image by comparing its estimated topic probabilities with those of the pristine images. Note that quality-aware visual words, generated from natural scene statistics features, are used to represent the images. Experimental evaluation on the publicly available subjective-rated LIVE database demonstrates that our proposed method correlates reasonably well with differential mean opinion scores (DMOS).
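A minimal sketch, under simplified assumptions, of the topic-model step: fit LDA on bag-of-visual-word histograms and score a test image by how far its topic distribution lies from the mean topic profile of pristine images. The random Poisson counts stand in for real quality-aware visual-word histograms, and the Euclidean distance is an arbitrary choice rather than the paper's comparison rule.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
vocab_size, n_topics = 200, 10

# Visual-word histograms: rows = images, columns = word counts (placeholders).
pristine_hists = rng.poisson(4.0, size=(50, vocab_size))
distorted_hists = rng.poisson(2.0, size=(50, vocab_size))

lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
lda.fit(np.vstack([pristine_hists, distorted_hists]))

pristine_topics = lda.transform(pristine_hists).mean(axis=0)

def quality_score(hist):
    """Smaller distance to the pristine topic profile -> higher predicted quality."""
    topics = lda.transform(hist.reshape(1, -1))[0]
    return -float(np.linalg.norm(topics - pristine_topics))

print(quality_score(rng.poisson(3.0, vocab_size)))
```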
APA, Harvard, Vancouver, ISO, and other styles
50

Yin, Guanghao, Wei Wang, Zehuan Yuan, Chuchu Han, Wei Ji, Shouqian Sun, and Changhu Wang. "Content-Variant Reference Image Quality Assessment via Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3134–42. http://dx.doi.org/10.1609/aaai.v36i3.20221.

Full text
Abstract:
Generally, humans are more skilled at perceiving differences between high-quality (HQ) and low-quality (LQ) images than at directly judging the quality of a single LQ image. This observation also applies to image quality assessment (IQA). Although recent no-reference (NR-IQA) methods have made great progress in predicting image quality without a reference image, they still have the potential to achieve better performance, since HQ image information is not fully exploited. In contrast, full-reference (FR-IQA) methods tend to provide more reliable quality evaluation, but their practicability is limited by the requirement for pixel-level aligned reference images. To address this, we propose a content-variant reference method via knowledge distillation (CVRKD-IQA). Specifically, we use non-aligned reference (NAR) images to introduce various prior distributions of high-quality images; comparing the distribution differences between HQ and LQ images helps our model better assess image quality. Further, knowledge distillation transfers more HQ-LQ distribution-difference information from the FR-teacher to the NAR-student and stabilizes CVRKD-IQA performance. Moreover, to fully mine local-global combined information while achieving faster inference, our model directly processes multiple image patches from the input with an MLP-mixer. Cross-dataset experiments verify that our model outperforms all NAR/NR-IQA state-of-the-art methods and even reaches performance comparable to FR-IQA methods on some occasions. Since content-variant and non-aligned reference HQ images are easy to obtain, our model can support more IQA applications thanks to its robustness to content variations. Our code is available: https://github.com/guanghaoyin/CVRKD-IQA.
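A hedged PyTorch-style sketch of the distillation idea: a student that only sees a non-aligned reference is trained to match both the subjective score and the prediction of an aligned-reference teacher. The tiny MLP heads on pooled features and the loss weighting are placeholders, not the architecture or hyperparameters of the paper.

```python
import torch
import torch.nn as nn

FEAT_DIM = 256  # pooled feature size per image (placeholder)

class TinyIQAHead(nn.Module):
    """Predict a quality score from a (distorted, reference) feature pair."""
    def __init__(self, feat_dim=FEAT_DIM):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, distorted_feat, reference_feat):
        return self.mlp(torch.cat([distorted_feat, reference_feat], dim=1)).squeeze(1)

teacher = TinyIQAHead()   # sees features of a pixel-aligned HQ reference
student = TinyIQAHead()   # sees features of an arbitrary, non-aligned HQ image

mse = nn.MSELoss()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

# One illustrative training step on random stand-in features.
lq = torch.randn(8, FEAT_DIM)            # distorted-image features
aligned_hq = torch.randn(8, FEAT_DIM)    # aligned reference features (teacher input)
nonaligned_hq = torch.randn(8, FEAT_DIM) # non-aligned reference features (student input)
mos = torch.rand(8)                      # subjective scores in [0, 1]

with torch.no_grad():
    teacher_pred = teacher(lq, aligned_hq)
student_pred = student(lq, nonaligned_hq)

opt.zero_grad()
loss = mse(student_pred, mos) + 0.5 * mse(student_pred, teacher_pred)  # GT + distillation terms
loss.backward()
opt.step()
print(float(loss))
```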
APA, Harvard, Vancouver, ISO, and other styles