Journal articles on the topic 'No-Reference image quality assessment (NR-IQA)'

To see the other types of publications on this topic, follow the link: No-Reference image quality assessment (NR-IQA).

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'No-Reference image quality assessment (NR-IQA).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Haopeng, Bo Yuan, Bo Dong, and Zhiguo Jiang. "No-Reference Blurred Image Quality Assessment by Structural Similarity Index." Applied Sciences 8, no. 10 (October 22, 2018): 2003. http://dx.doi.org/10.3390/app8102003.

Abstract:
No-reference (NR) image quality assessment (IQA) objectively measures image quality, consistently with subjective evaluations, by using only the distorted image. In this paper, we focus on the problem of NR IQA for blurred images and propose a new no-reference structural similarity (NSSIM) metric based on re-blur theory and the structural similarity index (SSIM). We extract blurriness features and define image blurriness by grayscale distribution. NSSIM scores image quality by calculating image luminance, contrast, structure and blurriness. The proposed NSSIM metric can evaluate image quality immediately without prior training or learning. Experimental results on four popular datasets show that the proposed metric outperforms SSIM and is well matched to state-of-the-art NR IQA models. Furthermore, we apply NSSIM alongside known IQA approaches to blurred image restoration and demonstrate that NSSIM is statistically superior to the peak signal-to-noise ratio (PSNR) and SSIM, and consistent with state-of-the-art NR IQA models.
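The re-blur idea behind NSSIM-style blur metrics can be illustrated in a few lines. The sketch below is not the authors' implementation; the blur strength and the toy images are assumptions. It simply computes SSIM between a grayscale image and a Gaussian re-blurred copy: a sharp image changes a lot when blurred again, while an already-blurred image changes little, so the score rises as the input gets blurrier.

```python
# Minimal sketch of the re-blur principle (not the published NSSIM code).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity


def reblur_score(gray, sigma=3.0):
    """Return SSIM between a grayscale image and its re-blurred copy."""
    gray = gray.astype(np.float64)
    reblurred = gaussian_filter(gray, sigma=sigma)
    return structural_similarity(gray, reblurred,
                                 data_range=gray.max() - gray.min())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((256, 256))               # texture-rich "sharp" image
    blurred = gaussian_filter(sharp, sigma=2.0)  # pre-blurred version
    print("sharp  :", round(reblur_score(sharp), 3))    # lower score
    print("blurred:", round(reblur_score(blurred), 3))  # higher score
```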
2

Shi, Jinsong, Pan Gao, and Jie Qin. "Transformer-Based No-Reference Image Quality Assessment via Supervised Contrastive Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4829–37. http://dx.doi.org/10.1609/aaai.v38i5.28285.

Abstract:
Image Quality Assessment (IQA) has long been a research hotspot in the field of image processing, especially No-Reference Image Quality Assessment (NR-IQA). Due to the powerful feature extraction ability, existing Convolution Neural Network (CNN) and Transformers based NR-IQA methods have achieved considerable progress. However, they still exhibit limited capability when facing unknown authentic distortion datasets. To further improve NR-IQA performance, in this paper, a novel supervised contrastive learning (SCL) and Transformer-based NR-IQA model SaTQA is proposed. We first train a model on a large-scale synthetic dataset by SCL (no image subjective score is required) to extract degradation features of images with various distortion types and levels. To further extract distortion information from images, we propose a backbone network incorporating the Multi-Stream Block (MSB) by combining the CNN inductive bias and Transformer long-term dependence modeling capability. Finally, we propose the Patch Attention Block (PAB) to obtain the final distorted image quality score by fusing the degradation features learned from contrastive learning with the perceptual distortion information extracted by the backbone network. Experimental results on six standard IQA datasets show that SaTQA outperforms the state-of-the-art methods for both synthetic and authentic datasets. Code is available at https://github.com/I2-Multimedia-Lab/SaTQA.
3

Lee, Wonkyeong, Eunbyeol Cho, Wonjin Kim, Hyebin Choi, Kyongmin Sarah Beck, Hyun Jung Yoon, Jongduk Baek, and Jang-Hwan Choi. "No-reference perceptual CT image quality assessment based on a self-supervised learning framework." Machine Learning: Science and Technology 3, no. 4 (December 1, 2022): 045033. http://dx.doi.org/10.1088/2632-2153/aca87d.

Abstract:
Accurate image quality assessment (IQA) is crucial to optimize computed tomography (CT) image protocols while keeping the radiation dose as low as reasonably achievable. In the medical domain, IQA is based on how well an image provides a useful and efficient presentation necessary for physicians to make a diagnosis. Moreover, IQA results should be consistent with radiologists’ opinions on image quality, which is accepted as the gold standard for medical IQA. As such, the goals of medical IQA are greatly different from those of natural IQA. In addition, the lack of pristine reference images or radiologists’ opinions in a real-time clinical environment makes IQA challenging. Thus, no-reference IQA (NR-IQA) is more desirable in clinical settings than full-reference IQA (FR-IQA). Leveraging an innovative self-supervised training strategy for object detection models by detecting virtually inserted objects with geometrically simple forms, we propose a novel NR-IQA method, named deep detector IQA (D2IQA), that can automatically calculate the quantitative quality of CT images. Extensive experimental evaluations on clinical and anthropomorphic phantom CT images demonstrate that our D2IQA is capable of robustly computing perceptual image quality as it varies according to relative dose levels. Moreover, when considering the correlation between the evaluation results of IQA metrics and radiologists’ quality scores, our D2IQA is marginally superior to other NR-IQA metrics and even shows performance competitive with FR-IQA metrics.
4

Oszust, Mariusz. "No-Reference Image Quality Assessment with Local Gradient Orientations." Symmetry 11, no. 1 (January 16, 2019): 95. http://dx.doi.org/10.3390/sym11010095.

Abstract:
Image processing methods often introduce distortions, which affect the way an image is subjectively perceived by a human observer. To avoid inconvenient subjective tests in cases in which reference images are not available, it is desirable to develop an automatic no-reference image quality assessment (NR-IQA) technique. In this paper, a novel NR-IQA technique is proposed in which the distributions of local gradient orientations in image regions of different sizes are used to characterize an image. To evaluate the objective quality of an image, its luminance and chrominance channels are processed, as well as their high-order derivatives. Finally, statistics of used perceptual features are mapped to subjective scores by the support vector regression (SVR) technique. The extensive experimental evaluation on six popular IQA benchmark datasets reveals that the proposed technique is highly correlated with subjective scores and outperforms related state-of-the-art hand-crafted and deep learning approaches.
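Pipelines of this kind, hand-crafted perceptual features mapped to subjective scores by SVR, follow a common pattern. The sketch below is a simplified stand-in rather than the published descriptor: it uses a plain magnitude-weighted histogram of gradient orientations, and the toy images and MOS values are placeholders.

```python
# Generic "hand-crafted features + SVR" NR-IQA pipeline (simplified sketch).
import numpy as np
from sklearn.svm import SVR


def orientation_histogram(gray, bins=16):
    """Histogram of gradient orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(gray.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)             # range [-pi, pi]
    hist, _ = np.histogram(orientation, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-12)           # normalize to a distribution


# Toy training loop with random "images" and random MOS values, just to show
# the shape of the pipeline (replace with a real IQA database in practice).
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(40)]
mos = rng.uniform(1.0, 5.0, size=40)             # placeholder subjective scores

X = np.vstack([orientation_histogram(im) for im in images])
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, mos)
print("predicted quality:", model.predict(X[:3]))
```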
5

Ahmed, Ismail Taha, Chen Soong Der, Baraa Tareq Hammad, and Norziana Jamil. "Contrast-distorted image quality assessment based on curvelet domain features." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 3 (June 1, 2021): 2595. http://dx.doi.org/10.11591/ijece.v11i3.pp2595-2603.

Abstract:
Contrast is one of the most common forms of distortion. Most existing image quality assessment algorithms (IQAs) focus on images distorted by compression, noise and blurring. The reduced-reference image quality metric for contrast-changed images (RIQMC) and the no-reference image quality assessment (NR-IQA) method for contrast-distorted images (NR-IQA-CDI) have been created for such images. NR-IQA-CDI showed poor performance on two out of three image databases, where the Pearson correlation coefficient (PLCC) was only 0.5739 and 0.7623 on the TID2013 and CSIQ databases, respectively. Spatial domain features are the basis of the NR-IQA-CDI architecture. Therefore, in this paper, the spatial domain features are complemented with curvelet domain features in order to take advantage of the potent properties of the curvelet transform, such as its multiscale and multidirectional analysis, in extracting information from images. The experimental outcome, based on K-fold cross-validation (K ranging from 2 to 10) and statistical tests, showed that NR-IQA-CDI relying on curvelet domain features (NR-IQA-CDI-CvT) significantly surpasses the version relying on five spatial domain features.
6

Garcia Freitas, Pedro, Luísa da Eira, Samuel Santos, and Mylene Farias. "On the Application LBP Texture Descriptors and Its Variants for No-Reference Image Quality Assessment." Journal of Imaging 4, no. 10 (October 4, 2018): 114. http://dx.doi.org/10.3390/jimaging4100114.

Abstract:
Automatically assessing the quality of an image is a critical problem for a wide range of applications in the fields of computer vision and image processing. For example, many computer vision applications, such as biometric identification, content retrieval, and object recognition, rely on input images with a specific range of quality. Therefore, an effort has been made to develop image quality assessment (IQA) methods that are able to automatically estimate quality. Among the possible IQA approaches, No-Reference IQA (NR-IQA) methods are of fundamental interest, since they can be used in most real-time multimedia applications. NR-IQA methods are capable of assessing the quality of an image without using the reference (or pristine) image. In this paper, we investigate the use of texture descriptors in the design of NR-IQA methods. The premise is that visible impairments alter the statistics of texture descriptors, making it possible to estimate quality. To investigate whether this premise is valid, we analyze the use of a set of state-of-the-art Local Binary Patterns (LBP) texture descriptors in IQA methods. In particular, we present a comprehensive review with a detailed description of the considered methods. Additionally, we propose a framework for using texture descriptors in NR-IQA methods. Our experimental results indicate that, although not all texture descriptors are suitable for NR-IQA, many can be used for this purpose, achieving good accuracy with the advantage of low computational complexity.
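A minimal illustration of the premise that distortions alter LBP statistics is given below, assuming scikit-image's uniform LBP rather than the specific descriptor variants reviewed in the paper; the pristine/blurred image pair is synthetic.

```python
# Distortions change the distribution of Local Binary Patterns, so LBP
# histograms can serve as quality-aware features (illustrative sketch only).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern


def lbp_histogram(gray01, points=8, radius=1):
    """Uniform-LBP histogram of a grayscale image with values in [0, 1]."""
    gray_u8 = np.clip(gray01 * 255, 0, 255).astype(np.uint8)
    lbp = local_binary_pattern(gray_u8, points, radius, method="uniform")
    n_bins = points + 2                           # uniform patterns + "other"
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist


rng = np.random.default_rng(0)
pristine = rng.random((128, 128))
blurred = gaussian_filter(pristine, sigma=2.0)    # simulated distortion

# The two histograms differ noticeably, which is what an NR-IQA regressor
# would exploit when mapping LBP statistics to quality scores.
print("pristine:", np.round(lbp_histogram(pristine), 3))
print("blurred :", np.round(lbp_histogram(blurred), 3))
```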
7

Gu, Jie, Gaofeng Meng, Cheng Da, Shiming Xiang, and Chunhong Pan. "No-Reference Image Quality Assessment with Reinforcement Recursive List-Wise Ranking." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8336–43. http://dx.doi.org/10.1609/aaai.v33i01.33018336.

Abstract:
Opinion-unaware no-reference image quality assessment (NR-IQA) methods have received much interest recently because they do not require images with subjective scores for training. Unfortunately, it is a challenging task, and thus far no opinion-unaware methods have shown consistently better performance than the opinion-aware ones. In this paper, we propose an effective opinion-unaware NR-IQA method based on reinforcement recursive list-wise ranking. We formulate NR-IQA as a recursive list-wise ranking problem which aims to optimize the whole quality ordering directly. During training, the recursive ranking process can be modeled as a Markov decision process (MDP). The ranking list of images can be constructed by taking a sequence of actions, each of which refers to selecting an image for a specific position in the ranking list. Reinforcement learning is adopted to train the model parameters, in which no ground-truth quality scores or ranking lists are necessary for learning. Experimental results demonstrate the superior performance of our approach compared with existing opinion-unaware NR-IQA methods. Furthermore, our approach can compete with the most effective opinion-aware methods. It improves the state-of-the-art by over 2% on the CSIQ benchmark and outperforms most compared opinion-aware models on TID2013.
8

Varga, Domonkos. "No-Reference Image Quality Assessment with Convolutional Neural Networks and Decision Fusion." Applied Sciences 12, no. 1 (December 23, 2021): 101. http://dx.doi.org/10.3390/app12010101.

Abstract:
No-reference image quality assessment (NR-IQA) has always been a difficult research problem because digital images may suffer very diverse types of distortions and their contents are extremely various. Moreover, IQA is also a very hot topic in the research community since the number and role of digital images in everyday life is continuously growing. Recently, a huge amount of effort has been devoted to exploiting convolutional neural networks and other deep learning techniques for no-reference image quality assessment. Since deep learning relies on a massive amount of labeled data, utilizing pretrained networks has become very popular in the literature. In this study, we introduce a novel, deep learning-based NR-IQA architecture that relies on the decision fusion of multiple image quality scores coming from different types of convolutional neural networks. The main idea behind this scheme is that a diverse set of different types of networks is able to better characterize authentic image distortions than a single network. The experimental results show that our method can effectively estimate perceptual image quality on four large IQA benchmark databases containing either authentic or artificial distortions. These results are also confirmed in significance and cross database tests.
9

Yin, Guanghao, Wei Wang, Zehuan Yuan, Chuchu Han, Wei Ji, Shouqian Sun, and Changhu Wang. "Content-Variant Reference Image Quality Assessment via Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3134–42. http://dx.doi.org/10.1609/aaai.v36i3.20221.

Abstract:
Generally, humans are more skilled at perceiving differences between high-quality (HQ) and low-quality (LQ) images than at directly judging the quality of a single LQ image. This situation also applies to image quality assessment (IQA). Although recent no-reference (NR-IQA) methods have made great progress in predicting image quality free from the reference image, they still have the potential to achieve better performance, since HQ image information is not fully exploited. In contrast, full-reference (FR-IQA) methods tend to provide more reliable quality evaluation, but their practicality is limited by the requirement for pixel-level aligned reference images. To address this, we first propose the content-variant reference method via knowledge distillation (CVRKD-IQA). Specifically, we use non-aligned reference (NAR) images to introduce various prior distributions of high-quality images. Comparing the distribution differences between HQ and LQ images helps our model better assess image quality. Further, knowledge distillation transfers more HQ-LQ distribution difference information from the FR-teacher to the NAR-student and stabilizes CVRKD-IQA performance. Moreover, to fully mine the local-global combined information while achieving faster inference speed, our model directly processes multiple image patches from the input with an MLP-mixer. Cross-dataset experiments verify that our model can outperform all NAR/NR-IQA SOTAs and even reach performance comparable to FR-IQA methods on some occasions. Since content-variant and non-aligned reference HQ images are easy to obtain, our model can support more IQA applications with its robustness to content variations. Our code is available: https://github.com/guanghaoyin/CVRKD-IQA.
10

Gavrovska, Ana, Dragi Dujković, Andreja Samčović, Yuliya Golub, and Valery Starovoitov. "Quadratic fitting model in no-reference image quality assessment." Telfor Journal 15, no. 2 (2023): 32–37. http://dx.doi.org/10.5937/telfor2302032g.

Abstract:
The perceptual quality of an image is affected by distortions introduced during compression, delivery and storage. Distortions also affect automatic image quality assessment (IQA), which needs to be highly correlated with subjective scores. In the absence of a reference, which is a typical scenario in practice, no-reference (NR) metrics are necessary for quality measurement. Such methods have recently been proposed, and they employ natural scene statistics (NSS). The experimental analysis performed in this paper considers two fitting (regression) models for several NR-IQA metrics across different distortion types. The results show the quadratic model to be promising for establishing relations in terms of difference mean opinion score and Shannon entropy.
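The quadratic fitting step itself is short with NumPy. The snippet below uses synthetic metric outputs and DMOS values (assumptions, not the paper's data) to show how a degree-2 polynomial maps raw metric scores before PLCC is computed.

```python
# Sketch of a quadratic fitting model between raw NR metric outputs and
# DMOS-like values, with synthetic placeholder data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
metric = rng.uniform(0.0, 1.0, size=100)                  # raw NR-IQA outputs
dmos = 20 + 60 * metric - 25 * metric**2 + rng.normal(0, 3, size=100)

coeffs = np.polyfit(metric, dmos, deg=2)                  # quadratic model
fitted = np.polyval(coeffs, metric)

plcc_linear = pearsonr(metric, dmos)[0]
plcc_quadratic = pearsonr(fitted, dmos)[0]
print("quadratic coefficients:", np.round(coeffs, 2))
print("PLCC before / after fitting: %.3f / %.3f" % (plcc_linear, plcc_quadratic))
```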
11

Varga, Domonkos. "No-Reference Image Quality Assessment with Multi-Scale Orderless Pooling of Deep Features." Journal of Imaging 7, no. 7 (July 10, 2021): 112. http://dx.doi.org/10.3390/jimaging7070112.

Abstract:
The goal of no-reference image quality assessment (NR-IQA) is to evaluate the perceptual quality of digital images without using their distortion-free, pristine counterparts. NR-IQA is an important part of multimedia signal processing since digital images can undergo a wide variety of distortions during storage, compression, and transmission. In this paper, we propose a novel architecture that extracts deep features from the input image at multiple scales to improve the effectiveness of feature extraction for NR-IQA using convolutional neural networks. Specifically, the proposed method extracts deep activations for local patches at multiple scales and maps them onto perceptual quality scores with the help of trained Gaussian process regressors. Extensive experiments demonstrate that the introduced algorithm performs favorably against the state-of-the-art methods on three large benchmark datasets with authentic distortions (LIVE In the Wild, KonIQ-10k, and SPAQ).
12

Yan, Chenggang, Tong Teng, Yutao Liu, Yongbing Zhang, Haoqian Wang, and Xiangyang Ji. "Precise No-Reference Image Quality Evaluation Based on Distortion Identification." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 3s (October 31, 2021): 1–21. http://dx.doi.org/10.1145/3468872.

Abstract:
The difficulty of no-reference image quality assessment (NR IQA) often lies in the lack of knowledge about the distortion in the image, which makes quality assessment blind and thus inefficient. To tackle such issue, in this article, we propose a novel scheme for precise NR IQA, which includes two successive steps, i.e., distortion identification and targeted quality evaluation. In the first step, we employ the well-known Inception-ResNet-v2 neural network to train a classifier that classifies the possible distortion in the image into the four most common distortion types, i.e., Gaussian white noise (WN), Gaussian blur (GB), jpeg compression (JPEG), and jpeg2000 compression (JP2K). Specifically, the deep neural network is trained on the large-scale Waterloo Exploration database, which ensures the robustness and high performance of distortion classification. In the second step, after determining the distortion type of the image, we then design a specific approach to quantify the image distortion level, which can estimate the image quality specially and more precisely. Extensive experiments performed on LIVE, TID2013, CSIQ, and Waterloo Exploration databases demonstrate that (1) the accuracy of our distortion classification is higher than that of the state-of-the-art distortion classification methods, and (2) the proposed NR IQA method outperforms the state-of-the-art NR IQA methods in quantifying the image quality.
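The two-step scheme (identify the distortion, then apply a distortion-specific quality estimator) can be sketched with off-the-shelf models. The code below is only a schematic: scikit-learn classifiers and regressors on random placeholder feature vectors stand in for the Inception-ResNet-v2 classifier and the targeted quality measures described in the paper.

```python
# Schematic two-stage NR IQA: distortion identification, then a per-distortion
# quality regressor.  All features, labels, and scores are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVR

DISTORTIONS = ["WN", "GB", "JPEG", "JP2K"]

rng = np.random.default_rng(0)
X = rng.random((200, 16))                          # placeholder image features
y_type = rng.integers(0, len(DISTORTIONS), 200)    # placeholder distortion labels
y_mos = rng.uniform(1.0, 5.0, 200)                 # placeholder quality scores

# Step 1: distortion identification.
classifier = LogisticRegression(max_iter=1000).fit(X, y_type)

# Step 2: one targeted quality regressor per distortion type.
regressors = {k: SVR().fit(X[y_type == k], y_mos[y_type == k])
              for k in range(len(DISTORTIONS))}


def predict_quality(features):
    """Classify the distortion, then score quality with the matching model."""
    k = int(classifier.predict(features[None, :])[0])
    score = float(regressors[k].predict(features[None, :])[0])
    return DISTORTIONS[k], score


print(predict_quality(X[0]))
```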
13

Stępień, Igor, and Mariusz Oszust. "A Brief Survey on No-Reference Image Quality Assessment Methods for Magnetic Resonance Images." Journal of Imaging 8, no. 6 (June 4, 2022): 160. http://dx.doi.org/10.3390/jimaging8060160.

Abstract:
No-reference image quality assessment (NR-IQA) methods automatically and objectively predict the perceptual quality of images without access to a reference image. Therefore, due to the lack of pristine images in most medical image acquisition systems, they play a major role in supporting the examination of resulting images and may affect subsequent treatment. Their usage is particularly important in magnetic resonance imaging (MRI) characterized by long acquisition times and a variety of factors that influence the quality of images. In this work, a survey covering recently introduced NR-IQA methods for the assessment of MR images is presented. First, typical distortions are reviewed and then popular NR methods are characterized, taking into account the way in which they describe MR images and create quality models for prediction. The survey also includes protocols used to evaluate the methods and popular benchmark databases. Finally, emerging challenges are outlined along with an indication of the trends towards creating accurate image prediction models.
14

Ye, Zhongchang, Xin Ye, and Zhonghua Zhao. "Hybrid No-Reference Quality Assessment for Surveillance Images." Information 13, no. 12 (December 16, 2022): 588. http://dx.doi.org/10.3390/info13120588.

Abstract:
Intelligent video surveillance (IVS) technology is widely used in various security systems. However, quality degradation in surveillance images (SIs) may affect its performance on vision-based tasks, leading to the difficulties in the IVS system extracting valid information from SIs. In this paper, we propose a hybrid no-reference image quality assessment (NR IQA) model for SIs that can help to identify undesired distortions and provide useful guidelines for IVS technology. Specifically, we first extract two main types of quality-aware features: the low-level visual features related to various distortions, and the high-level semantic information, which is extracted by a state-of-the-art (SOTA) vision transformer backbone. Then, we fuse these two kinds of features into the final quality-aware feature vector, which is mapped into the quality index through the feature regression module. Our experimental results on two surveillance content quality databases demonstrate that the proposed model achieves the best performance compared to the SOTA on NR IQA metrics.
15

Fu, Hao, Guojun Liu, Xiaoqin Yang, Lili Wei, and Lixia Yang. "Two Low-Level Feature Distributions Based No Reference Image Quality Assessment." Applied Sciences 12, no. 10 (May 14, 2022): 4975. http://dx.doi.org/10.3390/app12104975.

Abstract:
No-reference image quality assessment (NR IQA) aims to develop quantitative measures to automatically and accurately estimate perceptual image quality without any prior information about the reference image. In this paper, we introduce a two low-level feature distributions (TLLFD)-based method for NR IQA. Different from deep learning methods, the proposed method characterizes image quality with the distributions of low-level features, thus it has few parameters, a simple model, high efficiency, and strong robustness. First, the texture change of the distorted image is captured by the weighted histogram of a generalized local binary pattern. Second, the Weibull distribution of the gradient is extracted to represent the structural change of the distorted image. Furthermore, support vector regression is adopted to model the complex nonlinear relationship between the feature space and the quality measure. Finally, numerical tests are performed on the LIVE, CSIQ, MICT, and TID2008 standard databases for five different distortion categories: JPEG2000 (JP2K), JPEG, White Noise (WN), Gaussian Blur (GB), and Fast Fading (FF). The experimental results indicate that the TLLFD method achieves superior performance and strong generalization for image quality prediction as compared to state-of-the-art full-reference, no-reference, and even deep learning IQA methods.
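The Weibull-of-gradients feature mentioned above can be computed directly with SciPy. The sketch below fits shape and scale parameters to the gradient magnitudes of a toy pristine image and a blurred copy (both assumptions), showing how the parameters shift under distortion; it is not the full TLLFD feature set.

```python
# Fit a Weibull distribution to gradient magnitudes; its parameters shift
# when the image is distorted (here, blurred).  Illustrative sketch only.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import weibull_min


def weibull_gradient_params(gray):
    """Return (shape, scale) of a Weibull fit to the gradient magnitudes."""
    gy, gx = np.gradient(gray.astype(np.float64))
    magnitude = np.hypot(gx, gy).ravel()
    magnitude = magnitude[magnitude > 1e-8]       # Weibull support is x > 0
    shape, _, scale = weibull_min.fit(magnitude, floc=0.0)
    return shape, scale


rng = np.random.default_rng(0)
pristine = rng.random((128, 128))
blurred = gaussian_filter(pristine, sigma=2.0)

print("pristine shape/scale:", np.round(weibull_gradient_params(pristine), 3))
print("blurred  shape/scale:", np.round(weibull_gradient_params(blurred), 3))
```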
16

Guan, Xiaodi, Fan Li, and Lijun He. "Quality Assessment on Authentically Distorted Images by Expanding Proxy Labels." Electronics 9, no. 2 (February 3, 2020): 252. http://dx.doi.org/10.3390/electronics9020252.

Abstract:
In this paper, we propose a no-reference image quality assessment (NR-IQA) approach towards authentically distorted images, based on expanding proxy labels. In order to distinguish from the human labels, we define the quality score, which is generated by using a traditional NR-IQA algorithm, as “proxy labels”. “Proxy” means that the objective results are obtained by computer after the extraction and assessment of the image features, instead of human judging. To solve the problem of limited image quality assessment (IQA) dataset size, we adopt a cascading transfer-learning method. First, we obtain large numbers of proxy labels which denote the quality score of authentically distorted images by using a traditional no-reference IQA method. Then the deep network is trained by the proxy labels, in order to learn IQA-related knowledge from the amounts of images with their scores. Ultimately, we use fine-tuning to inherit knowledge represented in the trained network. During the procedure, the mapping relationship fits in with human visual perception closer. The experimental results demonstrate that the proposed algorithm shows an outstanding performance as compared with the existing algorithms. On the LIVE In the Wild Image Quality Challenge database and KonIQ-10k database (two standard databases for authentically distorted image quality assessment), the algorithm realized good consistency between human visual perception and the predicted quality score of authentically distorted images.
17

Varga, Domonkos. "No-Reference Image Quality Assessment Based on the Fusion of Statistical and Perceptual Features." Journal of Imaging 6, no. 8 (July 30, 2020): 75. http://dx.doi.org/10.3390/jimaging6080075.

Abstract:
The goal of no-reference image quality assessment (NR-IQA) is to predict the quality of an image as perceived by human observers without using any pristine, reference images. In this study, an NR-IQA algorithm is proposed which is driven by a novel feature vector containing statistical and perceptual features. Different from other methods, normalized local fractal dimension distribution and normalized first digit distributions in the wavelet and spatial domains are incorporated into the statistical features. Moreover, powerful perceptual features, such as colorfulness, dark channel feature, entropy, and mean of phase congruency image, are also incorporated to the proposed model. Experimental results on five large publicly available databases (KADID-10k, ESPL-LIVE HDR, CSIQ, TID2013, and TID2008) show that the proposed method is able to outperform other state-of-the-art methods.
18

Ahmed, Ismail Taha, Chen Soong Der, Norziana Jamil, and Mohamad Afendee Mohamed. "Improve of contrast-distorted image quality assessment based on convolutional neural networks." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 6 (December 1, 2019): 5604. http://dx.doi.org/10.11591/ijece.v9i6.pp5604-5614.

Abstract:
Many image quality assessment algorithms (IQAs) have been developed during the past decade. However, most of them are designed for images distorted by compression, noise and blurring. There are very few IQAs designed specifically for Contrast-Distorted Images (CDI), e.g. the Reduced-reference Image Quality Metric for Contrast-changed images (RIQMC) and NR-IQA for Contrast-Distorted Images (NR-IQA-CDI). The existing NR-IQA-CDI relies on handcrafted features designed by humans, because a considerable level of skill, domain expertise and effort is required to design good handcrafted features. Recently, there has been great advancement in machine learning with the introduction of deep learning through Convolutional Neural Networks (CNN), which enable machines to learn good features from raw images automatically without any human intervention. Therefore, it is tempting to explore ways to transform the existing NR-IQA-CDI from using handcrafted features to machine-crafted features using deep learning, specifically Convolutional Neural Networks (CNN). The results show that NR-IQA-CDI based on a non-pre-trained CNN (NR-IQA-CDI-NonPreCNN) significantly outperforms those based on handcrafted features. In addition to showing the best performance, NR-IQA-CDI-NonPreCNN also enjoys the advantage of zero human intervention in designing features, making it the most attractive solution for NR-IQA-CDI.
19

Stępień, Igor, and Mariusz Oszust. "No-Reference Quality Assessment of Pan-Sharpening Images with Multi-Level Deep Image Representations." Remote Sensing 14, no. 5 (February 24, 2022): 1119. http://dx.doi.org/10.3390/rs14051119.

Abstract:
The Pan-Sharpening (PS) techniques provide a better visualization of a multi-band image using the high-resolution single-band image. To support their development and evaluation, in this paper, a novel, accurate, and automatic No-Reference (NR) PS Image Quality Assessment (IQA) method is proposed. In the method, responses of two complementary network architectures in a form of extracted multi-level representations of PS images are employed as quality-aware information. Specifically, high-dimensional data are separately extracted from the layers of the networks and further processed with the Kernel Principal Component Analysis (KPCA) to obtain features used to create a PS quality model. Extensive experimental comparison of the method on the large database of PS images against the state-of-the-art techniques, including popular NR methods adapted in this study to the PS IQA, indicates its superiority in terms of typical criteria.
20

Ryu, Jihyoung. "Improved Image Quality Assessment by Utilizing Pre-Trained Architecture Features with Unified Learning Mechanism." Applied Sciences 13, no. 4 (February 19, 2023): 2682. http://dx.doi.org/10.3390/app13042682.

Abstract:
The purpose of no-reference image quality assessment (NR-IQA) is to measure perceived image quality in line with subjective judgments; however, due to the lack of a clean reference image, this is a complicated and unresolved challenge. Massive new IQA datasets have facilitated the creation of deep learning-based image quality measurements. We present a unique model to handle the NR-IQA challenge in this research by employing a hybrid strategy that leverages a pre-trained CNN model and a unified learning mechanism that extracts both local and non-local characteristics from the input patch. Deep analysis of the proposed framework shows that the model uses features and a mechanism that improve the monotonicity relationship between objective and subjective ratings. The intermediary goal was mapped to a quality score using a regression architecture. To extract various feature maps, a deep architecture with an adaptive receptive field was used. Analyses on the biggest NR-IQA benchmark datasets demonstrate that the suggested technique outperforms current state-of-the-art NR-IQA measures.
21

Varga, Domonkos. "A Human Visual System Inspired No-Reference Image Quality Assessment Method Based on Local Feature Descriptors." Sensors 22, no. 18 (September 7, 2022): 6775. http://dx.doi.org/10.3390/s22186775.

Abstract:
Objective quality assessment of natural images plays a key role in many fields related to imaging and sensor technology. Thus, this paper introduces an innovative quality-aware feature extraction method for no-reference image quality assessment (NR-IQA). To be more specific, a sequence of various HVS-inspired filters was applied to the color channels of an input image to enhance those statistical regularities in the image to which the human visual system is sensitive. From the obtained feature maps, the statistics of a wide range of local feature descriptors were extracted to compile quality-aware features, since they treat images from the human visual system’s point of view. To prove the efficiency of the proposed method, it was compared to 16 state-of-the-art NR-IQA techniques on five large benchmark databases, i.e., CLIVE, KonIQ-10k, SPAQ, TID2013, and KADID-10k. It was demonstrated that the proposed method is superior to the state-of-the-art in terms of three different performance indices.
22

Ryu, Jihyoung. "A Visual Saliency-Based Neural Network Architecture for No-Reference Image Quality Assessment." Applied Sciences 12, no. 19 (September 23, 2022): 9567. http://dx.doi.org/10.3390/app12199567.

Abstract:
Deep learning has recently been used to study blind image quality assessment (BIQA) in great detail. Yet, the scarcity of high-quality algorithms prevents them from being developed further and used in real-time scenarios. Patch-based techniques have been used to forecast the quality of an image, but they typically assign the picture quality score to individual patches of the image. As a result, there would be a lot of misleading scores coming from patches. Some regions of the image are important and can contribute strongly toward the right prediction of its quality. To prevent outlier regions, we suggest a technique with a visual saliency module which passes only the important regions to the neural network and allows the network to learn only the information required to predict the quality. The neural network architecture used in this study is Inception-ResNet-v2. We assess the proposed strategy using a benchmark database (KADID-10k) to show its efficacy. The outcome demonstrates better performance compared with certain popular no-reference IQA (NR-IQA) and full-reference IQA (FR-IQA) approaches. This technique is intended to be utilized to estimate the quality of images acquired in real time from drone imagery.
23

Abdalmajeed, Saifeldeen, and Jiao Shuhong. "Using the Natural Scenes’ Edges for Assessing Image Quality Blindly and Efficiently." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/389504.

Abstract:
Two real blind/no-reference (NR) image quality assessment (IQA) algorithms in the spatial domain are developed. To measure image quality, the introduced approach uses an unprecedented concept for gathering a set of novel features based on edges of natural scenes. The enhanced sensitivity of the human eye to the information carried by edge and contour of an image supports this claim. The effectiveness of the proposed technique in quantifying image quality has been studied. The gathered features are formed using both Weibull distribution statistics and two sharpness functions to devise two separate NR IQA algorithms. The presented algorithms do not need training on databases of human judgments or even prior knowledge about expected distortions, so they are real NR IQA algorithms. In contrast to the most general no-reference IQA, the model used for this study is generic and has been created in such a way that it is not specified to any particular distortion type. When testing the proposed algorithms on LIVE database, experiments show that they correlate well with subjective opinion scores. They also show that the introduced methods significantly outperform the popular full-reference peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) methods. Besides they outperform the recently developed NR natural image quality evaluator (NIQE) model.
24

Mahmood, Saifeldeen Abdalmajeed. "Three Different Features Based Metric To Assess Image Quality Blindly." FES Journal of Engineering Sciences 8, no. 2 (May 23, 2020): 97–103. http://dx.doi.org/10.52981/fjes.v8i2.121.

Abstract:
When creating an image quality assessment (IQA) metric, there is no guarantee that all distortion types are available. Non-distortion-specific blind/no-reference (NR) IQA algorithms mostly need prior knowledge about the anticipated distortions. This paper introduces a generic and distortion-unaware (DU) approach for no-reference (NR) IQA. The approach uses three different sets of measurement features derived from the gist of natural scenes (NS): log-derivatives of the parameters of a generalized Gaussian distribution model, two sharpness functions, and a Weibull distribution. All features were analyzed and compared to examine their performance. When calibrating the performance of the proposed features on the LIVE database, experiments show that they contribute well to state-of-the-art IQA and that they outperform the popular full-reference peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) methods. They also show that the sharpness features are the best of the three feature categories when assessing both prediction monotonicity and prediction accuracy. In addition, the asymmetric generalized Gaussian distribution (AGGD)-based features have the best correlation with differential mean opinion score.
25

LU, WEN, LIHUO HE, WENJIAN TANG, FEI GAO, and WEILONG HOU. "A NOVEL COMPRESSED IMAGES QUALITY METRIC." International Journal of Image and Graphics 11, no. 02 (April 2011): 281–92. http://dx.doi.org/10.1142/s021946781100410x.

Abstract:
As a performance indicator for image processing algorithms and systems, image quality assessment (IQA) has attracted the attention of many researchers. Aiming at the widely used compression standards JPEG and JPEG2000, we propose a new no-reference (NR) IQA metric for compressed images. This metric exploits the causes of distortion introduced by JPEG and JPEG2000, employs the directional discrete cosine transform (DDCT) to obtain the detail and directional information of the images, and incorporates visual perception to obtain the image quality index. Experimental results show that the proposed metric not only has outstanding performance on JPEG and JPEG2000 images, but is also applicable to other types of artifacts.
26

Ullah, Hayat, Muhammad Irfan, Kyungjin Han, and Jong Weon Lee. "DLNR-SIQA: Deep Learning-Based No-Reference Stitched Image Quality Assessment." Sensors 20, no. 22 (November 12, 2020): 6457. http://dx.doi.org/10.3390/s20226457.

Abstract:
Due to recent advancements in virtual reality (VR) and augmented reality (AR), the demand for high-quality immersive content is a primary concern for production companies and consumers. Similarly, the recent record-breaking performance of deep learning in various domains of artificial intelligence has drawn the attention of researchers to different fields of computer vision. To ensure the quality of immersive media content using these advanced deep learning technologies, several learning-based stitched image quality assessment methods have been proposed with reasonable performance. However, these methods are unable to localize, segment, and extract the stitching errors in panoramic images. Further, these methods use computationally complex procedures for quality assessment of panoramic images. With these motivations, in this paper, we propose a novel three-fold Deep Learning-based No-Reference Stitched Image Quality Assessment (DLNR-SIQA) approach to evaluate the quality of immersive content. In the first fold, we fine-tuned the state-of-the-art Mask R-CNN (Regional Convolutional Neural Network) on various manually annotated cropped images containing stitching errors from two publicly available datasets. In the second fold, we segment and localize various stitching errors present in the immersive content. Finally, based on the distorted regions present in the immersive content, we measure the overall quality of the stitched images. Unlike existing methods that only measure the quality of the images using deep features, our proposed method can efficiently segment and localize stitching errors and estimate the image quality by investigating the segmented regions. We also carried out an extensive qualitative and quantitative comparison with full-reference image quality assessment (FR-IQA) and no-reference image quality assessment (NR-IQA) methods on two publicly available datasets, where the proposed system outperformed the existing state-of-the-art techniques.
27

Zhang, Run, and Yongbin Wang. "Natural Image Quality Assessment Based on Visual Biological Cognitive Mechanism." International Journal of Software Innovation 7, no. 1 (January 2019): 1–26. http://dx.doi.org/10.4018/ijsi.2019010101.

Abstract:
Focusing on the main problems in no-reference natural image quality assessment (NR-IQA), the authors propose a more universal, efficient and integrated solution based on visual biological cognitive mechanisms. First, the authors present an inspiring visual cognitive computing model (IVCCM) on the basis of visual heuristic principles. Second, the authors put forward an asymmetric generalized Gaussian mixture distribution model (AGGMD) that can describe the probability density distribution of images more precisely. Third, the authors extract quality-aware multiscale local invariant feature (QAMLIF) statistics from natural images and form quality-aware uniform feature descriptors (QAUFD) by clustering and encoding the visual quality features. Fourth, the authors build a topic semantic model and realize the solution with Bayesian inference using IVCCM, AGGMD and QAUFD to implement NR-IQA. Theoretical research and experimental results show that the proposed solution performs better with the biological cognitive mechanism.
28

Wang, Yue, Zeng Gang Lin, and Zi Cheng Liao. "Image Quality Assessment Based on Region of Interest." Applied Mechanics and Materials 596 (July 2014): 350–54. http://dx.doi.org/10.4028/www.scientific.net/amm.596.350.

Abstract:
In this paper, a new no-reference (NR) image quality assessment (IQA) method based on the pointwise statistics of local normalized luminance signals using region of interest (ROI) processing is proposed. The algorithm first extracts the ROI, which relates to human subjectivity, by using the image gradient and phase congruency, and then extracts image quality features in the spatial domain. Notably, most present IQA methods mainly focus on predicting image quality with respect to human perception; yet, in some other image domains, the final receiver of a digital image may not be a human. Thus, we propose a method which can assess image quality relative to an edge detection algorithm. In addition, experimental results on the LIVE database are provided to demonstrate its superiority compared to significant image quality metrics.
29

Han, Lintao, Hengyi Lv, Yuchen Zhao, Hailong Liu, Guoling Bi, Zhiyong Yin, and Yuqiang Fang. "Conv-Former: A Novel Network Combining Convolution and Self-Attention for Image Quality Assessment." Sensors 23, no. 1 (December 30, 2022): 427. http://dx.doi.org/10.3390/s23010427.

Abstract:
To address the challenge of no-reference image quality assessment (NR-IQA) for authentically and synthetically distorted images, we propose a novel network called the Combining Convolution and Self-Attention for Image Quality Assessment network (Conv-Former). Our model uses a multi-stage transformer architecture similar to that of ResNet-50 to represent appropriate perceptual mechanisms in image quality assessment (IQA) to build an accurate IQA model. We employ adaptive learnable position embedding to handle images with arbitrary resolution. We propose a new transformer block (TB) by taking advantage of transformers to capture long-range dependencies, and of local information perception (LIP) to model local features for enhanced representation learning. The module increases the model’s understanding of the image content. Dual path pooling (DPP) is used to keep more contextual image quality information in feature downsampling. Experimental results verify that Conv-Former not only outperforms the state-of-the-art methods on authentic image databases, but also achieves competing performances on synthetic image databases which demonstrate the strong fitting performance and generalization capability of our proposed model.
30

Stępień, Igor, Rafał Obuchowicz, Adam Piórkowski, and Mariusz Oszust. "Fusion of Deep Convolutional Neural Networks for No-Reference Magnetic Resonance Image Quality Assessment." Sensors 21, no. 4 (February 3, 2021): 1043. http://dx.doi.org/10.3390/s21041043.

Abstract:
The quality of magnetic resonance images may influence the diagnosis and subsequent treatment. Therefore, in this paper, a novel no-reference (NR) magnetic resonance image quality assessment (MRIQA) method is proposed. In the approach, deep convolutional neural network architectures are fused and jointly trained to better capture the characteristics of MR images. Then, to improve the quality prediction performance, the support vector machine regression (SVR) technique is employed on the features generated by fused networks. In the paper, several promising network architectures are introduced, investigated, and experimentally compared with state-of-the-art NR-IQA methods on two representative MRIQA benchmark datasets. One of the datasets is introduced in this work. As the experimental validation reveals, the proposed fusion of networks outperforms related approaches in terms of correlation with subjective opinions of a large number of experienced radiologists.
31

Cui, Yueli. "No-Reference Image Quality Assessment Based on Dual-Domain Feature Fusion." Entropy 22, no. 3 (March 17, 2020): 344. http://dx.doi.org/10.3390/e22030344.

Abstract:
Image quality assessment (IQA) aims to devise computational models to evaluate image quality in a perceptually consistent manner. In this paper, a novel no-reference image quality assessment model based on dual-domain feature fusion is proposed, dubbed as DFF-IQA. Firstly, in the spatial domain, several features about weighted local binary pattern, naturalness and spatial entropy are extracted, where the naturalness features are represented by fitting parameters of the generalized Gaussian distribution. Secondly, in the frequency domain, the features of spectral entropy, oriented energy distribution, and fitting parameters of asymmetrical generalized Gaussian distribution are extracted. Thirdly, the features extracted in the dual-domain are fused to form the quality-aware feature vector. Finally, quality regression process by random forest is conducted to build the relationship between image features and quality score, yielding a measure of image quality. The resulting algorithm is tested on the LIVE database and compared with competing IQA models. Experimental results on the LIVE database indicate that the proposed DFF-IQA method is more consistent with the human visual system than other competing IQA methods.
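Two of the simpler ingredients named above, spatial entropy and spectral entropy, can be sketched as follows; the images, scores, and random-forest settings are placeholders, and the full DFF-IQA feature set is not reproduced.

```python
# Sketch of dual-domain entropy features fed to a random-forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())


def dual_domain_features(gray):
    gray = gray.astype(np.float64)
    # Spatial entropy from the normalized intensity histogram.
    hist, _ = np.histogram(gray, bins=64)
    spatial = shannon_entropy(hist / hist.sum())
    # Spectral entropy from the normalized DFT magnitude spectrum.
    spectrum = np.abs(np.fft.fft2(gray))
    spectral = shannon_entropy((spectrum / spectrum.sum()).ravel())
    return np.array([spatial, spectral])


rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(50)]
mos = rng.uniform(1.0, 5.0, size=50)              # placeholder subjective scores

X = np.vstack([dual_domain_features(im) for im in images])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, mos)
print("predicted quality:", np.round(model.predict(X[:3]), 2))
```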
32

Qian, Qi, and Qingbing Sang. "No-reference image quality assessment based on automatic machine learning." ITM Web of Conferences 45 (2022): 01034. http://dx.doi.org/10.1051/itmconf/20224501034.

Abstract:
Different deep learning applications require different features, so it is necessary to design specialized neural network structures. However, the design of such structures largely depends on the researchers' subject knowledge and on a large number of experiments, resulting in a huge waste of manpower. Therefore, in the field of Image Quality Assessment (IQA), the authors propose a method that applies Neural Architecture Search (NAS) to IQA. Using the Differentiable Architecture Search (DARTS) algorithm, the structure of a modular neural network unit is searched by stochastic gradient descent, with better training performance obtained by relaxing the candidate operations into a continuous space. The idea of weight sharing is also used for further savings. The authors use the mainstream IQA database LIVE to search for network structures, and retrain and validate the searched structures on four datasets. A large number of experiments show that the model obtained by the search achieves the effect of the best algorithms at this stage and reaches a certain level of quality. The main contributions of this paper are: adapting the DARTS algorithm to the regression problem, and introducing the Neural Architecture Search algorithm into the IQA field with experimental verification.
33

Chandler, Damon M. "Seven Challenges in Image Quality Assessment: Past, Present, and Future Research." ISRN Signal Processing 2013 (February 6, 2013): 1–53. http://dx.doi.org/10.1155/2013/905685.

Abstract:
Image quality assessment (IQA) has been a topic of intense research over the last several decades. With each year comes an increasing number of new IQA algorithms, extensions of existing IQA algorithms, and applications of IQA to other disciplines. In this article, I first provide an up-to-date review of research in IQA, and then I highlight several open challenges in this field. The first half of this article discusses key properties of visual perception, image quality databases, and existing full-reference, no-reference, and reduced-reference IQA algorithms. Yet, despite the remarkable progress that has been made in IQA, many fundamental challenges remain largely unsolved. The second half of this article highlights some of these challenges. I specifically discuss challenges related to the lack of complete perceptual models for natural images, compound and suprathreshold distortions, and multiple distortions, as well as the interactive effects of these distortions on the images. I also discuss challenges related to IQA of images containing nontraditional distortions, and challenges related to computational efficiency. The goal of this article is not only to help practitioners and researchers keep abreast of the recent advances in IQA, but also to raise awareness of the key limitations of current IQA knowledge.
34

Varga, Domonkos. "No-Reference Quality Assessment of Authentically Distorted Images Based on Local and Global Features." Journal of Imaging 8, no. 6 (June 19, 2022): 173. http://dx.doi.org/10.3390/jimaging8060173.

Abstract:
With the development of digital imaging techniques, image quality assessment methods are receiving more attention in the literature. Since distortion-free versions of camera images in many practical, everyday applications are not available, the need for effective no-reference image quality assessment algorithms is growing. Therefore, this paper introduces a novel no-reference image quality assessment algorithm for the objective evaluation of authentically distorted images. Specifically, we apply a broad spectrum of local and global feature vectors to characterize the variety of authentic distortions. Among the employed local features, the statistics of popular local feature descriptors, such as SURF, FAST, BRISK, or KAZE, are proposed for NR-IQA; other features are also introduced to boost the performances of local features. The proposed method was compared to 12 other state-of-the-art algorithms on popular and accepted benchmark datasets containing RGB images with authentic distortions (CLIVE, KonIQ-10k, and SPAQ). The introduced algorithm significantly outperforms the state-of-the-art in terms of correlation with human perceptual quality ratings.
35

Gupta, Praful, Christos Bampis, Jack Glover, Nicholas Paulter, and Alan Bovik. "Multivariate Statistical Approach to Image Quality Tasks." Journal of Imaging 4, no. 10 (October 12, 2018): 117. http://dx.doi.org/10.3390/jimaging4100117.

Abstract:
Many existing natural scene statistics-based no reference image quality assessment (NR IQA) algorithms employ univariate parametric distributions to capture the statistical inconsistencies of bandpass distorted image coefficients. Here, we propose a multivariate model of natural image coefficients expressed in the bandpass spatial domain that has the potential to capture higher order correlations that may be induced by the presence of distortions. We analyze how the parameters of the multivariate model are affected by different distortion types, and we show their ability to capture distortion-sensitive image quality information. We also demonstrate the violation of Gaussianity assumptions that occur when locally estimating the energies of distorted image coefficients. Thus, we propose a generalized Gaussian-based local contrast estimator as a way to implement non-linear local gain control, which facilitates the accurate modeling of both pristine and distorted images. We integrate the novel approach of generalized contrast normalization with multivariate modeling of bandpass image coefficients into a holistic NR IQA model, which we refer to as multivariate generalized contrast normalization (MVGCN). We demonstrate the improved performance of MVGCN on quality-relevant tasks on multiple imaging modalities, including visible light image quality prediction and task success prediction on distorted X-ray images.
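As context for the bandpass statistics discussed above, the univariate baseline (mean-subtracted, contrast-normalized coefficients fitted with a generalized Gaussian) can be written compactly; the multivariate MVGCN model itself is not reproduced here, and the window size and toy image are assumptions.

```python
# Univariate NSS baseline: MSCN coefficients fitted with a generalized Gaussian.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import gennorm


def mscn(gray, sigma=7.0 / 6.0, eps=1.0):
    """Mean-subtracted, contrast-normalized coefficients of a grayscale image."""
    gray = gray.astype(np.float64)
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray * gray, sigma) - mu * mu
    return (gray - mu) / (np.sqrt(np.clip(var, 0.0, None)) + eps)


rng = np.random.default_rng(0)
image = rng.random((128, 128)) * 255.0
coeffs = mscn(image).ravel()

# beta < 2 indicates heavier-than-Gaussian tails; distortions shift beta/scale.
beta, loc, scale = gennorm.fit(coeffs)
print("GGD shape (beta): %.3f, scale: %.3f" % (beta, scale))
```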
36

Gao, Guoqing, Lingxiao Li, Hao Chen, Ning Jiang, Shuqi Li, Qing Bian, Hua Bao, and Changhui Rao. "No-Reference Quality Assessment of Extended Target Adaptive Optics Images Using Deep Neural Network." Sensors 24, no. 1 (December 19, 2023): 1. http://dx.doi.org/10.3390/s24010001.

Abstract:
This paper proposes a supervised deep neural network model for accomplishing highly efficient image quality assessment (IQA) for adaptive optics (AO) images. The AO imaging systems based on ground-based telescopes suffer from residual atmospheric turbulence, tracking error, and photoelectric noise, which can lead to varying degrees of image degradation, making image processing challenging. Currently, assessing the quality and selecting frames of AO images depend on either traditional IQA methods or manual evaluation by experienced researchers, neither of which is entirely reliable. The proposed network is trained by leveraging the similarity between the point spread function (PSF) of the degraded image and the Airy spot as its supervised training instead of relying on the features of the degraded image itself as a quality label. This approach is reflective of the relationship between the degradation factors of the AO imaging process and the image quality and does not require the analysis of the image’s specific feature or degradation model. The simulation test data show a Spearman’s rank correlation coefficient (SRCC) of 0.97, and our method was also validated using actual acquired AO images. The experimental results indicate that our method is more accurate in evaluating AO image quality compared to traditional IQA methods.
37

Gu, Ke, Guangtao Zhai, Xiaokang Yang, and Wenjun Zhang. "No-Reference Stereoscopic IQA Approach: From Nonlinear Effect to Parallax Compensation." Journal of Electrical and Computer Engineering 2012 (2012): 1–12. http://dx.doi.org/10.1155/2012/436031.

Abstract:
The last decade has seen a booming of the applications of stereoscopic images/videos and the corresponding technologies, such as 3D modeling, reconstruction, and disparity estimation. However, only a very limited number of stereoscopic image quality assessment metrics was proposed through the years. In this paper, we propose a new no-reference stereoscopic image quality assessment algorithm based on the nonlinear additive model, ocular dominance model, and saliency based parallax compensation. Our studies using the Toyama database result in three valuable findings. First, quality of the stereoscopic image has a nonlinear relationship with a direct summation of two monoscopic image qualities. Second, it is a rational assumption that the right-eye response has the higher impact on the stereoscopic image quality, which is based on a sampling survey in the ocular dominance research. Third, the saliency based parallax compensation, resulted from different stereoscopic image contents, is considerably valid to improve the prediction performance of image quality metrics. Experimental results confirm that our proposed stereoscopic image quality assessment paradigm has superior prediction accuracy as compared to state-of-the-art competitors.
38

Lei, Shu, Huang Zijian, Yan Jiebin, and Fei Fengchang. "Super Resolution Image Visual Quality Assessment Based on Feature Optimization." Computational Intelligence and Neuroscience 2022 (June 20, 2022): 1–10. http://dx.doi.org/10.1155/2022/1263348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Most existing no-reference image quality assessment (NR-IQA) algorithms need to extract features first and then predict image quality. However, only a small number of these features actually work in the model, and the rest degrade its performance. Consequently, an NR-IQA framework based on feature optimization is proposed to solve this problem and is applied to super-resolution image quality assessment (SR-IQA). In this study, we designed a feature engineering method to this end. Specifically, the features associated with the SR images were first collected and aggregated. Furthermore, several advanced feature selection algorithms were used to sort the feature sets according to their importance, yielding an importance matrix of the features. Then, we examined the relationship between the number of retained features and the Pearson linear correlation coefficient (PLCC) to determine the optimal number of features and the optimal feature selection algorithm, and thus to obtain the optimal model. The results showed that the image quality scores predicted by the optimal model are in good agreement with human subjective scores. Adopting the proposed feature optimization framework, we can effectively reduce the number of features in the model and obtain better performance. The experimental results indicate that SR image quality can be accurately predicted using only a small subset of image features. In summary, we propose a feature optimization framework to address the problem of irrelevant features in SR-IQA, and an SR image quality assessment model built on this framework.
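
The feature-optimization loop described above can be sketched as follows. This is not the paper's code: a random-forest importance ranking stands in for the several selection algorithms the authors compare, the SVR quality regressor is an assumed choice, and a single train/test split replaces their full protocol.

```python
# Schematic of a feature-optimization loop: rank candidate SR-IQA features by
# importance, then sweep the number of retained features and keep the subset
# that maximizes PLCC on held-out data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

def select_features_by_plcc(X, y, seed=0):
    """X: (n_images, n_features) numpy array; y: subjective quality scores."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    ranking = np.argsort(
        RandomForestRegressor(random_state=seed).fit(X_tr, y_tr).feature_importances_
    )[::-1]                                    # most important feature first
    best_plcc, best_k = -1.0, 1
    for k in range(1, X.shape[1] + 1):         # sweep the number of kept features
        cols = ranking[:k]
        pred = SVR().fit(X_tr[:, cols], y_tr).predict(X_te[:, cols])
        plcc = pearsonr(pred, y_te)[0]
        if plcc > best_plcc:
            best_plcc, best_k = plcc, k
    return ranking[:best_k], best_plcc
```
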
39

Varga, Domonkos. "Multi-Pooled Inception Features for No-Reference Image Quality Assessment." Applied Sciences 10, no. 6 (March 23, 2020): 2186. http://dx.doi.org/10.3390/app10062186.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Image quality assessment (IQA) is an important element of a broad spectrum of applications ranging from automatic video streaming to display technology. Furthermore, the measurement of image quality requires a balanced investigation of image content and features. Our proposed approach extracts visual features by attaching global average pooling (GAP) layers to multiple Inception modules of a convolutional neural network (CNN) pretrained on the ImageNet database. In contrast to previous methods, we do not take patches from the input image. Instead, the input image is treated as a whole and is run through the pretrained CNN body to extract resolution-independent, multi-level deep features. As a consequence, our method can be easily generalized to any input image size and to other pretrained CNNs. Thus, we present a detailed parameter study with respect to the CNN base architectures and the effectiveness of different deep features. We demonstrate that our best proposal, called MultiGAP-NRIQA, is able to outperform the state-of-the-art on three benchmark IQA databases. Furthermore, these results were also confirmed in a cross-database test using the LIVE In the Wild Image Quality Challenge database.
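
A rough sketch of this multi-level GAP feature extraction is given below, assuming a Keras InceptionV3 backbone; which "mixed" blocks to tap and which downstream regressor to use are assumptions for illustration, not the released MultiGAP-NRIQA configuration.

```python
# Sketch of multi-level GAP feature extraction from a pretrained Inception network
# (illustrative only). Any subset of InceptionV3's "mixed*" blocks could be tapped.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False)
tap_layers = ["mixed2", "mixed5", "mixed7", "mixed10"]        # assumed tap points
outputs = [tf.keras.layers.GlobalAveragePooling2D()(base.get_layer(n).output)
           for n in tap_layers]
feature_model = tf.keras.Model(base.input, tf.keras.layers.Concatenate()(outputs))

def deep_features(img_rgb):
    """img_rgb: HxWx3 uint8 array of arbitrary size (no patch cropping)."""
    x = tf.keras.applications.inception_v3.preprocess_input(
        img_rgb.astype(np.float32))[None, ...]
    return feature_model(x).numpy().ravel()    # resolution-independent feature vector

# A regressor (e.g., SVR) trained on these vectors then maps them to quality scores.
```
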
40

Hu, Kai, Yanwen Zhang, Feiyu Lu, Zhiliang Deng, and Yunping Liu. "An Underwater Image Enhancement Algorithm Based on MSR Parameter Optimization." Journal of Marine Science and Engineering 8, no. 10 (September 25, 2020): 741. http://dx.doi.org/10.3390/jmse8100741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The quality of underwater images is often affected by the absorption of light and the scattering and diffusion of floating objects. Therefore, underwater image enhancement algorithms have been widely studied. In this area, algorithms based on Multi-Scale Retinex (MSR) represent an important research direction. Although such algorithms can improve the visual quality of underwater images to some extent, the enhancement effect is limited because their parameters cannot adapt to different underwater environments. To solve this problem, based on classical MSR, we propose an underwater image enhancement optimization (MSR-PO) algorithm which uses a no-reference image quality assessment (NR-IQA) index as the optimization criterion. First, based on a large number of experiments, we choose the Natural Image Quality Evaluator (NIQE) as the NR-IQA index and identify the relevant MSR parameters as the optimization variables. Then, we use the Gravitational Search Algorithm (GSA) to optimize the MSR-based underwater image enhancement algorithm with respect to the NIQE index. The experimental results show that this algorithm adapts well to changes in the underwater environment.
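
The NR-IQA-guided parameter search can be outlined as below. This is only a conceptual sketch: the `niqe` scorer is assumed to be supplied by an external IQA toolbox, SciPy's differential evolution stands in for the Gravitational Search Algorithm used in the paper, and the three-scale MSR with these bounds is an illustrative choice, not the authors' configuration.

```python
# Conceptual sketch of NR-IQA-guided parameter tuning (not the MSR-PO code):
# choose the MSR Gaussian scales by minimizing a no-reference score such as NIQE.
import cv2
import numpy as np
from scipy.optimize import differential_evolution

def msr(img, sigmas):
    """Multi-Scale Retinex on a non-negative grayscale float image."""
    logs = [np.log(img + 1.0) - np.log(cv2.GaussianBlur(img, (0, 0), s) + 1.0)
            for s in sigmas]
    out = np.mean(logs, axis=0)
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX)

def tune_msr(img, niqe):
    """niqe: assumed callable returning a lower-is-better quality score."""
    cost = lambda sig: niqe(msr(img, sig))
    # Differential evolution replaces the paper's Gravitational Search Algorithm.
    result = differential_evolution(cost, bounds=[(5, 50), (51, 150), (151, 300)])
    return result.x   # optimized (sigma1, sigma2, sigma3)
```
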
41

Courtney, Jane. "SEDIQA: Sound Emitting Document Image Quality Assessment in a Reading Aid for the Visually Impaired." Journal of Imaging 7, no. 9 (August 30, 2021): 168. http://dx.doi.org/10.3390/jimaging7090168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
For visually impaired people (VIPs), the ability to convert text to sound can mean a new level of independence or the simple joy of a good book. With significant advances in optical character recognition (OCR) in recent years, a number of reading aids are appearing on the market. These reading aids convert images captured by a camera to text which can then be read aloud. However, all of these reading aids suffer from a key issue—the user must be able to visually target the text and capture an image of sufficient quality for the OCR algorithm to function—no small task for VIPs. In this work, a sound-emitting document image quality assessment metric (SEDIQA) is proposed which allows the user to hear the quality of the text image and automatically captures the best image for OCR accuracy. This work also includes testing of OCR performance against image degradations, to identify the most significant contributors to accuracy reduction. The proposed no-reference image quality assessor (NR-IQA) is validated alongside established NR-IQAs and this work includes insights into the performance of these NR-IQAs on document images. SEDIQA is found to consistently select the best image for OCR accuracy. The full system includes a document image enhancement technique which introduces improvements in OCR accuracy with an average increase of 22% and a maximum increase of 68%.
42

Li, Yuyan, Yubo Dong, Haoyong Li, Danhua Liu, Fang Xue, and Dahua Gao. "No-Reference Hyperspectral Image Quality Assessment via Ranking Feature Learning." Remote Sensing 16, no. 10 (May 8, 2024): 1657. http://dx.doi.org/10.3390/rs16101657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In hyperspectral image (HSI) reconstruction tasks, due to the lack of ground truth in real imaging processes, models are usually trained and validated on simulation datasets and then tested on real measurements captured by real HSI imaging systems. However, due to the gap between the simulation imaging process and the real imaging process, the best model validated on the simulation dataset may fail on real measurements. To obtain the best model for the real-world task, it is crucial to design a suitable no-reference HSI quality assessment metric to reflect the reconstruction performance of different models. In this paper, we propose a novel no-reference HSI quality assessment metric via ranking feature learning (R-NHSIQA), which calculates the Wasserstein distance between the distribution of the deep features of the reconstructed HSIs and the benchmark distribution. Additionally, by introducing the spectral self-attention mechanism, we propose a Spectral Transformer (S-Transformer) to extract the spatial-spectral representative deep features of HSIs. Furthermore, to extract quality-sensitive deep features, we use quality ranking as a pre-training task to enhance the representation capability of the S-Transformer. Finally, we introduce the Wasserstein distance to measure the distance between the distribution of the deep features and the benchmark distribution, improving the assessment capacity of our method, even with non-overlapping distributions. The experimental results demonstrate that the proposed metric yields consistent results with multiple full-reference image quality assessment (FR-IQA) metrics, validating the idea that the proposed metric can serve as a substitute for FR-IQA metrics in real-world tasks.
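
The final distribution-comparison step can be illustrated in a few lines, assuming the deep features have already been extracted by a network such as the S-Transformer; averaging 1-D Wasserstein distances over feature dimensions is a simplifying assumption here, not necessarily the aggregation used in R-NHSIQA.

```python
# Minimal sketch of the distribution-comparison idea: compare deep features of a
# reconstructed HSI against a benchmark feature set with the Wasserstein distance.
import numpy as np
from scipy.stats import wasserstein_distance

def feature_wasserstein(feats_test, feats_benchmark):
    """feats_*: (n_samples, n_dims) arrays of deep features.

    Returns the mean per-dimension Wasserstein distance; lower values mean the
    reconstruction's feature distribution is closer to the benchmark.
    """
    n_dims = feats_test.shape[1]
    return float(np.mean([wasserstein_distance(feats_test[:, d], feats_benchmark[:, d])
                          for d in range(n_dims)]))
```
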
43

Wang, Bin. "An Image Quality Assessment Approach Based on Saliency Map in Space Domain." Advanced Materials Research 1006-1007 (August 2014): 768–72. http://dx.doi.org/10.4028/www.scientific.net/amr.1006-1007.768.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper proposes a no-reference image quality assessment (IQA) approach based on the saliency map in the spatial domain of the image. First, the saliency map of the image is extracted with the Itti model. Next, the saliency-map-weighted normalized image is used to compute the image histogram; the histogram is then modeled by a generalized Gaussian distribution whose parameters are estimated with a parameter estimation approach. The parameters of the generalized Gaussian distribution are used as the feature vector for the training and testing images. The feature vectors of the testing images are fed to a support vector regression machine to estimate the image quality score. Experimental results show that our approach outperforms recent no-reference IQA methods.
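
A compact sketch of the GGD-fitting and regression stages is shown below; the moment-matching estimator and the RBF-kernel SVR are standard choices assumed here, and the saliency weighting itself (the Itti model) is omitted.

```python
# Illustrative sketch (not the paper's code): estimate generalized Gaussian
# parameters from saliency-weighted, normalized coefficients by moment matching,
# then regress quality scores with an SVR.
import numpy as np
from scipy.special import gamma
from sklearn.svm import SVR

def fit_ggd(x):
    """Moment-matching estimate of GGD shape (alpha) and scale (sigma)."""
    x = x.ravel()
    sigma_sq = np.mean(x ** 2)
    rho = sigma_sq / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alphas = np.arange(0.2, 10.0, 0.001)                       # candidate shapes
    r = gamma(1.0 / alphas) * gamma(3.0 / alphas) / gamma(2.0 / alphas) ** 2
    alpha = alphas[np.argmin((r - rho) ** 2)]                  # closest moment ratio
    return alpha, np.sqrt(sigma_sq)

# Each image contributes a feature vector (alpha, sigma); quality regression is then
#   model = SVR(kernel="rbf").fit(train_features, train_scores)
#   predicted = model.predict(test_features)
```
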
44

Starovoitov, V. V., and F. V. Starovoitov. "Comparative Analysis of No-Reference Quality Measures for Digital Images." System Analysis and Applied Information Science, no. 1 (May 4, 2017): 24–32. http://dx.doi.org/10.21122/2309-4923-2017-1-24-32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper presents the results of a comparative analysis of 34 measures published in the scientific literature for evaluating image quality without a reference image; in the English-language literature these are called no-reference (NR) measures. The first article using the term no-reference was published in 2000, and the number of publications on new NR-type measures has grown every year, yet comparative studies of such measures are rarely conducted. These measures are very important for (a) evaluating the quality of freshly captured photographs and (b) assessing image enhancement transformations and selecting their parameters (such as contrast and brightness adjustment, tone mapping, decolorization, and others). The publicly available image quality databases used to study no-reference quality measures (TID2013, etc.) contain only 4–5 variants of images distorted by predefined transformations with unknown parameters. We present six types of experiments to analyze the correlation of the computed numerical quality values with visual estimates of test image quality. Four of the experiments are new: comparison of images after gamma correction and after contrast enhancement with different parameters, as well as analysis of retouched images and of photos taken with different focal lengths. It is shown experimentally that none of the known no-reference quality measures is universal, and the computed value cannot be converted to a quality scale without accounting for the factors that distort the image. Most of the studied measures compute local estimates in small neighborhoods and take their arithmetic mean as the quality index of the image. If the image contains large areas of uniform brightness, measures of this type can give incorrect quality assessments that do not correlate with visual assessments.
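
The basic evaluation underlying such comparative studies, correlating each measure's outputs with visual (subjective) scores, looks roughly like the sketch below; SRCC and PLCC are the usual correlation indices, and the dictionary layout is simply one convenient way to organize the results.

```python
# Sketch of the core evaluation in comparative NR-IQA studies: correlate each
# no-reference measure's outputs with subjective (visual) quality scores.
from scipy.stats import pearsonr, spearmanr

def correlate_with_mos(measure_values, mos):
    """measure_values: dict {measure_name: per-image scores};
    mos: subjective quality scores for the same images, in the same order."""
    report = {}
    for name, vals in measure_values.items():
        report[name] = {"SRCC": spearmanr(vals, mos)[0],   # rank correlation
                        "PLCC": pearsonr(vals, mos)[0]}    # linear correlation
    return report
```
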
45

Modak, Sourav, Jonathan Heil, and Anthony Stein. "Pansharpening Low-Altitude Multispectral Images of Potato Plants Using a Generative Adversarial Network." Remote Sensing 16, no. 5 (March 1, 2024): 874. http://dx.doi.org/10.3390/rs16050874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Image preprocessing and fusion are commonly used for enhancing remote-sensing images, but the resulting images often lack useful spatial features. As the majority of research on image fusion has concentrated on the satellite domain, the image-fusion task for Unmanned Aerial Vehicle (UAV) images has received minimal attention. This study investigated an image-improvement strategy by integrating image preprocessing and fusion tasks for UAV images. The goal is to improve spatial details and avoid color distortion in fused images. Techniques such as image denoising, sharpening, and Contrast Limited Adaptive Histogram Equalization (CLAHE) were used in the preprocessing step. The unsharp mask algorithm was used for image sharpening. Wiener and total variation denoising methods were used for image denoising. The image-fusion process was conducted in two steps: (1) fusing the spectral bands into one multispectral image and (2) pansharpening the panchromatic and multispectral images using the PanColorGAN model. The effectiveness of the proposed approach was evaluated using quantitative and qualitative assessment techniques, including no-reference image quality assessment (NR-IQA) metrics. In this experiment, the unsharp mask algorithm noticeably improved the spatial details of the pansharpened images. No preprocessing algorithm dramatically improved the color quality of the enhanced images. The proposed fusion approach improved the images without importing unnecessary blurring and color distortion issues.
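
A hedged sketch of the preprocessing chain named above (denoising, CLAHE, unsharp masking) is given below; all parameter values are placeholders rather than the settings used in the study, and the PanColorGAN fusion step is not shown.

```python
# Illustrative per-band preprocessing for UAV multispectral imagery: denoise
# (total variation or Wiener), apply CLAHE, then sharpen with an unsharp mask.
import numpy as np
from scipy.signal import wiener
from skimage import exposure, filters, restoration

def preprocess_band(band, denoiser="tv"):
    """band: 2-D float array in [0, 1] (one spectral band of a UAV image)."""
    if denoiser == "tv":
        den = restoration.denoise_tv_chambolle(band, weight=0.05)   # total variation
    else:
        den = np.clip(np.nan_to_num(wiener(band, mysize=5)), 0, 1)  # Wiener filtering
    clahe = exposure.equalize_adapthist(den, clip_limit=0.02)       # CLAHE
    sharp = filters.unsharp_mask(clahe, radius=2, amount=1.0)       # unsharp mask
    return np.clip(sharp, 0, 1)
```
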
46

Zhang, Lin, Xilin Yang, Lijun Zhang, Xiao Liu, Shengjie Zhao, and Yong Ma. "Towards Automatic Image Exposure Level Assessment." Mathematical Problems in Engineering 2020 (November 23, 2020): 1–14. http://dx.doi.org/10.1155/2020/2789854.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The quality of acquired images can certainly be reduced by improper exposure. Thus, in many vision-related industries, such as imaging sensor manufacturing and video surveillance, an approach that can routinely and accurately evaluate the exposure levels of images is urgently needed. Taking an image as input, such a method is expected to output a scalar value that represents the overall perceptual exposure level of the examined image, ranging from extremely underexposed to extremely overexposed. However, studies focusing on image exposure level assessment (IELA) are quite sporadic. It should be noted that blind NR-IQA (no-reference image quality assessment) algorithms, or metrics used to measure the quality of contrast-distorted images, cannot be used for IELA. The root reason is that, although these algorithms can quantify the quality degradation of images, they cannot tell whether the degradation is due to underexposure or overexposure. This paper aims to resolve the issue of IELA to some extent and makes two contributions. Firstly, an Image Exposure Database (IEpsD) is constructed to facilitate the study of IELA. IEpsD comprises 24,500 images with various exposure levels, and for each image a subjective exposure score is provided, which represents its perceptual exposure level. Secondly, as IELA can be naturally formulated as a regression problem, we thoroughly evaluate the performance of modern deep CNN architectures for solving this specific task. Our evaluation results can serve as a baseline for other researchers developing more sophisticated IELA approaches. To help other researchers reproduce our results, we have released the dataset and the relevant source code at https://cslinzhang.github.io/imgExpo/.
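
Since the paper frames IELA as scalar regression with modern CNNs, a baseline-style sketch might look like the following; the ResNet-18 backbone and the MSE objective are assumptions for illustration, not the specific architectures evaluated on IEpsD.

```python
# Baseline-style sketch: exposure level assessment as scalar regression with a
# pretrained CNN backbone and a one-unit head (requires torchvision >= 0.13).
import torch
import torch.nn as nn
from torchvision import models

class ExposureRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)  # scalar output

    def forward(self, x):            # x: (N, 3, H, W) normalized image batch
        return self.backbone(x).squeeze(1)

# Training outline: minimize nn.MSELoss() between predictions and the subjective
# exposure scores; evaluate with SRCC/PLCC as in standard IQA practice.
```
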
47

Jin, Chongchong, Zongju Peng, Wenhui Zou, Fen Chen, Gangyi Jiang, and Mei Yu. "No-Reference Quality Assessment for 3D Synthesized Images Based on Visual-Entropy-Guided Multi-Layer Features Analysis." Entropy 23, no. 6 (June 18, 2021): 770. http://dx.doi.org/10.3390/e23060770.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Multiview video plus depth is one of the mainstream representations of 3D scenes in emerging free-viewpoint video, where virtual 3D synthesized images are generated through a depth-image-based rendering (DIBR) technique. However, inaccurate depth maps and imperfect DIBR techniques result in various geometric distortions that seriously degrade the users’ visual experience. An effective 3D synthesized image quality assessment (IQA) metric can simulate human visual perception and determine the application feasibility of the synthesized content. In this paper, a no-reference IQA metric based on visual-entropy-guided multi-layer feature analysis for 3D synthesized images is proposed. According to the energy entropy, the geometric distortions are divided into two visual attention layers, namely, a bottom-up layer and a top-down layer. On the bottom-up layer, salient distortion is characterized by its regional proportion combined with a transition threshold. In parallel, on the top-down layer, the key regions of less salient geometric distortion are extracted by a relative total variation model, and their features are measured through the interaction of decentralized and concentrated attention. By integrating the features of both layers, a quality evaluation model that is more consistent with visual perception is built. Experimental results show that the proposed method is superior to the state-of-the-art in assessing the quality of 3D synthesized images.
48

Sybingco, Edwin, and Elmer P. Dadios. "Blind Image Quality Assessment Based on Natural Statistics of Double-Opponency." Journal of Advanced Computational Intelligence and Intelligent Informatics 22, no. 5 (September 20, 2018): 725–30. http://dx.doi.org/10.20965/jaciii.2018.p0725.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
One of the challenges in image quality assessment (IQA) is to determine the quality score without access to the reference image. In this paper, the authors propose a no-reference image quality assessment method based on the natural statistics of double-opponent (DO) cells. It models the statistics of the three opponency channels with the generalized Gaussian distribution (GGD) and the asymmetric generalized Gaussian distribution (AGGD). The GGD and AGGD parameters are then fed to a feedforward neural network to predict image quality. Results show that, for any of the opponency channels, the natural-statistics parameters fed to a feedforward neural network achieve satisfactory prediction of image quality.
49

Zheng, Yuanfeng, Yuchen Yan, and Hao Jiang. "Semi-TSGAN: Semi-Supervised Learning for Highlight Removal Based on Teacher-Student Generative Adversarial Network." Sensors 24, no. 10 (May 13, 2024): 3090. http://dx.doi.org/10.3390/s24103090.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Despite recent notable advancements in highlight image restoration techniques, the dearth of annotated data and the need for lightweight deployment of highlight-removal networks pose significant impediments to further progress in the field. In this paper, to the best of our knowledge, we are the first to propose a semi-supervised learning paradigm for highlight removal, merging a teacher–student model with a generative adversarial network in a lightweight network architecture. Initially, we establish a dependable repository that stores optimal predictions as pseudo ground truth, selected through empirical analyses guided by the most reliable No-Reference Image Quality Assessment (NR-IQA) method, which rigorously assesses the quality of model predictions. Subsequently, to address concerns regarding confirmation bias, we integrate contrastive regularization into the framework to curtail the risk of overfitting on inaccurate labels. Finally, we introduce a comprehensive feature aggregation module and an extensive attention mechanism within the generative network, balancing network performance and computational efficiency. Our experimental evaluations encompass comprehensive assessments on both full-reference and no-reference highlight benchmarks. The results conclusively demonstrate the substantial quantitative and qualitative improvements achieved by the proposed algorithm compared to state-of-the-art methods.
50

Irshad, Muhammad, Camilo Sanchez-Ferreira, Sana Alamgeer, Carlos H. Llanos, and Mylène C. Q. Farias. "No-reference Image Quality Assessment of Underwater Images Using Multi-Scale Salient Local Binary Patterns." Electronic Imaging 2021, no. 9 (January 18, 2021): 265–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.9.iqsp-265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Images acquired in underwater scenarios may contain severe distortions due to light absorption and scattering, color distortion, poor visibility, and contrast reduction. Because of these degradations, researchers have proposed several algorithms to restore or enhance underwater images. One way to assess these algorithms’ performance is to measure the quality of the restored/enhanced underwater images. Unfortunately, since reference (pristine) images are often not available, designing no-reference (blind) image quality metrics for this type of scenario is still a challenge. In fact, although the area of image quality assessment has evolved considerably over the last decades, estimating the quality of enhanced and restored images is still an open problem. In this work, we present a no-reference image quality evaluation metric for enhanced underwater images (NR-UWIQA) that uses an adapted version of the multi-scale salient local binary pattern operator to extract image features and a machine learning approach to predict quality. The proposed metric was tested on the UID-LEIA database and achieved good accuracy compared to other state-of-the-art methods. In summary, the proposed NR-UWIQA method can be used to evaluate the results of restoration techniques quickly and efficiently, opening a new perspective in the area of underwater image restoration and quality assessment.
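
The multi-scale LBP feature extraction at the heart of such metrics can be sketched as below; the radii, the uniform-LBP variant, and the histogram binning are illustrative assumptions, and the saliency weighting and trained regressor of NR-UWIQA are omitted.

```python
# Illustrative sketch of multi-scale LBP feature extraction (not NR-UWIQA itself).
import numpy as np
from skimage.feature import local_binary_pattern

def multiscale_lbp_features(gray, radii=(1, 2, 3)):
    """gray: 2-D grayscale image. Returns concatenated uniform-LBP histograms."""
    feats = []
    for r in radii:
        p = 8 * r                                      # neighbors at this radius
        lbp = local_binary_pattern(gray, P=p, R=r, method="uniform")
        hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Combined with a saliency map and fed to a regressor such as SVR, these
# histograms follow the general recipe of LBP-based blind quality metrics.
```
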
