A selection of scholarly literature on the topic "No-Reference image quality assessment (NR-IQA)"


Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "No-Reference image quality assessment (NR-IQA)".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "No-Reference image quality assessment (NR-IQA)":

1

Zhang, Haopeng, Bo Yuan, Bo Dong, and Zhiguo Jiang. "No-Reference Blurred Image Quality Assessment by Structural Similarity Index." Applied Sciences 8, no. 10 (October 22, 2018): 2003. http://dx.doi.org/10.3390/app8102003.

Abstract:
No-reference (NR) image quality assessment (IQA) objectively measures image quality, consistently with subjective evaluations, using only the distorted image. In this paper, we focus on the problem of NR IQA for blurred images and propose a new no-reference structural similarity (NSSIM) metric based on re-blur theory and the structural similarity index (SSIM). We extract blurriness features and define image blurriness by grayscale distribution. NSSIM scores image quality from image luminance, contrast, structure, and blurriness. The proposed NSSIM metric can evaluate image quality immediately, without prior training or learning. Experimental results on four popular datasets show that the proposed metric outperforms SSIM and is well matched to state-of-the-art NR IQA models. Furthermore, we apply NSSIM alongside known IQA approaches to blurred image restoration and demonstrate that NSSIM is statistically superior to peak signal-to-noise ratio (PSNR) and SSIM, and consistent with state-of-the-art NR IQA models.
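The re-blur idea behind NSSIM can be sketched in a few lines: blur the distorted image once more and compare its gradient energy before and after, since a sharp input loses proportionally more high-frequency content. This is only an illustrative sketch of the principle under simplifying assumptions (a box blur instead of the paper's kernel, a gradient-energy ratio instead of the full NSSIM score); `box_blur` and `reblur_blurriness` are hypothetical helper names.

```python
import numpy as np

def box_blur(img, k=5):
    # Separable box blur: a crude stand-in for the Gaussian re-blur kernel.
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def reblur_blurriness(img):
    # Re-blur principle: a sharp image loses proportionally more gradient
    # energy when blurred again than an already-blurry image does.
    reblurred = box_blur(img)
    grad = lambda x: np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()
    g0, g1 = grad(img), grad(reblurred)
    return (g0 - g1) / max(g0, 1e-12)  # higher -> sharper input
```

On a noisy (sharp) test image this ratio is noticeably larger than on a pre-blurred copy of the same image, which is exactly the ordering a blur metric needs.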
2

Shi, Jinsong, Pan Gao, and Jie Qin. "Transformer-Based No-Reference Image Quality Assessment via Supervised Contrastive Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4829–37. http://dx.doi.org/10.1609/aaai.v38i5.28285.

Abstract:
Image Quality Assessment (IQA) has long been a research hotspot in the field of image processing, especially No-Reference Image Quality Assessment (NR-IQA). Thanks to their powerful feature extraction ability, existing Convolutional Neural Network (CNN)- and Transformer-based NR-IQA methods have achieved considerable progress. However, they still exhibit limited capability when facing unknown authentic distortion datasets. To further improve NR-IQA performance, in this paper, a novel supervised contrastive learning (SCL) and Transformer-based NR-IQA model, SaTQA, is proposed. We first train a model on a large-scale synthetic dataset by SCL (no image subjective score is required) to extract degradation features of images with various distortion types and levels. To further extract distortion information from images, we propose a backbone network incorporating the Multi-Stream Block (MSB) by combining the CNN inductive bias and the Transformer's long-term dependence modeling capability. Finally, we propose the Patch Attention Block (PAB) to obtain the final distorted image quality score by fusing the degradation features learned from contrastive learning with the perceptual distortion information extracted by the backbone network. Experimental results on six standard IQA datasets show that SaTQA outperforms the state-of-the-art methods on both synthetic and authentic datasets. Code is available at https://github.com/I2-Multimedia-Lab/SaTQA.
3

Lee, Wonkyeong, Eunbyeol Cho, Wonjin Kim, Hyebin Choi, Kyongmin Sarah Beck, Hyun Jung Yoon, Jongduk Baek, and Jang-Hwan Choi. "No-reference perceptual CT image quality assessment based on a self-supervised learning framework." Machine Learning: Science and Technology 3, no. 4 (December 1, 2022): 045033. http://dx.doi.org/10.1088/2632-2153/aca87d.

Abstract:
Accurate image quality assessment (IQA) is crucial to optimize computed tomography (CT) image protocols while keeping the radiation dose as low as reasonably achievable. In the medical domain, IQA is based on how well an image provides a useful and efficient presentation necessary for physicians to make a diagnosis. Moreover, IQA results should be consistent with radiologists’ opinions on image quality, which is accepted as the gold standard for medical IQA. As such, the goals of medical IQA are greatly different from those of natural IQA. In addition, the lack of pristine reference images or radiologists’ opinions in a real-time clinical environment makes IQA challenging. Thus, no-reference IQA (NR-IQA) is more desirable in clinical settings than full-reference IQA (FR-IQA). Leveraging an innovative self-supervised training strategy for object detection models by detecting virtually inserted objects with geometrically simple forms, we propose a novel NR-IQA method, named deep detector IQA (D2IQA), that can automatically calculate the quantitative quality of CT images. Extensive experimental evaluations on clinical and anthropomorphic phantom CT images demonstrate that our D2IQA is capable of robustly computing perceptual image quality as it varies according to relative dose levels. Moreover, when considering the correlation between the evaluation results of IQA metrics and radiologists’ quality scores, our D2IQA is marginally superior to other NR-IQA metrics and even shows performance competitive with FR-IQA metrics.
4

Oszust, Mariusz. "No-Reference Image Quality Assessment with Local Gradient Orientations." Symmetry 11, no. 1 (January 16, 2019): 95. http://dx.doi.org/10.3390/sym11010095.

Abstract:
Image processing methods often introduce distortions, which affect the way an image is subjectively perceived by a human observer. To avoid inconvenient subjective tests in cases in which reference images are not available, it is desirable to develop an automatic no-reference image quality assessment (NR-IQA) technique. In this paper, a novel NR-IQA technique is proposed in which the distributions of local gradient orientations in image regions of different sizes are used to characterize an image. To evaluate the objective quality of an image, its luminance and chrominance channels are processed, as well as their high-order derivatives. Finally, statistics of used perceptual features are mapped to subjective scores by the support vector regression (SVR) technique. The extensive experimental evaluation on six popular IQA benchmark datasets reveals that the proposed technique is highly correlated with subjective scores and outperforms related state-of-the-art hand-crafted and deep learning approaches.
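The core statistic this technique builds on, a magnitude-weighted histogram of local gradient orientations, can be sketched as follows. This is a minimal single-scale version under stated assumptions: the paper also pools over several region sizes, high-order derivatives, and chrominance channels, and maps the statistics to scores with SVR (e.g. `sklearn.svm.SVR`); `orientation_histogram` is a hypothetical name.

```python
import numpy as np

def orientation_histogram(img, bins=8):
    # Gradient field of a grayscale image.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)      # gradient magnitude
    ang = np.arctan2(gy, gx)    # orientation in [-pi, pi]
    # Magnitude-weighted orientation histogram, normalised to sum to 1.
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

# A horizontal ramp concentrates all gradient energy at orientation 0.
ramp = np.tile(np.arange(32.0), (32, 1))
h = orientation_histogram(ramp)
```

Distortions such as blur or blocking reshape this distribution, which is the premise the hand-crafted features exploit.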
5

Ahmed, Ismail Taha, Chen Soong Der, Baraa Tareq Hammad, and Norziana Jamil. "Contrast-distorted image quality assessment based on curvelet domain features." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 3 (June 1, 2021): 2595. http://dx.doi.org/10.11591/ijece.v11i3.pp2595-2603.

Abstract:
Contrast is one of the most common forms of distortion. Recent work on image quality assessment algorithms (IQAs) has focused on images distorted by compression, noise, and blurring. The reduced-reference image quality metric for contrast-changed images (RIQMC) and the no-reference IQA method for contrast-distorted images (NR-IQA-CDI) were created for CDI. NR-IQA-CDI showed poor performance on two of three image databases, with Pearson linear correlation coefficients (PLCC) of only 0.5739 and 0.7623 on the TID2013 and CSIQ databases, respectively. Spatial-domain features form the basis of the NR-IQA-CDI architecture. Therefore, in this paper, the spatial-domain features are complemented with curvelet-domain features, in order to take advantage of the curvelet's potent properties for extracting information from images, such as being multiscale and multidirectional. Experimental results based on K-fold cross-validation (K ranging from 2 to 10) and statistical tests showed that the variant of NR-IQA-CDI relying on curvelet-domain features (NR-IQA-CDI-CvT) significantly surpasses the variants relying on the five spatial-domain features.
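The PLCC figures quoted above are just Pearson correlations between predicted and subjective scores; a minimal NumPy version (hypothetical helper name, no library dependency beyond NumPy):

```python
import numpy as np

def plcc(pred, mos):
    # Pearson linear correlation coefficient between predicted quality
    # scores and (difference) mean opinion scores.
    pred = np.asarray(pred, float) - np.mean(pred)
    mos = np.asarray(mos, float) - np.mean(mos)
    return float((pred * mos).sum() / np.sqrt((pred ** 2).sum() * (mos ** 2).sum()))
```

In IQA evaluation, PLCC is usually reported alongside Spearman's rank correlation (SROCC), which ignores the shape of the metric-to-score mapping.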
6

Garcia Freitas, Pedro, Luísa da Eira, Samuel Santos, and Mylene Farias. "On the Application LBP Texture Descriptors and Its Variants for No-Reference Image Quality Assessment." Journal of Imaging 4, no. 10 (October 4, 2018): 114. http://dx.doi.org/10.3390/jimaging4100114.

Abstract:
Automatically assessing the quality of an image is a critical problem for a wide range of applications in the fields of computer vision and image processing. For example, many computer vision applications, such as biometric identification, content retrieval, and object recognition, rely on input images with a specific range of quality. Therefore, an effort has been made to develop image quality assessment (IQA) methods that are able to estimate quality automatically. Among the possible IQA approaches, No-Reference IQA (NR-IQA) methods are of fundamental interest, since they can be used in most real-time multimedia applications. NR-IQA methods are capable of assessing the quality of an image without using the reference (or pristine) image. In this paper, we investigate the use of texture descriptors in the design of NR-IQA methods. The premise is that visible impairments alter the statistics of texture descriptors, making it possible to estimate quality. To investigate whether this premise is valid, we analyze the use of a set of state-of-the-art Local Binary Patterns (LBP) texture descriptors in IQA methods. In particular, we present a comprehensive review with a detailed description of the considered methods. Additionally, we propose a framework for using texture descriptors in NR-IQA methods. Our experimental results indicate that, although not all texture descriptors are suitable for NR-IQA, many can be used for this purpose, achieving good accuracy with the advantage of low computational complexity.
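The basic 8-neighbour LBP that this family of descriptors extends can be written directly in NumPy. This is an illustrative sketch only (`lbp_histogram` is a hypothetical name); the rotation-invariant and uniform variants the paper surveys are available as `skimage.feature.local_binary_pattern`.

```python
import numpy as np

def lbp_histogram(img):
    # Basic 3x3 LBP: threshold the 8 neighbours at the centre pixel and
    # read the resulting sign bits as an 8-bit code, then histogram codes.
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=np.uint16)
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(neighbours):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= centre).astype(np.uint16) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / code.size   # normalised 256-bin code histogram
```

The normalised histogram is the per-image feature vector; distortion shifts its mass between codes, which a regressor can then map to a quality score.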
7

Gu, Jie, Gaofeng Meng, Cheng Da, Shiming Xiang, and Chunhong Pan. "No-Reference Image Quality Assessment with Reinforcement Recursive List-Wise Ranking." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8336–43. http://dx.doi.org/10.1609/aaai.v33i01.33018336.

Abstract:
Opinion-unaware no-reference image quality assessment (NR-IQA) methods have received much interest recently because they do not require images with subjective scores for training. Unfortunately, it is a challenging task, and thus far no opinion-unaware method has shown consistently better performance than the opinion-aware ones. In this paper, we propose an effective opinion-unaware NR-IQA method based on reinforcement recursive list-wise ranking. We formulate NR-IQA as a recursive list-wise ranking problem that aims to optimize the whole quality ordering directly. During training, the recursive ranking process can be modeled as a Markov decision process (MDP). The ranking list of images can be constructed by taking a sequence of actions, each of which selects an image for a specific position in the ranking list. Reinforcement learning is adopted to train the model parameters, so no ground-truth quality scores or ranking lists are necessary for learning. Experimental results demonstrate the superior performance of our approach compared with existing opinion-unaware NR-IQA methods. Furthermore, our approach can compete with the most effective opinion-aware methods. It improves the state of the art by over 2% on the CSIQ benchmark and outperforms most compared opinion-aware models on TID2013.
8

Varga, Domonkos. "No-Reference Image Quality Assessment with Convolutional Neural Networks and Decision Fusion." Applied Sciences 12, no. 1 (December 23, 2021): 101. http://dx.doi.org/10.3390/app12010101.

Abstract:
No-reference image quality assessment (NR-IQA) has always been a difficult research problem because digital images may suffer very diverse types of distortions and their contents vary enormously. Moreover, IQA is a very active topic in the research community, since the number and role of digital images in everyday life are continuously growing. Recently, a huge amount of effort has been devoted to exploiting convolutional neural networks and other deep learning techniques for no-reference image quality assessment. Since deep learning relies on a massive amount of labeled data, utilizing pretrained networks has become very popular in the literature. In this study, we introduce a novel, deep learning-based NR-IQA architecture that relies on the decision fusion of multiple image quality scores coming from different types of convolutional neural networks. The main idea behind this scheme is that a diverse set of different types of networks is able to better characterize authentic image distortions than a single network. The experimental results show that our method can effectively estimate perceptual image quality on four large IQA benchmark databases containing either authentic or artificial distortions. These results are also confirmed in significance and cross-database tests.
9

Yin, Guanghao, Wei Wang, Zehuan Yuan, Chuchu Han, Wei Ji, Shouqian Sun, and Changhu Wang. "Content-Variant Reference Image Quality Assessment via Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3134–42. http://dx.doi.org/10.1609/aaai.v36i3.20221.

Abstract:
Generally, humans are more skilled at perceiving differences between high-quality (HQ) and low-quality (LQ) images than at directly judging the quality of a single LQ image. This situation also applies to image quality assessment (IQA). Although recent no-reference (NR-IQA) methods have made great progress in predicting image quality without a reference image, they still have the potential to achieve better performance, since HQ image information is not fully exploited. In contrast, full-reference (FR-IQA) methods tend to provide more reliable quality evaluation, but their practicality is limited by the requirement for pixel-level aligned reference images. To address this, we first propose the content-variant reference method via knowledge distillation (CVRKD-IQA). Specifically, we use non-aligned reference (NAR) images to introduce various prior distributions of high-quality images. Comparing the distribution differences between HQ and LQ images helps our model better assess image quality. Further, knowledge distillation transfers more HQ-LQ distribution-difference information from the FR-teacher to the NAR-student and stabilizes CVRKD-IQA performance. Moreover, to fully mine the combined local-global information while achieving faster inference, our model directly processes multiple image patches from the input with the MLP-mixer. Cross-dataset experiments verify that our model can outperform all NAR/NR-IQA SOTAs and even reach performance comparable to FR-IQA methods on some occasions. Since content-variant and non-aligned reference HQ images are easy to obtain, our model can support more IQA applications with its robustness to content variations. Our code is available: https://github.com/guanghaoyin/CVRKD-IQA.
10

Gavrovska, Ana, Dragi Dujković, Andreja Samčović, Yuliya Golub, and Valery Starovoitov. "Quadratic fitting model in no-reference image quality assessment." Telfor Journal 15, no. 2 (2023): 32–37. http://dx.doi.org/10.5937/telfor2302032g.

Abstract:
The perceptual quality of an image is affected by distortions during compression, delivery, and storage. Distortions also impact automatic image quality assessment (IQA), which needs to be highly correlated with subjective scores. In the absence of a reference, which is the typical scenario in practice, no-reference (NR) metrics are necessary for quality measurement. Such methods have recently been proposed, and they employ natural scene statistics (NSS). The experimental analysis performed in this paper considers two fitting (regression) models for several NR-IQA metrics across different distortion types. The results show the quadratic model to be promising for relating metric values to the difference mean opinion score and Shannon entropy.
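A quadratic fitting model of the kind studied here amounts to a second-degree polynomial regression from raw metric values to subjective scores, which `numpy.polyfit` handles directly. The data below is synthetic and purely illustrative (DMOS is generated from an exact quadratic law so the recovered coefficients are easy to check); it is not the paper's data.

```python
import numpy as np

# Hypothetical raw NR metric values and matching subjective (DMOS) scores;
# here DMOS follows an exact quadratic law so the fit can be verified.
metric = np.linspace(0.0, 1.0, 10)
dmos = 2.0 * metric ** 2 - 3.0 * metric + 1.0

# Quadratic regression model: dmos ~ a*m^2 + b*m + c.
a, b, c = np.polyfit(metric, dmos, 2)
predicted = np.polyval([a, b, c], metric)
```

With real data the fit is not exact, and its quality is judged by correlation between `predicted` and the subjective scores.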

Dissertations on the topic "No-Reference image quality assessment (NR-IQA)":

1

Hettiarachchi, Don Lahiru Nirmal Manikka. "An Accelerated General Purpose No-Reference Image Quality Assessment Metric and an Image Fusion Technique." University of Dayton / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1470048998.

2

Nguyen, Tan-Sy. "A smart system for processing and analyzing gastrointestinal abnormalities in wireless capsule endoscopy." Electronic Thesis or Diss., Paris 13, 2023. http://www.theses.fr/2023PA131052.

Abstract:
In this thesis, we address the challenges associated with the identification and diagnosis of pathological lesions in the gastrointestinal (GI) tract. Analyzing the massive amounts of visual information obtained by Wireless Capsule Endoscopy (WCE), an excellent tool for visualizing and examining the GI tract (including the small intestine), places a significant burden on clinicians and increases the risk of misdiagnosis. To alleviate this issue, we develop an intelligent system capable of automatically detecting and identifying various GI disorders. However, the limited quality of the acquired images, due to distortions such as noise, blur, and uneven illumination, poses a significant obstacle. Consequently, image pre-processing techniques play a crucial role in improving the quality of captured frames, thereby facilitating subsequent high-level tasks such as abnormality detection and classification. To tackle the quality limitations caused by these distortions, novel learning-based algorithms have been proposed. More precisely, recent advances in image restoration and enhancement rely on learning-based approaches that require pairs of distorted and reference images for training. A significant challenge arises in WCE, however, due to the absence of a dedicated dataset for evaluating image quality. To the best of our knowledge, no specialized dataset currently exists that is designed explicitly for evaluating video quality in WCE. Therefore, in response to the need for an extensive video quality assessment dataset, we first introduce the "Quality-Oriented Database for Video Capsule Endoscopy" (QVCED). Subsequently, our findings show that assessing distortion severity significantly improves image enhancement effectiveness, especially in the case of uneven illumination.
To this end, we propose a novel metric dedicated to the evaluation and quantification of uneven illumination in laparoscopic or WCE images, obtained by extracting the image's background illuminance and considering the mapping effect of histogram equalization. Our metric outperforms some state-of-the-art No-Reference Image Quality Assessment (NR-IQA) methods, demonstrating superiority and performance competitive with Full-Reference IQA (FR-IQA) methods. After the assessment step, we develop an image quality enhancement method aimed at improving the overall quality of the images. This is achieved by leveraging a cross-attention algorithm, which establishes a comprehensive connection between the extracted distortion level and the degraded regions within the images. By employing this algorithm, we can precisely identify and target the specific areas of the images affected by distortions, allowing an enhancement tailored to each degraded region and thereby effectively improving image quality. Following the improvement of image quality, visual features are extracted and fed into a classifier to provide a diagnosis through classification. The challenge in the WCE domain is that a significant portion of the data remains unlabeled. To overcome this challenge, we have developed an efficient method based on a self-supervised learning (SSL) approach to enhance classification performance. The proposed method, using attention-based SSL, successfully addresses the issue of limited labeled data commonly encountered in the existing literature.

Book chapters on the topic "No-Reference image quality assessment (NR-IQA)":

1

Ahmed, Basma, Mohamed Abdel-Nasser, Osama A. Omer, Amal Rashed, and Domenec Puig. "No-Reference Digital Image Quality Assessment Based on Structure Similarity." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2021. http://dx.doi.org/10.3233/faia210156.

Abstract:
Blind or no-reference image quality assessment (NR-IQA) refers to the problem of evaluating the visual quality of an image without any reference; hence the need to develop a measure that does not depend on a pristine reference image. This paper presents an NR-IQA method based on a restoration scheme and the structural similarity index measure (SSIM). Specifically, we use a blind restoration scheme for blurred images: we re-blur the blurred image and then use the result as a reference image. Finally, we use SSIM as a full-reference metric. The experiments were performed on standard test images as well as medical images. The results demonstrate that our results using the structural similarity index measure are better than those of other methods, such as the spectral-kurtosis-based method.
2

Ahmed, Basma, Osama A. Omer, Amal Rashed, Domenec Puig, and Mohamed Abdel-Nasser. "Referenceless Image Quality Assessment Utilizing Deep Transfer-Learned Features." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. http://dx.doi.org/10.3233/faia220345.

Abstract:
Image quality assessment (IQA) algorithms are critical for determining the quality of high-resolution photographs. This work proposes a hybrid NR-IQA approach that uses deep transfer learning to enhance classic NR IQA with deep-learned features. First, we simulate a pseudo-reference image (PRI) from the input image. Then, we use a pre-trained Inception-v3 deep feature extractor to generate feature maps from the input distorted image and the PRI. The distance between the feature maps of the input distorted image and the PRI is measured using the local structural similarity (LSS) method. A nonlinear mapping function is used to calculate the final quality scores. Compared to previous work, the proposed method shows promising performance.
3

Abdelouahad, Abdelkaher Ait, Mohammed El Hassouni, Hocine Cherifi, and Driss Aboutajdine. "A New Image Distortion Measure Based on Natural Scene Statistics Modeling." In Geographic Information Systems, 616–30. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2038-4.ch037.

Abstract:
In the field of Image Quality Assessment (IQA), this paper examines a Reduced-Reference (RRIQA) measure based on the bi-dimensional empirical mode decomposition. The proposed measure belongs to the Natural Scene Statistics (NSS) modeling approaches. First, the reference image is decomposed into Intrinsic Mode Functions (IMFs); the authors then use the Generalized Gaussian Density (GGD) to model the distribution of IMF coefficients. At the receiver side, the same number of IMFs is computed on the distorted image, and the quality assessment is done by measuring the fitting error between the IMF-coefficient histogram of the distorted image and the GGD estimate of the IMF coefficients of the reference image, using the Kullback-Leibler Divergence (KLD). In addition, the authors propose a new Support Vector Machine-based classification approach to evaluate the performance of the proposed measure, instead of logistic-function-based regression. Experiments were conducted on the LIVE dataset.
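The final comparison step, a KLD between the distorted image's coefficient histogram and the density estimated from the reference, reduces to a few lines over discrete normalised histograms (a sketch only; the GGD fitting itself is omitted, and `kld` is a hypothetical helper name):

```python
import numpy as np

def kld(p, q, eps=1e-12):
    # Discrete Kullback-Leibler divergence D(p || q) between two histograms,
    # each normalised to a probability distribution; eps guards empty bins.
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

KLD is zero only when the two distributions coincide and grows as the distorted histogram drifts away from the reference model, which is what makes it usable as a distortion measure.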

Conference papers on the topic "No-Reference image quality assessment (NR-IQA)":

1

Ariffin, Syed Mohd Zahid Syed Zainal, and Nursuriati Jamil. "Illumination Classification based on No-Reference Image Quality Assessment (NR-IQA)." In the 2019 Asia Pacific Information Technology Conference. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3314527.3314529.

2

Da Silva, Renato, Luiz Brito, Marcelo Albertini, Marcelo Do Nascimento, and André Backes. "Using CNNs for Quality Assessment of No-Reference and Full-Reference Compressed-Video Frames." In Workshop de Visão Computacional. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/wvc.2020.13484.

Abstract:
For videos to be streamed, they have to be coded and sent to users as signals that are decoded back for playback. This coding-decoding process may introduce distortion that changes the perceived quality of the content and, consequently, influences user experience. The approach proposed by Bosse et al. [1] suggests an automated Image Quality Assessment (IQA) method. They use image datasets pre-labeled with quality scores to train a Convolutional Neural Network (CNN). Then, based on the CNN models, they are able to predict image quality using both Full-Reference (FR) and No-Reference (NR) evaluation. In this paper, we explore these methods by exposing the CNN quality prediction to images extracted from actual videos. Various compression quality levels were applied to them, as well as two different video codecs. We also evaluated how the models perform when predicting human visual perception of quality in scenarios with no human pre-evaluation, observing their behavior alongside metrics such as SSIM and PSNR. We observe that the FR model better infers human perception of quality for compressed videos. By contrast, the NR model does not show the same behaviour for most of the evaluated videos.
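Of the two reference metrics mentioned, PSNR is the simpler to reproduce; a minimal NumPy version for 8-bit frames is shown below (SSIM, by contrast, is readily available as `skimage.metrics.structural_similarity`). `psnr` is a hypothetical helper name.

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    # Peak signal-to-noise ratio, in dB, between a reference frame and a
    # distorted frame; identical frames give infinite PSNR.
    mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(peak ** 2 / mse))
```

Per-frame PSNR (and SSIM) values like these are the baselines against which learned FR/NR predictions are compared in studies of this kind.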
3

Lin, Kwan-Yee, and Guanxiang Wang. "Hallucinated-IQA: No-Reference Image Quality Assessment via Adversarial Learning." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00083.

4

Gil, Adriano, Aasim Khurshid, Juliana Postal, and Thiago Figueira. "Visual assessment of equirectangular images for virtual reality applications In Unity." In XXXII Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sibgrapi.est.2019.8337.

Abstract:
Virtual Reality (VR) applications provide an immersive experience when using panoramic images that contain a 360-degree view of the scene. Currently, the equirectangular format is the most widely used representation for these panoramic images. The development of a virtual reality viewer for panoramic images must consider several parameters that define the quality of the rendered image. Such parameters include resolution settings, texture-to-object mappings, and the choice among different rendering approaches; selecting the optimal values of these parameters requires visual quality analysis. In this work, we propose a tool integrated within the Unity editor to automate this quality assessment using different settings for the visualization of equirectangular images. We compare the texture mapping of a skybox with a procedural sphere and a cubemap using full-reference objective metrics for Image Quality Analysis (IQA). Based on the assessment results, the tool decides how the final image will be rendered on the target device to produce a visually pleasing, high-quality image.
5

Narsaiah, D., R. Surender Reddy, Aruna Kokkula, P. Anil Kumar, and A. Karthik. "A Novel Full Reference-Image Quality Assessment (FR-IQA) for Adaptive Visual Perception Improvement." In 2021 6th International Conference on Inventive Computation Technologies (ICICT). IEEE, 2021. http://dx.doi.org/10.1109/icict50816.2021.9358610.

6

Zaytoon, Mohamed, and Marwan Torki. "The Effect of Non-Reference Point Cloud Quality Assessment (NR-PCQA) Loss on 3D Scene Reconstruction from a Single Image." In 2023 IEEE Symposium on Computers and Communications (ISCC). IEEE, 2023. http://dx.doi.org/10.1109/iscc58397.2023.10218197.

