
Journal articles on the topic "Perceptual Quality Assessment"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Consult the top 50 journal articles on the topic "Perceptual Quality Assessment".

Next to each work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a PDF and read its abstract online, whenever the corresponding details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography accordingly.

1

Fang, Yuming, Liping Huang, Jiebin Yan, Xuelin Liu and Yang Liu. "Perceptual Quality Assessment of Omnidirectional Images". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 580–88. http://dx.doi.org/10.1609/aaai.v36i1.19937.

Abstract:
Omnidirectional images, also called 360° images, have attracted extensive attention in recent years due to the rapid development of virtual reality (VR) technologies. Throughout omnidirectional image processing, including capture, transmission, and consumption, measuring the perceptual quality of omnidirectional images is highly desirable, since it plays a major role in guaranteeing the immersive quality of experience (IQoE). In this paper, we conduct a comprehensive study of the perceptual quality of omnidirectional images from both subjective and objective perspectives. Specifically, we construct the largest subjective omnidirectional image quality database to date, in which we consider several key influential elements from the user's view: realistic non-uniform distortion, viewing conditions, and viewing behavior. In addition to subjective quality scores, we also record head and eye movement data. Furthermore, we make the first attempt to use the proposed database to train a convolutional neural network (CNN) for blind omnidirectional image quality assessment. To be consistent with human viewing behavior in a VR device, we extract viewports from each omnidirectional image and incorporate the user viewing conditions naturally into the proposed model. The proposed model is composed of two parts: a multi-scale CNN-based feature extraction module and a perceptual quality prediction module. The feature extraction module incorporates multi-scale features, and the perceptual quality prediction module regresses them to perceived quality scores. Experimental results on our database verify that the proposed model achieves competitive performance compared with state-of-the-art methods.

2

Da, Pan, GuiYing Song, Ping Shi and HaoCheng Zhang. "Perceptual quality assessment of nighttime video". Displays 70 (December 2021): 102092. http://dx.doi.org/10.1016/j.displa.2021.102092.

3

Hamberg, Roelof, and Huib de Ridder. "Continuous assessment of perceptual image quality". Journal of the Optical Society of America A 12, no. 12 (December 1, 1995): 2573. http://dx.doi.org/10.1364/josaa.12.002573.

4

Wang, Yinan, Andrei Chubarau, Hyunjin Yoo, Tara Akhavan and James Clark. "Age-specific perceptual image quality assessment". Electronic Imaging 35, no. 8 (January 16, 2023): 302–1. http://dx.doi.org/10.2352/ei.2023.35.8.iqsp-302.

5

Elloumi, Nessrine, Habiba Loukil Hadj Kacem, Nilanjan Dey, Amira S. Ashour and Med Salim Bouhlel. "Perceptual Metrics Quality". International Journal of Service Science, Management, Engineering, and Technology 8, no. 1 (January 2017): 63–80. http://dx.doi.org/10.4018/ijssmet.2017010105.

Abstract:
A 3D mesh can be subjected to different types of operations, such as compression, watermarking, etc. Such processes introduce geometric distortions compared to the original version. In this context, quantifying the resulting modifications to the original mesh and evaluating the perceptual quality of degraded meshes become critical issues. The perceptual quality of 3D meshes is central to several applications that aim to preserve the visual appearance of meshes after such processing. The results of the metrics used must correlate well with human visual perception. Although objective metrics exist, they do not allow the prediction of perceptual quality and do not incorporate the properties of the human visual system. In the current work, a comparative study of perceptual quality assessment metrics for 3D meshes was conducted. An experimental study on the subjective database published by LIRIS/EPFL was used to test and validate the results of six metrics. The results established that the Mesh Structural Distortion Measure metric achieved superior results compared to the other metrics.

6

Yang, Huan, Yuming Fang and Weisi Lin. "Perceptual Quality Assessment of Screen Content Images". IEEE Transactions on Image Processing 24, no. 11 (November 2015): 4408–21. http://dx.doi.org/10.1109/tip.2015.2465145.

7

Agudelo-Medina, Oscar A., Hernan Dario Benitez-Restrepo, Gemine Vivone and Alan Bovik. "Perceptual Quality Assessment of Pan-Sharpened Images". Remote Sensing 11, no. 7 (April 11, 2019): 877. http://dx.doi.org/10.3390/rs11070877.

Abstract:
Pan-sharpening (PS) is a method of fusing the spatial details of a high-resolution panchromatic (PAN) image with the spectral information of a low-resolution multi-spectral (MS) image. Visual inspection is a crucial step in the evaluation of fused products whose subjectivity renders the assessment of pansharpened data a challenging problem. Most previous research on the development of PS algorithms has only superficially addressed the issue of qualitative evaluation, generally by depicting visual representations of the fused images. Hence, it is highly desirable to be able to predict pan-sharpened image quality automatically and accurately, as it would be perceived and reported by human viewers. Such a method is indispensable for the correct evaluation of PS techniques that produce images for visual applications such as Google Earth and Microsoft Bing. Here, we propose a new image quality assessment (IQA) measure that supports the visual qualitative analysis of pansharpened outcomes by using the statistics of natural images, commonly referred to as natural scene statistics (NSS), to extract statistical regularities from PS images. Importantly, NSS are measurably modified by the presence of distortions. We analyze six PS methods in the presence of two common distortions, blur and white noise, on PAN images. Furthermore, we conducted a human study on the subjective quality of pristine and degraded PS images and created a completely blind (opinion-unaware) fused image quality analyzer. In addition, we propose an opinion-aware fused image quality analyzer, whose predictions with respect to human perceptual evaluations of pansharpened images are highly correlated.
8

Hu, Anzhou, Rong Zhang, Dong Yin, Yuan Chen and Xin Zhan. "Perceptual quality assessment of SAR image compression". International Journal of Remote Sensing 34, no. 24 (October 24, 2013): 8764–88. http://dx.doi.org/10.1080/01431161.2013.846488.

9

Chan, Kit Yan, and Ulrich Engelke. "Fuzzy regression for perceptual image quality assessment". Engineering Applications of Artificial Intelligence 43 (August 2015): 102–10. http://dx.doi.org/10.1016/j.engappai.2015.04.007.

10

Shahriari, Y., Q. Ding, R. Fidler, M. Pelter, Y. Bai, A. Villaroman and X. Hu. "Perceptual Image Processing Based ECG Quality Assessment". Journal of Electrocardiology 49, no. 6 (November 2016): 937. http://dx.doi.org/10.1016/j.jelectrocard.2016.09.040.

11

Wolfe, Virginia I., David P. Martin and Chester I. Palmer. "Perception of Dysphonic Voice Quality by Naive Listeners". Journal of Speech, Language, and Hearing Research 43, no. 3 (June 2000): 697–705. http://dx.doi.org/10.1044/jslhr.4303.697.

Abstract:
For clinical assessment as well as student training, there is a need for information pertaining to the perceptual dimensions of dysphonic voice. To this end, 24 naive listeners judged the similarity of 10 female and 10 male vowel samples, selected from within a narrow range of fundamental frequencies. Most of the perceptual variance for both sets of voices was associated with "degree of abnormality" as reflected by perceptual ratings as well as combined acoustic measures, based upon filtered and unfiltered signals. A second perceptual dimension for female voices was associated with high frequency noise as reflected by two acoustic measures: breathiness index (BRI) and a high-frequency power ratio. A second perceptual dimension for male voices was associated with a breathy-overtight continuum as reflected by period deviation (PDdev) and perceptual ratings of breathiness. Results are discussed in terms of perceptual training and the clinical assessment of pathological voices.
12

Zhai, Guangtao, Wei Sun, Xiongkuo Min and Jiantao Zhou. "Perceptual Quality Assessment of Low-light Image Enhancement". ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 4 (November 30, 2021): 1–24. http://dx.doi.org/10.1145/3457905.

Abstract:
Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions such as structure damage, color shift, and noise into the enhanced images. Despite various LIEAs proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, which are generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and stack-based high dynamic range (HDR) image as a reference and evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that distortions introduced in low-light enhancement are significantly different from distortions considered in traditional image IQA databases that are well-studied, and the current state-of-the-art FR IQA models are also not suitable for evaluating their quality. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index by evaluating the image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preserving, which have captured the most key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms the state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems. 
To the best of our knowledge, this article is the first comprehensive study of its kind on low-light image enhancement quality assessment.
13

Lim, Jin-Young, Ho-Seok Chang, Dong-Wook Kang, Ki-Doo Kim and Kyeong-Hoon Jung. "No-reference Perceptual Quality Assessment of Digital Image". Journal of Broadcast Engineering 13, no. 6 (November 30, 2008): 849–58. http://dx.doi.org/10.5909/jbe.2008.13.6.849.

14

Ma, Kede, Kai Zeng and Zhou Wang. "Perceptual Quality Assessment for Multi-Exposure Image Fusion". IEEE Transactions on Image Processing 24, no. 11 (November 2015): 3345–56. http://dx.doi.org/10.1109/tip.2015.2442920.

15

Wu, Yadong, Hongying Zhang and Ran Duan. "Total Variation Based Perceptual Image Quality Assessment Modeling". Journal of Applied Mathematics 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/294870.

Abstract:
Visual quality measurement is one of the fundamental and important issues in numerous applications of image and video processing. In this paper, based on the assumption that the human visual system is sensitive to image structures (edges) and local image luminance (light stimulation), we propose a new perceptual image quality assessment (PIQA) measure based on the total variation (TV) model (TVPIQA) in the spatial domain. The proposed measure compares TVs between a distorted image and its reference image to represent the loss of image structural information. Because of the good performance of the TV model in describing edges, the proposed TVPIQA measure can capture image structure information very well. In addition, the energy of enclosed regions in the difference image between the reference image and its distorted version is used to measure the missing luminance information, to which the human visual system is sensitive. Finally, we validate the performance of the TVPIQA measure on the Cornell-A57, IVC, TID2008, and CSIQ databases and show that it outperforms recent state-of-the-art image quality assessment measures.

16

Dong, Xinghui, and Huiyu Zhou. "Texture synthesis quality assessment using perceptual texture similarity". Knowledge-Based Systems 194 (April 2020): 105591. http://dx.doi.org/10.1016/j.knosys.2020.105591.

17

Yan, Weiqing, Guanghui Yue, Yuming Fang, Hua Chen, Chang Tang and Gangyi Jiang. "Perceptual objective quality assessment of stereoscopic stitched images". Signal Processing 172 (July 2020): 107541. http://dx.doi.org/10.1016/j.sigpro.2020.107541.

18

Sloan, Colm, Naomi Harte, Damien Kelly, Anil C. Kokaram and Andrew Hines. "Objective Assessment of Perceptual Audio Quality Using ViSQOLAudio". IEEE Transactions on Broadcasting 63, no. 4 (December 2017): 693–705. http://dx.doi.org/10.1109/tbc.2017.2704421.

19

Chang, Hua-wen, Qiu-wen Zhang, Qing-gang Wu and Yong Gan. "Perceptual image quality assessment by independent feature detector". Neurocomputing 151 (March 2015): 1142–52. http://dx.doi.org/10.1016/j.neucom.2014.04.081.

20

Zhou Wang and Qiang Li. "Information Content Weighting for Perceptual Image Quality Assessment". IEEE Transactions on Image Processing 20, no. 5 (May 2011): 1185–98. http://dx.doi.org/10.1109/tip.2010.2092435.

21

Chang, Hua-Wen, Hua Yang, Yong Gan and Ming-Hui Wang. "Sparse Feature Fidelity for Perceptual Image Quality Assessment". IEEE Transactions on Image Processing 22, no. 10 (October 2013): 4007–18. http://dx.doi.org/10.1109/tip.2013.2266579.

22

Tang, Lu, Chuangeng Tian, Leida Li, Bo Hu, Wei Yu and Kai Xu. "Perceptual quality assessment for multimodal medical image fusion". Signal Processing: Image Communication 85 (July 2020): 115852. http://dx.doi.org/10.1016/j.image.2020.115852.

23

Lowell, Soren Y. "The Acoustic Assessment of Voice in Continuous Speech". Perspectives on Voice and Voice Disorders 22, no. 2 (July 2012): 57–63. http://dx.doi.org/10.1044/vvd22.2.57.

Abstract:
Acoustic measures are an essential component in the assessment of voice disorders, but the value of these measures is dependent on their relationship to perceptual voice quality and the degree to which these measures reflect the typical speaking patterns of the individual being assessed. Therefore, acoustic measures that can be accurately and reliably derived from continuous speech contexts, which are more representative of everyday speaking patterns than sustained vowels, are fundamental to the assessment of voice. In this article, I review the current findings on acoustic measures that are applicable to continuous speech. I will identify spectral- and cepstral-based measures that show strong relationships to perceptual ratings of overall voice severity or relate to particular dimensions of voice quality. I also will discuss the prominence of the cepstral peak as a measure that consistently shows strong predictive capacity for perceptually rated voice severity and provides excellent discrimination of dysphonic and normal voices.

24

Moorthy, A. K., and A. C. Bovik. "Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality". IEEE Transactions on Image Processing 20, no. 12 (December 2011): 3350–64. http://dx.doi.org/10.1109/tip.2011.2147325.

25

Zhu, Hancheng, Yong Zhou, Zhiwen Shao, Wen-Liang Du, Jiaqi Zhao and Rui Yao. "ARET-IQA: An Aspect-Ratio-Embedded Transformer for Image Quality Assessment". Electronics 11, no. 14 (July 7, 2022): 2132. http://dx.doi.org/10.3390/electronics11142132.

Abstract:
Image quality assessment (IQA) aims to automatically evaluate image perceptual quality by simulating the human visual system, which is an important research topic in the field of image processing and computer vision. Although existing deep-learning-based IQA models have achieved significant success, these IQA models usually require input images with a fixed size, which varies the perceptual quality of images. To this end, this paper proposes an aspect-ratio-embedded Transformer-based image quality assessment method, which can implant the adaptive aspect ratios of input images into the multihead self-attention module of the Swin Transformer. In this way, the proposed IQA model can not only relieve the variety of perceptual quality caused by size changes in input images but also leverage more global content correlations to infer image perceptual quality. Furthermore, to comprehensively capture the impact of low-level and high-level features on image quality, the proposed IQA model combines the output features of multistage Transformer blocks for jointly inferring image quality. Experimental results on multiple IQA databases show that the proposed IQA method is superior to state-of-the-art methods for assessing image technical and aesthetic quality.
26

Ahmed, Nisar, and Hafiz Muhammad Shahzad Asif. "Perceptual Quality Assessment of Digital Images Using Deep Features". Computing and Informatics 39, no. 3 (2020): 385–409. http://dx.doi.org/10.31577/cai_2020_3_385.

27

Laparra, Valero, Johannes Ballé, Alexander Berardino and Eero P. Simoncelli. "Perceptual image quality assessment using a normalized Laplacian pyramid". Electronic Imaging 2016, no. 16 (February 14, 2016): 1–6. http://dx.doi.org/10.2352/issn.2470-1173.2016.16.hvei-103.

28

Mu, Hao, and Woon-Seng Gan. "Perceptual Quality Improvement and Assessment for Virtual Bass Systems". Journal of the Audio Engineering Society 63, no. 11 (December 2, 2015): 900–913. http://dx.doi.org/10.17743/jaes.2015.0079.

29

Choi, Kang-Sun, Yeo-Min Yun, Jong-Woo Han and Sung-Jea Ko. "8.4: Perceptual Quality Assessment for Motion Compensated Frame Interpolation". SID Symposium Digest of Technical Papers 41, no. 1 (2010): 102. http://dx.doi.org/10.1889/1.3499824.

30

Kreiman, Jody, and Bruce R. Gerratt. "Perceptual Assessment of Voice Quality: Past, Present, and Future". Perspectives on Voice and Voice Disorders 20, no. 2 (July 2010): 62–67. http://dx.doi.org/10.1044/vvd20.2.62.

Abstract:
Despite many years of research, we still do not know how to measure vocal quality. This paper reviews the history of quality assessment, describes some reasons why current approaches are unlikely to be fruitful, and proposes an alternative approach that addresses the primary difficulties with existing protocols.
31

Winkler, Stefan. "Issues in vision modeling for perceptual video quality assessment". Signal Processing 78, no. 2 (October 1999): 231–52. http://dx.doi.org/10.1016/s0165-1684(99)00062-6.

32

Liu, Min, Ke Gu, Guangtao Zhai, Patrick Le Callet and Wenjun Zhang. "Perceptual Reduced-Reference Visual Quality Assessment for Contrast Alteration". IEEE Transactions on Broadcasting 63, no. 1 (March 2017): 71–81. http://dx.doi.org/10.1109/tbc.2016.2597545.

33

Silva, Alessandro R., and Mylène C. Q. Farias. "Perceptual quality assessment of 3D videos with stereoscopic degradations". Multimedia Tools and Applications 79, no. 1-2 (November 6, 2019): 1603–23. http://dx.doi.org/10.1007/s11042-019-08386-3.

34

Yalman, Yildiray. "Histogram based perceptual quality assessment method for color images". Computer Standards & Interfaces 36, no. 6 (November 2014): 899–908. http://dx.doi.org/10.1016/j.csi.2014.04.002.

35

Kuo, Wen-Hung, Po-Hung Lin and Sheue-Ling Hwang. "A framework of perceptual quality assessment on LCD-TV". Displays 28, no. 1 (February 2007): 35–43. http://dx.doi.org/10.1016/j.displa.2006.11.005.

36

Xia, Yingjie, Zhenguang Liu, Yan Yan, Yanxiang Chen, Luming Zhang and Roger Zimmermann. "Media Quality Assessment by Perceptual Gaze-Shift Patterns Discovery". IEEE Transactions on Multimedia 19, no. 8 (August 2017): 1811–20. http://dx.doi.org/10.1109/tmm.2017.2679900.

37

Mustafa, Safi, and Abdul Hameed. "Perceptual quality assessment of video using machine learning algorithm". Signal, Image and Video Processing 13, no. 8 (May 27, 2019): 1495–502. http://dx.doi.org/10.1007/s11760-019-01494-5.

38

Zhou, Wujie, Gangyi Jiang and Mei Yu. "New visual perceptual pooling strategy for image quality assessment". Journal of Electronics (China) 29, no. 3-4 (July 2012): 254–61. http://dx.doi.org/10.1007/s11767-012-0818-7.

39

Farnand, Susan, Young Jang, Lark Kwon Choi and Chuck Han. "A methodology for perceptual image quality assessment of smartphone cameras – color quality". Electronic Imaging 2017, no. 12 (January 29, 2017): 95–99. http://dx.doi.org/10.2352/issn.2470-1173.2017.12.iqsp-250.

40

Silva, Maria Fabiana Bonfim de Lima, Sandra Madureira, Luiz Carlos Rusilo and Zuleica Camargo. "Vocal quality assessment: methodological approach for a perceptive data analysis". Revista CEFAC 19, no. 6 (December 2017): 831–41. http://dx.doi.org/10.1590/1982-021620171961417.

Abstract:
Purpose: to present a methodological approach for interpreting perceptual judgments of vocal quality by a group of evaluators using the script Vocal Profile Analysis Scheme. Methods: a cross-sectional study based on 90 speech samples from 25 female teachers with voice disorders and/or laryngeal changes. Prior to the perceptual judgment, three perceptual tasks were performed to select samples to be presented to five evaluators using the Experiment script MFC 3.2 (software PRAAT). Next, a sequence of tests was applied, based on successive approaches of inter- and intra-evaluator behavior. Data were treated by statistical analysis (Cochran and Selenor tests). Results: with respect to the analysis of the evaluators' performance, it was possible to define those that presented the best results, in terms of reliability and proximity of analyses, as compared to the most experienced evaluator, excluding one. The results of the cluster analysis also allowed designing a voice quality profile of the group of speakers studied. Conclusions: the proposed methodological approach allowed defining evaluators whose judgments were based on phonetic knowledge, and drawing a vocal quality profile of the group of samples analyzed.

41

Varga, Domonkos. "No-Reference Image Quality Assessment with Global Statistical Features". Journal of Imaging 7, no. 2 (February 5, 2021): 29. http://dx.doi.org/10.3390/jimaging7020029.

Abstract:
The perceptual quality of digital images is often deteriorated during storage, compression, and transmission. The most reliable way of assessing image quality is to ask people to provide their opinions on a number of test images. However, this is an expensive and time-consuming process which cannot be applied in real-time systems. In this study, a novel no-reference image quality assessment method is proposed. The introduced method uses a set of novel quality-aware features which globally characterizes the statistics of a given test image, such as extended local fractal dimension distribution feature, extended first digit distribution features using different domains, Bilaplacian features, image moments, and a wide variety of perceptual features. Experimental results are demonstrated on five publicly available benchmark image quality assessment databases: CSIQ, MDID, KADID-10k, LIVE In the Wild, and KonIQ-10k.
42

Barsties v. Latoszek, Ben, Jörg Mayer, Christopher R. Watts and Bernhard Lehnert. "Advances in Clinical Voice Quality Analysis with VOXplot". Journal of Clinical Medicine 12, no. 14 (July 12, 2023): 4644. http://dx.doi.org/10.3390/jcm12144644.

Abstract:
Background: The assessment of voice quality can be evaluated perceptually with standard clinical practice, also including acoustic evaluation of digital voice recordings to validate and further interpret perceptual judgments. The goal of the present study was to determine the strongest acoustic voice quality parameters for perceived hoarseness and breathiness when analyzing the sustained vowel [a:] using a new clinical acoustic tool, the VOXplot software. Methods: A total of 218 voice samples of individuals with and without voice disorders were applied to perceptual and acoustic analyses. Overall, 13 single acoustic parameters were included to determine validity aspects in relation to perceptions of hoarseness and breathiness. Results: Four single acoustic measures could be clearly associated with perceptions of hoarseness or breathiness. For hoarseness, the harmonics-to-noise ratio (HNR) and pitch perturbation quotient with a smoothing factor of five periods (PPQ5), and, for breathiness, the smoothed cepstral peak prominence (CPPS) and the glottal-to-noise excitation ratio (GNE) were shown to be highly valid, with a significant difference being demonstrated for each of the other perceptual voice quality aspects. Conclusions: Two acoustic measures, the HNR and the PPQ5, were both strongly associated with perceptions of hoarseness and were able to discriminate hoarseness from breathiness with good confidence. Two other acoustic measures, the CPPS and the GNE, were both strongly associated with perceptions of breathiness and were able to discriminate breathiness from hoarseness with good confidence.
43

Muschter, Evelyn, Andreas Noll, Jinting Zhao, Rania Hassen, Matti Strese, Basak Gulecyuz, Shu-Chen Li and Eckehard Steinbach. "Perceptual Quality Assessment of Compressed Vibrotactile Signals Through Comparative Judgment". IEEE Transactions on Haptics 14, no. 2 (April 1, 2021): 291–96. http://dx.doi.org/10.1109/toh.2021.3077191.

44

Sung, Jung-Min, Bong-Seok Choi, Bong-Yeol Choi and Yeong-Ho Ha. "Perceptual Quality Assessment on Display based on Analytic Network Process". Journal of the Institute of Electronics and Information Engineers 51, no. 7 (July 25, 2014): 180–89. http://dx.doi.org/10.5573/ieie.2014.51.7.180.

45

Farnand, Susan, Young Jang, Chuck Han and Hau Hwang. "A methodology for perceptual image quality assessment of smartphone cameras". Electronic Imaging 2016, no. 13 (February 14, 2016): 1–5. http://dx.doi.org/10.2352/issn.2470-1173.2016.13.iqsp-202.

46

Takam Tchendjou, Ghislain, and Emmanuel Simeu. "Visual Perceptual Quality Assessment Based on Blind Machine Learning Techniques". Sensors 22, no. 1 (December 28, 2021): 175. http://dx.doi.org/10.3390/s22010175.

Abstract:
This paper presents the construction of a new objective method for the estimation of perceived visual quality. The proposal provides an assessment of image quality without the need for a reference image or a specific distortion assumption. Two main processes have been used to build our models. The first uses deep learning with a convolutional neural network, without any preprocessing. In the second, the objective visual quality is computed by pooling several image features extracted from different concepts: natural scene statistics in the spatial domain, the gradient magnitude, the Laplacian of Gaussian, as well as spectral and spatial entropies. The features extracted from the image file are used as the input of machine learning techniques to build the models that are used to estimate the visual quality level of any image. For the machine learning training phase, two main processes are proposed. The first consists of direct learning using all the selected features in only one training phase, named direct learning blind visual quality assessment (DLBQA). The second is an indirect learning process consisting of two training phases, named indirect learning blind visual quality assessment (ILBQA), which includes an additional phase of constructing intermediary metrics used to build the prediction model. The produced models are evaluated on several benchmark image databases, such as TID2013, LIVE, and the LIVE In the Wild Image Quality Challenge. The experimental results demonstrate that the proposed models produce the best visual perception quality predictions compared to state-of-the-art models. The proposed models have been implemented on an FPGA platform to demonstrate the feasibility of integrating the proposed solution on an image sensor.

47

Batsi, Sophia, and Lisimachos P. Kondi. "Improved Temporal Pooling for Perceptual Video Quality Assessment Using VMAF". Electronic Imaging 2020, no. 11 (January 26, 2020): 68–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.11.hvei-068.

Abstract:
The Video Multimethod Assessment Fusion (VMAF) method, proposed by Netflix, offers an automated estimation of perceptual video quality for each frame of a video sequence. Then, the arithmetic mean of the per-frame quality measurements is taken by default, in order to obtain an estimate of the overall Quality of Experience (QoE) of the video sequence. In this paper, we validate the hypothesis that the arithmetic mean conceals the bad quality frames, leading to an overestimation of the provided quality. We also show that the Minkowski mean (appropriately parametrized) approximates well the subjectively measured QoE, providing superior Spearman Rank Correlation Coefficient (SRCC), Pearson Correlation Coefficient (PCC), and Root-Mean-Square-Error (RMSE) scores.
48

Guangtao Zhai, Jianfei Cai, Weisi Lin, Xiaokang Yang, Wenjun Zhang and M. Etoh. "Cross-Dimensional Perceptual Quality Assessment for Low Bit-Rate Videos". IEEE Transactions on Multimedia 10, no. 7 (November 2008): 1316–24. http://dx.doi.org/10.1109/tmm.2008.2004910.

49

Wang, Shiqi, Ke Gu, Kai Zeng, Zhou Wang and Weisi Lin. "Objective Quality Assessment and Perceptual Compression of Screen Content Images". IEEE Computer Graphics and Applications 38, no. 1 (January 2018): 47–58. http://dx.doi.org/10.1109/mcg.2016.46.

50

Oh, J., S. I. Woolley, T. N. Arvanitis and J. N. Townend. "A multistage perceptual quality assessment for compressed digital angiogram images". IEEE Transactions on Medical Imaging 20, no. 12 (2001): 1352–61. http://dx.doi.org/10.1109/42.974930.