Journal articles on the topic "Perceptual Quality Assessment"


Below are the top 50 journal articles for research on the topic "Perceptual Quality Assessment".


1. Fang, Yuming, Liping Huang, Jiebin Yan, Xuelin Liu, and Yang Liu. "Perceptual Quality Assessment of Omnidirectional Images". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 580–88. http://dx.doi.org/10.1609/aaai.v36i1.19937.

Abstract
Omnidirectional images, also called 360◦images, have attracted extensive attention in recent years, due to the rapid development of virtual reality (VR) technologies. During omnidirectional image processing including capture, transmission, consumption, and so on, measuring the perceptual quality of omnidirectional images is highly desired, since it plays a great role in guaranteeing the immersive quality of experience (IQoE). In this paper, we conduct a comprehensive study on the perceptual quality of omnidirectional images from both subjective and objective perspectives. Specifically, we construct the largest so far subjective omnidirectional image quality database, where we consider several key influential elements, i.e., realistic non-uniform distortion, viewing condition, and viewing behavior, from the user view. In addition to subjective quality scores, we also record head and eye movement data. Besides, we make the first attempt by using the proposed database to train a convolutional neural network (CNN) for blind omnidirectional image quality assessment. To be consistent with the human viewing behavior in the VR device, we extract viewports from each omnidirectional image and incorporate the user viewing conditions naturally in the proposed model. The proposed model is composed of two parts, including a multi-scale CNN-based feature extraction module and a perceptual quality prediction module. The feature extraction module is used to incorporate the multi-scale features, and the perceptual quality prediction module is designed to regress them to perceived quality scores. The experimental results on our database verify that the proposed model achieves the competing performance compared with the state-of-the-art methods.
2. Da, Pan, GuiYing Song, Ping Shi, and HaoCheng Zhang. "Perceptual quality assessment of nighttime video". Displays 70 (December 2021): 102092. http://dx.doi.org/10.1016/j.displa.2021.102092.

3. Hamberg, Roelof, and Huib de Ridder. "Continuous assessment of perceptual image quality". Journal of the Optical Society of America A 12, no. 12 (December 1, 1995): 2573. http://dx.doi.org/10.1364/josaa.12.002573.

4. Wang, Yinan, Andrei Chubarau, Hyunjin Yoo, Tara Akhavan, and James Clark. "Age-specific perceptual image quality assessment". Electronic Imaging 35, no. 8 (January 16, 2023): 302–1. http://dx.doi.org/10.2352/ei.2023.35.8.iqsp-302.

5. Elloumi, Nessrine, Habiba Loukil Hadj Kacem, Nilanjan Dey, Amira S. Ashour, and Med Salim Bouhlel. "Perceptual Metrics Quality". International Journal of Service Science, Management, Engineering, and Technology 8, no. 1 (January 2017): 63–80. http://dx.doi.org/10.4018/ijssmet.2017010105.

Abstract
A 3D mesh can be subjected to different types of operations, such as compression, watermarking etc. Such processes lead to geometric distortions compared to the original version. In this context, quantifying the resultant modifications to the original mesh and evaluating the perceptual quality of degraded meshes become a critical issue. The perceptual 3D meshes quality is central in several applications to preserve the visual appearance of these treatments. The used metrics results have to be well correlated to the visual perception of humans. Although there are objective metrics, they do not allow the prediction of the perceptual quality, and do not include the human visual system properties. In the current work, a comparative study between the perceptual quality assessment metrics for 3D meshes was conducted. The experimental study on subjective database published by LIRIS / EPFL was used to test and to validate the results of six metrics. The results established that the Mesh Structural Distortion Measure metric achieved superior results compared to the other metrics.
6. Yang, Huan, Yuming Fang, and Weisi Lin. "Perceptual Quality Assessment of Screen Content Images". IEEE Transactions on Image Processing 24, no. 11 (November 2015): 4408–21. http://dx.doi.org/10.1109/tip.2015.2465145.

7. Agudelo-Medina, Oscar A., Hernan Dario Benitez-Restrepo, Gemine Vivone, and Alan Bovik. "Perceptual Quality Assessment of Pan-Sharpened Images". Remote Sensing 11, no. 7 (April 11, 2019): 877. http://dx.doi.org/10.3390/rs11070877.

Abstract
Pan-sharpening (PS) is a method of fusing the spatial details of a high-resolution panchromatic (PAN) image with the spectral information of a low-resolution multi-spectral (MS) image. Visual inspection is a crucial step in the evaluation of fused products whose subjectivity renders the assessment of pansharpened data a challenging problem. Most previous research on the development of PS algorithms has only superficially addressed the issue of qualitative evaluation, generally by depicting visual representations of the fused images. Hence, it is highly desirable to be able to predict pan-sharpened image quality automatically and accurately, as it would be perceived and reported by human viewers. Such a method is indispensable for the correct evaluation of PS techniques that produce images for visual applications such as Google Earth and Microsoft Bing. Here, we propose a new image quality assessment (IQA) measure that supports the visual qualitative analysis of pansharpened outcomes by using the statistics of natural images, commonly referred to as natural scene statistics (NSS), to extract statistical regularities from PS images. Importantly, NSS are measurably modified by the presence of distortions. We analyze six PS methods in the presence of two common distortions, blur and white noise, on PAN images. Furthermore, we conducted a human study on the subjective quality of pristine and degraded PS images and created a completely blind (opinion-unaware) fused image quality analyzer. In addition, we propose an opinion-aware fused image quality analyzer, whose predictions with respect to human perceptual evaluations of pansharpened images are highly correlated.
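The natural scene statistics mentioned in this abstract are typically computed on locally normalized luminance. A minimal sketch, not taken from the cited paper and using assumed function names and parameters, of the mean-subtracted contrast-normalized (MSCN) coefficients whose distribution is measurably altered by blur or noise:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image: np.ndarray, sigma: float = 7 / 6, c: float = 1.0) -> np.ndarray:
    """Mean-subtracted contrast-normalized coefficients of a grayscale image.

    For pristine natural images these coefficients are roughly Gaussian;
    blur or white noise changes the shape of their distribution, which
    NSS-based quality analyzers exploit.
    """
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                      # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu   # local variance
    return (image - mu) / (np.sqrt(np.abs(var)) + c)

# Toy check: additive noise visibly broadens the MSCN distribution.
rng = np.random.default_rng(0)
clean = gaussian_filter(rng.random((256, 256)) * 255.0, 3)  # stand-in for a natural image
noisy = clean + rng.normal(0.0, 20.0, clean.shape)
print(np.var(mscn_coefficients(clean)), np.var(mscn_coefficients(noisy)))
```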
8. Hu, Anzhou, Rong Zhang, Dong Yin, Yuan Chen, and Xin Zhan. "Perceptual quality assessment of SAR image compression". International Journal of Remote Sensing 34, no. 24 (October 24, 2013): 8764–88. http://dx.doi.org/10.1080/01431161.2013.846488.

9. Chan, Kit Yan, and Ulrich Engelke. "Fuzzy regression for perceptual image quality assessment". Engineering Applications of Artificial Intelligence 43 (August 2015): 102–10. http://dx.doi.org/10.1016/j.engappai.2015.04.007.

10. Shahriari, Y., Q. Ding, R. Fidler, M. Pelter, Y. Bai, A. Villaroman, and X. Hu. "Perceptual Image Processing Based ECG Quality Assessment". Journal of Electrocardiology 49, no. 6 (November 2016): 937. http://dx.doi.org/10.1016/j.jelectrocard.2016.09.040.

11. Wolfe, Virginia I., David P. Martin, and Chester I. Palmer. "Perception of Dysphonic Voice Quality by Naive Listeners". Journal of Speech, Language, and Hearing Research 43, no. 3 (June 2000): 697–705. http://dx.doi.org/10.1044/jslhr.4303.697.

Abstract
For clinical assessment as well as student training, there is a need for information pertaining to the perceptual dimensions of dysphonic voice. To this end, 24 naive listeners judged the similarity of 10 female and 10 male vowel samples, selected from within a narrow range of fundamental frequencies. Most of the perceptual variance for both sets of voices was associated with "degree of abnormality" as reflected by perceptual ratings as well as combined acoustic measures, based upon filtered and unfiltered signals. A second perceptual dimension for female voices was associated with high frequency noise as reflected by two acoustic measures: breathiness index (BRI) and a high-frequency power ratio. A second perceptual dimension for male voices was associated with a breathy-overtight continuum as reflected by period deviation (PDdev) and perceptual ratings of breathiness. Results are discussed in terms of perceptual training and the clinical assessment of pathological voices.
12. Zhai, Guangtao, Wei Sun, Xiongkuo Min, and Jiantao Zhou. "Perceptual Quality Assessment of Low-light Image Enhancement". ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 4 (November 30, 2021): 1–24. http://dx.doi.org/10.1145/3457905.

Abstract
Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions such as structure damage, color shift, and noise into the enhanced images. Despite various LIEAs proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, which are generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and stack-based high dynamic range (HDR) image as a reference and evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that distortions introduced in low-light enhancement are significantly different from distortions considered in traditional image IQA databases that are well-studied, and the current state-of-the-art FR IQA models are also not suitable for evaluating their quality. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index by evaluating the image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preserving, which have captured the most key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms the state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems. To the best of our knowledge, this article is the first of its kind comprehensive low-light image enhancement quality assessment study.
13. Lim, Jin-Young, Ho-Seok Chang, Dong-Wook Kang, Ki-Doo Kim, and Kyeong-Hoon Jung. "No-reference Perceptual Quality Assessment of Digital Image". Journal of Broadcast Engineering 13, no. 6 (November 30, 2008): 849–58. http://dx.doi.org/10.5909/jbe.2008.13.6.849.

14. Ma, Kede, Kai Zeng, and Zhou Wang. "Perceptual Quality Assessment for Multi-Exposure Image Fusion". IEEE Transactions on Image Processing 24, no. 11 (November 2015): 3345–56. http://dx.doi.org/10.1109/tip.2015.2442920.

15. Wu, Yadong, Hongying Zhang, and Ran Duan. "Total Variation Based Perceptual Image Quality Assessment Modeling". Journal of Applied Mathematics 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/294870.

Abstract
Visual quality measure is one of the fundamental and important issues to numerous applications of image and video processing. In this paper, based on the assumption that human visual system is sensitive to image structures (edges) and image local luminance (light stimulation), we propose a new perceptual image quality assessment (PIQA) measure based on total variation (TV) model (TVPIQA) in spatial domain. The proposed measure compares TVs between a distorted image and its reference image to represent the loss of image structural information. Because of the good performance of TV model in describing edges, the proposed TVPIQA measure can illustrate image structure information very well. In addition, the energy of enclosed regions in a difference image between the reference image and its distorted image is used to measure the missing luminance information which is sensitive to human visual system. Finally, we validate the performance of TVPIQA measure with Cornell-A57, IVC, TID2008, and CSIQ databases and show that TVPIQA measure outperforms recent state-of-the-art image quality assessment measures.
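To make the core idea concrete, here is a rough sketch, under my own simplifying assumptions rather than the TVPIQA definition, of comparing the total variation of a reference image and a distorted image as a proxy for structural loss:

```python
import numpy as np

def total_variation(image: np.ndarray) -> float:
    """Discrete (anisotropic) total variation: the sum of absolute
    horizontal and vertical intensity differences, a proxy for edge content."""
    img = image.astype(np.float64)
    return float(np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum())

def tv_similarity(reference: np.ndarray, distorted: np.ndarray) -> float:
    """Ratio of total variations in [0, 1]; 1 means structure is preserved,
    lower values indicate lost edges (blur) or spurious detail (noise)."""
    tv_ref, tv_dst = total_variation(reference), total_variation(distorted)
    return min(tv_ref, tv_dst) / max(tv_ref, tv_dst)

# Toy example: averaging neighboring pixels (a crude blur) lowers the score.
rng = np.random.default_rng(1)
ref = rng.random((128, 128))
blurred = (ref + np.roll(ref, 1, axis=0) + np.roll(ref, 1, axis=1)) / 3.0
print(tv_similarity(ref, blurred))
```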
16. Dong, Xinghui, and Huiyu Zhou. "Texture synthesis quality assessment using perceptual texture similarity". Knowledge-Based Systems 194 (April 2020): 105591. http://dx.doi.org/10.1016/j.knosys.2020.105591.

17. Yan, Weiqing, Guanghui Yue, Yuming Fang, Hua Chen, Chang Tang, and Gangyi Jiang. "Perceptual objective quality assessment of stereoscopic stitched images". Signal Processing 172 (July 2020): 107541. http://dx.doi.org/10.1016/j.sigpro.2020.107541.

18. Sloan, Colm, Naomi Harte, Damien Kelly, Anil C. Kokaram, and Andrew Hines. "Objective Assessment of Perceptual Audio Quality Using ViSQOLAudio". IEEE Transactions on Broadcasting 63, no. 4 (December 2017): 693–705. http://dx.doi.org/10.1109/tbc.2017.2704421.

19. Chang, Hua-wen, Qiu-wen Zhang, Qing-gang Wu, and Yong Gan. "Perceptual image quality assessment by independent feature detector". Neurocomputing 151 (March 2015): 1142–52. http://dx.doi.org/10.1016/j.neucom.2014.04.081.

20. Wang, Zhou, and Qiang Li. "Information Content Weighting for Perceptual Image Quality Assessment". IEEE Transactions on Image Processing 20, no. 5 (May 2011): 1185–98. http://dx.doi.org/10.1109/tip.2010.2092435.

21. Chang, Hua-Wen, Hua Yang, Yong Gan, and Ming-Hui Wang. "Sparse Feature Fidelity for Perceptual Image Quality Assessment". IEEE Transactions on Image Processing 22, no. 10 (October 2013): 4007–18. http://dx.doi.org/10.1109/tip.2013.2266579.

22. Tang, Lu, Chuangeng Tian, Leida Li, Bo Hu, Wei Yu, and Kai Xu. "Perceptual quality assessment for multimodal medical image fusion". Signal Processing: Image Communication 85 (July 2020): 115852. http://dx.doi.org/10.1016/j.image.2020.115852.

23. Lowell, Soren Y. "The Acoustic Assessment of Voice in Continuous Speech". Perspectives on Voice and Voice Disorders 22, no. 2 (July 2012): 57–63. http://dx.doi.org/10.1044/vvd22.2.57.

Abstract
Acoustic measures are an essential component in the assessment of voice disorders, but the value of these measures is dependent on their relationship to perceptual voice quality and the degree to which these measures reflect the typical speaking patterns of the individual being assessed. Therefore, acoustic measures that can be accurately and reliably derived from continuous speech contexts, which are more representative of every day speaking patterns than sustained vowels, are fundamental to the assessment of voice. In this article, I review the current findings on acoustic measures that are applicable to continuous speech. I will identify spectral- and cepstral-based measures that show strong relationships to perceptual ratings of overall voice severity or relate to particular dimensions of voice quality. I also will discuss the prominence of the cepstral peak as a measure that consistently shows strong predictive capacity for perceptually rated voice severity and provides excellent discrimination of dysphonic and normal voices.
24. Moorthy, A. K., and A. C. Bovik. "Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality". IEEE Transactions on Image Processing 20, no. 12 (December 2011): 3350–64. http://dx.doi.org/10.1109/tip.2011.2147325.

25. Zhu, Hancheng, Yong Zhou, Zhiwen Shao, Wen-Liang Du, Jiaqi Zhao, and Rui Yao. "ARET-IQA: An Aspect-Ratio-Embedded Transformer for Image Quality Assessment". Electronics 11, no. 14 (July 7, 2022): 2132. http://dx.doi.org/10.3390/electronics11142132.

Abstract
Image quality assessment (IQA) aims to automatically evaluate image perceptual quality by simulating the human visual system, which is an important research topic in the field of image processing and computer vision. Although existing deep-learning-based IQA models have achieved significant success, these IQA models usually require input images with a fixed size, which varies the perceptual quality of images. To this end, this paper proposes an aspect-ratio-embedded Transformer-based image quality assessment method, which can implant the adaptive aspect ratios of input images into the multihead self-attention module of the Swin Transformer. In this way, the proposed IQA model can not only relieve the variety of perceptual quality caused by size changes in input images but also leverage more global content correlations to infer image perceptual quality. Furthermore, to comprehensively capture the impact of low-level and high-level features on image quality, the proposed IQA model combines the output features of multistage Transformer blocks for jointly inferring image quality. Experimental results on multiple IQA databases show that the proposed IQA method is superior to state-of-the-art methods for assessing image technical and aesthetic quality.
26. Ahmed, Nisar, and Hafiz Muhammad Shahzad Asif. "Perceptual Quality Assessment of Digital Images Using Deep Features". Computing and Informatics 39, no. 3 (2020): 385–409. http://dx.doi.org/10.31577/cai_2020_3_385.

27. Laparra, Valero, Johannes Ballé, Alexander Berardino, and Eero P. Simoncelli. "Perceptual image quality assessment using a normalized Laplacian pyramid". Electronic Imaging 2016, no. 16 (February 14, 2016): 1–6. http://dx.doi.org/10.2352/issn.2470-1173.2016.16.hvei-103.

28. Mu, Hao, and Woon-Seng Gan. "Perceptual Quality Improvement and Assessment for Virtual Bass Systems". Journal of the Audio Engineering Society 63, no. 11 (December 2, 2015): 900–913. http://dx.doi.org/10.17743/jaes.2015.0079.

29. Choi, Kang-Sun, Yeo-Min Yun, Jong-Woo Han, and Sung-Jea Ko. "8.4: Perceptual Quality Assessment for Motion Compensated Frame Interpolation". SID Symposium Digest of Technical Papers 41, no. 1 (2010): 102. http://dx.doi.org/10.1889/1.3499824.

30. Kreiman, Jody, and Bruce R. Gerratt. "Perceptual Assessment of Voice Quality: Past, Present, and Future". Perspectives on Voice and Voice Disorders 20, no. 2 (July 2010): 62–67. http://dx.doi.org/10.1044/vvd20.2.62.

Abstract
Despite many years of research, we still do not know how to measure vocal quality. This paper reviews the history of quality assessment, describes some reasons why current approaches are unlikely to be fruitful, and proposes an alternative approach that addresses the primary difficulties with existing protocols.
31. Winkler, Stefan. "Issues in vision modeling for perceptual video quality assessment". Signal Processing 78, no. 2 (October 1999): 231–52. http://dx.doi.org/10.1016/s0165-1684(99)00062-6.

32. Liu, Min, Ke Gu, Guangtao Zhai, Patrick Le Callet, and Wenjun Zhang. "Perceptual Reduced-Reference Visual Quality Assessment for Contrast Alteration". IEEE Transactions on Broadcasting 63, no. 1 (March 2017): 71–81. http://dx.doi.org/10.1109/tbc.2016.2597545.

33. Silva, Alessandro R., and Mylène C. Q. Farias. "Perceptual quality assessment of 3D videos with stereoscopic degradations". Multimedia Tools and Applications 79, no. 1-2 (November 6, 2019): 1603–23. http://dx.doi.org/10.1007/s11042-019-08386-3.

34. Yalman, Yildiray. "Histogram based perceptual quality assessment method for color images". Computer Standards & Interfaces 36, no. 6 (November 2014): 899–908. http://dx.doi.org/10.1016/j.csi.2014.04.002.

35. Kuo, Wen-Hung, Po-Hung Lin, and Sheue-Ling Hwang. "A framework of perceptual quality assessment on LCD-TV". Displays 28, no. 1 (February 2007): 35–43. http://dx.doi.org/10.1016/j.displa.2006.11.005.

36. Xia, Yingjie, Zhenguang Liu, Yan Yan, Yanxiang Chen, Luming Zhang, and Roger Zimmermann. "Media Quality Assessment by Perceptual Gaze-Shift Patterns Discovery". IEEE Transactions on Multimedia 19, no. 8 (August 2017): 1811–20. http://dx.doi.org/10.1109/tmm.2017.2679900.

37. Mustafa, Safi, and Abdul Hameed. "Perceptual quality assessment of video using machine learning algorithm". Signal, Image and Video Processing 13, no. 8 (May 27, 2019): 1495–502. http://dx.doi.org/10.1007/s11760-019-01494-5.

38. Zhou, Wujie, Gangyi Jiang, and Mei Yu. "New visual perceptual pooling strategy for image quality assessment". Journal of Electronics (China) 29, no. 3-4 (July 2012): 254–61. http://dx.doi.org/10.1007/s11767-012-0818-7.

39. Farnand, Susan, Young Jang, Lark Kwon Choi, and Chuck Han. "A methodology for perceptual image quality assessment of smartphone cameras – color quality". Electronic Imaging 2017, no. 12 (January 29, 2017): 95–99. http://dx.doi.org/10.2352/issn.2470-1173.2017.12.iqsp-250.

40. Silva, Maria Fabiana Bonfim de Lima, Sandra Madureira, Luiz Carlos Rusilo, and Zuleica Camargo. "Vocal quality assessment: methodological approach for a perceptive data analysis". Revista CEFAC 19, no. 6 (December 2017): 831–41. http://dx.doi.org/10.1590/1982-021620171961417.

Abstract
ABSTRACT Purpose: to present a methodological approach for interpreting perceptual judgments of vocal quality by a group of evaluators using the script Vocal Profile Analysis Scheme. Methods: a cross-sectional study based on 90 speech samples from 25 female teachers with voice disorders and/or laryngeal changes. Prior to the perceptual judgment, three perceptual tasks were performed to select samples to be presented to five evaluators using the Experiment script MFC 3.2 (software PRAAT). Next, a sequence of tests was applied, based on successive approaches of inter- and intra-evaluators’ behavior. Data were treated by statistical analysis (Cochran and Selenor tests). Results: with respect to the analysis of the evaluators' performance, it was possible to define those that presented the best results, in terms of reliability and proximity of analyses, as compared to the most experienced evaluator, excluding one. The results of the cluster analysis also allowed designing a voice quality profile of the group of speakers studied. Conclusions: the proposal of a methodological approach allowed defining evaluators whose judgments were based on phonetic knowledge, and drawing a vocal quality profile of the group of samples analyzed.
41. Varga, Domonkos. "No-Reference Image Quality Assessment with Global Statistical Features". Journal of Imaging 7, no. 2 (February 5, 2021): 29. http://dx.doi.org/10.3390/jimaging7020029.

Abstract
The perceptual quality of digital images is often deteriorated during storage, compression, and transmission. The most reliable way of assessing image quality is to ask people to provide their opinions on a number of test images. However, this is an expensive and time-consuming process which cannot be applied in real-time systems. In this study, a novel no-reference image quality assessment method is proposed. The introduced method uses a set of novel quality-aware features which globally characterizes the statistics of a given test image, such as extended local fractal dimension distribution feature, extended first digit distribution features using different domains, Bilaplacian features, image moments, and a wide variety of perceptual features. Experimental results are demonstrated on five publicly available benchmark image quality assessment databases: CSIQ, MDID, KADID-10k, LIVE In the Wild, and KonIQ-10k.
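One of the feature families named above, the first digit distribution, can be illustrated with a short sketch (my own simplified version, not the author's implementation): the leading digits of an image's DCT coefficients are compared against Benford's law, from which distorted images tend to deviate more than pristine ones.

```python
import numpy as np
from scipy.fft import dctn

def first_digit_histogram(values: np.ndarray) -> np.ndarray:
    """Empirical distribution of the leading digits 1-9 of the nonzero values."""
    v = np.abs(values[values != 0])
    digits = np.floor(v / 10.0 ** np.floor(np.log10(v))).astype(int)
    counts = np.bincount(digits, minlength=10)[1:10]
    return counts / counts.sum()

def benford_deviation(image: np.ndarray) -> float:
    """L1 distance between the first-digit distribution of the image's DCT
    coefficients and Benford's law P(d) = log10(1 + 1/d)."""
    benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
    hist = first_digit_histogram(dctn(image.astype(np.float64), norm="ortho"))
    return float(np.abs(hist - benford).sum())

rng = np.random.default_rng(2)
print(benford_deviation(rng.random((64, 64)) * 255.0))
```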
42. Barsties v. Latoszek, Ben, Jörg Mayer, Christopher R. Watts, and Bernhard Lehnert. "Advances in Clinical Voice Quality Analysis with VOXplot". Journal of Clinical Medicine 12, no. 14 (July 12, 2023): 4644. http://dx.doi.org/10.3390/jcm12144644.

Abstract
Background: The assessment of voice quality can be evaluated perceptually with standard clinical practice, also including acoustic evaluation of digital voice recordings to validate and further interpret perceptual judgments. The goal of the present study was to determine the strongest acoustic voice quality parameters for perceived hoarseness and breathiness when analyzing the sustained vowel [a:] using a new clinical acoustic tool, the VOXplot software. Methods: A total of 218 voice samples of individuals with and without voice disorders were applied to perceptual and acoustic analyses. Overall, 13 single acoustic parameters were included to determine validity aspects in relation to perceptions of hoarseness and breathiness. Results: Four single acoustic measures could be clearly associated with perceptions of hoarseness or breathiness. For hoarseness, the harmonics-to-noise ratio (HNR) and pitch perturbation quotient with a smoothing factor of five periods (PPQ5), and, for breathiness, the smoothed cepstral peak prominence (CPPS) and the glottal-to-noise excitation ratio (GNE) were shown to be highly valid, with a significant difference being demonstrated for each of the other perceptual voice quality aspects. Conclusions: Two acoustic measures, the HNR and the PPQ5, were both strongly associated with perceptions of hoarseness and were able to discriminate hoarseness from breathiness with good confidence. Two other acoustic measures, the CPPS and the GNE, were both strongly associated with perceptions of breathiness and were able to discriminate breathiness from hoarseness with good confidence.
43. Muschter, Evelyn, Andreas Noll, Jinting Zhao, Rania Hassen, Matti Strese, Basak Gulecyuz, Shu-Chen Li, and Eckehard Steinbach. "Perceptual Quality Assessment of Compressed Vibrotactile Signals Through Comparative Judgment". IEEE Transactions on Haptics 14, no. 2 (April 1, 2021): 291–96. http://dx.doi.org/10.1109/toh.2021.3077191.

44. Sung, Jung-Min, Bong-Seok Choi, Bong-Yeol Choi, and Yeong-Ho Ha. "Perceptual Quality Assessment on Display based on Analytic Network Process". Journal of the Institute of Electronics and Information Engineers 51, no. 7 (July 25, 2014): 180–89. http://dx.doi.org/10.5573/ieie.2014.51.7.180.

45. Farnand, Susan, Young Jang, Chuck Han, and Hau Hwang. "A methodology for perceptual image quality assessment of smartphone cameras". Electronic Imaging 2016, no. 13 (February 14, 2016): 1–5. http://dx.doi.org/10.2352/issn.2470-1173.2016.13.iqsp-202.

46. Takam Tchendjou, Ghislain, and Emmanuel Simeu. "Visual Perceptual Quality Assessment Based on Blind Machine Learning Techniques". Sensors 22, no. 1 (December 28, 2021): 175. http://dx.doi.org/10.3390/s22010175.

Abstract
This paper presents the construction of a new objective method for estimation of visual perceiving quality. The proposal provides an assessment of image quality without the need for a reference image or a specific distortion assumption. Two main processes have been used to build our models: The first one uses deep learning with a convolutional neural network process, without any preprocessing. The second objective visual quality is computed by pooling several image features extracted from different concepts: the natural scene statistic in the spatial domain, the gradient magnitude, the Laplacian of Gaussian, as well as the spectral and spatial entropies. The features extracted from the image file are used as the input of machine learning techniques to build the models that are used to estimate the visual quality level of any image. For the machine learning training phase, two main processes are proposed: The first proposed process consists of a direct learning using all the selected features in only one training phase, named direct learning blind visual quality assessment DLBQA. The second process is an indirect learning and consists of two training phases, named indirect learning blind visual quality assessment ILBQA. This second process includes an additional phase of construction of intermediary metrics used for the construction of the prediction model. The produced models are evaluated on many benchmarks image databases as TID2013, LIVE, and LIVE in the wild image quality challenge. The experimental results demonstrate that the proposed models produce the best visual perception quality prediction, compared to the state-of-the-art models. The proposed models have been implemented on an FPGA platform to demonstrate the feasibility of integrating the proposed solution on an image sensor.
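As a hedged illustration of the second, feature-pooling route described above (simplified stand-ins for the named features, not the authors' exact definitions), the sketch below extracts gradient-magnitude, Laplacian-of-Gaussian, and spatial-entropy statistics that a trained regressor could map to a quality score.

```python
import numpy as np
from scipy import ndimage

def quality_aware_features(image: np.ndarray) -> dict:
    """Simplified examples of hand-crafted features pooled by blind IQA models."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)
    grad_mag = np.hypot(gx, gy)                          # gradient magnitude
    log_resp = ndimage.gaussian_laplace(img, sigma=1.5)  # Laplacian of Gaussian response
    hist, _ = np.histogram(img, bins=256, density=True)  # intensity histogram for entropy
    p = hist[hist > 0]
    p = p / p.sum()
    return {
        "grad_mag_mean": float(grad_mag.mean()),
        "grad_mag_std": float(grad_mag.std()),
        "log_std": float(log_resp.std()),
        "spatial_entropy": float(-(p * np.log2(p)).sum()),
    }

# Such features would feed a learned regressor (e.g., a random forest or a
# small neural network) trained against subjective quality scores.
rng = np.random.default_rng(3)
print(quality_aware_features(rng.random((128, 128)) * 255.0))
```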
47. Batsi, Sophia, and Lisimachos P. Kondi. "Improved Temporal Pooling for Perceptual Video Quality Assessment Using VMAF". Electronic Imaging 2020, no. 11 (January 26, 2020): 68–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.11.hvei-068.

Abstract
The Video Multimethod Assessment Fusion (VMAF) method, proposed by Netflix, offers an automated estimation of perceptual video quality for each frame of a video sequence. Then, the arithmetic mean of the per-frame quality measurements is taken by default, in order to obtain an estimate of the overall Quality of Experience (QoE) of the video sequence. In this paper, we validate the hypothesis that the arithmetic mean conceals the bad quality frames, leading to an overestimation of the provided quality. We also show that the Minkowski mean (appropriately parametrized) approximates well the subjectively measured QoE, providing superior Spearman Rank Correlation Coefficient (SRCC), Pearson Correlation Coefficient (PCC), and Root-Mean-Square-Error (RMSE) scores.
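The pooling contrast described above can be stated concretely; a small sketch, with a hypothetical exponent rather than the value tuned in the paper, comparing arithmetic and Minkowski pooling of per-frame scores:

```python
import numpy as np

def minkowski_pool(frame_scores, p: float) -> float:
    """Minkowski mean of per-frame quality scores: (mean(s**p))**(1/p).
    p = 1 is the arithmetic mean; p < 1 weights low-quality frames more
    heavily, so a short burst of bad frames pulls the pooled score down."""
    s = np.asarray(frame_scores, dtype=np.float64)
    return float(np.mean(s ** p) ** (1.0 / p))

# Hypothetical per-frame VMAF scores: 90 good frames and a 10-frame glitch.
scores = np.array([95.0] * 90 + [20.0] * 10)
print(minkowski_pool(scores, p=1.0))   # arithmetic mean, ~87.5: the glitch is largely hidden
print(minkowski_pool(scores, p=0.5))   # ~85.0: the low-quality burst is penalized more
```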
48. Zhai, Guangtao, Jianfei Cai, Weisi Lin, Xiaokang Yang, Wenjun Zhang, and M. Etoh. "Cross-Dimensional Perceptual Quality Assessment for Low Bit-Rate Videos". IEEE Transactions on Multimedia 10, no. 7 (November 2008): 1316–24. http://dx.doi.org/10.1109/tmm.2008.2004910.

49. Wang, Shiqi, Ke Gu, Kai Zeng, Zhou Wang, and Weisi Lin. "Objective Quality Assessment and Perceptual Compression of Screen Content Images". IEEE Computer Graphics and Applications 38, no. 1 (January 2018): 47–58. http://dx.doi.org/10.1109/mcg.2016.46.

50. Oh, J., S. I. Woolley, T. N. Arvanitis, and J. N. Townend. "A multistage perceptual quality assessment for compressed digital angiogram images". IEEE Transactions on Medical Imaging 20, no. 12 (2001): 1352–61. http://dx.doi.org/10.1109/42.974930.
