Dissertations / Theses on the topic 'Subjective and Objective Image Quality Assessment'

Consult the top 48 dissertations / theses for your research on the topic 'Subjective and Objective Image Quality Assessment.'

1

Sendjasni, Abderrezzaq. "Objective and subjective quality assessment of 360-degree images." Electronic Thesis or Diss., Poitiers, 2023. http://www.theses.fr/2023POIT2251.

Full text
Abstract:
360-degree images, also known as omnidirectional images, are at the center of immersive media. With their increasing use, mainly thanks to the interactive and immersive experience they offer, it is paramount to provide a good quality of experience (QoE). This QoE is significantly impacted by the quality of the content. Like any type of visual signal, 360-degree images go through a sequence of processes including encoding, transmission, decoding, and rendering. Each of these processes can introduce distortions into the content. To improve the QoE, image quality assessment (IQA) is one of the strategies to be followed. This thesis addresses the quality evaluation of 360-degree images from both objective and subjective perspectives. Focusing on the influence of head-mounted displays (HMDs) on the perceived quality of 360-degree images, a psycho-visual study is designed and carried out using four different devices. For this purpose, a 360-degree image dataset is created and a panel of observers is involved. The impact of HMDs on quality ratings is identified and highlighted as an important factor to consider when conducting subjective experiments with 360-degree images. From the objective perspective, we first comprehensively benchmark several convolutional neural network (CNN) models under various configurations. Then, the processing chain of CNN-based 360-IQA is improved at different scales, from input sampling and representation to the aggregation of quality scores. Based on the observations from these studies and the benchmark, two CNN-based 360-IQA models are proposed to accurately predict the quality of 360-degree images. The observations and conclusions drawn from the various contributions should provide insights for assessing the quality of 360-degree images.
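The abstract above mentions that CNN-based 360-IQA ends with aggregating per-viewport quality scores into one prediction. As a minimal illustrative sketch (not the thesis's actual method; the weighting scheme is a hypothetical stand-in for, e.g., a saliency model), pooling could look like:

```python
import numpy as np

def aggregate_viewport_scores(scores, weights=None):
    """Pool per-viewport quality predictions into one global score.

    Uses uniform pooling by default; optional weights (e.g., from a
    hypothetical saliency or attention model) bias the pooling."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        return float(scores.mean())
    w = np.asarray(weights, dtype=float)
    return float((scores * w).sum() / w.sum())

# Toy example: five viewport scores, with extra weight on the front viewports.
scores = [4.2, 3.8, 4.0, 2.5, 3.1]
weights = [2.0, 2.0, 1.0, 1.0, 1.0]
print(aggregate_viewport_scores(scores))            # uniform mean
print(aggregate_viewport_scores(scores, weights))   # weighted mean
```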
APA, Harvard, Vancouver, ISO, and other styles
2

Zerman, Emin. "Evaluation et analyse de la qualité vidéo à haute gamme dynamique." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0003.

Full text
Abstract:
In the last decade, high dynamic range (HDR) image and video technology has gained a lot of attention, especially within the multimedia community. Recent technological advancements have made the acquisition, compression, and reproduction of HDR content easier, which has led to the commercialization of HDR displays and the popularization of HDR content. In this context, measuring the quality of HDR content plays a fundamental role in improving the content distribution chain as well as individual parts of it, such as compression and display. However, HDR visual quality assessment presents new challenges with respect to the standard dynamic range (SDR) case. The first challenge is the new conditions introduced by the reproduction of HDR content, e.g., the increase in brightness and contrast. Even though accurate reproduction is not necessary in most practical cases, accurate estimation of the emitted luminance is necessary for objective HDR quality assessment metrics. In order to understand the effects of display rendering on quality perception, an accurate HDR frame reproduction algorithm was developed, and a subjective experiment was conducted to analyze the impact of different display renderings on subjective and objective HDR quality evaluation. Additionally, in order to understand the impact of color at the increased brightness of HDR displays, the effects of different color spaces on HDR video compression performance were also analyzed in another subjective study. Another challenge is to estimate the quality of HDR content objectively, using computers and algorithms. To address this challenge, the thesis proceeds with a performance evaluation of full-reference (FR) HDR image quality metrics. HDR images have a larger brightness range and higher contrast values. Since most image quality metrics are developed for SDR images, they need to be adapted in order to estimate the quality of HDR images.
Different adaptation methods were used for SDR metrics, and they were compared with existing image quality metrics developed exclusively for HDR images. Moreover, we propose a new method for evaluating metric discriminability based on a novel classification approach. Motivated by the need to fuse several different quality databases, in the third part of the thesis we compare subjective quality scores acquired using different subjective test methodologies. Subjective quality assessment is regarded as the most effective and reliable way of obtaining "ground-truth" quality scores for the selected stimuli, and the obtained mean opinion scores (MOS) are the values that objective metrics are generally trained to match. In fact, strong discrepancies can easily be noticed when different multimedia quality databases are considered. In order to understand the relationship between quality values acquired using different methodologies, the relationship between MOS values and pairwise comparison (PC) scaling results was examined. For this purpose, a series of experiments was conducted using the double stimulus impairment scale (DSIS) and pairwise comparison subjective methodologies. We propose to include cross-content comparisons in the PC experiments in order to improve scaling performance and reduce cross-content variance as well as confidence intervals. The scaled PC scores can also be used in subjective multimedia quality assessment scenarios other than HDR.
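Scaling pairwise-comparison results onto a quality axis, as discussed in this abstract, is commonly done with a Thurstone Case V model. A hedged sketch of the generic textbook formulation follows (not necessarily the exact scaling procedure used in this thesis; the additive smoothing is an assumption to handle unanimous votes):

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(wins: np.ndarray) -> np.ndarray:
    """Thurstone Case V scaling of a pairwise-comparison experiment.

    wins[i, j] = how often stimulus i was preferred over stimulus j.
    Returns one zero-mean quality scale value per stimulus."""
    n = wins.shape[0]
    inv_cdf = NormalDist().inv_cdf
    z = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total = wins[i, j] + wins[j, i]
            # Additive smoothing keeps the preference probability inside (0, 1).
            p = (wins[i, j] + 0.5) / (total + 1.0)
            z[i, j] = inv_cdf(p)
    scores = z.sum(axis=1) / (n - 1)
    return scores - scores.mean()

# Toy experiment with three stimuli and 10 votes per pair.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
scores = thurstone_case_v(wins)
print(scores)  # stimulus 0 scales above 1, which scales above 2
```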
3

Ševčík, Martin. "Modelování vlastností modelu HVS v Matlabu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217656.

Full text
Abstract:
The theoretical part of this diploma thesis deals with the model of the human visual system (HVS), which can be used for image quality assessment in the TV technology area. Calculations of selected JND (Just Noticeable Difference) metrics used in evaluating the HVS are described. In the practical part of the thesis, a simulation model was designed and implemented in Matlab, which can be used to evaluate three JND metrics on color and grayscale images, in both the spatial and frequency domains. The results of the JND models were compared with other objective image quality metrics (MSE, NMSE, SNR, and PSNR). Images with different, defined content were used to interpret the dependencies.
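Two of the comparison metrics named above, MSE and PSNR, have simple closed forms. A minimal sketch, assuming 8-bit grayscale images (synthetic data, for illustration only):

```python
import numpy as np

def mse(ref: np.ndarray, test: np.ndarray) -> float:
    """Mean squared error between a reference and a test image."""
    diff = ref.astype(np.float64) - test.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(ref, test)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

# Toy 8-bit "image" and a copy with mild uniform noise.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
noisy = np.clip(ref.astype(np.int16) + rng.integers(-5, 6, size=ref.shape),
                0, 255).astype(np.uint8)
print(f"MSE  = {mse(ref, noisy):.2f}")
print(f"PSNR = {psnr(ref, noisy):.2f} dB")
```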
4

Dalasari, Venkata Gopi Krishna, and Sri Krishna Jayanty. "Low Light Video Enhancement along with Objective and Subjective Quality Assessment." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13500.

Full text
Abstract:
Enhancing low-light videos has been quite a challenge over the years. A video taken in low light always suffers from low dynamic range and high noise. This master's thesis presents contributions within the field of low-light video enhancement. Three models with different tone-mapping algorithms are proposed for enhancing extremely low-light, low-quality video. For temporal noise removal, a motion-compensated Kalman structure is presented. The dynamic range of the low-light video is stretched using three different methods. In Model 1, the dynamic range is increased by adjusting the RGB histograms using gamma correction with a modified version of adaptive clipping thresholds. In Model 2, a shape-preserving dynamic range stretch of the RGB histogram is applied using SMQT. In Model 3, contrast enhancement is done using CLAHE. In the final stage, the residual noise is removed using an efficient NLM filter. The performance of the models is compared on various objective VQA metrics such as NIQE, GCF, and SSIM. To evaluate the actual performance of the models, subjective tests are conducted, owing to the large number of applications that target humans as the end users of the video. The performance of the three models is compared on ten real input videos taken in an extremely low-light environment. A total of 25 human observers subjectively evaluated the performance of the three models based on the parameters contrast, visibility, visual pleasantness, amount of noise, and overall quality. A detailed statistical evaluation of the relative performance of the three models is also provided.
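The gamma-correction stretch used in Model 1 can be illustrated with a toy sketch: a plain power-law curve on 8-bit data (the thesis's adaptive clipping thresholds and per-channel RGB handling are omitted here):

```python
import numpy as np

def gamma_stretch(img: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Lift shadows in an 8-bit image with a power-law (gamma) curve.

    gamma < 1 brightens dark regions; gamma > 1 darkens them."""
    norm = img.astype(np.float64) / 255.0
    return np.rint(np.power(norm, gamma) * 255.0).astype(np.uint8)

# A dark ramp becomes noticeably brighter while 0 and 255 stay fixed.
dark = np.array([0, 16, 64, 128, 255], dtype=np.uint8)
print(gamma_stretch(dark, 0.5))
```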
5

Guo, Jinjiang. "Contributions to objective and subjective visual quality assessment of 3d models." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI099.

Full text
Abstract:
In the computer graphics realm, three-dimensional graphical data, generally represented by triangular meshes, have become commonplace and are deployed in a variety of application processes (e.g., smoothing, compression, remeshing, simplification, rendering, etc.). However, these processes inevitably introduce artifacts, altering the visual quality of the rendered 3D data. Thus, in order to perceptually drive the processing algorithms, there is an increasing need for efficient and effective subjective and objective visual quality assessments to evaluate and predict the visual artifacts. In this thesis, we first present a comprehensive survey on the different sources of artifacts in digital graphics and on current objective and subjective visual quality assessments of these artifacts. Then, we introduce a newly designed subjective quality study based on evaluations of the local visibility of geometric artifacts, in which observers were asked to mark areas of 3D meshes that contain noticeable distortions. The collected perceived distortion maps are used to illustrate several perceptual functionalities of the human visual system (HVS) and serve as ground truth to evaluate how well known geometric attributes and metrics predict the local visibility of distortions. Our second study aims to evaluate the visual quality of texture-mapped 3D models subjectively and objectively. To achieve these goals, we introduced 136 processed models with both geometric and texture distortions, conducted a paired-comparison subjective experiment, and invited 101 subjects to evaluate the visual qualities of the models under two rendering protocols. Driven by the collected subjective opinions, we propose two objective visual quality metrics for textured meshes, relying on optimal combinations of geometry and texture quality measures. These proposed perceptual metrics outperform their counterparts in terms of correlation with human judgment.
6

Shahid, Muhammad. "Methods for Objective and Subjective Video Quality Assessment and for Speech Enhancement." Doctoral thesis, Blekinge Tekniska Högskola [bth.se], Faculty of Engineering - Department of Applied Signal Processing, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00603.

Full text
Abstract:
The overwhelming growth in the usage of multimedia services has raised consumers' awareness about quality. Both service providers and consumers are interested in the delivered level of perceptual quality. The perceptual quality of an original video signal can degrade due to compression and due to transmission over a lossy network. Video quality assessment (VQA) has to be performed in order to gauge the level of video quality. Generally, it can be performed using subjective methods, where a panel of humans judges the quality of a video, or objective methods, where a computational model yields an estimate of the quality. Objective methods, specifically no-reference (NR) or reduced-reference (RR) methods, are preferable because they are practical to implement in real-time scenarios. This doctoral thesis begins with a review of existing approaches in the area of NR image and video quality assessment. In the review, recently proposed methods of visual quality assessment are classified into three categories. This is followed by chapters describing studies on the development of NR and RR methods as well as on conducting subjective VQA experiments. In the case of NR methods, the required features are extracted from the coded bitstream of a video, and in the case of RR methods, additional pixel-based information is used. Specifically, the NR methods are developed using suitable regression techniques based on artificial neural networks and least-squares support vector machines. Subsequently, in a later study, linear regression techniques are used to elaborate the interpretability of NR and RR models with respect to the selection of perceptually significant features. The presented studies on subjective experiments were performed using laboratory-based and crowdsourcing platforms.
In the laboratory-based experiments, the focus has been on using standardized methods in order to generate datasets that can be used to validate objective VQA methods. The subjective experiments performed through crowdsourcing investigate non-standard methods for determining the perceptual preference among various adaptation scenarios in the context of adaptive streaming of high-definition videos. Lastly, the use of an adaptive gain equalizer in the modulation frequency domain for speech enhancement has been examined. To this end, two methods of demodulating speech signals, namely spectral center-of-gravity carrier estimation and convex optimization, have been studied.
7

Khaustova, Darya. "Objective assessment of stereoscopic video quality of 3DTV." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S021/document.

Full text
Abstract:
The minimum requirement for any 3D (stereoscopic imaging) system is to guarantee the visual comfort of viewers. Visual comfort is one of the three primary perceptual attributes of 3D QoE and can be linked directly to the technical parameters of a 3D system. Therefore, the goal of this thesis is to objectively characterize the impact of these parameters on human perception for stereoscopic quality monitoring. The first part of the thesis investigates whether the visual attention of viewers should be considered when designing an objective 3D quality metric. First, visual attention in 2D and 3D is compared using simple test patterns. The conclusions of this first experiment are validated using complex stimuli with crossed and uncrossed disparities. In addition, we explore the impact of visual discomfort caused by excessive disparities on visual attention. The second part of the thesis is dedicated to the design of an objective model of 3D video QoE, based on human perceptual thresholds and an acceptability level. Additionally, we explore the possibility of using the proposed model as a new subjective scale. To validate the proposed model, subjective experiments with fully controlled still and moving stereoscopic images with different types of view asymmetries are conducted. The performance is evaluated by comparing objective predictions with subjective scores for various levels of view discrepancy that might provoke visual discomfort.
8

Sahu, Amit K. "Objective assessment of image quality (OAIQ) in fluorescence-enhanced optical imaging." College Station, Tex.: Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1068.

Full text
9

Chintala, Bala Venkata Sai Sundeep. "Objective Perceptual Quality Assessment of JPEG2000 Image Coding Format Over Wireless Channel." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17785.

Full text
Abstract:
A dominant source of Internet traffic today is compressed images. In modern multimedia communications, image compression plays an important role. The image compression standards set by the Joint Photographic Experts Group (JPEG) include JPEG and JPEG2000. The expert group created the JPEG image compression standard so that still pictures could be compressed to be sent by e-mail or displayed on a webpage, and to make high-resolution digital photography possible. This standard was originally based on the Discrete Cosine Transform (DCT), a mathematical method used to convert a sequence of data to the frequency domain. In the year 2000, however, a new standard was proposed by the expert group, which came to be known as JPEG2000. The difference between the two is that the latter provides better compression efficiency. There is also a downside to the new format: the computation required to achieve the same compression efficiency as the original JPEG format is higher. JPEG is a lossy compression standard, which can discard some less important information without causing any noticeable perceptual differences, whereas in lossless compression the primary purpose is to reduce the number of bits required to represent the original image samples without any loss of information. Areas of application of the JPEG image compression standard include the Internet, digital cameras, and printing and scanning peripherals. In this thesis work, a simulator-like setup is needed for conducting the objective quality assessment. An image is given as input to our wireless communication system, its data size is varied (e.g., 5%, 10%, 15%), and a signal-to-noise ratio (SNR) value is given as input for JPEG2000 compression. This compressed image is then passed through a JPEG encoder and transmitted over a Rayleigh fading channel.
The image obtained after applying these constraints to the original image is then decoded at the receiver, and the inverse discrete wavelet transform (IDWT) is applied to invert the JPEG2000 compression. The coefficients are scalar-quantized to reduce the number of bits needed to represent them without loss of image quality. The final image is then displayed on the screen. After decoding, the original input image is compared with the images of varying data size for a given SNR value at the receiver. In particular, an objective perceptual quality assessment based on the Structural Similarity (SSIM) index, implemented in MATLAB, is provided.
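The SSIM index used above combines luminance, contrast, and structure comparisons. A simplified single-window sketch follows (the standard formulation computes a local sliding-window map and averages it; this global variant is for illustration only, with the usual stabilizing constants):

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Single-window SSIM: one statistic over the whole image instead of
    the usual local sliding-window map (a simplification for illustration)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2   # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return float((2 * mu_x * mu_y + c1) * (2 * cov + c2)
                 / ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)))

# A toy reference image and a noisy version of it.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
noisy = np.clip(ref + rng.normal(0, 20, ref.shape), 0, 255).astype(np.uint8)
print(f"SSIM(ref, ref)   = {ssim_global(ref, ref):.3f}")
print(f"SSIM(ref, noisy) = {ssim_global(ref, noisy):.3f}")
```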
10

Reilly, Andrew James. "Uniform framework for the objective assessment and optimisation of radiotherapy image quality." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5589.

Full text
Abstract:
Image guidance has rapidly become central to current radiotherapy practice. A uniform framework is developed for evaluating image quality across all imaging modalities by modelling the 'universal phantom': breaking any phantom down into its constituent fundamental test objects and applying appropriate analysis techniques to these through the construction of an automated analysis tree. This is implemented practically through the new software package 'IQWorks' and is applicable to both radiotherapy and diagnostic imaging. For electronic portal imaging (EPI), excellent agreement was observed with two commercial solutions: the QC-3V phantom and PIPS Pro software (Standard Imaging), and the EPID QC phantom and epidSoft software (PTW). However, PIPS Pro's noise correction strategy appears unnecessary for all but the highest-frequency modulation transfer function (MTF) point, and its contrast-to-noise ratio (CNR) calculation is not as described. Serious flaws identified in epidSoft included erroneous file handling leading to incorrect MTF and signal-to-noise ratio (SNR) results, and a sensitivity to phantom alignment resulting in overestimation of MTF points by up to 150% for alignment errors of only ±1 pixel. The 'QEPI1' is introduced as a new EPI performance phantom. Being a simple lead square with a central square hole, it is inexpensive and straightforward to manufacture, yet it enables calculation of a wide range of performance metrics at multiple locations across the field of view. Measured MTF curves agree with those of traditional bar-pattern phantoms to within the limits of experimental uncertainty. An intercomparison of the Varian aS1000 and aS500-II detectors demonstrated an improvement in MTF for the aS1000 of 50–100% over the clinically relevant range 0.4–1 cycles/mm, yet with a corresponding reduction in CNR by a factor of √2. Both detectors therefore offer advantages for different clinical applications.
Characterisation of cone-beam CT (CBCT) facilities on two Varian On-Board Imaging (OBI) units revealed that only two out of six clinical modes had been calibrated by default, leading to errors of the order of 400 HU for some modes and materials – well outside the ±40 HU tolerance. Following calibration, all curves agreed sufficiently for dose calculation accuracy within 2%. CNR and MTF experiments demonstrated that a boost in MTF f50 of 20–30% is achievable by using a 512² rather than a 384² matrix, but with a reduction in CNR of the order of 30%. The MTF f50 of the single-pulse half-resolution radiographic mode of the Varian PaxScan 4030CB detector was measured in the plane of the detector as 1.0±0.1 cycles/mm using both a traditional tungsten edge and the new QEPI1 phantom. For digitally reconstructed radiographs (DRRs), a reduction in CT slice thickness resulted in an expected improvement in MTF in the patient scanning direction but a deterioration in the orthogonal direction, with the optimum slice thickness being 1–2 mm. Two general-purpose display devices were calibrated against the DICOM Greyscale Standard Display Function (GSDF) to within the ±20% limit for Class 2 review devices. By providing an approach to image quality evaluation that is uniform across all radiotherapy imaging modalities, this work enables consistent end-to-end optimisation of this fundamental part of the radiotherapy process, thereby supporting enhanced use of image guidance at all relevant stages of radiotherapy and better supporting the clinical decisions based on it.
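The edge-based MTF measurement mentioned in this abstract (tungsten edge, QEPI1) can be illustrated with a minimal textbook computation: differentiate the edge spread function to obtain the line spread function, then normalise the magnitude of its discrete Fourier transform. This is a generic sketch, not the IQWorks implementation, and the sample ESF values are hypothetical:

```python
import math

def mtf_from_edge(esf, pixel_pitch_mm):
    """Estimate the MTF from a 1-D edge spread function (ESF):
    LSF = finite difference of the ESF, MTF = |DFT(LSF)| normalised
    to the zero-frequency value. Returns (cycles/mm, MTF)."""
    lsf = [esf[i + 1] - esf[i] for i in range(len(esf) - 1)]
    n = len(lsf)
    freqs, mtf = [], []
    for k in range(n // 2 + 1):
        re = sum(lsf[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(lsf[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        freqs.append(k / (n * pixel_pitch_mm))
        mtf.append(math.hypot(re, im))
    dc = mtf[0]
    return freqs, [m / dc for m in mtf]

# Hypothetical step edge blurred over one pixel: the MTF falls off smoothly.
esf = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
freqs, mtf = mtf_from_edge(esf, pixel_pitch_mm=0.2)
print([round(m, 3) for m in mtf])  # → [1.0, 0.924, 0.707, 0.383, 0.0]
```

In practice the edge is oversampled across many detector rows before this step; the sketch keeps only the core transform.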
APA, Harvard, Vancouver, ISO, and other styles
11

Ansari, Yousuf Hameed, and Sohaib Ahmed Siddiqui. "Quality Assessment for HEVC Encoded Videos: Study of Transmission and Encoding Errors." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13656.

Full text
Abstract:
There is a demand for video quality measurement in modern video applications, specifically in wireless and mobile communication. In real-time video streaming, the quality of the video degrades due to factors such as encoder and transmission errors. HEVC/H.265 is considered one of the most promising codecs for the compression of ultra-high-definition videos. In this research, full-reference video quality assessment is performed. Raw-format reference videos were taken from the Texas database to build the test video data set. The videos were encoded in HEVC format using the HM9 reference software, and encoding errors were introduced during the encoding process by adjusting the QP values. To introduce packet loss into the videos, a real-time environment was created: videos were sent from one system to another over the UDP protocol using NETCAT, and packet loss was induced at different packet loss ratios using NETEM. After the compilation of the video data set, two kinds of analysis were performed to assess video quality. Subjective analysis was carried out with human subjects, and objective analysis was performed by applying five quality metrics: PSNR, SSIM, UIQI, VFI and VSNR. Finally, the objective measurement scores were compared with the subjective scores using classical correlation methods.
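The full-reference pipeline described above (an objective score per video, then correlation with subjective opinion) can be sketched in a few lines. PSNR and the Pearson correlation are standard definitions; the per-video scores and MOS values below are hypothetical:

```python
import math

def psnr(ref_frame, test_frame, peak=255.0):
    """Full-reference PSNR between two frames given as flat pixel lists."""
    mse = sum((r - t) ** 2 for r, t in zip(ref_frame, test_frame)) / len(ref_frame)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def pearson(x, y):
    """Pearson linear correlation, one of the classical methods used to
    compare objective scores with subjective opinion scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-video mean PSNR values and subjective MOS (1-5 scale):
psnr_scores = [28.1, 31.5, 34.0, 37.2, 40.8]
mos = [2.1, 2.9, 3.5, 4.1, 4.6]
print(round(pearson(psnr_scores, mos), 3))  # → 0.993
```

A real study computes PSNR per frame and averages over each sequence before correlating; the correlation step is identical.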
APA, Harvard, Vancouver, ISO, and other styles
12

Grönborg, Felix, and Otto Ortega. "Evaluation of Methods for Image Analysis with the Purpose of Imitating Subjective Quality Assessment." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-174849.

Full text
Abstract:
This thesis project was carried out in collaboration with Husfoto AB, with the aim of investigating the potential of using machine learning algorithms to automatically classify images as approved or not approved according to a subjectively established quality standard. Both machine learning methods and more traditional image analysis methods were used, tested and compared in terms of the quality measures precision, recall, accuracy and balanced accuracy. The machine learning methods used were a linear and a non-linear variant of the Support Vector Machine (SVM), as well as XGBoost. The manual methods were a variant of White Patch and two custom methods developed for the project. The image defects examined were white balance and the colour of the sky in exterior images, and the data was collected and annotated in parallel with the work. Although the amount of data was limited, better results than expected were obtained, which indicates that machine learning can be used successfully for classification with subjective assessments as the reference. The results show that the quality measures of several methods perform comparably in many cases, with some notable differences. By using Husfoto's subjective assessment to create an objective measure with the methods used, the results show that for certain defects the methods reach over 80% accuracy.

The thesis work was carried out at the Department of Science and Technology (ITN) at the Institute of Technology, Linköping University.
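Balanced accuracy, one of the quality measures compared in this thesis, is the mean of the per-class recalls, which keeps a classifier honest when one class dominates. A minimal sketch with hypothetical labels (not Husfoto data):

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy: mean per-class recall, robust to class imbalance
    (useful when far more images are approved than rejected, or vice versa)."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# Hypothetical labels: 1 = approved image, 0 = rejected image.
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 1]
plain = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(round(plain, 3), round(balanced_accuracy(y_true, y_pred), 3))  # → 0.75 0.667
```

Note how plain accuracy (0.75) flatters the classifier here, while balanced accuracy exposes the weak recall on the minority "rejected" class.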

APA, Harvard, Vancouver, ISO, and other styles
13

Palit, Robin. "Computational Tools and Methods for Objective Assessment of Image Quality in X-Ray CT and SPECT." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/268492.

Full text
Abstract:
Computational tools of use in the objective assessment of image quality for tomography systems were developed for computer processing units (CPU) and graphics processing units (GPU) in the image quality lab at the University of Arizona. Fast analytic x-ray projection code called IQCT was created to compute the mean projection image for cone beam multi-slice helical computed tomography (CT) scanners. IQCT was optimized to take advantage of the massively parallel architecture of GPUs. CPU code for computing single photon emission computed tomography (SPECT) projection images was written calling upon previous research in the image quality lab. IQCT and the SPECT modeling code were used to simulate data for multimodality SPECT/CT observer studies. The purpose of these observer studies was to assess the benefit in image quality of using attenuation information from a CT measurement in myocardial SPECT imaging. The observer chosen for these studies was the scanning linear observer. The tasks for the observer were localization of a signal and estimation of the signal radius. For the localization study, area under the localization receiver operating characteristic curve (A(LROC)) was computed as A(LROC)^Meas = 0.89332 ± 0.00474 and A(LROC)^No = 0.89408 ± 0.00475, where "Meas" implies the use of attenuation information from the CT measurement, and "No" indicates the absence of attenuation information. For the estimation study, area under the estimation receiver operating characteristic curve (A(EROC)) was quantified as A(EROC)^Meas = 0.55926 ± 0.00731 and A(EROC)^No = 0.56167 ± 0.00731. Based on these results, it was concluded that the use of CT information did not improve the scanning linear observer's ability to perform the stated myocardial SPECT tasks. 
The risk to the patient of the CT measurement was quantified in terms of excess effective dose as 2.37 mSv for males and 3.38 mSv for females. Another image quality tool generated within this body of work was a singular value decomposition (SVD) algorithm to reduce the dimension of the eigenvalue problem for tomography systems with rotational symmetry. Agreement between the results of this reduced-dimension SVD algorithm and those of a standard SVD algorithm is shown for a toy problem. The use of SVD toward image quality metrics such as the measurement and null space is also presented.
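The areas under the ROC-type curves reported above can be estimated nonparametrically from observer test statistics. The sketch below is the generic Wilcoxon-Mann-Whitney estimate, not the scanning-linear-observer code used in the study, and the test-statistic values are hypothetical:

```python
def auc_wilcoxon(absent, present):
    """Nonparametric area under the ROC curve (Wilcoxon-Mann-Whitney):
    the fraction of (signal-absent, signal-present) image pairs that the
    observer's test statistic ranks correctly, counting ties as half."""
    wins = 0.0
    for a in absent:
        for p in present:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5
    return wins / (len(absent) * len(present))

# Hypothetical observer test statistics for the two image classes:
absent = [0.1, 0.4, 0.35, 0.8, 0.25]
present = [0.6, 0.9, 0.55, 0.7, 1.1]
print(auc_wilcoxon(absent, present))  # → 0.88
```

LROC/EROC areas additionally require the localisation or estimate to be correct before a pair counts as a win, but the pairwise-ranking core is the same.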
APA, Harvard, Vancouver, ISO, and other styles
14

Axelson, Per-Erik. "Quality Measures of Halftoned Images (A Review)." Thesis, Linköping University, Department of Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1138.

Full text
Abstract:

This study is a thesis for the Master of Science degree in Media Technology and Engineering at the Department of Science and Technology, Linköping University. It was accomplished from November 2002 to May 2003.

Objective image quality measures play an important role in various image processing applications, and this paper focuses on quality measures applied to halftoned images. Digital halftoning is the process of generating a pattern of binary pixels that creates the illusion of a continuous-tone image. Algorithms built on this technique produce results of very different quality and characteristics. To evaluate and improve their performance, it is important to have robust and reliable image quality measures. This literature survey gives a general description of digital halftoning and of halftone image quality methods.
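Digital halftoning as defined above can be illustrated with the classic Floyd-Steinberg error-diffusion algorithm, one of the algorithms such a survey typically covers; a minimal sketch on a toy flat-grey patch:

```python
def floyd_steinberg(gray, width, height):
    """Floyd-Steinberg error diffusion: threshold each pixel to 0 or 255 and
    push the quantisation error onto unvisited neighbours with the classic
    7/16, 3/16, 5/16, 1/16 weights, creating the illusion of continuous tone."""
    img = [list(row) for row in gray]  # mutable float copy
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            old = img[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = new
            err = old - new
            if x + 1 < width:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < width:
                    img[y + 1][x + 1] += err * 1 / 16
    return out

# A flat mid-grey patch: roughly half the binary pixels should turn on.
h, w = 8, 8
halftone = floyd_steinberg([[128.0] * w for _ in range(h)], w, h)
on = sum(v == 255 for row in halftone for v in row)
print(on, "of", h * w, "pixels on")
```

Because the diffused error preserves the local mean, the density of "on" pixels approximates the input grey level, which is exactly the property halftone quality measures try to verify.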

APA, Harvard, Vancouver, ISO, and other styles
15

Kim, Jae-Seung. "Objective image quality assessment for positron emission tomography : planar (2D) and volumetric (3D) human and model observer studies /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/5836.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Servetkienė, Vaida. "Gyvenimo kokybės daugiadimensis vertinimas, identifikuojant kritines sritis." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2013~D_20131115_113802-40756.

Full text
Abstract:
Disertacijoje nagrinėjama aktuali gyvenimo kokybės vertinimo problema. Mokslinėje literatūroje vis dar nėra vienodo gyvenimo kokybės suvokimo ir mokslinio apibrėžimo. Kiekvienas asmuo šiai sąvokai gali suteikti savo prasminį atspalvį, tačiau moksle gyvenimo kokybė turi būti konkrečiais rodikliais išreiškiama ir matuojama sąvoka, susijusi su visuomenės gerove konkrečioje šalyje. Šio darbo tyrimo objektas yra gyvenimo kokybės vertinimas. Disertacijos tikslas – išanalizavus mokslinius tarpdisciplininius požiūrius į gyvenimo kokybės vertinimą, parengti daugiadimensį gyvenimo kokybės vertinimo modelį ir nustatyti kritines jos sritis Lietuvoje. Darbe atlikta gyvenimo kokybės apibrėžties, koncepcijų ir praktikoje taikomų gyvenimo kokybės vertinimo metodų lyginamoji analizė, konceptualizuota gyvenimo kokybės sąvoka, ją traktuojant kaip ekonomikos mokslo tyrimo objektą, atspindintį valstybės vykdomos ekonominės politikos efektyvumą, nustatytos pagrindinės gyvenimo kokybės sritys, pateikta gyvenimo kokybės koncepcija ir pasiūlyti jos vertinimo metodologiniai principai, sudarytas daugiadimensis gyvenimo kokybės vertinimo modelis, jį taikant, įvertinta Lietuvos gyventojų gyvenimo kokybė ES šalių kontekste ir nustatytos kritinės jos sritys.
The dissertation examines the topical issue of assessment of the quality of life. Scientific literature still does not offer a uniform perception and scientific definition of the quality of life. Every person can provide this concept with his own interpretation, but in science the quality of life must be a concept expressed by means of specific indicators and measured in relation to the welfare of the population in a specific country. The object of research in this dissertation is the assessment of the quality of life. The aim of the dissertation is, upon analysing interdisciplinary scientific approaches to assessment of the quality of life, to develop a multidimensional model of assessment of the quality of life and to identify the critical areas of the quality of life in Lithuania. The author of the dissertation has carried out a comparative analysis of the definition and conceptions of the quality of life and the quality of life assessment methods employed in practice, conceptualised the concept of the quality of life treating it as an object of economic research which reflects the efficiency of the economic policy of the state, identified the key areas of the quality of life, provided a conception of the quality of life and proposed methodological principles for its assessment, developed a multidimensional quality of life assessment model and, by applying the model, evaluated the quality of life of the Lithuanian population in the context of the EU Member States and... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
17

Bensaied, Ghaly Rania. "Subjective quality assessment : a study on the grading scales : illustrations for stereoscopic and 2D video content." Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0013/document.

Full text
Abstract:
Les recommandations (normes) élaborées par l'UIT (l'Union Internationale de Télécommunications) précisent avec rigueur les conditions dans lesquelles les tests subjectifs de qualité visuelle doivent avoir lieu: la salle de test, les conditions de visualisation, le protocole d'évaluation, les méthodes de post-traitement des scores accordées par les évaluateurs, etc... Pourtant, les études de l'état de l'art mettent en évidence que des nombreuses inadvertances perdurent au niveau théorique et expérimental: (1) la modélisation statistique précise des scores attribués par les observateurs humains à un certain type de contenu reste encore inconnue, (2) la différence théorique et applicative entre les évaluations sur des échelles discrètes et continues ne fait pas encore l'objet d'une étude dédiée et (3) l'impact sémantique (psycho-cognitif) des étiquettes associées à l'échelle d'évaluation est toujours invoqué mais jamais évalué. Notre thèse offre un cadre méthodologique et expérimental permettant de: 1. Modéliser avec précision statistique la distribution des scores attribués par les observateurs et évaluer l'impact pratique d'une telle modélisation, 2. Établir la relation théorique entre les scores attribués par les observateurs sur une échelle continue et une échelle discrète, 3. Établir le cadre statistique permettant de quantifier l'impact sémantique induit par les étiquettes sémantiques associées à l'échelle d'évaluation, 4. Spécifier et réaliser un cadre expérimental de référence, à vocation d'utilisation ultérieure par les instances de l'UIT
Quality evaluation is an ever-fascinating field, covering at least a century of research emerging from psychology, psychophysics, sociology, marketing, medicine… While for visual quality evaluation the ITU recommendations pave the way towards well-configured, consensual evaluation conditions granting reproducibility and comparability of the experimental results, an in-depth analysis of state-of-the-art studies shows at least three open challenges, related to: (1) continuous vs. discrete evaluation scales, (2) the statistical distribution of the scores assigned by the observers and (3) the usage of semantic labels on the grading scales. Thus, the present thesis turns these challenges into three research objectives: 1. bridging at the theoretical level the continuous and the discrete scale evaluation procedures, and investigating whether the number of classes on a discrete scale is meaningful for interpreting the results or is just a parameter; studying the theoretical influence of the statistical model of the evaluation results and of the size of the panel (number of observers) on the accuracy of the results is also targeted; 2. quantifying the bias induced in subjective video quality experiments by the semantic labels (e.g. Excellent, Good, Fair, Poor and Bad) generally associated with discrete grading scales; 3. designing and deploying an experimental test-bed able to support their precision and statistical relevance. With respect to these objectives, the main contributions are at the theoretical, methodological and experimental levels
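The first research objective, relating continuous and discrete grading scales, can be illustrated by the simplest possible bridge: uniform binning of a 0-100 continuous scale into five classes. This is only a hedged sketch with hypothetical ratings, not the mapping established in the thesis:

```python
def to_discrete(score_0_100, n_classes=5):
    """Map a continuous 0-100 rating onto a discrete n-class scale
    (1 = worst ... n = best) by uniform binning."""
    width = 100.0 / n_classes
    return min(n_classes, int(score_0_100 / width) + 1)

# Hypothetical continuous ratings from one observer panel:
continuous = [12.0, 35.0, 47.0, 58.0, 66.0, 79.0, 91.0]
discrete = [to_discrete(s) for s in continuous]
mos_cont = sum(continuous) / len(continuous)  # MOS on the 0-100 scale
mos_disc = sum(discrete) / len(discrete)      # MOS on the 1-5 scale
print(discrete, round(mos_cont, 1), round(mos_disc, 2))  # → [1, 2, 3, 3, 4, 4, 5] 55.4 3.14
```

Note that 55.4/100 and 3.14/5 are not simple rescalings of each other: binning loses information, which is precisely why the continuous-vs-discrete relation deserves a theoretical treatment.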
APA, Harvard, Vancouver, ISO, and other styles
18

Sanches, Silvio Ricardo Rodrigues. "Avaliação objetiva de qualidade de segmentação." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-26062014-111553/.

Full text
Abstract:
A avaliação de qualidade de segmentação de vídeos tem se mostrado um problema pouco investigado no meio científico. Apesar disso, estudos recentes na área resultaram em algumas métricas que têm como finalidade avaliar objetivamente a qualidade da segmentação produzida pelos algoritmos. Tais métricas consideram as diferentes formas em que os erros ocorrem (fatores perceptuais) e seus parâmetros são ajustados de acordo com a aplicação em que se pretende utilizar os vídeos segmentados. Neste trabalho apresentam-se: i) uma avaliação da métrica que representa o estado-da-arte, demonstrando que seu desempenho varia de acordo com o algoritmo; ii) um método subjetivo para avaliação de qualidade de segmentação; e iii) uma nova métrica perceptual objetiva, derivada do método subjetivo aqui proposto, capaz de encontrar o melhor ajuste dos parâmetros de dois algoritmos de segmentação encontrados na literatura, quando os vídeos por eles segmentados são utilizados na composição de cenas em ambientes de Teleconferência Imersiva.
Assessment of video segmentation quality is a problem seldom investigated by the scientific community. Nevertheless, recent studies have presented some objective metrics to evaluate algorithms. Such metrics consider the different ways in which segmentation errors occur (perceptual factors), and their parameters are adjusted according to the application for which the segmented frames are intended. In this work: i) we demonstrate empirically that the performance of existing metrics changes according to the segmentation algorithm; ii) we develop a subjective method to evaluate segmentation quality; and iii) we contribute a new objective metric, derived from experiments with the subjective method, that adjusts the parameters of two bilayer segmentation algorithms found in the literature when these algorithms are used to compose scenes in immersive teleconferencing environments.
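Objective segmentation quality against a ground-truth mask is often summarised with pixel-wise precision, recall and F-measure. This is a generic sketch with toy masks; the perceptual metrics discussed in the thesis additionally weight where and how the errors occur:

```python
def segmentation_scores(gt_mask, pred_mask):
    """Frame-level segmentation quality against a ground-truth mask:
    precision, recall and F-measure over foreground pixels."""
    tp = sum(g and p for g, p in zip(gt_mask, pred_mask))
    fp = sum((not g) and p for g, p in zip(gt_mask, pred_mask))
    fn = sum(g and (not p) for g, p in zip(gt_mask, pred_mask))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Toy 4x2 binary masks flattened to lists (1 = foreground):
gt = [1, 1, 0, 0, 1, 1, 0, 0]
pred = [1, 1, 1, 0, 1, 0, 0, 0]
print(segmentation_scores(gt, pred))  # → (0.75, 0.75, 0.75)
```

Such counts treat every misclassified pixel equally; the point made in the abstract is that human viewers do not, which motivates perceptual weighting.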
APA, Harvard, Vancouver, ISO, and other styles
19

Xiao, Yao. "User perceived video quality modelling on mobile devices for Vp9 and H265 encoders." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/81842/1/Yao_Xiao_Thesis.pdf.

Full text
Abstract:
This study constructs performance prediction models to estimate the end-user perceived video quality on mobile devices for the latest video encoding techniques, VP9 and H.265. Both subjective and objective video quality assessments were carried out to collect data and select the most desirable predictors. Using statistical regression, two models were generated, achieving 94.5% and 91.5% prediction accuracy respectively, depending on whether the predictor derived from the objective assessment is involved. These proposed models can be directly used by media industries for video quality estimation, and will ultimately help them ensure a positive end-user quality of experience on future mobile devices after the adoption of the latest video encoding technologies.
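A statistical regression model of the kind described, mapping a predictor to perceived quality, can be sketched with ordinary least squares; the single predictor, log bitrate, and the training MOS values below are hypothetical, not the study's data:

```python
import math

def fit_linear(xs, ys):
    """Ordinary least squares y = a*x + b, the simplest form of the
    statistical regression used to build a quality prediction model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def r_squared(xs, ys, a, b):
    """Share of MOS variance explained by the model (prediction accuracy)."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical training data: log2(bitrate in kbit/s) vs mean opinion score.
log_rate = [math.log2(r) for r in [250, 500, 1000, 2000, 4000]]
mos = [1.8, 2.7, 3.4, 4.2, 4.7]
a, b = fit_linear(log_rate, mos)
r2 = r_squared(log_rate, mos, a, b)
print(round(a, 3), round(r2, 3))  # slope per doubling of bitrate, explained variance
```

A multi-predictor model (e.g. adding an objective metric as a second regressor) follows the same least-squares logic with a design matrix instead of a single column.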
APA, Harvard, Vancouver, ISO, and other styles
20

Smith, Kathryn Elizabeth. "THE URGE TO PURGE: AN ECOLOGICAL MOMENTARY ASSESSMENT OF PURGING DISORDER AND BULIMIA NERVOSA." Kent State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=kent1416829240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Nouri, Nedia. "Évaluation de la qualité et transmission en temps-réel de vidéos médicales compressées : application à la télé-chirurgie robotisée." Thesis, Vandoeuvre-les-Nancy, INPL, 2011. http://www.theses.fr/2011INPL049N/document.

Full text
Abstract:
L'évolution des techniques chirurgicales, par l'utilisation de robots, permet des interventions mini-invasives avec une très grande précision et ouvre des perspectives d'interventions chirurgicales à distance, comme l'a démontré la célèbre expérimentation « Opération Lindbergh » en 2001. La contrepartie de cette évolution réside dans des volumes de données considérables qui nécessitent des ressources importantes pour leur transmission. La compression avec pertes de ces données devient donc inévitable. Celle-ci constitue un défi majeur dans le contexte médical, celui de l'impact des pertes sur la qualité des données et leur exploitation. Mes travaux de thèse concernent l'étude de techniques permettant l'évaluation de la qualité des vidéos dans un contexte de robotique chirurgicale. Deux approches méthodologiques sont possibles : l'une à caractère subjectif et l'autre à caractère objectif. Nous montrons qu'il existe un seuil de tolérance à la compression avec pertes de type MPEG2 et H.264 pour les vidéos chirurgicales. Les résultats obtenus suite aux essais subjectifs de la qualité ont permis également de mettre en exergue une corrélation entre les mesures subjectives effectuées et une mesure objective utilisant l'information structurelle de l'image. Ceci permet de prédire la qualité telle qu'elle est perçue par les observateurs humains. Enfin, la détermination d'un seuil de tolérance à la compression avec pertes a permis la mise en place d'une plateforme de transmission en temps réel sur un réseau IP de vidéos chirurgicales compressées avec le standard H.264 entre le CHU de Nancy et l'école de chirurgie
The digital revolution in the medical environment speeds up the development of remote robotic-assisted surgery, and consequently the transmission of medical numerical data such as pictures or videos becomes possible. However, medical video transmission requires significant bandwidth, and high compression ratios are only accessible with lossy compression. Research effort has therefore been focused on video compression algorithms such as MPEG2 and H.264. In this work, we are interested in whether compression thresholds and the associated bitrates are coherent with the acceptance level of quality in the field of medical video. To evaluate compressed medical video quality, we performed a subjective assessment test with a panel of human observers using a DSCQS (Double-Stimulus Continuous Quality Scale) protocol derived from the ITU-R BT.500-11 recommendations. Promising results suggest that 3 Mbit/s could be sufficient (a compression ratio of around 90:1 compared to the original 270 Mbit/s) as far as perceived quality is concerned. Furthermore, determining a tolerance to lossy compression allowed the implementation of a platform for real-time transmission over an IP network of surgical videos compressed with the H.264 standard, between the University Hospital of Nancy and the school of surgery
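In the DSCQS protocol mentioned above, each observer rates both the (hidden) reference and the processed sequence on a continuous scale, and quality is summarised by the score differences. A minimal sketch of that summary with a hypothetical observer panel:

```python
def dscqs_dmos(ref_scores, test_scores):
    """DSCQS-style summary: each observer rates both the reference and the
    processed sequence on a continuous 0-100 scale; degradation is summarised
    by the mean per-observer score difference (a difference MOS)."""
    diffs = [r - t for r, t in zip(ref_scores, test_scores)]
    return sum(diffs) / len(diffs)

# Hypothetical panel of six observers rating one compressed surgical video:
ref = [82, 90, 78, 85, 88, 80]
test = [75, 84, 70, 80, 79, 74]
print(round(dscqs_dmos(ref, test), 2))  # → 6.83
```

Working with differences rather than absolute scores cancels much of each observer's personal bias, which is the main reason DSCQS uses paired stimuli.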
APA, Harvard, Vancouver, ISO, and other styles
22

Bršel, Boris. "Porovnání objektivních a subjektivních metrik kvality videa pro Ultra HDTV videosekvence." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241052.

Full text
Abstract:
This master's thesis deals with the quality assessment of Ultra HDTV video sequences using objective metrics. The thesis theoretically describes the coding of the selected codecs, H.265/HEVC and VP9, objective video quality metrics, and subjective methods for assessing the quality of video sequences. The next chapter deals with applying the H.265/HEVC and VP9 codecs to selected raw-format video sequences, from which the database of test sequences arises. The quality of these videos is afterwards measured by objective metrics and a selected subjective method, and the results are compared for the purpose of finding the most consistent correlations between the objective metrics and the subjective assessment.
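The consistency between an objective metric and subjective assessment is commonly measured with rank correlation, which only cares whether the metric orders the sequences the same way the observers do. A self-contained Spearman sketch with hypothetical per-video values:

```python
def spearman(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks,
    with tied values receiving their average rank."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for the tied run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical: objective metric values vs subjective MOS for five test videos.
metric = [0.62, 0.81, 0.74, 0.55, 0.90]
mos = [3.1, 4.2, 3.8, 2.6, 4.6]
print(spearman(metric, mos))  # → 1.0 (identical ordering)
```

Pearson correlation on the raw scores is usually reported alongside; Spearman is preferred when the metric-to-MOS mapping is monotonic but nonlinear.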
APA, Harvard, Vancouver, ISO, and other styles
23

Ouni, Sonia. "Evaluation de la qualité des images couleur. Application à la recherche & à l'amélioration des images." Thesis, Reims, 2012. http://www.theses.fr/2012REIMS034.

Full text
Abstract:
Le domaine de recherche dans l'évaluation objective de la qualité des images couleur a connu un regain d'intérêt ces dernières années. Les travaux sont essentiellement dictés par l'avènement des images numérique et par les nouveaux besoins en codage d'images (compression, transmission, restauration, indexation,…). Jusqu'à présent la meilleure évaluation reste visuelle (donc subjective) soit par des techniques psychophysiques soit par évaluation experte. Donc, il est utile, voire nécessaire, de mettre en place des critères et des mesures objectifs qui produisent automatiquement des notes de qualité se rapprochant le plus possible des notes de qualité données par l'évaluation subjective. Nous proposons, tout d'abort, une nouvelle métrique avec référence d'évaluation de la qualité des images couleur, nommée Delta E globale, se base sur l'aspect couleur et intègre les caractéristiques du système visuel humain (SVH). Les performances ont été mesurées dans deux domaines d'application la compression et la restauration. Les expérimentations réalisées montrent une corrélation importante entre les résultats obtenus et l'appréciation subjective. Ensuite, nous proposons une nouvelle approche d'évaluation sans référence de la qualité des images couleur en se basant sur les réseaux de neurones : compte tenu du caractère multidimensionnel de la qualité d'images, une quantification de la qualité a été proposée en se basant sur un ensemble d'attributs formant le descripteur PN (Précision, Naturalité). La précision traduit la netteté et la clarté. Quant à la naturalité, elle traduit la luminosité et la couleur. Pour modéliser le critère de la couleur, trois métriques sans référence ont été définies afin de détecter la couleur dominante dans l'image, la proportion de cette couleur et sa dispersion spatiale. Cette approche se base sur les réseaux de neurones afin d'imiter la perception du SVH. Deux variantes de cette approche ont été expérimentées (directe et progressive). 
Les résultats obtenus ont montré la performance de la variante progressive par rapport à la variante directe. L'application de l'approche proposée dans deux domaines : dans le contexte de la restauration, cette approche a servi comme un critère d'arrêt automatique pour les algorithmes de restauration. De plus, nous l'avons utilisé au sein d'un système d'estimation de la qualité d'images afin de détecter automatiquement le type de dégradation contenu dans une image. Dans le contexte de l'indexation et de la recherche d'images, l'approche proposée a servi d'introduire la qualité des images de la base comme index. Les résultats expérimentaux ont montré l'amélioration des performances du système de recherche d'images par le contenu en utilisant l'index qualité ou en réalisant un raffinement des résultats avec le critère de qualité
The research area of objective quality assessment of colour images has seen renewed interest in recent years. The work is primarily driven by the advent of digital images and by new needs in image coding (compression, transmission, restoration, indexing, ...). So far the best evaluation remains visual (hence subjective), either through psychophysical techniques or through expert evaluation. It is therefore useful, even necessary, to establish objective criteria and measures that automatically produce quality scores as close as possible to those given by subjective evaluation. We propose, first, a new full-reference metric to assess the quality of colour images, called global Delta E, which is based on colour appearance and incorporates features of the human visual system (HVS). Its performance was measured in two areas of application, compression and restoration, and the experiments carried out show a significant correlation between its results and subjective assessment. Then, we propose a new no-reference quality assessment approach for colour images based on neural networks: given the multidimensional nature of image quality, a quantification of quality is proposed based on a set of attributes forming the PN descriptor (Precision, Naturalness). Precision reflects sharpness and clarity; naturalness reflects brightness and colour. To model the colour criterion, three no-reference metrics were defined to detect the dominant colour in the image, the proportion of that colour and its spatial dispersion. The approach uses neural networks to mimic HVS perception, and two variants of it were tested (direct and progressive).
The results showed the better performance of the progressive variant compared to the direct one. The proposed approach was applied in two areas. In the context of restoration, it served as an automatic stopping criterion for restoration algorithms; in addition, we used it within an image quality estimation system to automatically detect the type of degradation contained in an image. In the context of indexing and image retrieval, the proposed approach was used to introduce the quality of the images in the database as an index. The experimental results showed improved performance of the content-based image retrieval system when using the quality index or when refining the results with the quality criterion
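A per-pixel colour-difference average is the natural starting point for a metric like the global Delta E described above. The sketch below uses the plain CIE76 distance without the HVS weighting the thesis adds, and the pixel values are hypothetical:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two CIELAB colours."""
    return math.dist(lab1, lab2)

def mean_delta_e(ref_lab, test_lab):
    """Naive global score: average per-pixel Delta E over the image.
    (The thesis metric additionally weights this with HVS characteristics.)"""
    return sum(delta_e76(a, b) for a, b in zip(ref_lab, test_lab)) / len(ref_lab)

# Two tiny 'images' given as lists of (L*, a*, b*) pixels, values hypothetical:
ref = [(50.0, 0.0, 0.0), (70.0, 10.0, -10.0)]
deg = [(48.0, 0.0, 0.0), (70.0, 13.0, -6.0)]
print(mean_delta_e(ref, deg))  # → 3.5
```

CIELAB is designed so that equal Euclidean distances are roughly equally perceptible, which is why Delta E in Lab, rather than RGB distance, anchors colour-based quality metrics.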
APA, Harvard, Vancouver, ISO, and other styles
24

Begazo, Dante Coaquira. "Método de avaliação de qualidade de vídeo por otimização condicionada." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-09032018-152946/.

Full text
Abstract:
Esta Tese propõe duas métricas objetivas para avaliar a percepção de qualidade de vídeos sujeitos a degradações de transmissão em uma rede de pacotes. A primeira métrica usa apenas o vídeo degradado, enquanto que a segunda usa os vídeos de referência e degradado. Esta última é uma métrica de referência completa (FR - Full Reference) chamada de QCM (Quadratic Combinational Metric) e a primeira é uma métrica sem referência (NR - No Reference) chamada de VQOM (Viewing Quality Objective Metric). Em particular, o procedimento de projeto é aplicado à degradação de variação de atraso de pacotes (PDV - Packet Delay Variation). A métrica NR é descrita por uma spline cúbica composta por dois polinômios cúbicos que se encontram suavemente num ponto chamado de nó. Para o projeto de ambas métricas, colhem-se opiniões de observadores a respeito das sequências de vídeo degradadas que compõem o conjunto. A função objetiva inclui o erro quadrático total entre as opiniões e suas estimativas paramétricas, ainda consideradas como expressões algébricas. Acrescentam-se à função objetiva três condições de igualdades de derivadas tomadas no nó, cuja posição é especificada dentro de uma grade fina de pontos entre o valor mínimo e o valor máximo do fator de degradação. Essas condições são afetadas por multiplicadores de Lagrange e adicionadas à função objetiva, obtendo-se o lagrangiano, que é minimizado pela determinação dos coeficientes subótimos dos polinômios em função de cada valor do nó na grade. Finalmente escolhe-se o valor do nó que produz o erro quadrático mínimo, determinando assim os valores finais para dos coeficientes do polinômio. Por outro lado, a métrica FR é uma combinação não-linear de duas métricas populares, a PSNR (Peak Signal-to-Noise Ratio) e a SSIM (Structural Similarity Index). 
Um polinômio completo de segundo grau de duas variáveis é usado para realizar a combinação, porque é sensível a ambas métricas constituintes, evitando o sobreajuste em decorrência do baixo grau. Na fase de treinamento, o conjunto de valores dos coeficientes do polinômio é determinado através da minimização do erro quadrático médio para as opiniões sobre a base de dados de treino. Ambas métricas, a VQOM e a QCM, são treinadas e validadas usando uma base de dados, e testadas com outra independente. Os resultados de teste são comparados com métricas NR e FR recentes através de coeficientes de correlação, obtendo-se resultados favoráveis para as métricas propostas.
This dissertation proposes two objective metrics for estimating the human perception of quality for video subject to transmission degradation over a packet network. The first metric just uses traffic data, while the second one uses both the degraded and the reference video sequences. That is, the latter is a full-reference (FR) metric called the Quadratic Combinational Metric (QCM), and the former is a no-reference (NR) metric called the Viewing Quality Objective Metric (VQOM). In particular, the design procedure is applied to packet delay variation (PDV) impairments, whose compensation or control is very important for maintaining quality. The NR metric is described by a cubic spline composed of two cubic polynomials that meet smoothly at a point called a knot. As the first step in the design of either metric, observers score a training set of degraded video sequences. The objective function for designing the NR metric includes the total squared error between the scores and their parametric estimates, still regarded as algebraic expressions. In addition, the objective function is augmented by three equality constraints on the derivatives at the knot, whose position is specified within a fine grid of points between the minimum value and the maximum value of the degradation factor. These constraints are affected by Lagrange multipliers and added to the objective function to obtain the Lagrangian, which is minimized by the suboptimal polynomial coefficients determined as a function of each knot in the grid. Finally, the knot value is selected that yields the minimum squared error. By means of the selected knot value, the final values of the polynomial coefficients are determined. On the other hand, the FR metric is a nonlinear combination of two popular metrics, namely the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM).
A complete second-degree polynomial in two variables is used for the combination, since it is sensitive to both constituent metrics while its low degree avoids overfitting. In the training phase, the coefficients of this polynomial are determined by minimizing the mean square error against the subjective opinions in the training database. Both metrics, the VQOM and the QCM, are trained and validated on one database and tested on an independent one. The test results are compared with recent NR and FR metrics by means of correlation coefficients, with favorable results for the proposed metrics.
APA, Harvard, Vancouver, ISO, and other styles
25

Hessel, Charles. "La décomposition automatique d'une image en base et détail : Application au rehaussement de contraste." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLN017/document.

Full text
Abstract:
In this CIFRE thesis, a collaboration between the Center of Mathematics and their Applications, École Normale Supérieure de Cachan, and the company DxO, we tackle the problem of the additive decomposition of an image into base and detail. Such a decomposition is a fundamental tool in image processing. For applications to professional photo editing in DxO Photolab, a core requirement is the absence of artifacts. For instance, in the context of contrast enhancement, in which the base is reduced and the detail increased, minor artifacts become highly visible. The distortions thus introduced are unacceptable from the point of view of a photographer. The objective of this thesis is to single out and study the most suitable filters to perform this task, to improve the best ones and to define new ones. This requires a rigorous measure of the quality of the base plus detail decomposition. We examine two classic artifacts (halo and staircasing) and discover three more sorts that are equally crucial: contrast halo, compartmentalization, and dark halo. This leads us to construct five adapted test patterns to measure these artifacts. We end up ranking the optimal filters based on these measurements, and arrive at a clear decision about the best filters. Two filters stand out, including one proposed in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
26

Bednarz, Robin. "Analýza kvality obrazu v digitálních televizních systémech." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217810.

Full text
Abstract:
This diploma thesis deals with the analysis of picture quality in digital television systems and contains a theoretical description of subjective and objective picture quality assessment methods. The thesis presents short-term and long-term analyses of the picture quality of DVB-T terrestrial television. Measurements and experiments were carried out using the Rohde & Schwarz DVQ picture quality analyzer and the MPEG-2 Quality Monitor and MPEG-2 Elementary Stream Analyzer software.
APA, Harvard, Vancouver, ISO, and other styles
27

Vlad, Raluca Ioana. "Une méthode pour l'évaluation de la qualité des images 3D stéréoscopiques." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00925280.

Full text
Abstract:
In the context of growing interest in stereoscopic systems, but with no reproducible methods to estimate their quality, our work contributes to a better understanding of the human perception and judgment mechanisms related to the multi-dimensional concept of stereoscopic image quality. To this end, our approach relied on a number of tools: we proposed an adapted framework to structure the process of analyzing stereoscopic image quality, we implemented an experimental system in our laboratory in order to conduct several tests, we created three stereoscopic image databases containing precise configurations, and finally we conducted several experiments based on these image collections. The large quantity of information obtained through these experiments was used to build a first mathematical model explaining the overall perception of stereoscopic quality as a function of the physical parameters of the images studied.
APA, Harvard, Vancouver, ISO, and other styles
28

Friberg, Annika. "Interaktionskvalitet - hur mäts det?" Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20810.

Full text
Abstract:
Technical developments have led to the broadcasting of massive amounts of information at high velocities. We must learn to handle this flow. To maximize the benefits of new technologies and avoid the problems that this immense information flow brings, interaction quality should be studied. We must adjust interfaces to the user, because the user does not have the ability to adapt to and sort overly large amounts of information. We must develop systems that make humans more efficient when using interfaces. To adjust interfaces to user needs and limitations, knowledge about human cognitive processes is required. When cognitive workload is studied, it is important that a flexible, easily accessible and non-intrusive technique is used to get unbiased results; at the same time, reliability is of great importance. To design interfaces with high interaction quality, a technique to evaluate them is required. The aim of this paper is to establish a method well suited for measuring interaction quality. For measuring interaction quality, a combination of subjective and physiological methods is recommended. This comprises a combination of functional near-infrared spectroscopy, a physiological method that measures brain activity using light sources and detectors placed on the frontal lobe; electrodermal activity, a physiological method that measures brain activity using electrodes placed over the scalp; and the NASA Task Load Index, a subjective, multidimensional method based on card sorting that measures perceived cognitive workload on a continuous scale. Measuring with these methods can result in increased interaction quality in interactive, physical and digital interfaces. An estimation of interaction quality can contribute to minimizing interaction errors, thus improving the user's interaction experience.
APA, Harvard, Vancouver, ISO, and other styles
29

Bishtawi, Wajih Walid. "Objective measurement of subjective image quality." Thesis, 1996. http://spectrum.library.concordia.ca/3799/1/MM10824.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Wu, Chen-Chi, and 吳楨祺. "Subjective Visual Quality Assessment for Image Retargeting." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/14746718777168515054.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
Academic year 100
With the development of various image/video processing technologies, it has become convenient to share images and videos across different multimedia devices. However, because these devices have different display screen sizes, images and videos must be resized before delivery to users. Since image/video retargeting causes visual information loss or distortion, a method for evaluating the performance of retargeting algorithms is much desired. To evaluate the visual quality of retargeted images, we design a subjective visual quality assessment method that precisely records human subjective perception. Our method includes three parts. First, participants cast an overall subjective visual quality vote for each image. Then, participants judge the subjective image information loss. Finally, participants choose the subjective reasons for their votes. By analyzing correlations between the results of the subjective assessment method and an objective visual quality metric, we verify whether the objective metric's results are consistent with the subjective ones. In addition, statistical results are used to analyze relations between image attributes and to discuss the different types of distortion caused by different image retargeting algorithms.
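Consistency between an objective metric and subjective votes, as checked in the correlation analysis above, is conventionally quantified with the Pearson (linear) and Spearman (rank-order) correlation coefficients. A minimal numpy sketch, offered for orientation rather than as the thesis's procedure (the rank step assumes no tied scores):

```python
import numpy as np

def pearson(a, b):
    """Pearson linear correlation coefficient of two score vectors."""
    za = (a - a.mean()) / a.std()
    zb = (b - b.mean()) / b.std()
    return float(np.mean(za * zb))

def spearman(a, b):
    """Spearman rank-order correlation: Pearson correlation of the ranks.
    Assumes no ties; tied values would need averaged (fractional) ranks."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(a), rank(b))
```

A monotonic but nonlinear relation between objective scores and subjective votes yields a Spearman coefficient of 1 while the Pearson coefficient stays below 1, which is why both are usually reported together.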
APA, Harvard, Vancouver, ISO, and other styles
31

Yang, Ping-Hui, and 楊炳輝. "Subjective and Objective Assessment of Sleep Quality in College Students." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/83296646845326998385.

Full text
Abstract:
Master's thesis
Central Taiwan University of Science and Technology
Master's Program, Department of Medical Laboratory Science and Biotechnology
Academic year 102
Background: Poor sleep quality is thought to cause psychological and physical disease. College students commonly have sleep problems; however, studies of sleep disturbance and sleep behavior in college students are limited. We conducted both subjective and objective evaluations to explore sleep quality and sleep-related problems among college students. Materials and methods: The participants were college students aged 18-30. After giving informed consent, participants completed subjective questionnaires, including the Pittsburgh Sleep Quality Index (PSQI), the Epworth Sleepiness Scale (ESS) and the Snore Outcomes Survey (SOS). Participants with PSQI>5, ESS>10 or SOS<55 were further assessed using wrist actigraphy, oximetry or polysomnography (PSG): actigraphy for 7 days to record 24-hour activity and sleep time; oximetry for 3 days to assess oxygen saturation and heart rate; and PSG for 1-2 days to evaluate the sleep cycle and sleep apnea by recording EEG, EOG, EMG, ECG, airflow, thoracoabdominal motion, oxygen saturation, snoring and body position. Data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 12. Results: Nine hundred and twenty-nine students participated in this study, and 900 completed the questionnaires (response rate 97%). Descriptive statistics revealed BMI≧24 in 18% (n = 161), PSQI>5 in 67.11% (n = 604), ESS>10 in 33.50% (n = 302) and SOS<55 in 2.40% (n = 22). The results show that over half of the students have poor sleep quality and 20% or more have daytime sleepiness. Significant correlations were found with gender, exercise habits and BMI (p<0.05). Actigraphy and oximetry in 160 volunteers indicated normal oxygen saturation (96%), a mean weekday bedtime of 01:10, an average sleep time of 459.9 minutes and a sleep efficiency of 94.2%.
According to the PSG measurements, 44% of the students had an abnormal Apnea-Hypopnea Index (AHI>5) and were suspected of having sleep apnea. Conclusion: Poor sleep quality is a significant problem among students at our institution. The results may be related to delayed bedtime, lack of exercise habits, higher BMI and less napping. In addition, almost 44% of the students were suspected of having sleep apnea. The results show the importance of, and need for, investigating the sleep quality of college students. Although it is time-consuming and subjects are not easy to recruit, we will continue to gather subjective and objective sleep-related parameters. We hope to help young adults understand their personal sleep quality and to prevent the physiological and psychological diseases caused by sleep problems.
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Qiang. "Objective image and video quality assessment with applications." 2009. http://hdl.handle.net/10106/1640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

GUO, Jinjiang. "Contributions to objective and subjective visual quality assessment of 3d models." Thesis, 2016. http://www.theses.fr/2016LYSEI099/document.

Full text
Abstract:
In the computer graphics realm, three-dimensional graphical data, generally represented by triangular meshes, have become commonplace, and are deployed in a variety of application processes (e.g., smoothing, compression, remeshing, simplification, rendering, etc.). However, these processes inevitably introduce artifacts, altering the visual quality of the rendered 3D data. Thus, in order to perceptually drive the processing algorithms, there is an increasing need for efficient and effective subjective and objective visual quality assessments to evaluate and predict the visual artifacts. In this thesis, we first present a comprehensive survey on different sources of artifacts in digital graphics, and current objective and subjective visual quality assessments of the artifacts. Then, we introduce a newly designed subjective quality study based on evaluations of the local visibility of geometric artifacts, in which observers were asked to mark areas of 3D meshes that contain noticeable distortions. The collected perceived distortion maps are used to illustrate several perceptual functionalities of the human visual system (HVS), and serve as ground truth to evaluate the performance of well-known geometric attributes and metrics for predicting the local visibility of distortions. Our second study aims to evaluate the visual quality of texture-mapped 3D models subjectively and objectively. To achieve these goals, we introduced 136 processed models with both geometric and texture distortions, conducted a paired-comparison subjective experiment, and invited 101 subjects to evaluate the visual quality of the models under two rendering protocols. Driven by the collected subjective opinions, we propose two objective visual quality metrics for textured meshes, relying on optimal combinations of geometry and texture quality measures. These proposed perceptual metrics outperform their counterparts in terms of correlation with human judgment.
APA, Harvard, Vancouver, ISO, and other styles
34

Chang, Chuan, and 張詮. "BJND-based Stereo Image Coding and Objective Quality Assessment." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/x4686d.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Electrical Engineering
Academic year 104
3D entertainment devices have been growing rapidly in recent years; from 3D films to smartphones, stereoscopic video can give users an immersive, in-the-scene feeling. However, stereoscopic video data are much larger than 2D data. To solve this problem, many stereoscopic video coding standards have been proposed in succession, most of which exploit the high similarity between the base-view and non-base-view videos to improve coding efficiency. Traditional stereoscopic video coding schemes do not consider the binocular perception effects caused by the distortions of the base-view and non-base-view videos, respectively. Binocular perception includes binocular fusion, binocular rivalry and binocular suppression. Consequently, stereoscopic video coding that accounts for binocular perception has become increasingly important. Solving this problem requires changing not only the coding scheme but also the objective stereoscopic video quality assessment, since the quality of experience of stereoscopic video involves both binocular perception and the quality of the 2D videos. This thesis proposes a stereoscopic image coding scheme and an objective image quality assessment, both based on the recently proposed Binocular Just Noticeable Distortion (BJND) model, which can measure the binocular perception effects of distorted stereoscopic images. With BJND, we can identify binocular redundancies in stereo images; removing those redundancies improves coding efficiency while maintaining the quality of experience. We also design a BJND-based objective quality assessment algorithm whose scores correlate highly with subjective quality.
APA, Harvard, Vancouver, ISO, and other styles
35

Chen, Chin-Hua, and 陳金徽. "Objective Assessment Index for Radiation Quality of Digital Image." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/94212648782359244144.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Earth Sciences (in-service program)
Academic year 94
Along with the rapid development of remote sensing sensors and computer techniques, the widespread application of remotely sensed image data in the spatial information field is a clear current trend. The quality of the acquired raw imagery and the image processing techniques used directly affect image interpretation, automatic classification and the quality of the generated ortho-image. Therefore, evaluating image quality objectively, in a way that approximates human vision, has become a meaningful research task.   In this study, we introduce two new image quality indices, UQI and SSIM, for digital aerial images and high-resolution satellite images, which are the main sources for digital mapping. We apply several image processing techniques, including image enhancement, haze removal, image fusion, image compression and resampling, and study the applicability of the new and traditional indices for radiometric quality evaluation.   The computation results show that the two new indices, UQI and SSIM, are more applicable than the traditional objective evaluation indices. The results also confirm the characteristics of several image processing methods used in the mapping procedure, such as image fusion, resampling and image compression.
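For reference, the UQI evaluated in this study has a simple closed form (Wang and Bovik's universal image quality index). A single-window numpy sketch follows; the index as published is computed over sliding windows and averaged, which is omitted here:

```python
import numpy as np

def uqi(x, y):
    """Universal Quality Index for one window. It jointly captures loss of
    correlation, luminance distortion and contrast distortion, and equals 1
    only when the two windows are identical. Undefined (zero denominator)
    when both inputs are constant."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))
    return float(4 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2)))
```

Any distortion of the window (noise, blur, luminance shift) pulls the value below 1, which is what makes the index usable for comparing processing methods such as fusion or compression.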
APA, Harvard, Vancouver, ISO, and other styles
36

"Full-reference objective visual quality assessment for images and videos." 2012. http://library.cuhk.edu.hk/record=b5549488.

Full text
Abstract:
Visual quality assessment (VQA) plays a fundamental role in multimedia applications. Since the human visual system (HVS) is the ultimate viewer of the visual information, subjective VQA is considered to be the most reliable way to evaluate visual quality. However, subjective VQA is time-consuming, expensive, and not feasible for on-line manipulation. Therefore, automatic objective VQA algorithms, known as visual quality metrics, have been developed and widely used in practical applications. However, it is well known that the popular visual quality metrics, such as Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), etc., correlate poorly with the human perception of visual quality. The development of more accurate objective VQA algorithms becomes of paramount importance to future visual information processing and communication applications.
In this thesis, full-reference objective VQA algorithms are investigated. Three parts of the work are discussed as briefly summarized below.
The first part concerns image quality assessment. It starts with the investigation of a popular image quality metric, i.e., the Structural Similarity Index (SSIM). A novel weighting function is proposed and incorporated into SSIM, which leads to a substantial performance improvement in terms of matching subjective ratings. Inspired by this work, a novel image quality metric is developed by separately evaluating two distinct types of spatial distortions: detail losses and additive impairments. The proposed method demonstrates the state-of-the-art predictive performance on most of the publicly-available subjective quality image databases.
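The SSIM index that this part builds on has a well-known closed form; a minimal single-window sketch is given below for orientation. The published metric computes this over local Gaussian-weighted windows and averages the results, and the weighting function proposed in the thesis is not reproduced here.

```python
import numpy as np

def ssim_window(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """SSIM of Wang et al. for a single window: the product of luminance,
    contrast and structure comparisons, stabilized by constants c1 and c2."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))
    return float(((2 * mx * my + c1) * (2 * cxy + c2))
                 / ((mx**2 + my**2 + c1) * (vx + vy + c2)))
```

Identical windows score exactly 1; a pure luminance shift leaves the contrast/structure term at 1 but lowers the luminance term, so the score drops below 1.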
The second part investigates video quality assessment. We extend the proposed image quality metric to assess video quality by exploiting motion information and temporal HVS characteristics, e.g., eye movement spatio-velocity contrast sensitivity function, temporal masking using motion vectors, temporal pooling considering human cognitive behaviors, etc. It has been experimentally verified that the proposed video quality metric can achieve good performance on both standard-definition and high-definition video databases. We also propose a novel method to measure temporal inconsistency, an essential type of video temporal distortions. It is incorporated into the MSE for video quality assessment, and experiments show that it can significantly enhance MSE's predictive performance.
The aforementioned algorithms only analyze luminance distortions. In the last part, we investigate chrominance distortions for a specific application: anaglyph image generation. Anaglyph images are one of the 3D displaying techniques, enabling stereoscopic perception on traditional TVs, PC monitors, projectors, and even on paper. Three perceptual color attributes are taken into account for the color distortion measure, i.e., lightness, saturation, and hue, based on which a novel anaglyph image generation algorithm is developed via approximation in the CIELAB color space.
Detailed summary in vernacular field only.
Li, Songnan.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2012.
Includes bibliographical references (leaves 122-130).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
Dedication --- p.ii
Acknowledgments --- p.iii
Abstract --- p.vi
Publications --- p.viii
Nomenclature --- p.xii
Contents --- p.xvii
List of Figures --- p.xx
List of Tables --- p.xxii
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Motivation and Objectives --- p.1
Chapter 1.2 --- Overview of Subjective Visual Quality Assessment --- p.3
Chapter 1.2.1 --- Viewing condition --- p.4
Chapter 1.2.2 --- Candidate observer selection --- p.4
Chapter 1.2.3 --- Test sequence selection --- p.4
Chapter 1.2.4 --- Structure of test session --- p.5
Chapter 1.2.5 --- Assessment procedure --- p.6
Chapter 1.2.6 --- Post-processing of scores --- p.7
Chapter 1.3 --- Overview of Objective Visual Quality Assessment --- p.8
Chapter 1.3.1 --- Classification --- p.8
Chapter 1.3.2 --- HVS-model-based metrics --- p.9
Chapter 1.3.3 --- Engineering-based metrics --- p.21
Chapter 1.3.4 --- Performance evaluation method --- p.28
Chapter 1.4 --- Thesis Outline --- p.29
Chapter I --- Image Quality Assessment --- p.32
Chapter 2 --- Weighted Structural Similarity Index based on Local Smoothness --- p.33
Chapter 2.1 --- Introduction --- p.33
Chapter 2.2 --- The Structural Similarity Index --- p.33
Chapter 2.3 --- Influence of the Smooth Region on SSIM --- p.35
Chapter 2.3.1 --- Overall performance analysis --- p.35
Chapter 2.3.2 --- Performance analysis for individual distortion types --- p.37
Chapter 2.4 --- The Proposed Weighted-SSIM --- p.40
Chapter 2.5 --- Experiments --- p.41
Chapter 2.6 --- Summary --- p.43
Chapter 3 --- Image Quality Assessment by Decoupling Detail Losses and Additive Impairments --- p.44
Chapter 3.1 --- Introduction --- p.44
Chapter 3.2 --- Motivation --- p.45
Chapter 3.3 --- Related Works --- p.47
Chapter 3.4 --- The Proposed Method --- p.48
Chapter 3.4.1 --- Decoupling additive impairments and useful image contents --- p.48
Chapter 3.4.2 --- Simulating the HVS processing --- p.56
Chapter 3.4.3 --- Two quality measures and their combination --- p.58
Chapter 3.5 --- Experiments --- p.59
Chapter 3.5.1 --- Subjective quality image databases --- p.59
Chapter 3.5.2 --- Parameterization --- p.60
Chapter 3.5.3 --- Overall performance --- p.61
Chapter 3.5.4 --- Statistical significance --- p.62
Chapter 3.5.5 --- Performance on individual distortion types --- p.64
Chapter 3.5.6 --- Hypotheses validation --- p.66
Chapter 3.5.7 --- Complexity analysis --- p.69
Chapter 3.6 --- Summary --- p.70
Chapter II --- Video Quality Assessment --- p.71
Chapter 4 --- Video Quality Assessment by Decoupling Detail Losses and Additive Impairments --- p.72
Chapter 4.1 --- Introduction --- p.72
Chapter 4.2 --- Related Works --- p.73
Chapter 4.3 --- The Proposed Method --- p.74
Chapter 4.3.1 --- Framework --- p.74
Chapter 4.3.2 --- Decoupling additive impairments and useful image contents --- p.75
Chapter 4.3.3 --- Motion estimation --- p.76
Chapter 4.3.4 --- Spatio-velocity contrast sensitivity function --- p.77
Chapter 4.3.5 --- Spatial and temporal masking --- p.79
Chapter 4.3.6 --- Two quality measures and their combination --- p.80
Chapter 4.3.7 --- Temporal pooling --- p.81
Chapter 4.4 --- Experiments --- p.82
Chapter 4.4.1 --- Subjective quality video databases --- p.82
Chapter 4.4.2 --- Parameterization --- p.83
Chapter 4.4.3 --- With/without decoupling --- p.84
Chapter 4.4.4 --- Overall predictive performance --- p.85
Chapter 4.4.5 --- Performance on individual distortion types --- p.88
Chapter 4.4.6 --- Cross-distortion performance evaluation --- p.89
Chapter 4.5 --- Summary --- p.91
Chapter 5 --- Temporal Inconsistency Measure --- p.92
Chapter 5.1 --- Introduction --- p.92
Chapter 5.2 --- The Proposed Method --- p.93
Chapter 5.2.1 --- Implementation --- p.93
Chapter 5.2.2 --- MSE TIM --- p.94
Chapter 5.3 --- Experiments --- p.96
Chapter 5.4 --- Summary --- p.97
Chapter III --- Application related to Color and 3D Perception --- p.98
Chapter 6 --- Anaglyph Image Generation --- p.99
Chapter 6.1 --- Introduction --- p.99
Chapter 6.2 --- Anaglyph Image Artifacts --- p.99
Chapter 6.3 --- Related Works --- p.101
Chapter 6.3.1 --- Simple anaglyphs --- p.101
Chapter 6.3.2 --- XYZ and LAB anaglyphs --- p.102
Chapter 6.3.3 --- Ghosting reduction methods --- p.103
Chapter 6.4 --- The Proposed Method --- p.104
Chapter 6.4.1 --- Gamma transfer --- p.104
Chapter 6.4.2 --- Converting RGB to CIELAB --- p.105
Chapter 6.4.3 --- Matching color appearance attributes in CIELAB color space --- p.106
Chapter 6.4.4 --- Converting CIELAB to RGB --- p.110
Chapter 6.4.5 --- Parameterization --- p.111
Chapter 6.5 --- Experiments --- p.112
Chapter 6.5.1 --- Subjective tests --- p.112
Chapter 6.5.2 --- Results and analysis --- p.113
Chapter 6.5.3 --- Complexity --- p.115
Chapter 6.6 --- Summary --- p.115
Chapter 7 --- Conclusions --- p.117
Chapter 7.1 --- Contributions of the Thesis --- p.117
Chapter 7.2 --- Future Research Directions --- p.120
Bibliography --- p.122
APA, Harvard, Vancouver, ISO, and other styles
37

Moorthy, Anush Krishna 1986. "Natural scene statistics based blind image quality assessment and repair." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-4965.

Full text
Abstract:
Progress in multimedia technologies has resulted in a plethora of services and devices that capture, compress, transmit and display audiovisual stimuli. Humans -- the ultimate receivers of such stimuli -- now have access to visual entertainment at their homes, their workplaces, and on mobile devices. With increasing visual signals being received by human observers, in the face of degradations that occur due to the capture, compression and transmission processes, an important aspect of the quality of experience of such stimuli is the perceived visual quality. This dissertation focuses on algorithm development for assessing such visual quality of natural images without the need for the 'pristine' reference image, i.e., we develop computational models for no-reference image quality assessment (NR IQA). Our NR IQA model stems from the theory that natural images have certain statistical properties that are violated in the presence of degradations, and quantifying such deviations from naturalness leads to a blind estimate of quality. The proposed modular and easily extensible framework is distortion-agnostic, in that it does not need knowledge of the distortion afflicting the image (contrary to most present-day NR IQA algorithms), and it is not only capable of quality assessment with high correlation with human perception but is also capable of identifying the distortion afflicting the image. This distortion identification, coupled with blind quality assessment, leads to a framework that allows for blind general-purpose image repair, which is the second major contribution of this dissertation. The blind general-purpose image repair framework, and the exemplar algorithm described here, stem from a revolutionary perspective on image repair, in which the framework does not simply attempt to remove the distortion in the image, but does so such that the visual quality at the output is maximized.
Lastly, this dissertation describes a large-scale human subjective study that was conducted at UT to assess human behavior and opinion on the visual quality of videos viewed on mobile devices. The study led to a database of 200 distorted videos, which incorporates previously studied distortions such as compression and wireless packet loss, as well as dynamically varying distortions that change as a function of time, such as frame freezes and temporally varying compression rates. This study -- the first of its kind -- involved over 50 human subjects and resulted in 5,300 summary subjective scores and time-sampled subjective traces of quality for multiple displays. The last part of this dissertation analyzes human behavior and opinion on time-varying video quality, opening up an extremely interesting and relevant field for future research in the area of quality assessment and human behavior.
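The "deviation from naturalness" idea summarized in this abstract is commonly realized with mean-subtracted contrast-normalized (MSCN) coefficients, the natural-scene-statistics feature popularized by the same group's later BRISQUE model. The sketch below is illustrative only: the window size, the normalization constant `c`, and the variance comparison are assumptions for demonstration, not the dissertation's actual algorithm.

```python
import numpy as np

def box_blur(img, k):
    """Separable k x k moving-average filter (zero-padded borders)."""
    kern = np.ones(k) / k
    rows = np.array([np.convolve(r, kern, mode="same") for r in img])
    return np.array([np.convolve(c, kern, mode="same") for c in rows.T]).T

def mscn(img, k=7, c=0.05):
    """Mean-subtracted contrast-normalized coefficients of an image."""
    mu = box_blur(img, k)
    sigma = np.sqrt(np.maximum(box_blur((img - mu) ** 2, k), 0.0))
    return (img - mu) / (sigma + c)

rng = np.random.default_rng(0)
pristine = rng.random((64, 64))        # stand-in for a textured natural scene
degraded = box_blur(pristine, 15)      # strong blur = loss of "naturalness"

# Compare MSCN statistics on the interior (away from border effects):
var_pristine = mscn(pristine)[10:-10, 10:-10].var()
var_degraded = mscn(degraded)[10:-10, 10:-10].var()
```

On this toy example the blur collapses the contrast-normalized statistics; the drop in MSCN variance is the kind of measurable departure from naturalness that a blind model can map to a quality estimate.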
38

Ma, Kede. "Objective Quality Assessment and Optimization for High Dynamic Range Image Tone Mapping." Thesis, 2014. http://hdl.handle.net/10012/8517.

Abstract:
Tone mapping operators aim to compress high dynamic range (HDR) images to low dynamic range ones so as to visualize HDR images on standard displays. Most existing works were demonstrated on specific examples without being thoroughly tested on well-established and subject-validated image quality assessment models. A recent tone mapped image quality index (TMQI) made the first attempt at objective quality assessment of tone mapped images. TMQI consists of two fundamental building blocks: structural fidelity and statistical naturalness. In this thesis, we propose an enhanced tone mapped image quality index (eTMQI) by 1) constructing an improved nonlinear mapping function to better account for the local contrast visibility of HDR images and 2) developing an image-dependent statistical naturalness model to quantify the unnaturalness of tone mapped images based on a subjective study. Experiments show that the modified structural fidelity and statistical naturalness terms in eTMQI better correlate with subjective quality evaluations. Furthermore, we propose an iterative optimization algorithm for tone mapping. The advantages of this algorithm are twofold: 1) eTMQI and TMQI can be compared in a more straightforward way; 2) better-quality tone mapped images can be automatically generated by using eTMQI as the optimization goal. Numerical and subjective experiments demonstrate that eTMQI is a superior objective quality assessment metric for tone mapped images and consistently outperforms TMQI.
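For context, the kind of nonlinear compressive mapping that tone mapping operators apply (and that the thesis's improved mapping function refines inside the structural-fidelity term) can be sketched with the classic global Reinhard operator. This is a generic illustration, not the thesis's method; the key value `a = 0.18` is a conventional default, not a parameter from the thesis.

```python
import numpy as np

def reinhard_tonemap(hdr_lum, a=0.18):
    """Global Reinhard operator: scale luminance by the scene 'key',
    then compress with L/(1+L) into [0, 1)."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(hdr_lum + eps)))  # log-average luminance
    scaled = a * hdr_lum / log_avg
    return scaled / (1.0 + scaled)

# Six decades of dynamic range squeezed monotonically into [0, 1):
hdr = np.geomspace(1e-2, 1e4, 100)
ldr = reinhard_tonemap(hdr)
```

The monotone, saturating shape of `L/(1+L)` is what preserves ordering of luminances while compressing the range, which is why objective indices such as TMQI then need a separate structural-fidelity term to judge how much local contrast the compression destroyed.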
39

Elgström, Henrik. "Assessment of image quality in x-ray fluoroscopy based on Model observers as an objective measure for quality control and image optimization." Thesis, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-158081.

Abstract:
BACKGROUND: Although the Image Quality (IQ) indices calculated by objective model observers have more favourable characteristics than the Figures of Merit (FOM) derived from the more common subjective evaluations of modern digital diagnostic fluoroscopy units, such as CDRAD or the Leeds test objects, practical issues in the form of limited access to unprocessed raw data and intricate laboratory measurements have made the conventional computational methods too inefficient and laborious. One approach of the Statistical Decision Variables (CDV) analysis, made available in the FluoroQuality software, overcomes these limitations by calculating the SNR²rate from information based entirely on image frames obtained directly from the imaging system operating in its usual clinical mode.      AIM: The overall aim of the project has been to make the proposed model-observer methodology readily available and verified for use in the common IQ tests that take place in a hospital, based on simple measuring procedures with the default image enhancement techniques turned on. This includes conversion of FluoroQuality to MATLAB, assessment of its applicability on a modern digital unit by comparing measured SNR²rate with the expected linear response predicted by the classical Rose model, assessment of the method's limiting and optimized imaging conditions (with regard to both equipment and software parameters), and dose-efficiency measurements of the SNR²rate/dose-rate Dose-to-Information (DI) index, including both routine quality control of the detector and equipment parameter analyses.      MATERIALS AND METHODS: A Siemens Axiom Artis Zee MP diagnostic fluoroscopy unit, a Diamentor transmission ionisation chamber and a small T20 solid-state detector were used for acquisition of image data and for measurements of Air Kerma-area product rate (KAP-rate) and Entrance Surface Air Kerma rate (ESAK-rate, without backscatter).
Two sets of separate, non-attached test details, of aluminium and tissue-equivalent materials respectively, and a Leeds test object were used as contrasting signals. Dose-efficiency measurements consisted of variation of four different parameters: source-object distance, phantom PMMA thickness, field size and dose-rate setting. In addition to these, the dimensions of the test details as well as computational parameters of the software, like ROI size and number of frames, were included in the theoretical analyses.      RESULTS: FluoroQuality has successfully been converted to MATLAB, and the method has been verified, with SNR²rate in accordance with the Rose model and only small deviations observed in contrast analyses, most likely reflecting the method's sensitivity in observing non-linear effects. Useful guidelines for measurement procedures with regard to accuracy and precision have been derived from the studies. Results from measurements of the (squared) DI indices indicate comparable precision (≤ 8%) with the highest-performing visual evaluations but with higher accuracy and reproducibility. What still remains for the method to compete with subjective routine QC tests is to integrate the SNR²rate measurements into an efficient enough QA program.
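The Rose-model linearity verified in this thesis (SNR² proportional to dose) can be checked in a hedged toy simulation with Poisson counting noise. The background counts and contrast below are arbitrary stand-ins, not the thesis's measurement conditions.

```python
import numpy as np

def estimated_snr2(dose, n_frames=20000, bg=100.0, contrast=5.0, seed=0):
    """Estimate SNR^2 of a low-contrast detail from Poisson 'frames':
    SNR^2 = (mean_signal - mean_background)^2 / var_background."""
    rng = np.random.default_rng(seed)
    background = rng.poisson(bg * dose, n_frames)
    detail = rng.poisson((bg + contrast) * dose, n_frames)
    return (detail.mean() - background.mean()) ** 2 / background.var()

# Quadrupling the dose should roughly quadruple SNR^2 (Rose model):
ratio = estimated_snr2(dose=4.0) / estimated_snr2(dose=1.0)
```

For Poisson statistics the signal difference scales with dose while the background variance also scales with dose, so SNR² grows linearly with dose, which is exactly the expected response the thesis uses to validate the converted software.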
40

Lorsakul, Auranuch. "Objective Assessment of Image Quality: Extension of Numerical Observer Models to Multidimensional Medical Imaging Studies." Thesis, 2015. https://doi.org/10.7916/D8Z60NB4.

Abstract:
Spanning the fields of engineering and medical image quality, this dissertation proposes a novel framework for diagnostic performance evaluation based on objective image-quality assessment, an important step in the development of new imaging devices, acquisitions, or image-processing techniques used by clinicians and researchers. The objective of this dissertation is to develop computational modeling tools that allow comprehensive task-based assessment, including clinical interpretation of images, regardless of image dimensionality. Because of advances in the development of medical imaging devices, several techniques have improved image quality, and the domain of the resulting images has become multidimensional (e.g., 3D+time or 4D). To evaluate the performance of new imaging devices or to optimize various design parameters and algorithms, quality should be measured using an appropriate image-quality figure of merit (FOM). Classical FOMs, such as bias and variance or mean-square error, have been broadly used in the past. Unfortunately, they do not reflect the fact that the principal agent in medical decision-making is frequently a human observer, nor are they aware of the specific diagnostic task. The standard for image-quality assessment is a task-based approach, in which one evaluates human observer performance on a specified diagnostic task (e.g., detection of the presence of lesions). However, having a human observer perform the tasks is costly and time-consuming. To facilitate practical task-based assessment of image quality, a numerical observer is required as a surrogate for human observers. Numerical observers for the detection task have previously been studied both in research and industry; however, little research effort has been devoted to developing one for multidimensional imaging studies (e.g., 4D).
Without numerical-observer tools that accommodate all the information embedded in a series of images, the performance assessment of a particular new technique that generates multidimensional data is complex and limited. Consequently, key questions remain unanswered about how much image quality improves when these new multidimensional images are used for a specific clinical task. To address this gap, this dissertation proposes a new numerical-observer methodology to assess the improvement achieved by newly developed imaging technologies. This numerical-observer approach can be generalized to exploit pertinent statistical information in multidimensional images and accurately predict the performance of a human observer over the complexity of the image domains. Part I of this dissertation aims to develop a numerical observer that accommodates multidimensional images to process correlated signal components and appropriately incorporate them into an absolute FOM. Part II of this dissertation aims to apply the model developed in Part I to selected clinical applications with multidimensional images, including: 1) respiratory-gated positron emission tomography (PET) in lung cancer (3D+t), 2) kinetic parametric PET in head-and-neck cancer (3D+k), and 3) spectral computed tomography (CT) in atherosclerotic plaque (3D+e). The author compares the task-based performance of the proposed approach to that of conventional methods, evaluated with the broadly used signal-known-exactly / background-known-exactly paradigm, in which the specified properties of a target object (e.g., a lesion) are known on highly realistic and clinical backgrounds. A realistic target object is generated with specific properties and applied to a set of images to create pathological scenarios for the performance evaluation, e.g., lesions in the lungs or plaques in the artery.
The regions of interest (ROIs) of the target objects are formed over an ensemble of data measurements under identical conditions and evaluated for the inclusion of useful information from different complex domains (i.e., 3D+t, 3D+k, 3D+e). This work provides an image-quality assessment metric with no dimensional limitation that could help substantially improve assessment of performance achieved from new developments in imaging that make use of high dimensional data.
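A minimal example of the kind of numerical observer this dissertation extends is the Hotelling observer, whose detectability is d'² = Δμᵀ S⁻¹ Δμ for mean class difference Δμ and pooled covariance S. The toy 4×4 "images", lesion amplitude and sample sizes below are invented for illustration and are not the dissertation's data.

```python
import numpy as np

def hotelling_dprime2(signal_imgs, background_imgs):
    """Hotelling-observer detectability d'^2 = dmu^T S^-1 dmu, with S the
    pooled class covariance estimated from sample images."""
    xs = signal_imgs.reshape(len(signal_imgs), -1)
    xb = background_imgs.reshape(len(background_imgs), -1)
    dmu = xs.mean(axis=0) - xb.mean(axis=0)
    s = 0.5 * (np.cov(xs, rowvar=False) + np.cov(xb, rowvar=False))
    s += 1e-6 * np.eye(s.shape[0])        # small ridge for numerical stability
    return float(dmu @ np.linalg.solve(s, dmu))

rng = np.random.default_rng(0)
lesion = np.zeros((4, 4))
lesion[1:3, 1:3] = 0.5                    # signal-known-exactly lesion
backgrounds = rng.normal(0.0, 1.0, (500, 4, 4))
signals = rng.normal(0.0, 1.0, (500, 4, 4)) + lesion
d2 = hotelling_dprime2(signals, backgrounds)
```

With unit-variance white noise the ideal value here is d'² = Σ lesion² = 1; the multidimensional observers discussed in the dissertation generalize exactly this quadratic form to correlated components across time, kinetic, or energy dimensions.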
41

Gislason-Lee, Amber J., A. Kumcu, S. M. Kengyelics, D. S. Brettle, L. A. Treadgold, M. Sivananthan, and A. G. Davies. "How much image noise can be added in cardiac x-ray imaging without loss in perceived image quality?" 2015. http://hdl.handle.net/10454/16978.

Abstract:
Cardiologists use x-ray image sequences of the moving heart, acquired in real time, to diagnose and treat cardiac patients. The amount of radiation used is proportional to image quality; however, exposure to radiation is damaging to patients and personnel. The amount by which radiation dose can be reduced without compromising patient care was determined. For five patient image sequences, increments of computer-generated quantum noise (white + colored) were added to the images, frame by frame using pixel-to-pixel addition, to simulate corresponding increments of dose reduction. The noise-adding software was calibrated for settings used in cardiac procedures, and validated using standard objective and subjective image quality measurements. The degraded images were viewed next to the corresponding original (not degraded) images in a two-alternative forced-choice staircase psychophysics experiment. Seven cardiologists and five radiographers selected their preferred image based on visualization of the coronary arteries. The point of subjective equality, i.e., the level of degradation at which the observer could not perceive a difference between the original and degraded images, was calculated; for all patients the median was a 33% ± 15% dose reduction. This demonstrates that a 33% ± 15% increase in image noise is feasible without being perceived, indicating potential for a 33% ± 15% dose reduction without compromising patient care.
Funded in part by Philips Healthcare, the Netherlands. Part of this work has been performed in the project PANORAMA, co-funded by grants from Belgium, Italy, France, the Netherlands, and the United Kingdom, and the ENIAC Joint Undertaking.
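The dose-reduction simulation described in this study (adding calibrated quantum noise frame by frame) can be sketched under a simple quantum-limited assumption: noise variance scales as 1/dose, so simulating a dose fraction f requires added noise of variance σ²(1/f − 1). The numbers below are illustrative, not the study's calibration, and Gaussian noise stands in for the calibrated white + colored mixture.

```python
import numpy as np

def simulate_dose_reduction(frame, noise_sigma, dose_fraction, rng):
    """Add zero-mean Gaussian noise so the frame's total noise matches
    a lower simulated dose (quantum-limited: variance ~ 1/dose)."""
    extra_var = noise_sigma ** 2 * (1.0 / dose_fraction - 1.0)
    return frame + rng.normal(0.0, np.sqrt(extra_var), frame.shape)

rng = np.random.default_rng(0)
sigma = 2.0
frame = 100.0 + rng.normal(0.0, sigma, (400, 400))   # flat-field test frame
# Simulate the paper's ~33% dose reduction (67% of the original dose):
reduced = simulate_dose_reduction(frame, sigma, 0.67, rng)
```

Because independent noise variances add, the degraded frame's noise level rises to σ/√f, which is the relationship a calibration of such noise-adding software has to reproduce.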
42

Tseng, Chao-Ching, and 曾昭清. "Research on Supplier Assessment, Quality Improvement and Credibility of Subjective and Objective Scoring Schemes – Taking Taiwan's Passive Components as an Example." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/qxrq8f.

Abstract:
Master's thesis
National Changhua University of Education
Department of Business Administration
Academic year 107 (2018)
ABSTRACT: This research uses the evaluation and assessment of appropriate suppliers to make the data more transparent and closer to reality through management of the assessment and evaluation process. Even under circumstances with more subjective than objective observations, more information can still be distilled from the original data. This study uses a passive-component company as a case. The supplier evaluation method has nine sub-indicators, which together add up to 100% and are assessed and explored both subjectively and objectively. In order to effectively establish a monitoring mechanism for supplier evaluation and find possible anomalies (i.e. over- or under-estimations), we use the K-Means method on the evaluations of factories A and B from the first quarter of 2017 to the first quarter of 2018, comparing the original evaluation result with that of the K-Means method categorized into three probable outcomes (i.e. A, B, and C). Grades A, B, and C, which belong to qualified suppliers, represent a score of 95 points or more, a score of 90 but below 95 points, and a score of 85 but below 90 points, respectively. We hope to find and address problems through this study, as well as establish a set of objective evaluations to increase quality and ensure consistent supply in the long run.
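The grading step described above can be sketched with a plain one-dimensional k-means that clusters quarterly supplier scores into three groups and relabels them A/B/C by cluster mean. The sample scores below are invented; the real study compares such cluster labels against the original A/B/C score bands to flag over- or under-estimation.

```python
import numpy as np

def kmeans_grades(scores, k=3, iters=50):
    """1-D k-means on supplier scores; highest-mean cluster -> grade 'A'."""
    centres = np.linspace(scores.min(), scores.max(), k)
    labels = np.zeros(len(scores), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(scores[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = scores[labels == j].mean()
    rank = np.argsort(np.argsort(-centres))     # 0 = highest centre
    return centres, np.array(list("ABC"))[rank[labels]]

# Invented quarterly evaluation scores for illustration:
quarterly = np.array([96.0, 97.0, 95.5, 91.0, 92.0, 90.5, 86.0, 87.0, 85.5])
centres, grades = kmeans_grades(quarterly)
```

A supplier whose k-means cluster disagrees with its original banded grade would be the kind of anomaly the monitoring mechanism is meant to surface.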
43

GIANNITRAPANI, PAOLO. "Study of the subjective effects of blur on the vision of natural images: an abstract, physical parametric model for Image Quality Assessment." Doctoral thesis, 2022. http://hdl.handle.net/11573/1637467.

Abstract:
Looking at a link between blur and visual discomfort, the present thesis views blur as a cause of a cognitive loss, and the discomfort as the immediate consequence of this loss. Among the basic cognitive functions of the Human Visual System (HVS), the detection, recognition, and coarse localization functions are strongly conditioned by individual experience. Conversely, it seems plausible that the fine localization function relies on stabler, inter-subjective functions of the HVS. After a preliminary discussion of the operators and the ML model used (Part II), the approach presented in Part III of this thesis starts from postulating that, in the absence of vision problems, the HVS performs the fine localization of observed objects with the best accuracy allowed by its physical macro-structure. This is a fundamental assumption, because it is known from estimation theory that the maximum accuracy attainable when measuring the fine position of patterns in background noise is determined by the Fisher Information about the positional parameters: the inverse of the Fisher Information yields the minimum estimation variance. The proposed approach is based on an abstract, functional model of the Receptive Fields (RF) of the HVS, referred to as the Virtual Receptive Field (VRF), which is tuned to statistical features of natural scenes. It is a complex-valued operator, orientation-selective both in the space domain and in the spatial frequency domain. The role of the VRF model is to extract the Positional Fisher Information (PFI) as a measure of the pattern localizability loss. In the Image Quality Assessment (IQA) Full Reference (FR) setting, subjective assessments refer to the retinal image and lead to the MOS/DMOS values (Mean Opinion Score / Difference of Mean Opinion Score). The quality calculated by IQA metrics is objective and refers to the image reproduced on the display.
A parametric scoring function maps these metrics onto the MOS/DMOS values and depends critically on the Viewing Distance (VD) of the subject from the monitor on which the image is reproduced. When objective quality estimates for different VDs are required, as in the case of auditoria, cinemas, or classrooms, a re-training procedure must be repeated for each different VD. In the final part of this thesis (Part IV), the problem of the VD is dealt with from a theoretical point of view, and a model of the scoring function is defined for the case of blurred images, where image degradation substantially depends on the VD. Starting from a Fisher Information loss model applied to the Gaussian distortion case in natural images, we see that the VD can be estimated from the data themselves. Several maps are given with the aim of obtaining a DMOS prediction at different distances starting from the data available for a specific distance, without performing new experiments. Moreover, the theoretical results are verified on some of the most popular IQA FR methods, and the problem of VD correction is generalized to other distortions. Finally, the impact of isolated, long, strong, unidirectional edges on early vision is shown. As for the VD correction, an a-priori linear estimator is presented; it does not require rectification through a re-training procedure. Useful maps for detecting the position and the intensity of the PFI losses in an image are given, and the isoluminance colors allow highlighting strong, isolated edges while maintaining a constant intensity at the same edge level. This provides easy visual feedback on the images themselves, showing where the greatest loss of information and the greatest discomfort due to blur occur.
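The core quantitative claim above (blur destroys positional Fisher information, raising the Cramér-Rao bound on localization variance) can be illustrated for a 1-D pattern s(x − θ) in white Gaussian noise, where I(θ) = Σ s′(x)² / σ². The edge signal and blur kernel below are arbitrary test signals, not the thesis's VRF operator.

```python
import numpy as np

def positional_fisher_info(signal, noise_sigma=1.0):
    """Fisher information about the shift of a sampled 1-D pattern in
    white Gaussian noise: I = sum(s'^2) / sigma^2 (CRB: var >= 1/I)."""
    return float(np.sum(np.gradient(signal) ** 2) / noise_sigma ** 2)

x = np.linspace(-10.0, 10.0, 401)
edge = np.tanh(20.0 * x)                           # a sharp, isolated edge
kern = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
kern /= kern.sum()                                 # Gaussian blur kernel
blurred_edge = np.convolve(edge, kern, mode="same")

interior = slice(50, -50)                          # avoid convolution borders
i_sharp = positional_fisher_info(edge[interior])
i_blur = positional_fisher_info(blurred_edge[interior])
```

Blur spreads the edge's gradient energy, so the Fisher information drops and the minimum achievable localization variance 1/I grows: the "localizability loss" the PFI measures.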
44

Kumcu, A., L. Platisa, H. Chen, Amber J. Gislason-Lee, A. G. Davies, P. Schelkens, Y. Taeymans, and W. Philips. "Selecting stimuli parameters for video quality studies based on perceptual similarity distances." 2015. http://hdl.handle.net/10454/16977.

Abstract:
Yes
This work presents a methodology to optimize the selection of multiple parameter levels of an image acquisition, degradation, or post-processing process applied to stimuli intended to be used in a subjective image or video quality assessment (QA) study. It is known that processing parameters (e.g. compression bit-rate) or technical quality measures (e.g. peak signal-to-noise ratio, PSNR) are often non-linearly related to human quality judgment, and the model of either relationship may not be known in advance. Using these approaches to select parameter levels may lead to an inaccurate estimate of the relationship between the parameter and subjective quality judgments – the system's quality model. To overcome this, we propose a method for modeling the relationship between parameter levels and perceived quality distances using a paired comparison parameter selection procedure in which subjects judge the perceived similarity in quality. Our goal is to enable the selection of evenly sampled parameter levels within the considered quality range for use in a subjective QA study. This approach is tested on two applications: (1) selection of compression levels for a laparoscopic surgery video QA study, and (2) selection of dose levels for an interventional X-ray QA study. Subjective scores, obtained from the follow-up single stimulus QA experiments conducted with expert subjects who evaluated the selected bit-rates and dose levels, were roughly equidistant in the perceptual quality space - as intended. These results suggest that a similarity judgment task can help select parameter values corresponding to desired subjective quality levels.
Parts of this work were performed within the Telesurgery project (co-funded by iMinds, a digital research institute founded by the Flemish Government; project partners are Unilabs Teleradiology, SDNsquare and Barco, with project support from IWT) and the PANORAMA project (co-funded by grants from Belgium, Italy, France, the Netherlands, the United Kingdom, and the ENIAC Joint Undertaking).
45

"New Signal Processing Methods for Blur Detection and Applications." Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.54945.

Abstract:
The depth richness of a scene translates into a spatially variable defocus blur in the acquired image. Blurring can mislead computational image understanding; therefore, blur detection can be used for selective image enhancement of blurred regions and for the application of image understanding algorithms to sharp regions. This work focuses on blur detection and its application to image enhancement. It proposes a spatially varying defocus blur detection method based on the quotient of spectral bands; additionally, to avoid computationally intensive algorithms for the segmentation of foreground and background regions, a global threshold defined using weakly textured regions of the input image is proposed. Quantitative results expressed in the precision-recall space, as well as qualitative results, outperform current state-of-the-art algorithms while keeping the computational requirements at competitive levels. Imperfections in the curvature of lenses can lead to image radial distortion (IRD). Computer vision applications can be drastically affected by IRD. This work proposes a novel, robust radial distortion correction algorithm based on alternate optimization using two cost functions tailored for the estimation of the center of distortion and the radial distortion coefficients. Qualitative and quantitative results show the competitiveness of the proposed algorithm. Blur is one of the causes of visual discomfort in stereopsis. Sharpening with traditional algorithms can produce an interdifference that causes eyestrain and visual fatigue for the viewer. A sharpness enhancement method for stereo images that incorporates binocular vision cues and depth information is presented. Perceptual evaluation and quantitative results based on the metric of interdifference deviation are reported; the results of the proposed algorithm are competitive with state-of-the-art stereo algorithms. Digital images and videos are produced every day in astonishing amounts.
Consequently, the market-driven demand for higher-quality content is constantly increasing, which leads to the need for image quality assessment (IQA) methods. A training-free, no-reference image sharpness assessment method based on the singular value decomposition of perceptually weighted normalized gradients of relevant pixels in the input image is proposed. Results over six subject-rated, publicly available databases show competitive performance when compared with state-of-the-art algorithms.
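The "quotient of spectral bands" idea can be caricatured with a per-patch high-frequency energy fraction; blurred patches concentrate their spectrum at low radial frequencies and so score lower. The cutoff and the two test patches below are invented for illustration, not the dissertation's band definitions.

```python
import numpy as np

def high_freq_fraction(patch, cutoff=0.25):
    """Toy spectral sharpness cue: fraction of power-spectrum energy at
    normalized radial frequencies above `cutoff`."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean()))) ** 2
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spec[radius > cutoff].sum() / (spec.sum() + 1e-12))

rng = np.random.default_rng(0)
sharp_patch = rng.random((32, 32))                 # texture-like, broadband
yy, xx = np.mgrid[:32, :32]
blurry_patch = np.sin(2 * np.pi * xx / 32.0)       # slow, low-frequency variation
```

Thresholding such a per-patch cue over a sliding window yields exactly the kind of spatially varying blur map the abstract describes, with the threshold anchored on weakly textured regions.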
Dissertation/Thesis
Doctoral Dissertation Electrical Engineering 2019
46

Duarte, Carlos Rafael Lopes. "Avaliação de Qualidade de Nuvens de Pontos baseada em Aprendizagem Profunda." Master's thesis, 2019. http://hdl.handle.net/10316/87990.

Abstract:
Integrated Master's dissertation in Electrical and Computer Engineering presented to the Faculty of Sciences and Technology
Point Clouds have been one of the most promising and most explored technologies for 3D object representation and 3D mapping. However, the solutions to automatically evaluate the visual quality of this type of content are not yet satisfactory. This Master's thesis proposes a new methodology for Objective Quality Assessment of Point Clouds based on Subjective Quality Assessment results, using Deep Learning, particularly Convolutional Neural Networks. The goal of this work was to retrain pre-trained Convolutional Neural Networks, using Transfer Learning, to predict the visual quality of this type of content. For this it was necessary to find alternative ways to represent Point Clouds, particularly 2D projections, so that they could, together with Subjective Quality Assessment results, be used to train the Convolutional Neural Networks. Finally, after training the networks, an additional dataset was created to confirm the quality of the obtained results. The trained networks were used to predict the quality scores for the additional dataset. After this, performance indexes were computed comparing the predicted scores against the subjective ground truth. The best results for these contents were 0.2601 Root Mean Square Error (RMSE), 0.9822 Pearson Correlation Coefficient (PCC), 0.9137 Spearman's Rank Order Correlation Coefficient (SROCC) and 0.2778 Outlier Ratio (OR). The results obtained show that it is possible to predict the subjective visual quality of Point Clouds using Deep Learning, particularly CNNs, outperforming state-of-the-art results.
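The performance indexes quoted above (PCC and SROCC) are easy to compute directly; a small numpy sketch follows, with invented score vectors and no tie correction in the rank step, as a hedged illustration of how predicted scores are compared against subjective ground truth.

```python
import numpy as np

def pearson_cc(a, b):
    """Pearson linear correlation coefficient."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def spearman_cc(a, b):
    """Spearman rank-order correlation (no tie correction)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson_cc(rank(a), rank(b))

mos = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # subjective ground truth
predicted = np.array([1.1, 1.9, 3.2, 3.9, 5.3])  # invented model outputs
pcc = pearson_cc(mos, predicted)
srocc = spearman_cc(mos, predicted)
```

PCC rewards linearity of the prediction, while SROCC rewards only monotonic ordering, which is why quality-assessment papers typically report both.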
47

Filipe, José Nunes dos Santos. "Improved image rendering for focused plenoptic cameras with extended depth-of-field." Master's thesis, 2019. http://hdl.handle.net/10400.8/3891.

Abstract:
This dissertation presents a research work on rendering images from light fields captured with a focused plenoptic camera with extended depth of field. A basic overview of the 7-dimensional plenoptic function is first given, followed by a description of the two-plane parametrisation. Some of the various methods used for sampling the plenoptic function are then described, namely those equivalent to acquisition functions implemented by the camera gantry, the unfocused plenoptic camera and the focused plenoptic camera. State-of-the-art image rendering algorithms have also been studied, both for focused and unfocused plenoptic cameras. A comprehensive study of the behaviour of focus metrics when applied to images rendered from a focused plenoptic camera is presented, including 34 of the most widely used metrics in the literature. Due to high-frequency artefacts caused by the rendering process, it was found that the currently available focus metrics yield inflated values for this kind of image, leading to misindication, where worse-focused images have better focus measures. Subjective tests were carried out in order to corroborate these results. Then, methods for minimizing the rendering artefacts are proposed. An algorithm for choosing the maximum patch size for each micro-image was designed, in order to minimize the distortions caused by the vignetting effect of the micro-lens. Then an inpainting algorithm, based on anisotropic diffusion inpainting, is used to minimize the remaining artefacts present in the borders between adjacent micro-images. Finally, a method to deal with the redundant information generated by a plenoptic camera with extended depth of field is presented. Three different views of the same scene are rendered, with the three different types of lenses. Then, it is proven that making any linear combination of the images always results in worse focus than selecting the better-focused one.
Thus, a multi-focus image fusion algorithm is proposed to merge the three images captured by an extended depth-of-field camera into a single one, which presents a higher focus level than any of the three individual images.
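The misindication reported above (rendering artefacts inflating focus measures) is easy to reproduce with one of the most common focus metrics, the variance of the Laplacian. The periodic seam-like artefacts below are simulated stripes, not actual micro-image borders, and the images are synthetic stand-ins.

```python
import numpy as np

def variance_of_laplacian(img):
    """Classic focus measure: variance of the 4-neighbour Laplacian."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(lap.var())

def box_blur(img, k):
    """Separable k x k moving-average filter (zero-padded borders)."""
    kern = np.ones(k) / k
    rows = np.array([np.convolve(r, kern, mode="same") for r in img])
    return np.array([np.convolve(c, kern, mode="same") for c in rows.T]).T

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = box_blur(sharp, 5)
with_artifacts = blurred.copy()
with_artifacts[:, ::8] += 0.5        # periodic seam-like rendering artefacts

fm_sharp = variance_of_laplacian(sharp)
fm_blur = variance_of_laplacian(blurred)
fm_art = variance_of_laplacian(with_artifacts)
```

The artefact stripes are pure high-frequency content, so they inflate the metric of the defocused image even though its scene content is no sharper, which is exactly why metrics have to be applied after artefact suppression (patch-size limiting and inpainting) in this pipeline.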
APA, Harvard, Vancouver, ISO, and other styles
48

VRZALOVÁ, Monika. "Role sestry ve screeningu deprese u seniorů." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-260905.

Full text
Abstract:
The diploma thesis deals with depression in older people. The work focuses mainly on identifying and analyzing the role of nurses in screening for depression in older people in primary care, acute care, long-term care, and home care. The thesis is theoretical in orientation and uses the method of design and demonstration. One main goal was set, with five research questions. The main goal was to identify and analyze the role of nurses in screening for depression in the elderly. RQ 1: What is the role of the nurse in screening for depression in the elderly? RQ 2: What is the role of the nurse in primary care in screening for depression in the elderly? RQ 3: What is the role of the nurse in screening for depression in hospitalized patients in acute care? RQ 4: What is the role of the nurse in screening for depression in seniors in long-term and home care? RQ 5: What rating scales and methods are used in screening for depression in the elderly? The thesis introduces the concept of depression. It then specifies the causes of depression in the elderly and the important factors that affect it. It also addresses the differences in the clinical symptomatology of depression in old age, and explains the possibilities and various barriers in the diagnosis of depression. Another chapter introduces the complete geriatric examination, diagnostic classification systems, and possible screening methods and scales for detecting depression in the elderly population. It also covers methods of pharmacological and non-pharmacological treatment and the complications associated with older age. Because of the increased suicide rate caused by depressive disorder, the issue of suicidal behavior in the elderly is introduced. The next chapter deals with the nursing process, which nurses use in practice.
It consists of evaluating the patient's health condition, making a nursing diagnosis, creating a nursing plan, and the subsequent implementation and evaluation. The nursing process is also necessary for providing quality care. For the nursing diagnosis stage, the thesis introduces possible nursing diagnoses for a patient suffering from depression, based on the latest classification. Finally, the role of nurses in screening for depression in the elderly in different health facilities is described, together with their contribution to the timely evaluation of depression in the elderly. This chapter introduces the role of nurses, nursing screening, and collaboration with a physician. The role of nurses in screening for depression in different medical facilities is based on the first phase of the nursing process, assessment. On the basis of objective and subjective information, the nurse assesses the overall health and mental condition of the patient. The thesis primarily investigated the role of the nurse in screening for depression. Domestic and foreign literature was used and processed through content analysis and synthesis. Many of the relevant sources are the results of various studies and meta-analyses, mostly from abroad but also from the Czech Republic. The thesis can serve as a resource for nurses. Its result is e-learning material made available to students of the Faculty of Health and Social Sciences of the University of South Bohemia in Ceske Budejovice through the Moodle learning system.
APA, Harvard, Vancouver, ISO, and other styles
