Dissertations / Theses on the topic 'Metric quality assessment'


Consult the top 50 dissertations / theses for your research on the topic 'Metric quality assessment.'


1

Mariana, Valerie Ruth. "The Multidimensional Quality Metric (MQM) Framework: A New Framework for Translation Quality Assessment." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4312.

Abstract:
This document is a supplement to the article entitled “The Multidimensional Quality Metric (MQM) Framework: A New Framework for Translation Quality Assessment,” which has been accepted for publication in the upcoming January volume of JoSTrans, the Journal of Specialized Translation. The article is a coauthored project between Dr. Alan K. Melby, Dr. Troy Cox, and myself. In this document you will find a preface describing the process of writing the article, an annotated bibliography of sources consulted in my research, a summary of what I learned, and a conclusion that considers the future avenues opened up by this research. Our article examines a new method for assessing the quality of a translation known as the Multidimensional Quality Metric (MQM). In our experiment we set the MQM framework to mirror, as closely as possible, the American Translators Association's (ATA) translator certification exam. To do this we mapped the ATA error categories to corresponding MQM error categories. We acquired a set of 29 student translations and had a group of student raters use the MQM framework to rate them. We measured the practicality of the MQM framework by comparing the time required for ratings to the average time required to rate translations in the industry. In addition, we had two ATA-certified translators rate the anchor translation (a translation scored by every rater in order to have a point of comparison); their ratings were used to verify that the scores given by the student raters were valid. Reliability was also measured: the student raters were not interchangeable, but the measurement estimate of reliability was adequate. The article's goal was to determine the extent to which the Multidimensional Quality Metric framework for translation evaluation is viable (practical, reliable, and valid) when designed to mirror the ATA certification exam.
Overall, the results of the experiment showed that MQM could be a viable way to rate translation quality when operationalized based on the ATA's translator certification exam. This is an important discovery in the field of translation quality, because it shows that MQM could be a viable tool for future researchers. Our experiment suggests that researchers ought to take advantage of the MQM framework because, not only is it free, but any studies completed using the MQM framework would have a common base, making these studies more easily comparable.
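As a rough illustration of how an MQM-style analytic metric turns error annotations into a score, the sketch below applies severity penalty weights per error and normalizes by word count. The specific weights and the linear scoring formula are common MQM-style conventions, not necessarily the exact operationalization used in the article above.

```python
# Hypothetical severity weights, in penalty points per error.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_score(errors, word_count):
    """Score a translation from annotated errors.

    errors: list of (category, severity) tuples produced by a rater;
    word_count: length of the evaluated passage.
    Returns a 0-100 quality score (100 = no penalties).
    """
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    return max(0.0, 100.0 * (1.0 - penalty / word_count))
```

For example, one minor terminology error and one major accuracy error in a 100-word passage would yield a score of 94 under these assumed weights.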
2

Hettiarachchi, Don Lahiru Nirmal Manikka. "An Accelerated General Purpose No-Reference Image Quality Assessment Metric and an Image Fusion Technique." University of Dayton / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1470048998.

3

Snow, Tyler A. "Establishing the Viability of the Multidimensional Quality Metrics Framework." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5593.

Abstract:
The Multidimensional Quality Metrics (MQM) framework is a new system for creating customized translation quality assessment and evaluation metrics designed to fit specific translation needs. In this study I test the viability of MQM to determine whether the framework in its current state is ready for implementation as a quality assessment framework in the translation industry. Other contributions from this study include: (1) online software for designing and using metrics based on the MQM framework; (2) a survey of the typical, real-world quality assessment and evaluation practices of language service providers in the translation industry; and (3) a measurement scale for determining the viability of translation quality assessment and evaluation frameworks such as MQM. The study demonstrates that the MQM framework is a viable solution when it comes to the validity and practicality of creating translation quality metrics for the translation industry. It is not clear whether those metrics can be used reliably without extensive training of qualified assessors on the use of MQM metrics.
4

Berry, Michael. "Assessment of Software Measurement." University of New South Wales, School of Computer Science and Engineering, 2006. http://handle.unsw.edu.au/1959.4/25134.

Abstract:
Background and purpose. This thesis documents a program of five studies concerned with the assessment of software measurement. The goal of this program is to help the software industry improve the information support for managers, analysts, and software engineers by providing evidence of where opportunities for improving measurement and analysis exist. Methods. The first study examined the assessment of software measurement frameworks using models of best practice based on performance/success factors. The software measurement frameworks of thirteen organisations were surveyed, and the association between each factor and the outcome experienced with the organisations' frameworks was evaluated. The subsequent studies were more information-centric and investigated the use of information quality models to assess the support provided for software processes. For these studies, information quality models targeting specific software processes were developed using practitioner focus groups. The models were instantiated in survey instruments and the responses were analysed to identify opportunities to improve the information support provided. The final study compared the use of two different information quality models for assessing and improving information support: assessments of the same quantum of information were made using a targeted model and a generic model, and the assessments were then evaluated by an expert panel to identify which information quality model was more effective for improvement purposes. Results. The study of performance factors for software measurement frameworks confirmed the association of some factors with success and quantified that association. In particular, it demonstrated the importance of evaluating contextual factors. The conclusion is that factor-based models may appropriately be used for risk analysis and for identifying constraints on measurement performance.
Note, however, that a follow-up study showed that some initially successful frameworks subsequently failed, implying an instability in the dependent variable (success) that could reduce the value of factor-based models for predicting it. The studies of targeted information quality models demonstrated the effectiveness of targeted assessments for identifying improvement opportunities and suggest that they are likely to be more effective for improvement purposes than generic information quality models. The studies also showed the effectiveness of importance-performance analysis for prioritising improvement opportunities.
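Importance-performance analysis, used above for prioritisation, classifies each assessed item into one of four quadrants by comparing its rated importance and performance against the respective means. A minimal sketch, with the conventional quadrant labels; the function name and thresholds-at-the-mean convention are illustrative:

```python
def ipa_quadrant(importance, performance, mean_importance, mean_performance):
    """Classify one item into an importance-performance quadrant.

    Items rated above mean importance but below mean performance are the
    improvement priorities ("concentrate here").
    """
    if importance >= mean_importance and performance < mean_performance:
        return "concentrate here"
    if importance >= mean_importance:
        return "keep up the good work"
    if performance >= mean_performance:
        return "possible overkill"
    return "low priority"
```

An item rated 4.5 for importance and 2.0 for performance against means of 3.0/3.0 would therefore be flagged as a top improvement opportunity.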
5

Preiss, Jens. "Color-Image Quality Assessment: From Metric to Application." Doctoral thesis, supervised by Philipp Urban, Edgar Dörsam, and Michael Goesele. Darmstadt: Universitäts- und Landesbibliothek Darmstadt, 2015. http://d-nb.info/1110980949/34.

6

Gao, Zhigang. "Image/video compression and quality assessment based on wavelet transform." Columbus, Ohio: Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=osu1187195053.

7

Kunadha Raju, R. V. Krishnam Raju. "Perceptual Image Quality Prediction Using Region of Interest Based Reduced Reference Metrics Over Wireless Channel." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13631.

Abstract:
As there is rapid growth in the field of wireless communications, the demand for various multimedia services is also increasing. The data being transmitted suffers from distortions introduced by source encoding and by transmission over error-prone channels, and these errors degrade the quality of the content. Service providers therefore need to guarantee a certain Quality of Experience (QoE) to the end user, and several methods are being developed by network providers for better QoE. Human attention focuses mainly on distortions in the Region of Interest (ROI), which are perceived to be more annoying than those in the Background (BG). With this as a base, the main aim of this thesis is to obtain an accurate quality metric that measures image quality over the ROI and the BG independently. Reduced Reference Image Quality Assessment (RRIQA), a reduced-reference metric in which only partial information about the reference image is available for assessing quality, is chosen for this purpose. The quality metric is measured independently over the ROI and the BG, and the two estimates are then pooled to obtain an ROI-aware metric that predicts the Mean Opinion Score (MOS) of the image. In this thesis, the ROI-aware quality metric is used to measure the quality of distorted images generated over a wireless channel. The MOS values of the distorted images are obtained and then validated against the MOS obtained from a database [1]. The proposed image quality assessment method is observed to provide better results than the traditional approach, and it performs well over a wide variety of distortions. The obtained results show that impairments in the ROI are perceived to be more annoying than those in the BG.
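The pooling step described above, combining independently measured ROI and background quality scores into one ROI-aware score, might be sketched as a weighted sum. The linear form, the function name, and the weight favouring the ROI are illustrative assumptions, not the exact pooling used in the thesis:

```python
def roi_aware_quality(score_roi, score_bg, w_roi=0.8):
    """Pool independently computed ROI and background quality scores.

    w_roi > 0.5 reflects the finding that distortions in the region of
    interest are perceived as more annoying than background distortions.
    The weighting scheme here is an assumed placeholder.
    """
    if not 0.0 <= w_roi <= 1.0:
        raise ValueError("w_roi must lie in [0, 1]")
    return w_roi * score_roi + (1.0 - w_roi) * score_bg
```

With equal weights the pooled score reduces to the plain average; raising `w_roi` makes the overall prediction track ROI degradations more closely.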
8

Khaustova, Darya. "Objective assessment of stereoscopic video quality of 3DTV." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S021/document.

Abstract:
The minimum requirement for any 3D (stereoscopic image) system is to guarantee the visual comfort of viewers. Visual comfort is one of the three primary perceptual attributes of 3D QoE, and it can be linked directly with the technical parameters of a 3D system. Therefore, the goal of this thesis is to characterize objectively the impact of these parameters on human perception for stereoscopic quality monitoring. The first part of the thesis investigates whether the visual attention of viewers should be considered when designing an objective 3D quality metric. First, visual attention in 2D and 3D is compared using simple test patterns. The conclusions of this first experiment are validated using complex stimuli with crossed and uncrossed disparities. In addition, we explore the impact of visual discomfort caused by excessive disparities on visual attention. The second part of the thesis is dedicated to the design of an objective model of 3D video QoE based on human perceptual thresholds and an acceptability level. Additionally, we explore the possibility of using the proposed model as a new subjective scale. To validate the proposed model, subjective experiments with fully controlled still and moving stereoscopic images with different types of view asymmetries were conducted. Performance is evaluated by comparing objective predictions with subjective scores for various levels of view discrepancy that might provoke visual discomfort.
9

Rossholm, Andreas. "On Enhancement and Quality Assessment of Audio and Video in Communication Systems." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00604.

Abstract:
The use of audio and video communication has increased exponentially over the last decade and has gone from speech over GSM to HD-resolution video conferencing between continents on mobile devices. As its use becomes more widespread, the interest in delivering high-quality media increases, even on devices with limited resources. This includes development and enhancement of the communication chain, but also the objective measurement of perceived quality. The focus of this thesis has been to perform enhancement within speech encoding and video decoding, to measure factors that influence audio and video performance, and to build methods to predict perceived video quality. The audio enhancement part of this thesis addresses the well-known problem in the GSM system of an interfering signal generated by the switching nature of TDMA cellular telephony. Two different solutions are given to suppress such interference internally in the mobile handset: the first uses subtractive noise cancellation employing correlators, the second a structure of IIR notch filters. Both solutions use control algorithms based on the state of the communication between the mobile handset and the base station. The video enhancement part presents two post-filters, designed to improve the visual quality of highly compressed video streams from standard block-based video codecs by combating both blocking and ringing artifacts; the second post-filter also performs sharpening. The third part addresses the problem of measuring audio and video delay as well as the skew between them, also known as synchronization. This method is a black-box technique, which enables it to be applied to any audiovisual application, proprietary or open, and it can run on any platform and over any network connectivity.
The last part addresses no-reference (NR) bitstream video quality prediction using features extracted from the coded video stream. Several methods have been used and evaluated: Multiple Linear Regression (MLR), Artificial Neural Networks (ANN), and Least-Squares Support Vector Machines (LS-SVM), showing high correlation with both MOS and objective video assessment methods such as PSNR and PEVQ. The impact of temporal, spatial, and quantization variations on perceptual video quality has also been addressed, together with the trade-offs between them; for this purpose, a set of locally conducted subjective experiments was performed.
10

Sanches, Silvio Ricardo Rodrigues. "Avaliação objetiva de qualidade de segmentação." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-26062014-111553/.

Abstract:
Assessment of video segmentation quality is a problem seldom investigated by the scientific community. Nevertheless, recent studies have produced objective metrics for evaluating segmentation algorithms. Such metrics consider the different ways in which segmentation errors occur (perceptual factors), and their parameters are adjusted according to the application for which the segmented frames are intended. In this work: i) we demonstrate empirically that the performance of existing metrics varies according to the segmentation algorithm; ii) we develop a subjective method to evaluate segmentation quality; and iii) we contribute a new objective metric, derived from experiments with the proposed subjective method, capable of finding the best parameter settings of two bilayer segmentation algorithms from the literature when the videos they segment are used to compose scenes in Immersive Teleconferencing environments.
11

Slanina, Martin. "Metody a prostředky pro hodnocení kvality obrazu." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-233489.

Abstract:
The doctoral thesis deals with methods and tools for assessing image quality in video sequences, a highly topical subject that is expanding rapidly, especially in connection with digital video signal processing. Although a relatively large number of methods and metrics already exist for objective, i.e. automated, measurement of video sequence quality, these methods are generally based on comparing the processed (degraded, e.g. by compression) video sequence with the original. Very few methods assess video quality without a reference, i.e. from an analysis of the processed material alone. Moreover, such methods mostly analyse signal values (typically luminance) at individual pixels of the decoded signal, which is hardly applicable to modern compression algorithms such as H.264/AVC, which uses sophisticated techniques to remove compression artifacts. The thesis first gives a brief overview of the available methods for objective assessment of compressed video sequences, emphasising the different principles of methods that use reference material and methods that work without a reference. Based on an analysis of possible approaches for assessing video sequences compressed by modern compression algorithms, the thesis then describes the design of a new method intended for assessing image quality in video sequences compressed with the H.264/AVC algorithm. The new method is based on monitoring the values of parameters that are contained in the transport stream of the compressed video and relate directly to the encoding process. The influence of some of these parameters on the quality of the resulting video is considered first. An algorithm is then designed that uses an artificial neural network to estimate the peak signal-to-noise ratio (PSNR) of the compressed video sequence, thus replacing a full-reference metric with a no-reference one. Several artificial neural network configurations are verified, from the simplest up to three-layer feedforward networks.
For training the networks and subsequently analysing their performance and the fidelity of the PSNR estimate, two sets of uncompressed video sequences were created and then compressed with the H.264/AVC algorithm under varying encoder settings. The final part of the thesis analyses the behaviour of the newly designed algorithm when the properties of the processed video change (resolution, editing) or the encoder changes (structure of the group of pictures coded together). The behaviour of the algorithm is analysed up to the full high resolution of the source signal (full HD, 1920 x 1080 pixels).
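The no-reference estimation step above maps bitstream parameters to a PSNR estimate through a small feedforward network. A minimal sketch of such a forward pass; the two input features, layer sizes, and weights below are illustrative placeholders, not the trained network from the thesis:

```python
import numpy as np

def estimate_psnr(features, layers):
    """Feedforward pass: bitstream features -> hidden layers -> PSNR (dB).

    features: e.g. (average quantization parameter, bit rate) extracted
    from the transport stream -- hypothetical feature choices.
    layers: list of (weight_matrix, bias) pairs; the last pair is the
    linear output layer, the others are tanh hidden layers.
    """
    a = np.asarray(features, dtype=float)
    *hidden, (w_out, b_out) = layers
    for w, b in hidden:
        a = np.tanh(w @ a + b)        # nonlinear hidden layer
    return float(w_out @ a + b_out)   # linear output: estimated PSNR
```

In practice the weights would be learned by regressing against true PSNR values computed from the uncompressed originals, so that at deployment time no reference is needed.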
12

Olgun, Ferhat Ramazan. "Evaluation Of Visual Quality Metrics." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613733/index.pdf.

Abstract:
The aim of this study is to examine the visual quality metrics that are widely accepted in the literature, to evaluate them on different distortion types, and to compare their overall performance in terms of prediction accuracy, monotonicity, consistency, and complexity. The algorithms behind the quality metrics and the parameters used for evaluating quality metric performance are studied. This thesis also includes an explanation of the Human Visual System, a classification of visual quality metrics, and a description of subjective quality assessment methods. Experimental results showing the correlation between objective scores and human perception are used to compare eight widely accepted visual quality metrics.
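Prediction accuracy and monotonicity, two of the evaluation criteria named above, are conventionally measured as the Pearson linear correlation (PLCC) and the Spearman rank-order correlation (SROCC) between objective metric scores and subjective ratings. A minimal SciPy-based sketch; the function name is illustrative:

```python
from scipy.stats import pearsonr, spearmanr

def metric_performance(objective_scores, mos):
    """PLCC (prediction accuracy) and SROCC (monotonicity) of a metric.

    objective_scores: values produced by the quality metric under test;
    mos: the corresponding subjective mean opinion scores.
    """
    plcc, _ = pearsonr(objective_scores, mos)
    srocc, _ = spearmanr(objective_scores, mos)
    return plcc, srocc
```

A metric whose scores rise strictly with MOS gets SROCC = 1 even when the relationship is nonlinear, which is why both coefficients are reported side by side.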
13

Begazo, Dante Coaquira. "Método de avaliação de qualidade de vídeo por otimização condicionada." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-09032018-152946/.

Abstract:
This dissertation proposes two objective metrics for estimating the human perception of quality for video subject to transmission degradation over packet networks. The first metric uses only traffic data, while the second uses both the degraded and the reference video sequences. That is, the latter is a full-reference (FR) metric called the Quadratic Combinational Metric (QCM), and the former is a no-reference (NR) metric called the Viewing Quality Objective Metric (VQOM). In particular, the design procedure is applied to packet delay variation (PDV) impairments, whose compensation or control is very important for maintaining quality. The NR metric is described by a cubic spline composed of two cubic polynomials that meet smoothly at a point called a knot. As the first step in the design of either metric, spectators score a training set of degraded video sequences. The objective function for designing the NR metric includes the total square error between the scores and their parametric estimates, still regarded as algebraic expressions. In addition, the objective function is augmented with three equality constraints on the derivatives at the knot, whose position is specified within a fine grid of points between the minimum and maximum values of the degradation factor. These constraints are affected by Lagrange multipliers and added to the objective function to obtain the Lagrangian, which is minimized by the suboptimal polynomial coefficients determined as a function of each knot in the grid. Finally, the knot value that yields the minimum square error is selected, and with it the final values of the polynomial coefficients are determined. The FR metric, on the other hand, is a nonlinear combination of two popular metrics, namely the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM).
A complete second-degree two-variable polynomial is used for the combination, since it is sensitive to both constituent metrics while its low degree avoids overfitting. In the training phase, the coefficients of this polynomial are determined by minimizing the mean square error against the opinions in the training database. Both metrics, the VQOM and the QCM, are trained and validated using one database and tested with a different one. The test results are compared with recent NR and FR metrics by means of correlation coefficients, with favorable results for the proposed metrics.
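The FR combination described above can be sketched directly: a complete second-degree polynomial in PSNR and SSIM whose six coefficients are fit by least squares to subjective scores. The coefficient ordering and the fitting routine below are a plausible reconstruction, not the exact implementation from the thesis:

```python
import numpy as np

def qcm(psnr, ssim, c):
    """Complete second-degree two-variable polynomial:
    c0 + c1*p + c2*s + c3*p^2 + c4*p*s + c5*s^2
    """
    p, s = np.asarray(psnr, float), np.asarray(ssim, float)
    return c[0] + c[1]*p + c[2]*s + c[3]*p**2 + c[4]*p*s + c[5]*s**2

def fit_qcm(psnr, ssim, mos):
    """Least-squares fit of the six coefficients to subjective scores."""
    p, s = np.asarray(psnr, float), np.asarray(ssim, float)
    X = np.column_stack([np.ones_like(p), p, s, p**2, p*s, s**2])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(mos, float), rcond=None)
    return coeffs
```

Because the design matrix contains all six monomials, the fitted surface can bend toward whichever constituent metric correlates better with opinion in a given score range.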
14

MARINI, FABRIZIO. "Content based no-reference image quality metrics." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/29794.

Abstract:
Images are playing a more and more important role in sharing, expressing, mining, and exchanging information in our daily lives. We can now all easily capture and share images anywhere and at any time. Since digital images are subject to a wide variety of distortions during acquisition, processing, compression, storage, transmission, and reproduction, it becomes necessary to assess their quality. In this thesis, starting from an organized overview of available image quality assessment methods, some original contributions in the framework of no-reference image quality metrics are described.
15

Zabaleta, Razquin Itziar. "Image processing algorithms as artistic tools in digital cinema." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/672840.

Abstract:
The industry of cinema has experienced a radical change in recent decades: the transition from film cinematography to its digital format. As a consequence, several challenges have appeared, but at the same time many possibilities are now open for cinematographers to explore with this new medium. In this thesis, we propose different tools that can be useful for cinematographers while doing their craft. First, we develop a tool for automatic color grading: a statistics-based method to automatically transfer the style from a graded image to unprocessed footage. Advantages of the model are its simplicity and low computational cost, which make it amenable to real-time implementation, allowing cinematographers to experiment on set with different styles and looks. Then, a method for adding texture to footage is created. In cinema, the most commonly used texture is film grain, obtained either by shooting directly on film or by adding synthetic grain later at the post-production stage. We propose a model of "retinal noise" which is inspired by processes in the visual system and produces results that look natural and visually pleasing. Its parameters allow the resulting texture appearance to be varied widely, which makes it an artistic tool for cinematographers. Moreover, due to the "masking" phenomenon of the visual system, the addition of this texture improves the perceived visual quality of images, resulting in bit-rate and bandwidth savings. The method has been validated through psychophysical experiments in which observers, including cinema professionals, preferred it over film grain emulation alternatives from academia and industry. Finally, we introduce a physiology-based image quality metric, which can have several applications in the image processing field, and more specifically in the cinema and broadcasting context: video coding, image compression, etc.
We study an optimization of the model parameters in order to be competitive with state-of-the-art quality metrics. An advantage of the method is its reduced number of parameters compared with some state-of-the-art methods based on deep learning, which have several orders of magnitude more.
APA, Harvard, Vancouver, ISO, and other styles
16

Jung, Agata. "Comparison of Video Quality Assessment Methods." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15062.

Full text
Abstract:
Context: The newest standard in video coding, High Efficiency Video Coding (HEVC), should be paired with an appropriate coder to fully use its potential. Many video quality assessment methods exist; they are necessary to establish the quality of a video. Objectives: This thesis is a comparison of video quality assessment methods. The objective is to find out which objective method agrees most closely with the subjective method. The videos used in the tests are encoded in the H.265/HEVC standard. Methods: The MSE, PSNR, and SSIM methods were tested with purpose-built MATLAB software; the VQM method was tested with downloaded software. Results and conclusions: For videos watched on a mobile device, PSNR is the most similar to the subjective metric; however, for videos watched on a television screen, VQM is the most similar to the subjective metric. Keywords: Video Quality Assessment, Video Quality Prediction, Video Compression, Video Quality Metrics
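The MSE and PSNR measures compared in the abstract above can be sketched in a few lines. This is an illustrative Python version, not the thesis's MATLAB tooling; frames are simplified to nested lists of 8-bit grayscale pixel values.

```python
import math

# Illustrative sketch (not the thesis's code): full-reference MSE and PSNR
# for 8-bit grayscale frames given as nested lists of pixel values.
def mse(ref, test):
    """Mean squared error between two equally sized frames."""
    diffs = [(r - t) ** 2
             for row_r, row_t in zip(ref, test)
             for r, t in zip(row_r, row_t)]
    return sum(diffs) / len(diffs)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical frames."""
    err = mse(ref, test)
    if err == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / err)
```

For example, a uniform absolute error of 1 on an 8-bit frame yields roughly 48.13 dB, which is why PSNR values in the high 40s are commonly treated as near-transparent quality.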
APA, Harvard, Vancouver, ISO, and other styles
17

Alkhattabi, Mona A. "Information quality assessment in e-learning systems." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4867.

Full text
Abstract:
E-learning systems provide a promising solution as an information exchanging channel. Improved technology could mean faster and easier access to information but does not necessarily ensure the quality of this information. Therefore it is essential to develop valid and reliable methods of quality measurement and carry out careful information quality evaluations. Information quality frameworks are developed to measure the quality of information systems, generally from the designers' viewpoint. The recent proliferation of e-services, and e-learning particularly, raises the need for a new quality framework in the context of e-learning systems. The main contribution of this thesis is to propose a new information quality framework, with 14 information quality attributes grouped in three quality dimensions: intrinsic, contextual representation and accessibility. We report results based on original questionnaire data and factor analysis. Moreover, we validate the proposed framework using an empirical approach. We report our validation results on the basis of data collected from an original questionnaire and structural equation modeling (SEM) analysis, confirmatory factor analysis (CFA) in particular. However, it is difficult to measure information quality in an e-learning context because the concept of information quality is complex and it is expected that the measurements will be multidimensional in nature. Reliable measures need to be obtained in a systematic way, whilst considering the purpose of the measurement. Therefore, we start by adopting a Goal Question Metrics (GQM) approach to develop a set of quality metrics for the identified quality attributes within the proposed framework. We then define an assessment model and measurement scheme, based on a multi element analysis technique. 
The obtained results can be considered promising and positive, and revealed that the framework and assessment scheme could give good predictions for information quality within an e-learning context. This research generates novel contributions as it proposes a solution to the problems arising from the absence of consensus regarding evaluation standards and methods for measuring information quality within an e-learning context. Also, it anticipates the feasibility of taking advantage of web mining techniques to automate the retrieval process of the information required for quality measurement. This assessment model is useful to e-learning systems designers, providers and users as it gives a comprehensive indication of the quality of information in such systems, and also facilitates evaluation and allows comparisons and analysis of information quality.
APA, Harvard, Vancouver, ISO, and other styles
18

Alrished, Mohamad Ayad A. "A quantitative analysis and assessment of the performance of image quality metrics." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/128987.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 79-82).
Image quality assessment addresses the distortion levels and the perceptual quality of a restored or corrupted image, and a plethora of metrics has been developed to that end. The usual measure of success of an image quality metric is its ability to agree with the opinions of human subjects, often represented by the mean opinion score. Despite the promising performance of some image quality metrics in predicting the mean opinion score, several problems remain unaddressed. This thesis focuses on analyzing and assessing the performance of image quality metrics (IQMs). To that end, this work proposes an objective assessment criterion and considers three indicators related to the metrics: (i) robustness to local distortions; (ii) consistency in their values; and (iii) sensitivity to distortion parameters. In addition, the implementation procedures for the proposed indicators are presented. The thesis then analyzes and assesses several image quality metrics using the developed indicators for images corrupted with Gaussian noise, using both widely used public image datasets and self-designed controlled cases. The results indicate that some image quality metrics are prone to poor performance depending on the number of features. In addition, the work shows that the consistency of IQMs' values depends on the distortion level. Finally, the results highlight the sensitivity of different metrics to the Gaussian noise parameter. The objective methodology in this thesis unlocks additional insights regarding the performance of IQMs; in addition to subjective assessment, studying the properties of IQMs outlined in the framework helps in finding a metric suitable for specific applications.
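The third indicator above, sensitivity to the distortion parameter, can be illustrated with a toy probe. Everything below is an assumption for illustration only: the "image" is a synthetic list of random pixels and MSE stands in for the metric; the point is simply that a well-behaved metric should respond monotonically as the Gaussian noise level grows.

```python
import random

# Toy probe of parameter sensitivity (not the thesis's setup): the metric's
# response should grow monotonically with the Gaussian noise sigma.
random.seed(0)
ref = [random.randint(0, 255) for _ in range(10_000)]  # synthetic "image"

def mse_at_sigma(sigma):
    """MSE between the reference and a copy corrupted by clipped Gaussian noise."""
    noisy = [min(255.0, max(0.0, p + random.gauss(0.0, sigma))) for p in ref]
    return sum((a - b) ** 2 for a, b in zip(ref, noisy)) / len(ref)

responses = [mse_at_sigma(s) for s in (2.0, 8.0, 32.0)]
assert responses[0] < responses[1] < responses[2]  # monotone in sigma
```

The same probe applied to a metric whose response plateaus or fluctuates across sigma values would flag poor sensitivity to that distortion parameter.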
by Mohamad Ayad A. Alrished.
S.M.
S.M. Massachusetts Institute of Technology, Department of Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
19

Alkhattabi, Mona Awad. "Information quality assessment in e-learning systems." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4867.

Full text
Abstract:
E-learning systems provide a promising solution as an information exchanging channel. Improved technology could mean faster and easier access to information but does not necessarily ensure the quality of this information. Therefore it is essential to develop valid and reliable methods of quality measurement and carry out careful information quality evaluations. Information quality frameworks are developed to measure the quality of information systems, generally from the designers' viewpoint. The recent proliferation of e-services, and e-learning particularly, raises the need for a new quality framework in the context of e-learning systems. The main contribution of this thesis is to propose a new information quality framework, with 14 information quality attributes grouped in three quality dimensions: intrinsic, contextual representation and accessibility. We report results based on original questionnaire data and factor analysis. Moreover, we validate the proposed framework using an empirical approach. We report our validation results on the basis of data collected from an original questionnaire and structural equation modeling (SEM) analysis, confirmatory factor analysis (CFA) in particular. However, it is difficult to measure information quality in an e-learning context because the concept of information quality is complex and it is expected that the measurements will be multidimensional in nature. Reliable measures need to be obtained in a systematic way, whilst considering the purpose of the measurement. Therefore, we start by adopting a Goal Question Metrics (GQM) approach to develop a set of quality metrics for the identified quality attributes within the proposed framework. We then define an assessment model and measurement scheme, based on a multi element analysis technique. 
The obtained results can be considered promising and positive, and revealed that the framework and assessment scheme could give good predictions for information quality within an e-learning context. This research generates novel contributions as it proposes a solution to the problems arising from the absence of consensus regarding evaluation standards and methods for measuring information quality within an e-learning context. Also, it anticipates the feasibility of taking advantage of web mining techniques to automate the retrieval process of the information required for quality measurement. This assessment model is useful to e-learning systems designers, providers and users as it gives a comprehensive indication of the quality of information in such systems, and also facilitates evaluation and allows comparisons and analysis of information quality.
APA, Harvard, Vancouver, ISO, and other styles
20

Silva, Alexandre Fieno da. "No-reference video quality assessment model based on artifact metrics for digital transmission applications." reponame:Repositório Institucional da UnB, 2017. http://repositorio.unb.br/handle/10482/24733.

Full text
Abstract:
Doctoral thesis, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2017.
The main causes of reduced visual quality in digital imaging systems are the unwanted degradations introduced during the processing and transmission steps. However, measuring the quality of a video implies a direct or indirect comparison between the test video and a reference video. In most applications, psychophysical experiments with human subjects are the most reliable means of determining the quality of a video. Although more reliable, these methods are time consuming and difficult to incorporate into an automated quality control service. As an alternative, objective metrics, i.e. algorithms, are generally used to estimate video quality automatically. To develop an objective metric, it is important to understand how the perceptual characteristics of a set of artifacts are related to their physical strengths and to the perceived annoyance. To study the characteristics of different types of artifacts commonly found in compressed videos (i.e. blockiness, blurriness, and packet loss), we performed six psychophysical experiments to independently measure the strength and overall annoyance of these artifact signals when presented alone or in combination. We analyzed the data from these experiments and proposed several models for the overall annoyance based on combinations of the perceptual strengths of the individual artifact signals and their interactions. Inspired by the experimental results, we proposed a no-reference video quality metric based on several features extracted from the videos (e.g. DCT information, cross-correlation of sub-sampled images, average absolute differences between block image pixels, intensity variation between neighbouring pixels, and visual attention). A non-linear regression model using a support vector regression (SVR) technique combines all features to obtain an overall quality estimate. Our metric performed better than the tested artifact metrics and than some full-reference metrics.
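The metric's final stage, mapping extracted features to a quality score, uses support vector regression in the thesis. As a dependency-free sketch of that regression step, plain least squares on a single invented feature stands in for the SVR here; the feature values and quality scores are made up for illustration.

```python
# Stand-in for the regression stage (values invented; the thesis uses SVR
# over many features): map one hypothetical artifact-strength feature to
# a quality score by ordinary least squares.
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# Stronger blockiness -> lower (invented) quality score.
model = fit_line([0.1, 0.4, 0.7, 1.0], [4.8, 3.9, 3.0, 2.1])
```

An SVR with a non-linear kernel would replace `fit_line` in the real pipeline, allowing the combined features to interact non-linearly, which is precisely why the thesis adopts it over a linear fit.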
APA, Harvard, Vancouver, ISO, and other styles
21

Aniche, Mauricio Finavaro. "Context-based code quality assessment." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-13092016-123733/.

Full text
Abstract:
Two tasks that software engineers constantly perform are writing code that is easy to evolve and maintain, and detecting poorly written pieces of code. For the former, software engineers commonly rely on well-known software architecture styles, such as Model-View-Controller (MVC). For the latter, they rely on code metrics and code smell detection approaches. However, up to now, these code metrics and code smell approaches do not take the underlying architecture into account: all classes are assessed as if they were the same. In practice, software developers know that classes differ in terms of responsibilities and implementation, and thus we expect these classes to present different levels of coupling, cohesion, and complexity. As an example, in an MVC system, Controllers are responsible for the flow between the Model and the View, and Models are responsible for representing the system's business concepts. Thus, in this thesis, we evaluate the impact of architectural roles within a system architecture on code metrics and code smells. We performed an empirical analysis of 120 open source systems, and interviewed and surveyed more than 50 software developers. Our findings show that each architectural role has a different code metric value distribution, which is a likely consequence of their specific responsibilities. Thus, we propose SATT, an approach that provides specific thresholds for architectural roles that are significantly different from others in terms of code smells. We also show that classes that play a specific architectural role contain specific code smells, which developers perceive as problems, and which can impact the classes' change- and defect-proneness. Based on our findings, we suggest that developers understand the responsibilities of each architectural role in their system architecture, so that code metrics and code smells techniques can provide more accurate feedback.
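SATT's central idea of role-specific thresholds can be pictured in miniature. The metric values, role names, and the 90th-percentile rule below are all invented for illustration; they are not taken from the thesis.

```python
# Sketch of role-specific thresholds (numbers invented): derive a
# code-metric cut-off per architectural role instead of one global value,
# here the 90th percentile of a coupling metric computed separately per role.
def percentile(values, p):
    """Nearest-rank percentile of a non-empty list of numbers."""
    vs = sorted(values)
    idx = min(len(vs) - 1, round(p / 100 * (len(vs) - 1)))
    return vs[idx]

coupling_by_role = {
    "Controller": [3, 4, 5, 6, 8, 9, 11, 12, 14, 20],
    "Model":      [1, 1, 2, 2, 3, 3, 4, 4, 5, 7],
}
thresholds = {role: percentile(vals, 90)
              for role, vals in coupling_by_role.items()}
# Controllers tolerate higher coupling before being flagged than Models do.
```

With a single global threshold, either most Controllers would be flagged as smelly or almost no Model ever would be, which is exactly the mismatch role-aware thresholds avoid.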
APA, Harvard, Vancouver, ISO, and other styles
22

Bršel, Boris. "Porovnání objektivních a subjektivních metrik kvality videa pro Ultra HDTV videosekvence." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241052.

Full text
Abstract:
This master's thesis deals with the assessment of the quality of Ultra HDTV video sequences using objective metrics. The thesis theoretically describes the coding of the selected codecs H.265/HEVC and VP9, objective video quality metrics, and subjective methods for assessing video sequence quality. The next chapter deals with applying the H.265/HEVC and VP9 codecs to selected raw-format video sequences, from which a database of test sequences arises. The quality of these videos is then measured by objective metrics and a selected subjective method, and the results are compared in order to find the most consistent correlations between the objective metrics and the subjective assessment.
APA, Harvard, Vancouver, ISO, and other styles
23

Glazunov, Vladimir. "Quality assessment of a large real world industry project." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-31155.

Full text
Abstract:
Quality Monitor is an application that automatically analyzes software projects for quality and produces quality assessment reports. This thesis project aims to instantiate Quality Monitor for a large real-world .Net project and to extend Quality Monitor by considering data sources other than just source code. This extended analysis scope includes bug reports, features, and time reports besides .Net assemblies (code) as artifacts. Different tools were investigated for the analysis of code, bug reports, features, and time reports. The analysis of .Net assemblies was implemented from scratch, as none of the existing tools under evaluation met all requirements. It was completed successfully and allows the extraction of the data necessary for creating call and control-flow graphs. These graphs are used for calculating additional metrics, allowing for an improved assessment of project quality. The implementation of the .Net assembly reader was tested on a large real-world industrial project. Other data sources were analyzed theoretically but excluded from further implementation. Altogether, the thesis includes an analysis of possible Quality Monitor extensions, including their requirements, design, and (partially) their implementation and evaluation.
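The kind of data the assembly reader extracts can be pictured with a minimal call-graph sketch. The method names and the fan-out metric below are hypothetical illustrations, not structures taken from the thesis.

```python
# Hypothetical miniature of extracted call-graph data: an adjacency mapping
# from each method to the methods it calls, from which a simple fan-out
# metric (number of distinct callees) is derived per method.
call_graph = {
    "Main":  ["Parse", "Run"],
    "Run":   ["Parse", "Log", "Log"],  # repeated call sites are allowed
    "Parse": [],
    "Log":   [],
}

fan_out = {method: len(set(callees))
           for method, callees in call_graph.items()}
```

Metrics like fan-out, computed over the call and control-flow graphs, are what the extended Quality Monitor can aggregate into its per-project quality reports.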
APA, Harvard, Vancouver, ISO, and other styles
24

Schano, Gregory R. "Effect of Education on Adult Sepsis Quality Metrics In Critical Care Transport." Mount St. Joseph University Dept. of Nursing / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=msjdn155951570531873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Owens, Janna Yvonne Smithey. "Evalutation [i.e. Evaluation] of sediment-sensitive biological metrics as biomonitoring tools on varied spatial scales." Birmingham, Ala. : University of Alabama at Birmingham, 2006. http://www.mhsl.uab.edu/dt/2006p/owens.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Wu, Xinhao, and Maike Zhang. "An empirical assessment of the predictive quality of internal product metrics to predict software maintainability in practice." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20149.

Full text
Abstract:
Background. Maintainability of software products continues to be an area of importance and interest both for practice and research. The time used for maintenance usually exceeds 70% of the whole software development process. At present, a large number of metrics have been suggested to indicate the maintainability of a software product. However, there is a gap in the validation of proposed source code metrics against the external quality of software maintainability. Objectives. In this thesis, we aim to catalog the proposed metrics for software maintainability. From this catalog we validate a subset of commonly proposed maintainability indicators. Methods. Through a literature review with a systematic search and selection approach, we collated maintainability metrics from secondary studies on software maintainability. A subset of the commonly proposed metrics identified in the literature review was validated in a retrospective study. The retrospective study used the large open source software "Elastic Search" as a case. We collected internal source code metrics and a proxy for the maintainability of the system for 911 bug fixes in 14 versions (11 experimental samples, 3 verification samples) of the product. Results. Following a systematic search and selection process, we identified 11 secondary studies on software maintainability. From these studies we identified 290 source code metrics that are claimed to be indicators of the maintainability of a software product. We used mean time to repair (MTTR) as a proxy for the maintainability of a product. Our analysis reveals that for the "elasticsearch" software, the values of the four indicators LOC, CC, WMC and RFC have the strongest correlation with MTTR. Conclusions. In this thesis, we validated a subset of commonly proposed source code metrics for predicting maintainability. 
The empirical validation using a popular large-scale open source system reveals that some metrics show a stronger correlation with a proxy for maintainability in use. This study provides important empirical evidence towards a better understanding of source code attributes and maintainability in practice. However, a single case and a retrospective study are insufficient to establish a cause-effect relation. Therefore, further replications of our study design with more diverse cases can increase confidence in the predictive ability, and thus the usefulness, of the proposed metrics.
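The validation step, correlating an internal metric against MTTR, can be sketched as follows. The LOC-per-version and MTTR values below are invented, and rank (Spearman) correlation is used here as one plausible choice of statistic, not necessarily the one the thesis applied.

```python
# Hedged sketch of the validation idea: rank-correlate one source-code
# metric (LOC per release) against mean time to repair (MTTR).
# All paired values are invented, not the thesis's data.
def ranks(xs):
    """1-based ranks of the values in xs, assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    """Spearman's rho as the Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

loc  = [12_000, 15_500, 19_000, 23_400, 30_100]  # hypothetical LOC per version
mttr = [3.1, 4.0, 4.8, 6.2, 7.5]                 # hypothetical MTTR in days
rho = spearman(loc, mttr)
```

A rho near 1 for an indicator such as LOC, CC, WMC, or RFC is the pattern the thesis reports as the strongest association with MTTR.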
APA, Harvard, Vancouver, ISO, and other styles
27

Zerman, Emin. "Evaluation et analyse de la qualité vidéo à haute gamme dynamique." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0003.

Full text
Abstract:
In the last decade, high dynamic range (HDR) image and video technology gained a lot of attention, especially within the multimedia community. Recent technological advancements made the acquisition, compression, and reproduction of HDR content easier, and that led to the commercialization of HDR displays and popularization of HDR content. In this context, measuring the quality of HDR content plays a fundamental role in improving the content distribution chain as well as individual parts of it, such as compression and display. However, HDR visual quality assessment presents new challenges with respect to the standard dynamic range (SDR) case. The first challenge is the new conditions introduced by the reproduction of HDR content, e.g. the increase in brightness and contrast. Even though accurate reproduction is not necessary for most of the practical cases, accurate estimation of the emitted luminance is necessary for the objective HDR quality assessment metrics. In order to understand the effects of display rendering on the quality perception, an accurate HDR frame reproduction algorithm was developed, and a subjective experiment was conducted to analyze the impact of different display renderings on subjective and objective HDR quality evaluation. Additionally, in order to understand the impact of color with the increased brightness of the HDR displays, the effects of different color spaces on the HDR video compression performance were also analyzed in another subjective study. Another challenge is to estimate the quality of HDR content objectively, using computers and algorithms. In order to address this challenge, the thesis proceeds with the performance evaluation of full-reference (FR) HDR image quality metrics. HDR images have a larger brightness range and higher contrast values. Since most of the image quality metrics are developed for SDR images, they need to be adapted in order to estimate the quality of HDR images. 
Different adaptation methods were used for SDR metrics, and they were compared with the existing image quality metrics developed exclusively for HDR images. Moreover, we propose a new method for the evaluation of metric discriminability based on a novel classification approach. Motivated by the need to fuse several different quality databases, in the third part of the thesis we compare subjective quality scores acquired using different subjective test methodologies. Subjective quality assessment is regarded as the most effective and reliable way of obtaining "ground-truth" quality scores for the selected stimuli, and the obtained mean opinion scores (MOS) are the values that objective metrics are generally trained to match. In fact, strong discrepancies can easily be noticed when different multimedia quality databases are considered. In order to understand the relationship between the quality values acquired using different methodologies, MOS values were compared against pairwise comparison (PC) scaling results. For this purpose, a series of experiments was conducted using the double stimulus impairment scale (DSIS) and pairwise comparison subjective methodologies. We propose to include cross-content comparisons in the PC experiments in order to improve scaling performance and reduce cross-content variance as well as confidence intervals. The scaled PC scores can also be used for subjective multimedia quality assessment scenarios other than HDR.
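The scaling of pairwise-comparison data into per-stimulus quality scores can be illustrated with a small sketch. The thesis's exact scaling model is not stated here, so as an assumed illustration a Bradley-Terry model fitted by simple fixed-point (minorization-maximization) iterations is shown, with an invented win matrix.

```python
# Assumed illustration (the thesis's scaling model may differ): Bradley-Terry
# scores from a pairwise-comparison win matrix, fitted by MM iterations.
# wins[i][j] = number of votes preferring stimulus i over stimulus j.
def bradley_terry(wins, iters=500):
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        updated = []
        for i in range(n):
            w_i = sum(wins[i])                        # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)  # comparisons, score-weighted
            updated.append(w_i / denom if denom else p[i])
        total = sum(updated)
        p = [v * n / total for v in updated]          # fix the scale (identifiability)
    return p

# Three stimuli; stimulus 0 is preferred most often, stimulus 2 least.
scores = bradley_terry([[0, 8, 9],
                        [2, 0, 7],
                        [1, 3, 0]])
```

Scores obtained this way live on an interval-like scale, which is what makes comparing them against MOS values from rating experiments such as DSIS meaningful.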
APA, Harvard, Vancouver, ISO, and other styles
28

Mehmood, Kashif. "Conception des Systèmes d'Information : une approche centrée sur les Patrons de Gestion de la Qualité." Phd thesis, Conservatoire national des arts et metiers - CNAM, 2010. http://tel.archives-ouvertes.fr/tel-00922995.

Full text
Abstract:
Conceptual models (CM) play a crucial role: they serve as the basis for the entire development process of an information system (IS), and as a means of communication both within the development team and with users during the first validation stages. Their quality therefore plays a decisive role in the success of the final system. Studies have shown that most of the changes undergone by an IS concern gaps or failures related to the expected functionality. Since the definition of these functionalities falls to the analysis and design phase, whose deliverables are the CMs, it appears essential for a design method to ensure the quality of the CMs it produces. Our approach targets the problems related to the quality of conceptual modeling by proposing a solution integrated into the development process, which has the advantage of being complete since it addresses both the measurement of quality and its improvement. The proposal covers the following aspects: i. Formulation of quality criteria, by first federating the existing work on CM quality. Indeed, one of the gaps observed in the field of CM quality is the absence of consensus on the concepts and their definitions. This work was validated by an empirical study. It also made it possible to identify the parts not covered by the literature and to complete them, by proposing new concepts or by refining those whose definition was incomplete. ii. Definition of a concept, the quality pattern, for capitalizing on best practices in the measurement and improvement of CM quality. A quality pattern helps an IS designer identify the quality criteria applicable to a specification, then guides the designer step by step through quality measurement and improvement. Most existing approaches focus on measuring quality and neglect the means of correcting it; the definition of this concept is motivated by the difficulty and the high degree of expertise that quality management requires, especially at the conceptual level, where the finished software is not yet available, and given the diversity of quality concepts (criteria and metrics) that may apply. iii. Formulation of a quality-oriented method including concepts, guides and techniques for defining the desired quality concepts, measuring them, and improving the quality of CMs. This method takes as its entry point the quality requirement that the designer must formulate; the designer is then flexibly guided through the choice of suitable quality criteria, up to the measurement and the proposal of recommendations helping to improve the quality of the initial CM in accordance with the stated requirement. iv. Development of a prototype, "CM-Quality", which implements the proposed method and thus provides tool support for its application. Finally, we conducted two experiments: the first aimed to validate the quality concepts used and to retain them; the second aimed to validate the proposed quality-driven design method.
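As a rough illustration of the quality-pattern idea described above (a criterion bundled with the metrics that measure it and the recommendations that improve it), one might structure a pattern as below. All names and thresholds are invented for illustration, not the thesis's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class QualityPattern:
    criterion: str                                   # e.g. "completeness" of a conceptual model
    metrics: list = field(default_factory=list)      # (metric name, minimum threshold) pairs
    recommendations: list = field(default_factory=list)

    def assess(self, measurements: dict) -> list:
        """Return improvement recommendations for every metric below its threshold."""
        advice = []
        for name, threshold in self.metrics:
            if measurements.get(name, 0.0) < threshold:
                advice.extend(self.recommendations)
        return advice

pattern = QualityPattern(
    criterion="completeness",
    metrics=[("requirements_covered_ratio", 0.9)],
    recommendations=["Review the requirements not yet traced to model elements."],
)
print(pattern.assess({"requirements_covered_ratio": 0.7}))
```

The point of the pattern is exactly this pairing: measurement alone flags a problem, while the attached recommendations guide the designer toward correcting it.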
APA, Harvard, Vancouver, ISO, and other styles
29

Chen, Dejiu. "Systems Modeling and Modularity Assessment for Embedded Computer Control Applications." Doctoral thesis, KTH, Maskinkonstruktion, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3792.

Full text
Abstract:
The development of embedded computer control systems (ECS) requires a synergetic integration of heterogeneous technologies and multiple engineering disciplines. With an increasing amount of functionality and expectations for high product quality, short time-to-market, and low cost, the success of complexity control and built-in flexibility turns out to be one of the major competitive edges for many ECS products. For this reason, modeling and modularity assessment constitute two critical subjects of ECS engineering. In the development of ECS, model-based design is currently being exploited in most of the sub-systems engineering activities. However, the lack of support for formalization and systematization associated with the overall systems modeling leads to problems in comprehension, cross-domain communication, and integration of technologies and engineering activities. In particular, design changes and exploitation of "components" are often risky due to the inability to characterize components' properties and their system-wide contexts. Furthermore, the lack of engineering theories for modularity assessment in the context of ECS makes it difficult to identify parameters of concern and to perform early system optimization. This thesis aims to provide a more complete basis for the engineering of ECS in the areas of systems modeling and modularization. It provides solution domain models for embedded computer control systems and the software subsystems. These meta-models describe the key system aspects, design levels, components, component properties and relationships with ECS-specific semantics. By constituting the common basis for abstracting and relating different concerns, these models will also help to provide better support for obtaining holistic system views and for incorporating useful technologies from other engineering and research communities, so as to improve the process and to perform system optimization.
Further, a modeling framework is derived, aiming to provide a perspective on the modeling aspect of ECS development and to codify important modeling concepts and patterns. In order to extend the scope of engineering analysis to cover flexibility-related attributes and multi-attribute tradeoffs, this thesis also provides a metrics system for quantifying component dependencies that are inherent in the functional solutions. Such dependencies are considered as the key factors affecting complexity control, concurrent engineering, and flexibility. The metrics system targets early system-level design and takes into account several domain-specific features such as replication and timing accuracy. Keywords: Domain-Specific Architectures, Model-based System Design, Software Modularization and Components, Quality Metrics.
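As a hedged illustration of what a dependency-based metrics system can look like (the thesis's actual metrics are not reproduced here), one can count fan-in and fan-out over a declared component graph and combine them Henry-Kafura style. The component names and dependencies below are invented:

```python
# component -> components it depends on (hypothetical early-design graph)
deps = {
    "sensor_io": [],
    "controller": ["sensor_io", "actuator_io"],
    "actuator_io": [],
    "logger": ["sensor_io", "controller"],
}

def fan_out(component):
    # number of components this component depends on
    return len(deps[component])

def fan_in(component):
    # number of components that depend on this component
    return sum(component in targets for targets in deps.values())

def structural_complexity(component):
    # simplified Henry-Kafura-style score: (fan_in * fan_out)^2
    return (fan_in(component) * fan_out(component)) ** 2

for c in deps:
    print(c, fan_in(c), fan_out(c), structural_complexity(c))
```

Components with both high fan-in and high fan-out score worst, which matches the intuition that they are the riskiest places for design change.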
QC 20100524
APA, Harvard, Vancouver, ISO, and other styles
30

Earnest, Steven F. P. "Integrating GIS with Benthic Metrics: Calibrating a Biotic Index to Effectively Discriminate Stream Impacts in Urban Areas of the Blackland Prairie Eco-Region." Thesis, University of North Texas, 2003. https://digital.library.unt.edu/ark:/67531/metadc4425/.

Full text
Abstract:
Rapid Bioassessment Protocols integrate a suite of community, population, and functional metrics, determined from the collection of benthic macroinvertebrates or fish, into a single assessment. This study was conducted in Dallas County, Texas, an area located in the blackland prairie eco-region that is semi-arid and densely populated. The objectives of this research were to identify reference streams and propose a set of metrics best able to discriminate differences in community structure due to natural variability from those caused by changes in water quality due to watershed impacts. Using geographic information systems, a total of nine watersheds, each representing a different mix of land uses, were chosen for evaluation. A total of 30 metrics commonly used in RBP protocols were calculated. The efficacy of these metrics in distinguishing change was determined using several statistical techniques. Ten metrics were used to classify study area watersheds according to stream quality. Many trends, such as taxa presence along habitat quality gradients, were observed. These gradients coincided with expected responses of stream communities to landscape and habitat variables.
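The suite-of-metrics approach can be illustrated with two of the most common benthic metrics, taxa richness and %EPT (the share of pollution-sensitive Ephemeroptera, Plecoptera and Trichoptera individuals). The sample below is invented, not data from this study:

```python
# Hypothetical benthic macroinvertebrate sample: taxon -> individual count.
sample = {
    "Baetis (Ephemeroptera)": 12,
    "Hydropsyche (Trichoptera)": 7,
    "Chironomidae (Diptera)": 40,
    "Physella (Gastropoda)": 5,
}

EPT_ORDERS = ("Ephemeroptera", "Plecoptera", "Trichoptera")  # pollution-sensitive orders

taxa_richness = len(sample)
ept_richness = sum(any(o in taxon for o in EPT_ORDERS) for taxon in sample)
ept_count = sum(n for taxon, n in sample.items() if any(o in taxon for o in EPT_ORDERS))
pct_ept = 100.0 * ept_count / sum(sample.values())  # high %EPT suggests better water quality
print(taxa_richness, ept_richness, round(pct_ept, 1))
```

A full RBP assessment aggregates many such metrics into one score; each individual metric is this simple.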
APA, Harvard, Vancouver, ISO, and other styles
31

Dragana, Sandić-Stanković. "Мулти-резолуциона мера за објективну оцену квалитета синтетизованих слика ФТВ видео сигнала." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2016. http://www.cris.uns.ac.rs/record.jsf?recordId=101211&source=NDLTD&language=en.

Full text
Abstract:
The main contribution of this doctoral thesis is the development of algorithms for objective DIBR-synthesized view quality assessment. DIBR algorithms introduce non-uniform geometric distortions affecting the edge coherency in the synthesized images. The non-linear morphological filters used in the multi-scale image decompositions of the proposed metric maintain important geometric information, such as edges, across different resolution levels. By calculating MSE pixel-by-pixel through the subbands in which the edges are extracted, the difference between the two multiresolution representations, the reference and the synthesized image, is precisely measured. In that way, the importance of edge areas, which are prone to synthesis artifacts, is emphasized in the image quality assessment. The proposed metric has very good agreement with human judgment.
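The per-level MSE idea described above can be sketched as follows. This is a toy illustration in which a 2x2 minimum filter (a simple grayscale erosion) stands in for the thesis's actual morphological decomposition; it is not the published metric:

```python
import numpy as np

def erode2x2_downsample(img):
    # Non-linear (morphological) downsampling: local minimum over 2x2 blocks,
    # which preserves edge positions better than linear averaging.
    h, w = img.shape
    blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.min(axis=(1, 3))

def multiscale_mse(ref, syn, levels=3):
    # Accumulate pixel-wise MSE between the two representations level by level.
    total = 0.0
    for _ in range(levels):
        total += float(np.mean((ref - syn) ** 2))
        ref, syn = erode2x2_downsample(ref), erode2x2_downsample(syn)
    return total

rng = np.random.default_rng(0)
ref = rng.random((16, 16))
syn = ref + 0.05 * rng.standard_normal((16, 16))  # synthetic "rendering" error
print(multiscale_mse(ref, ref), multiscale_mse(ref, syn) > 0)
```

Identical images score exactly zero, and any geometric discrepancy contributes at every scale at which it survives the decomposition.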
APA, Harvard, Vancouver, ISO, and other styles
32

Price, Kendall Susan. "Effects of Cattle Exclusion on Stream Habitat in the Shenandoah Valley, Virginia." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/31952.

Full text
Abstract:
Cattle exclusion from streams is believed to improve riparian vegetation, in-stream habitat, and the composition of aquatic organisms. Yet research on the effects of cattle exclusion has yielded conflicting results. The goal of this study was to examine relationships between physical habitat and benthic macroinvertebrate populations with increasing downstream distance from cattle-impacted stream segments, and to determine which physical habitat and chemical water quality parameters are affected by cattle presence. Macroinvertebrates from 24 sites in Rockingham County, VA were used to calculate bioassessment metrics. Fourteen sites made up 4 longitudinal studies in which improvement of biotic condition with distance from cattle impact was examined. Linear regression and multilevel modeling results indicated improving macroinvertebrate assemblages with increasing distance downstream from cattle-impacted reaches. Presence of riparian trees and distance from impact had a positive influence on bioassessment scores. A total of 39 stream sites in the Shenandoah Valley were classified using the Rapid Habitat Assessment (RHA), which is based on 10 visual evaluations of physical characteristics. Four of the ten RHA parameters, embeddedness, bank stability, vegetative protection, and riparian vegetative zone width, along with the total RHA score, were associated with cattle presence. This study found that a) RHA factors reflect direct cattle impacts on the riparian zone, but RHA has limitations as a general predictor of cattle impact, b) cattle influence on benthic macroinvertebrates extends hundreds of meters beyond the immediate pasture boundary, and c) improvement in the Virginia Stream Condition Index can be predicted as a function of distance downstream.
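The distance-effect analysis described above reduces, in its simplest form, to regressing a bioassessment score on downstream distance. A minimal sketch with invented numbers (not the study's data):

```python
import numpy as np

distance_m = np.array([0, 100, 250, 400, 600, 800], dtype=float)
sci_score = np.array([42, 47, 51, 55, 60, 63], dtype=float)  # hypothetical Stream Condition Index

# Ordinary least squares fit: score = a * distance + b
A = np.vstack([distance_m, np.ones_like(distance_m)]).T
(a, b), *_ = np.linalg.lstsq(A, sci_score, rcond=None)
print(a > 0)  # positive slope: condition improves with distance from the impact
```

The study's multilevel models add site-level grouping on top of this basic relation, but the sign and size of the distance slope is the quantity of interest either way.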
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
33

Everhart, Chichi Kate. "Strategies for Measuring Quality Care in Healthcare Organizations in the United States." ScholarWorks, 2018. http://scholarworks.waldenu.edu/dissertations/4851.

Full text
Abstract:
According to members of the Institute of Medicine, about 98,000 hospitalized patients in the United States die each year because of poor quality care. The problem of poor healthcare quality may exist in part due to limited information on effective performance measurement processes. A multiple case study design was used to gain broad insight into possible solutions to the problem of determining the quality of healthcare services using performance measurements. Hospital and healthcare organization leaders in North Carolina who had implemented optimal performance measurements for quality care were interviewed. The conceptual frameworks that served as propositions for the study were Goldratt's theory of constraints, Deming's 14-point model, and Lewin's model of the change process in human systems. The data collection process involved semistructured interviews of 12 individuals. Data source and conceptual framework triangulation were used in the data analysis process (coding approaches, study dependability, credibility and transferability methods, and use of a case study protocol). The themes that emerged from the study were strategies for performance measurement and strategies to enhance service quality in healthcare organizations. Results might contribute to social change by helping healthcare leaders and patients improve their knowledge and understanding of optimal performance measurement strategies, which may effect positive organizational changes.
APA, Harvard, Vancouver, ISO, and other styles
34

Sharif, Bonita. "Empirical Assessment of UML Class Diagram Layouts Based on Architectural Importance." Kent State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=kent1271679781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Rodríguez, Demóstenes Zegarra. "Proposta da métrica eVSQM para avaliação de QoE no serviço de streaming de vídeo sobre TCP." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-16102014-165108/.

Full text
Abstract:
Nowadays, there are several multimedia services carried over IP networks; of these, traffic from video services has seen the greatest growth in recent years. The success of video streaming applications is one of the major contributors to this growth. Some recent studies project that video services will reach approximately 55% of total Internet traffic in 2016. Considering the relevance that video services will achieve in the coming years, this work focuses on users' Quality of Experience (QoE) when using these services. Thus, this thesis proposes an evaluation metric named enhanced Video streaming Quality Metric (eVsQM), which is based primarily on the number, duration and temporal location of image freezes (pauses) during a video transmission. The metric also considers the video content type, and was determined from a mathematical model that used as inputs the results of subjective video quality tests, since these types of tests correlate best with real users' QoE. It is worth noting that the subjective tests were performed with a methodology consistent with the kind of degradation involved (the pause). On the other hand, new video streaming solutions are created for the purpose of improving users' QoE. Dynamic Adaptive Streaming over HTTP (DASH) changes the video resolution according to network characteristics. However, if the network is very fluctuant, many resolution switching events will be performed and users' QoE will be degraded. This thesis proposes a parameter to be used in DASH algorithms that works as a threshold to control the resolution switching frequency. This parameter is named the Switching Degradation Factor (SDF) and is responsible for maintaining QoE at acceptable levels, including in scenarios in which the network capacity fluctuates strongly. Additionally, this work proposes a new billing model for telecommunication services that includes a QoE-related parameter in the charging process, aiming at fairer charging of communication services from the users' point of view: users who receive lower service quality should pay less than users who receive better quality for the same service.
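Since the abstract does not give the eVsQM formula, the sketch below is a purely hypothetical pause-based score. It only illustrates the three inputs the abstract names (number, duration and temporal location of pauses); the weights and the functional form are invented, not the published model:

```python
def pause_quality_score(pauses, video_len_s, max_mos=5.0):
    """pauses: list of (start_s, duration_s). Returns a score in [1, max_mos]."""
    penalty = 0.0
    for start, dur in pauses:
        # Hypothetical assumption: pauses late in the clip annoy viewers more.
        position_weight = 1.0 + start / video_len_s
        # Hypothetical assumption: each pause costs a fixed amount plus a
        # duration-proportional amount.
        penalty += (0.3 + 0.2 * dur) * position_weight
    return max(1.0, max_mos - penalty)

perfect = pause_quality_score([], 60.0)
degraded = pause_quality_score([(10.0, 2.0), (50.0, 4.0)], 60.0)
print(perfect, degraded, perfect > degraded)
```

A real metric of this kind is fitted against subjective MOS data, exactly the role the DSIS tests play in the thesis.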
APA, Harvard, Vancouver, ISO, and other styles
36

Sanja, Maksimović-Moićević. "Predlog nove mere za ocenu kvaliteta slike prilikom interpolacije i njena implementacija u računarskoj obradi signal slike." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2015. http://www.cris.uns.ac.rs/record.jsf?recordId=95429&source=NDLTD&language=en.

Full text
Abstract:

The main contribution of this doctoral dissertation is the development of an algorithm and a system for the objective assessment of visual image quality that takes into account, on the one hand, the most important possible impairments, such as edge blur (sharpness) and disturbance of the natural appearance of object texture in the image, and, on the other hand, the influence of image content (the percentage of edges in the image) on quality assessment. The hypothesis put forward in this work is therefore that a multi-parameter approach is needed to obtain an objective image quality assessment that comes as close as possible to subjective assessment.
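One ingredient named above, the percentage of edges in the image, can be estimated with a simple gradient-magnitude threshold. This is an illustrative stand-in, not the dissertation's actual operator, and the threshold value is an arbitrary assumption:

```python
import numpy as np

def edge_percentage(img, threshold=0.2):
    # Gradient magnitude per pixel; pixels above the threshold count as edges.
    gy, gx = np.gradient(img.astype(float))  # axis 0 = rows, axis 1 = columns
    mag = np.hypot(gx, gy)
    return 100.0 * float((mag > threshold).mean())

flat = np.zeros((32, 32))           # no edges at all
half = np.zeros((32, 32))
half[:, 16:] = 1.0                  # a single vertical step edge
print(edge_percentage(flat), edge_percentage(half))
```

Content-dependent weighting of the kind the dissertation argues for would then scale the sharpness and texture terms by a statistic like this one.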

APA, Harvard, Vancouver, ISO, and other styles
37

Lautenschleger, Ary Henrique. "Análise da operação de sistemas de distribuição considerando as incertezas da carga e da geração distribuída." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/185256.

Full text
Abstract:
This work presents a probabilistic method for the performance evaluation of distribution networks considering uncertainties in load demand and in the power generated by intermittent distributed systems. Consumers are divided into clusters by class and consumption range, and the hourly demand of the consumers in each cluster is modeled by a suitable cumulative distribution function (CDF). Distributed generation is considered by means of solar photovoltaic sources. The Monte Carlo Simulation (MCS) method is employed, and the Joint Normal Transform technique is applied to generate correlated random numbers, used to sample consumer demand and the energy produced by distributed generation systems. The proposed method was applied to the well-known IEEE 13 node test feeder, and the results for operating losses as well as voltage violation indices obtained with the probabilistic model are compared to those obtained with the conventional deterministic model. It is shown that the mean is not always a sufficient description of the behavior of distribution network components and that it is more appropriate to use confidence intervals for the quantities of interest.
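The correlated-sampling step described above (the Joint Normal Transform, i.e. a Gaussian copula) can be sketched as follows: draw correlated standard normals via the Cholesky factor of a target correlation matrix, then map each margin through the normal CDF to correlated uniforms, which can feed any inverse demand or generation CDF. The correlation value below is an illustrative assumption, not the thesis's data:

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])        # hypothetical load/PV correlation
L = np.linalg.cholesky(corr)

rng = np.random.default_rng(42)
z = rng.standard_normal((10000, 2)) @ L.T   # correlated standard normals
u = np.vectorize(normal_cdf)(z)             # correlated uniforms in (0, 1)

sample_corr = float(np.corrcoef(u[:, 0], u[:, 1])[0, 1])
print(round(sample_corr, 2))
```

Note that the correlation of the resulting uniforms is slightly below the Gaussian target (about 0.79 for a target of 0.8), a known property of the transform.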
APA, Harvard, Vancouver, ISO, and other styles
38

Santos, Izaias Souza dos. "Geoquímica e distribuição dos metais traço em testemunhos de sedimento do açude Marcela, Itabaiana - Sergipe." Universidade Federal de Sergipe, 2010. https://ri.ufs.br/handle/riufs/6127.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
This study addresses the distribution of trace metals in sediment cores from the Marcela dam in order to evaluate the occurrence of impacts associated with human and industrial activity. The dam is located in the city of Itabaiana, in the state of Sergipe; it was built between 1953 and 1957 by damming the Fuzil stream and has an area of 1.4 km2 with a storage capacity of 2,700,000 m3. Two sediment cores of approximately 45 cm were collected in November 2008 at two distinct points. The cores were sectioned every 5 cm and analyzed to determine the following chemical elements: Co, Cr, Cu, Ni, Pb, Zn, Mn, Al, Fe, Corg and Ntotal. The Corg/Ntotal ratios, in the ranges 4.97-7.64 and 6.39-7.69 for cores I and II respectively, are indicative of both autochthonous and allochthonous origins of the organic matter. Multivariate statistical analysis (principal component analysis) applied to the set of results showed that the two cores differ in their metal concentrations, with evidence of enrichment in Cr, Cu, Mn and Zn in the surface layers. The calculated contamination factor showed a moderate contamination level for the metals Cr, Cu, Mn and Zn. The risk assessment code (RAC), which considers the percentage of metal extracted in the labile fraction (F1) of the BCR procedure, showed that chromium does not present a risk to the environment, copper, nickel and lead presented low to medium risk, and zinc presented high to very high risk to the aquatic environment. Small variations in environmental conditions, such as pH or salinity, could therefore increase the availability of these elements to the aquatic system. The metal concentrations were always between the TEC and PEC limits defined by consensus-based sediment quality guidelines (SQGs); in this range, it is not possible to predict what adverse effects the metals may cause in this environment.
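The two indices used in the abstract above, the contamination factor and the risk assessment code (RAC), are simple ratios. The sketch below uses invented concentrations together with the conventional class boundaries (CF between 1 and 3 read as moderate contamination; RAC above 50% read as very high risk), not the thesis's data:

```python
def contamination_factor(measured, background):
    # CF = measured concentration / background (pre-industrial) concentration.
    return measured / background

def rac_percent(f1_labile, total):
    # RAC = % of the total metal extracted in the labile (F1) BCR fraction.
    return 100.0 * f1_labile / total

cf_zn = contamination_factor(180.0, 90.0)   # hypothetical Zn values, mg/kg
rac_zn = rac_percent(28.0, 52.0)            # hypothetical F1 and total, mg/kg

# Conventional RAC reading (simplified to the upper classes).
risk = "very high" if rac_zn > 50 else "high" if rac_zn > 30 else "medium/low"
print(round(cf_zn, 1), round(rac_zn, 1), risk)
```

The RAC is what links speciation to risk: a metal can show only moderate total contamination yet still be classed as high risk if a large share of it sits in the easily remobilized F1 fraction.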
APA, Harvard, Vancouver, ISO, and other styles
39

Maciel, Maria Goretti de Lacerda. "O Qualis periódicos na percepção dos programas de pós-graduação." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/90076.

Full text
Abstract:
The evaluation of the quality of publications resulting from graduate programs through the Capes Qualis system, journal articles in particular, has received both positive and negative criticism, making room for review and improvement, especially considering that its implementation only began in 1998. The results of searches in different scientific information databases show that the Capes Qualis system has attracted the attention of the country's academic community and that there is no consensus regarding either the ranking methodology or the weights given to the evaluation of graduate programs. They also show that the topic lacks a critical review from the point of view of graduate program coordinators, the object of this dissertation. This research was carried out through an electronic survey of graduate program coordinators, organized by area of knowledge. In addition, documentary research by major field was conducted to identify the procedures and criteria adopted by the Capes area committees. The Brazilian graduate programs recommended by Capes are seeking higher quality in their master's and doctoral courses, which necessarily implies broadening their forms of scientific communication. For some researchers, the system is an indirect rating, since it does not evaluate the quality of the research or articles produced.
APA, Harvard, Vancouver, ISO, and other styles
40

Santos-Araujo, Sabrina Novaes dos. "Soil-to-plant transfer of heavy metals and an assessment of human health risks in vegetable-producing areas of São Paulo state." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/11/11140/tde-30042015-152533/.

Full text
Abstract:
While contaminated food products are known to be a leading source of exposure to potentially toxic elements (PTEs) for the general population, few studies have been carried out to examine PTE levels in soils and plants in wet tropical regions such as Brazil. The most commonly used index for estimating PTE accumulation in vegetables, and the subsequent exposure of the humans who eat them, is the bioconcentration factor (BCF), the ratio between the concentration of metals in the edible portions of produce and their total concentration in soils; the BCF alone, however, does not provide an adequate description of soil-to-plant metal transfers. A better understanding of such transfers requires information about the soil attributes that influence the availability of PTEs to plants. The state of São Paulo (SP) is the largest consumer of vegetables in Brazil, as well as the largest and most diversified producer. Studies are therefore needed on PTE concentrations in soils and vegetables, in order to assess their quality under the guidelines established by Brazilian legislation. It is likewise crucial to establish critical limits for these elements in soils, via models that assess risks to human health, based on data that reflect current conditions in the soils of São Paulo. The objectives of this study were: (i) to characterize and evaluate the relations between the concentrations of Cd, Cu, Ni, Pb and Zn in soils and in vegetables from the "Green Belt" of the state of São Paulo, Brazil, taking the limits established by legislation into account; (ii) to develop empirical models to derive appropriate soil screening values and provide an accurate risk assessment for tropical regions; (iii) to develop proposals for improved human health-based screening values for Cd, Cu, Ni, Pb and Zn in São Paulo soils, using soil-vegetable relations. With the exception of Cd, there was a positive correlation between pseudototal and bioavailable contents of PTEs.
Cd and Pb content in plants, moreover, not significantly correlated with any of the variables studied. All models of random forests and trees were good predictors of results generated from a regression model and provided useful information about which covariates were important to forecast only for the zinc concentration in the plant. The soil-plant transfer models proposed in this study had a good performance and are useful for eight of the ten combinations (five metals versus two species). SP data combined with NL data for Cd in lettuce and for Ni and Zn in lettuce and in carrot when pH, organic carbon - OC and clay contents were included in the model. Including such soil properties results in improved relations between PTEs concentrations in soils and in vegetables to derive appropriate screening values for SP State. The model in which pH, OC and clay contents were included gave the most useful results with SP and NL data set combined for Cu, Pb, Zn in lettuce and for Cd and Cu in carrot. Our setup did not work for Ni and for Pb in carrot because the data models gave an inconsistent result and the combination of datasets did not or insufficiently improve the results.
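The bioconcentration factor (BCF) defined in the abstract above is a simple ratio. A minimal Python sketch, with hypothetical concentration values (not measurements from the thesis):

```python
def bioconcentration_factor(plant_conc_mg_kg, soil_conc_mg_kg):
    """BCF: metal concentration in the edible plant portion divided by
    the total metal concentration in the soil (both in mg/kg)."""
    if soil_conc_mg_kg <= 0:
        raise ValueError("soil concentration must be positive")
    return plant_conc_mg_kg / soil_conc_mg_kg

# Hypothetical values, not data from the thesis:
# 0.05 mg/kg Cd in lettuce leaves vs. 0.40 mg/kg total Cd in the soil
print(bioconcentration_factor(0.05, 0.40))  # → 0.125
```

The thesis's point is precisely that this single ratio ignores pH, organic carbon and clay content, which is why the authors move to soil-plant transfer models that include those covariates.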
APA, Harvard, Vancouver, ISO, and other styles
41

Preiss, Jens. "Color-Image Quality Assessment: From Metric to Application." Phd thesis, 2015. https://tuprints.ulb.tu-darmstadt.de/4389/1/Preiss_PhD-Thesis.pdf.

Full text
Abstract:
In digital imaging, evaluating the visual quality of images is a crucial requirement for most image-processing systems. For such image quality assessment, mainly objective assessments are employed, which predict image quality automatically by a computer algorithm. The vast majority of objective assessments are so-called image-difference metrics, which predict the perceived difference between a distorted image and a reference. Due to the limited understanding of the human visual system, image quality assessment is not straightforward and is still an open research field. The majority of image-difference metrics disregard color information, which allows for faster computation. Even though their performance is sufficient for many applications, they are not able to correctly predict the quality of a variety of color distortions. Furthermore, many image-difference metrics do not account for viewing conditions, which may have a large impact on perceived image quality (e.g., a large display in an office compared with a small mobile device in bright sunlight). The main goal of my research was the development of a new image-difference metric, called the improved Color-Image-Difference (iCID), which normalizes images to standard viewing conditions and extracts chromatic features. The new metric was then used as an objective function to improve gamut mapping as well as tone mapping, two essential transformations for the reproduction of color images. The performance of the proposed metric was verified by visual experiments as well as by comparisons with human judgments. The visual experiments reveal significant improvements over state-of-the-art gamut-mapping and tone-mapping transformations. For gamut-mapping distortions, iCID exhibits the significantly highest correlation with human judgments, and for conventional distortions (e.g., noise, blur, and compression artifacts) iCID outperforms almost all state-of-the-art metrics.
APA, Harvard, Vancouver, ISO, and other styles
42

Shen, Kuan-Hung, and 沈冠宏. "Machine learning based no-reference assessment metric for stereoscopic image quality of experience." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/25281189021731135195.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
105 (ROC academic year)
Perceptual quality plays an irreplaceable role in viewing stereoscopic 3D images. Poor-quality stereoscopic 3D images generally cause viewers' eyes to suffer fatigue and pain, or induce dizziness and headaches. Therefore, in this study we propose a no-reference metric for stereoscopic image quality of experience (QoE) that evaluates the visual discomfort viewers feel when viewing stereoscopic images. We develop two machine-learning (ML) regression models, a support vector machine (SVM) and a random forest (RF), to predict visual-discomfort scores, and then compare the performance of the two models. We test our method on the publicly available EPFL 3D image database and the IEEE-SA stereoscopic image databases. First, the disparity of each stereoscopic pair is calculated, and depth information, called the depth-disparity map, is obtained from the resulting disparity map through Otsu's algorithm. Next, four kinds of features are extracted from the pixel values and the distribution of the depth-disparity map to build the input data, which is then analyzed with the two regression models. Finally, the correlation between the scores predicted by the proposed metric and the subjective scores provided by the databases is calculated. The experimental results show that the proposed metric achieves impressive performance compared with current state-of-the-art methods.
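The two regression models named in the abstract can be sketched with scikit-learn. The features, weights and scores below are synthetic placeholders, not data from the EPFL 3D or IEEE-SA databases:

```python
# Fit an SVM and a random forest regressor on feature vectors and compare
# their Pearson correlation with (synthetic) subjective discomfort scores.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((40, 4))  # four depth/disparity-derived features per image
y = X @ np.array([0.5, 0.2, 0.2, 0.1]) + rng.normal(0.0, 0.05, 40)

results = {}
for model in (SVR(kernel="rbf"),
              RandomForestRegressor(n_estimators=100, random_state=0)):
    model.fit(X[:30], y[:30])     # train on 30 images
    pred = model.predict(X[30:])  # predict the held-out 10
    # correlation between predicted and "subjective" scores
    results[type(model).__name__] = np.corrcoef(pred, y[30:])[0, 1]
print(results)
```

The thesis evaluates the real models the same way: by correlating predicted scores against the subjective scores shipped with the databases.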
APA, Harvard, Vancouver, ISO, and other styles
43

Larson, Eric C. "The strategy of image quality assessment a new fidelity metric based upon distortion contrast decoupling /." 2008. http://digital.library.okstate.edu/etd/umi-okstate-2840.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

"Perceptual-Based Locally Adaptive Noise and Blur Detection." Doctoral diss., 2016. http://hdl.handle.net/2286/R.I.38426.

Full text
Abstract:
The quality of real-world visual content is typically impaired by many factors, including image noise and blur. Detecting and analyzing these impairments are important steps for multiple computer vision tasks. This work focuses on perceptual-based locally adaptive noise and blur detection and their application to image restoration. In the context of noise detection, this work proposes perceptual-based full-reference and no-reference objective image quality metrics by integrating perceptually weighted local noise into a probability summation model. Results are reported on both the LIVE and TID2008 databases. The proposed metrics consistently achieve good performance across noise types and across databases compared with many of the best recent quality metrics, and are able to predict with high accuracy the relative amount of perceived noise in images of different content. In the context of blur detection, existing approaches are either computationally costly or cannot perform reliably when dealing with the spatially varying nature of defocus blur; in addition, many do not take human perception into account. This work proposes a blur detection algorithm capable of detecting and quantifying the level of spatially varying blur by integrating directional edge-spread calculation, probability of blur detection and local probability summation. The proposed method generates a blur map indicating the relative amount of perceived local blurriness. In order to exclude flat and near-flat regions that do not contribute to perceivable blur, a perceptual model based on the Just Noticeable Difference (JND) is further integrated into the blur detection algorithm to generate perceptually significant blur maps. We compare the proposed method with six other state-of-the-art blur detection methods; experimental results show that it performs best both visually and quantitatively.
This work further investigates the application of the proposed blur detection methods to image deblurring. Two selective perceptual-based image deblurring frameworks are proposed to improve deblurring results and reduce restoration artifacts. In addition, an edge-enhanced super-resolution algorithm is proposed and shown to achieve better reconstructed results in edge regions.
Dissertation/Thesis
Doctoral Dissertation Electrical Engineering 2016
APA, Harvard, Vancouver, ISO, and other styles
45

Yaghmaei, Ayoub. "Documents Usability Estimation." Thesis, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-79133.

Full text
Abstract:
Improvements in the quality of technical documents influence the popularity of the associated product: customers do not like to waste time in the help desk's queue and are more satisfied if they can solve their problems independently, using the technical manuals, in an acceptable time. Moreover, the cost of support issues decreases for the product provider, and the help desk team has more time to handle the remaining unresolved issues in a better-qualified way. To realize these benefits, this thesis estimates the usability of documents before they are published. With such a prediction, technical documentation writers can take a goal-driven approach to improving the quality of their products' or services' manuals. Furthermore, as different structural metrics are examined in this research, the results of the thesis create an opportunity for multi-discipline improvement in Information Quality (IQ) process management.
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Zhisong. "Formal metrics for quantitative assessment of the quality of expert systems." Thesis, 1994. http://spectrum.library.concordia.ca/3298/1/NN97686.pdf.

Full text
Abstract:
This thesis investigates several new tools for measuring the complexity of expert systems. The most effective was the RC (rule-based complexity) metric, a hybrid metric that takes into account the matching patterns, size and search space of expert systems.
APA, Harvard, Vancouver, ISO, and other styles
47

Jalbani, Akhtar Ali. "Quality Assessment and Quality Improvement for UML Models." Doctoral thesis, 2011. http://hdl.handle.net/11858/00-1735-0000-0006-B6AA-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Pipa, Ana Margarida Conceição. "Owl ontology quality assessment and optimization in the cybersecurity domain." Master's thesis, 2018. http://hdl.handle.net/10071/18743.

Full text
Abstract:
The purpose of this dissertation is to assess the quality of ontologies, in patterns perceived in the cybersecurity context. A content analysis of ontologies indicated that differences were more pronounced for OWL ontologies in the cybersecurity field. Results showed an increase in relevance from expressivity to variability. Additionally, no differences were found in the strategies used in most of the incidents. The ontology background needs to be emphasized to understand the quality phenomena. Ontologies are a means of representing an area of knowledge through their semantic structure; they ease the search for information and the integration of data from different origins, since they provide a common base that guarantees the coherence of the data, which can be categorized and described in a normative way. The unification of information with the world that surrounds us makes it possible to create synergies between entities and relationships. However, cybersecurity is one of the real-world domains where knowledge is uncertain, so it is necessary to analyze the challenges of choosing an appropriate representation of unstructured information. Vulnerabilities are identified, but incident response is not an automatic mechanism for understanding and processing unstructured text found on the web.
APA, Harvard, Vancouver, ISO, and other styles
49

Brunet, Dominique. "A Study of the Structural Similarity Image Quality Measure with Applications to Image Processing." Thesis, 2012. http://hdl.handle.net/10012/6982.

Full text
Abstract:
Since its introduction in 2004, the Structural Similarity (SSIM) index has gained widespread popularity as an image quality assessment measure. SSIM is currently recognized to be one of the most powerful methods of assessing the visual closeness of images. That being said, the Mean Squared Error (MSE), which performs very poorly from a perceptual point of view, still remains the most common optimization criterion in image processing applications because of its relative simplicity along with a number of other properties that are deemed important. In this thesis, some necessary tools to assist in the design of SSIM-optimal algorithms are developed. This work combines theoretical developments with experimental research and practical algorithms. The description of the mathematical properties of the SSIM index represents the principal theoretical achievement in this thesis. Indeed, it is demonstrated how the SSIM index can be transformed into a distance metric. Local convexity, quasi-convexity, symmetries and invariance properties are also proved. The study of the SSIM index is also generalized to a family of metrics called normalized (or M-relative) metrics. Various analytical techniques for different kinds of SSIM-based optimization are then devised. For example, the best approximation according to the SSIM is described for orthogonal and redundant basis sets. SSIM-geodesic paths with arclength parameterization are also traced between images. Finally, formulas for SSIM-optimal point estimators are obtained. On the experimental side of the research, the structural self-similarity of images is studied. This leads to the confirmation of the hypothesis that the main source of self-similarity of images lies in their regions of low variance. On the practical side, an implementation of local statistical tests on the image residual is proposed for the assessment of denoised images. Also, heuristic estimations of the SSIM index and the MSE are developed. 
The research performed in this thesis should lead to the development of state-of-the-art image denoising algorithms. A better comprehension of the mathematical properties of the SSIM index represents another step toward the replacement of the MSE with SSIM in image processing applications.
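The SSIM index discussed above has a standard closed form; a minimal single-window (global) version for grayscale images is sketched below, with the usual C1 and C2 stabilizing constants for 8-bit data. (Practical SSIM averages this quantity over local windows; the thesis's metric construction then uses, e.g., the square root of 1 - SSIM.)

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two equally sized grayscale images:
    ((2*mx*my + C1)*(2*cov + C2)) / ((mx^2 + my^2 + C1)*(vx + vy + C2))."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

# A simple horizontal-gradient test image
img = np.tile(np.arange(64, dtype=np.float64), (64, 1))
assert abs(ssim_global(img, img) - 1.0) < 1e-12  # identical images → SSIM = 1
assert ssim_global(img, 255.0 - img) < 1.0       # inverted copy → lower SSIM
```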
APA, Harvard, Vancouver, ISO, and other styles
50

Lee, Yuan-Yu, and 李元喻. "Assessment of Landuse Change and Associated Impacts on Water Quality and Ecology by Employing Landscape Metrics -Case Study in Taipei Water Source Special District." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/tw68m8.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Institute of Civil and Disaster Prevention Engineering
101 (ROC academic year)
Changes in the hydrological environment of a watershed due to land-use variation are usually slow and imperceptible. Since there are complicated relationships among water quality, water resources and the ecosystem, a change in the geographical environment results in variation of the ecological environment and impacts on water quality. The rivers and the Feitsui Reservoir in the Taipei water source district are an important source of water supply for the Taipei area, and the control and management of water quality is a primary concern there. It is therefore urgent to establish knowledge of land use, water quality and ecology in the watershed. The objective of this study is to use landscape ecological metrics to assess land-use change and its associated impacts on water quality and ecology. SPOT satellite images from 1995 and 2012 were subjected to classification, change detection, and calculation of landscape ecology metrics at eight sampling sites in the district. The results showed that the agricultural area has expanded and partially replaced forest and road areas at Pinglin and Hu Liao Tan on the Peishih River and at Cukeng Dam on the Nanshih River; the change was smaller at Bihu and in the upstream reach of the Tungho River. The study also found that toward the downstream end of the special district, the clustering of construction zones, the contiguity of agricultural zones, the meandering of road zones and the aggregation of water zones increase, while the fractal dimension of forest zones decreases. A correlation analysis between the landscape metrics of each area and the ecological indices of water quality showed that large areas and long edge lengths of construction zones and roads, the results of human activity, have negative influences.
The trend worsened in agricultural patches when the number of patches increased, their density rose, and perforation and fragmentation occurred. Grass and forest patches had positive effects when they covered large areas and were adjacent to each other, because they functioned like buffer strips. The study shows that comparing the spatial and temporal variation of landscape metrics makes it possible to evaluate the landscape distribution and the change of water quality in this area.
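Landscape metrics such as those used in the study reduce a classified land-use raster to scalar descriptors. As an illustration only (the grid below is a toy raster, not the SPOT data), one common metric, edge density, counts class boundaries per unit landscape area:

```python
import numpy as np

def edge_density(grid, cell_size=1.0):
    """Total length of boundaries between cells of different land-use
    classes, divided by total landscape area."""
    grid = np.asarray(grid)
    # boundaries between horizontally and vertically adjacent cells
    horiz = np.count_nonzero(grid[:, 1:] != grid[:, :-1])
    vert = np.count_nonzero(grid[1:, :] != grid[:-1, :])
    area = grid.size * cell_size ** 2
    return (horiz + vert) * cell_size / area

# Toy raster: 0 = forest, 1 = agriculture
landscape = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])
print(edge_density(landscape))  # → 0.25 (4 boundary edges over 16 cells)
```

More fragmented or perforated patches raise this value, which is the kind of change the study correlated with worsening water-quality indices.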
APA, Harvard, Vancouver, ISO, and other styles
