Theses on the topic "Image quality estimation"
Consult the 29 best theses for your research on the topic "Image quality estimation".
Akinbola, Akintunde A. "Estimation of image quality factors for face recognition". Morgantown, W. Va. : [West Virginia University Libraries], 2005. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4308.
Title from document title page. Document formatted into pages; contains vi, 56 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 52-56).
Istenič, Klemen. "Underwater image-based 3D reconstruction with quality estimation". Doctoral thesis, Universitat de Girona, 2021. http://hdl.handle.net/10803/672199.
This thesis addresses the development of methods for the accurate scaling and uncertainty estimation of image-based 3D models acquired with monocular or unsynchronized camera systems in hard-to-access, GPS-denied underwater environments. The 3D reconstruction framework developed enables the creation of textured 3D models from optical and navigation data and is independent of any specific platform, camera, or mission. The thesis presents two novel methods for the automatic scaling of SfM-based 3D models using laser scalers. Both were used to perform an exhaustive analysis of model scaling errors in deep-sea environments, in order to determine the advantages and limitations of different 3D reconstruction strategies. In addition, a new SfM-based system is proposed to demonstrate the feasibility of globally consistent 3D reconstruction, with uncertainty information, while the robot is still in the water or shortly afterwards.
Programa de Doctorat en Tecnologia
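As a minimal sketch of the laser-scaler idea described in the abstract above (the function and variable names are hypothetical, and the thesis's actual methods also propagate uncertainty), the ratio between the known separation of the laser spots and the same distance measured in the unscaled model yields a global scale factor:

```python
import numpy as np

def scale_factor_from_laser(p1_model, p2_model, known_distance_m):
    """Global scale for an SfM model from one laser-scaler measurement.

    p1_model, p2_model: 3D coordinates (model units) of the two laser spots.
    known_distance_m: true spot separation in metres, fixed by the laser rig.
    """
    model_dist = np.linalg.norm(np.asarray(p2_model) - np.asarray(p1_model))
    return known_distance_m / model_dist

# Apply the factor to every vertex of the reconstructed point cloud.
points = np.random.rand(100, 3)            # stand-in for SfM points
s = scale_factor_from_laser([0.1, 0.2, 0.3], [0.4, 0.2, 0.3], 0.05)
points_metric = points * s
```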
Cui, Lei. "Topics in image recovery and image quality assessment". HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/368.
Thomas, Graham A. "Motion estimation and its application in broadcast television". Thesis, University of Essex, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.258717.
Tseng, Hsin-Wu, Jiahua Fan, and Matthew A. Kupinski. "Assessing computed tomography image quality for combined detection and estimation tasks". SPIE - Society of Photo-Optical Instrumentation Engineers, 2017. http://hdl.handle.net/10150/626451.
Ghosh Roy, Gourab. "A Simple Second Derivative Based Blur Estimation Technique". The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366890068.
Zhang, Changjun. "Seismic absorption estimation and compensation". Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2820.
Nezhadarya, Ehsan. "Image derivative estimation and its applications to edge detection, quality monitoring and copyright protection". Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44504.
Fuin, N. "Estimation of the image quality in emission tomography: application to optimization of SPECT system design". Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1417803/.
Texto completoAl, Chami Zahi. "Estimation de la qualité des données multimedia en temps réel". Thesis, Pau, 2021. http://www.theses.fr/2021PAUU3066.
Texto completoOver the past decade, data providers have been generating and streaming a large amount of data, including images, videos, audio, etc. In this thesis, we will be focusing on processing images since they are the most commonly shared between the users on the global inter-network. In particular, treating images containing faces has received great attention due to its numerous applications, such as entertainment and social media apps. However, several challenges could arise during the processing and transmission phase: firstly, the enormous number of images shared and produced at a rapid pace requires a significant amount of time to be processed and delivered; secondly, images are subject to a wide range of distortions during the processing, transmission, or combination of many factors that could damage the images’content. Two main contributions are developed. First, we introduce a Full-Reference Image Quality Assessment Framework in Real-Time, capable of:1) preserving the images’content by ensuring that some useful visual information can still be extracted from the output, and 2) providing a way to process the images in real-time in order to cope with the huge amount of images that are being received at a rapid pace. The framework described here is limited to processing those images that have access to their reference version (a.k.a Full-Reference). Secondly, we present a No-Reference Image Quality Assessment Framework in Real-Time. It has the following abilities: a) assessing the distorted image without having its distortion-free image, b) preserving the most useful visual information in the images before publishing, and c) processing the images in real-time, even though the No-Reference image quality assessment models are considered very complex. Our framework offers several advantages over the existing approaches, in particular: i. it locates the distortion in an image in order to directly assess the distorted parts instead of processing the whole image, ii. it has an acceptable trade-off between quality prediction accuracy and execution latency, andiii. it could be used in several applications, especially these that work in real-time. The architecture of each framework is presented in the chapters while detailing the modules and components of the framework. Then, a number of simulations are made to show the effectiveness of our approaches to solve our challenges in relation to the existing approaches
Arici, Tarik. "Single and multi-frame video quality enhancement". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29722.
Committee Chair: Yucel Altunbasak; Committee Member: Brani Vidakovic; Committee Member: Ghassan AlRegib; Committee Member: James Hamblen; Committee Member: Russ Mersereau. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Wang, Liang. "Novel dense stereo algorithms for high-quality depth estimation from images". UKnowledge, 2012. http://uknowledge.uky.edu/cs_etds/4.
Nawarathna, Ruwan D. "Detection of Temporal Events and Abnormal Images for Quality Analysis in Endoscopy Videos". Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc283849/.
Texto completoOrtiz, Cayón Rodrigo. "Amélioration de la vitesse et de la qualité d'image du rendu basé image". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4004/document.
Traditional photo-realistic rendering requires intensive manual and computational effort to create scenes and render realistic images. Creation of content for high-quality digital imagery has therefore been limited to experts, and highly realistic rendering still requires significant computational time. Image-Based Rendering (IBR) is an alternative with the potential to make high-quality content creation and rendering accessible to casual users, since it can generate photo-realistic imagery without the limitations mentioned above. We identified three important shortcomings of current IBR methods. First, each algorithm has different strengths and weaknesses depending on 3D reconstruction quality and scene content, and often no single algorithm offers the best image quality everywhere in the image. Second, such algorithms present strong artifacts when rendering partially reconstructed or missing objects. Third, most methods still produce significant visual artifacts in image regions where reconstruction is poor. Overall, this thesis addresses significant shortcomings of IBR in both speed and image quality, offering novel and effective solutions based on selective rendering, learning-based model substitution, and depth-error prediction and correction.
Cotte, Florian. "Estimation d'objets de très faible amplitude dans des images radiologiques X fortement bruitées". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT112.
In the field of X-ray radiology for medical diagnostics, progress in computing, electronics, and materials over the past three decades has led to the development of digital sensors that improve image quality. This CIFRE thesis, prepared in collaboration between the Gipsa-Lab laboratory and the company Trixell, a manufacturer of digital flat-panel detectors for radiological imaging, takes place in an industrial context of improving the image quality of X-ray sensors. More specifically, various technological causes can generate disturbances called "artifacts". Detailed knowledge of these technological causes (internal or external to the sensor) makes it possible to model these artifacts and to remove them from images. The chosen approach models the image as a sum of three terms, Y = C + S + B: the clinical content, the signal or artifact to be modeled, and the noise. The problem is to recover the artifact from Y and from knowledge about the clinical content and the noise. To solve this inverse problem, several Bayesian approaches using various kinds of prior knowledge are developed. Unlike existing estimation methods that are specific to a particular artifact, our approach is generic, and our models account for spatially variable shapes and features of artifacts that are locally stationary. They also give feedback on the quality of the estimate, validating or invalidating the model. The methods are evaluated and compared on synthetic images for two types of artifacts. On real images, they are illustrated on the removal of anti-scatter grids. The performance of the developed algorithms is superior to that of methods dedicated to a given artifact, at the cost of greater complexity. The latest results open interesting perspectives, especially for artifacts that are non-stationary in space and time.
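A toy decomposition in the spirit of the Y = C + S + B model above (the filters and parameters below are illustrative stand-ins, not the thesis's Bayesian estimators):

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def estimate_artifact(y, content_window=15, artifact_sigma=5.0):
    """Toy decomposition Y = C + S + B (hypothetical parameters).

    C is approximated by a large median filter (clinical content),
    S by smoothing the residual (artifacts assumed locally stationary),
    and whatever remains is attributed to the noise B.
    """
    c_hat = median_filter(y, size=content_window)             # content estimate
    s_hat = gaussian_filter(y - c_hat, sigma=artifact_sigma)  # artifact estimate
    b_hat = y - c_hat - s_hat                                 # residual noise
    return c_hat, s_hat, b_hat

y = np.random.rand(64, 64)        # stand-in for a noisy radiograph
c, s, b = estimate_artifact(y)
```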
Harouna Seybou, Aboubacar. "Analyse d'images couleurs pour le contrôle qualité non destructif". Thesis, Poitiers, 2016. http://www.theses.fr/2016POIT2282/document.
Color is a major criterion in many sectors for identifying, comparing, or simply controlling the quality of products. This task is generally performed by a human operator through visual inspection. Unfortunately, this method is unreliable and not repeatable because of the operator's subjectivity. To avoid these limitations, an RGB camera can be used to capture and extract photometric properties. This method is simple to deploy and permits high-speed control; however, it is very sensitive to metamerism effects. Reflectance measurement is therefore the more reliable solution for ensuring conformity between samples and a reference. Thus, in the printing industry, spectrophotometers are used to measure uniform color patches printed on a lateral band. For control of the entire printed surface, multispectral cameras are used to estimate the reflectance of each pixel; however, they are very expensive compared to conventional cameras. In this thesis, we study the use of an RGB camera for spectral reflectance estimation in the context of printing. We propose a complete spectral description of the reproduction chain to reduce the number of measurements in the training stages and to compensate for the acquisition limitations. Our first main contribution concerns the consideration of colorimetric limitations in the spectral characterization of a camera. The second main contribution is the exploitation of the spectral printer model in the reflectance estimation methods.
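A common baseline for this kind of reflectance estimation, shown here only as a sketch (the thesis develops a full spectral model of camera and printer, which this does not reproduce), is a least-squares mapping from camera responses to measured reflectances:

```python
import numpy as np

def train_reflectance_estimator(rgb_train, refl_train):
    """Least-squares RGB -> reflectance mapping (a common baseline).

    rgb_train:  (n_samples, 3) camera responses of training patches.
    refl_train: (n_samples, n_wavelengths) measured reflectances.
    Returns W such that reflectance ~= rgb @ W.
    """
    W, *_ = np.linalg.lstsq(rgb_train, refl_train, rcond=None)
    return W

# Toy training set: 200 patches, 31 wavelengths (400-700 nm, 10 nm steps).
rgb = np.random.rand(200, 3)
refl = np.random.rand(200, 31)
W = train_reflectance_estimator(rgb, refl)
estimate = np.random.rand(3) @ W   # per-pixel reflectance estimate
```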
Jiang, Shiguo. "Estimating Per-pixel Classification Confidence of Remote Sensing Images". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354557859.
Kaller, Ondřej. "Pokročilé metody snímání a hodnocení kvality 3D videa". Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-369744.
Belgued, Youssef. "Amélioration de la qualité géométrique des images spatiales radar : méthodes de localisation et restitution du relief par radargrammétrie". Toulouse, INPT, 2000. http://www.theses.fr/2000INPT019H.
Conze, Pierre-Henri. "Estimation de mouvement dense long-terme et évaluation de qualité de la synthèse de vues. Application à la coopération stéréo-mouvement". PhD thesis, INSA de Rennes, 2014. http://tel.archives-ouvertes.fr/tel-00992940.
Delvit, Jean-Marc. "Évaluation de la résolution d'un instrument optique par une méthode neuronale : application à une image quelconque de télédétection". Toulouse, ENSAE, 2003. http://www.theses.fr/2003ESAE0010.
Leão Junior, Emerson [UNESP]. "Análise da qualidade da informação produzida por classificação baseada em orientação a objeto e SVM visando a estimativa do volume do reservatório Jaguari-Jacareí". Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/152234.
Texto completoApproved for entry into archive by ALESSANDRA KUBA OSHIRO null (alessandra@fct.unesp.br) on 2017-12-06T10:52:22Z (GMT) No. of bitstreams: 1 leaojunior_e_me_prud.pdf: 4186679 bytes, checksum: ee186b23411343c3e2d782d622226699 (MD5)
Made available in DSpace on 2017-12-06T10:52:22Z (GMT). No. of bitstreams: 1 leaojunior_e_me_prud.pdf: 4186679 bytes, checksum: ee186b23411343c3e2d782d622226699 (MD5) Previous issue date: 2017-04-25
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
This study extracts information from multispectral images and analyses the quality of that information in the water-volume estimation of the Jaguari-Jacareí reservoir. The study of changes in the reservoir's volume was motivated by the critical situation of the Cantareira System reservoirs in São Paulo State caused by the 2014 water crisis. The reservoir area was extracted from RapidEye multispectral images acquired before and during the water crisis (2013 and 2014, respectively) through land-cover classification. First, the classification was carried out with two distinct approaches: object-based (Object-Based Image Analysis, OBIA) and pixel-based (Support Vector Machine, SVM). Classification quality was evaluated through thematic accuracy, in which the per-class user's accuracy expressed the error of the class representing water in 2013 and 2014. Second, the volume of the reservoir's water body was estimated using a numerical terrain model generated from two additional data sources: topographic data from a bathymetric survey, made available by Sabesp, and the AW3D30 elevation model (to complement the information in the area where Sabesp data were not available). Comparing the two techniques, SVM slightly outperformed OBIA in the image classification for both 2013 and 2014. In the volume calculation considering the water level estimated from the generated DTM, the SVM approach gave the better result in 2013, whereas the OBIA approach was more accurate in 2014. Regarding the quality of the information produced in the volume estimation, both approaches presented similar uncertainty values, with the OBIA method slightly less uncertain than SVM. In conclusion, the classification methods used in this dissertation produced information accurate enough for monitoring water resources, with SVM performing subtly better in land-cover classification, volume estimation, and some of the scenarios considered in the uncertainty propagation.
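A small illustration of the thematic-accuracy measure mentioned above (the confusion matrix below is hypothetical, and rows are taken as classified labels, columns as reference labels):

```python
import numpy as np

def users_accuracy(confusion, class_index):
    """User's accuracy for one class: correctly classified pixels of that
    class divided by all pixels the classifier assigned to it
    (rows = classified, columns = reference; this layout is an assumption)."""
    return confusion[class_index, class_index] / confusion[class_index, :].sum()

# Hypothetical two-class example (water vs. non-water).
cm = np.array([[950,  50],
               [ 30, 970]])
print(users_accuracy(cm, 0))   # 0.95 -> 5% commission error for 'water'
```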
Chou, Xinhan (周昕翰). "Defocus Blur Identification for Depth Estimation and Image Quality Assessment". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/51786532470766991315.
National Chung Cheng University, Graduate Institute of Electrical Engineering, ROC academic year 101 (2012).
In this thesis, we present a defocus blur identification technique based on histogram analysis of an image. The image defocus process is formulated by incorporating the non-linear camera response and an intensity-dependent noise model. Histogram matching between the synthesized and real defocused regions is then carried out with intensity-dependent filtering. By iteratively changing the point-spread-function parameters, the best blur extent is identified from the histogram comparison. The presented technique is first applied to depth measurement using defocus information. It is also used for image quality assessment, specifically for assessments associated with optical defocus blur. We have performed experiments on real-scene images; the results demonstrate the robustness and feasibility of the proposed technique.
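A simplified sketch of the histogram-matching idea (ignoring the camera response and noise model that the thesis incorporates), grid-searching a Gaussian PSF width:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def identify_blur_extent(focused_patch, defocused_patch, sigmas, bins=64):
    """Grid-search the Gaussian PSF width whose synthetic defocus best
    matches the observed patch histogram (a toy version of the idea)."""
    obs_hist, _ = np.histogram(defocused_patch, bins=bins, range=(0, 1),
                               density=True)
    best_sigma, best_err = None, np.inf
    for sigma in sigmas:
        synth = gaussian_filter(focused_patch, sigma=sigma)
        h, _ = np.histogram(synth, bins=bins, range=(0, 1), density=True)
        err = np.abs(h - obs_hist).sum()     # L1 histogram distance
        if err < best_err:
            best_sigma, best_err = sigma, err
    return best_sigma

patch = np.random.rand(64, 64)               # stand-in for a focused region
blurred = gaussian_filter(patch, 2.0)        # simulated defocused observation
print(identify_blur_extent(patch, blurred, sigmas=np.linspace(0.5, 4.0, 8)))
```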
Tian, Xiaoyu. "Prospective Estimation of Radiation Dose and Image Quality for Optimized CT Performance". Diss., 2016. http://hdl.handle.net/10161/12818.
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a shared responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is determining the minimum radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics characterizing the radiation dose and image quality of a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the initiation of the exam, the exam could be personalized by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to bring these theoretical models into clinical practice by developing an organ-based dose-monitoring system and image-based noise-addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.
With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
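A minimal sketch of the convolution idea behind the TCM dose prediction described above (the kernel and modulation profile below are toy stand-ins, not validated dose-spread functions):

```python
import numpy as np

def longitudinal_dose_profile(tube_current_ma, kernel):
    """Convolution-based sketch: the dose along the scan axis is the
    tube-current modulation profile convolved with a dose-spread kernel
    (kernel values here are hypothetical)."""
    return np.convolve(tube_current_ma, kernel, mode="same")

z = np.arange(200)                                  # slice positions
mA = 100 + 80 * np.sin(z / 15.0) ** 2               # toy TCM profile
kernel = np.exp(-np.abs(np.arange(-20, 21)) / 5.0)  # toy dose-spread kernel
kernel /= kernel.sum()                              # normalize the kernel
dose = longitudinal_dose_profile(mA, kernel)
```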
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on a commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, and so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this work accurately assessed quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
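A white-noise sketch of step (1), image-based noise addition (real CT noise is spatially correlated, so this only shows the scaling idea: quantum-noise variance is inversely proportional to dose):

```python
import numpy as np

def simulate_reduced_dose(image, noise_std_full, dose_fraction, rng=None):
    """Simulate a scan at `dose_fraction` of the original dose by adding
    zero-mean noise of std sigma0 * sqrt(1/f - 1), since variance scales
    as 1/dose. (A toy version; clinical tools shape the noise texture.)"""
    rng = np.random.default_rng() if rng is None else rng
    extra_std = noise_std_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, extra_std, image.shape)

ct = np.random.normal(50, 10, (128, 128))   # stand-in for a CT slice (HU)
half_dose = simulate_reduced_dose(ct, noise_std_full=10.0, dose_fraction=0.5)
```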
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Lapini, Alessandro. "Advanced multiresolution Bayesian methods and SAR image modelling for speckle removal". Doctoral thesis, 2014. http://hdl.handle.net/2158/843707.
Silva, Lourenço de Mértola Belford Correia da. "Quality assessment of 2D image rendering for 4D light field content". Master's thesis, 2018. http://hdl.handle.net/10071/18244.
Light Field (LF) technology, based on visual data representations carrying a large amount of information, can be used to overcome some of the current limitations of 3D technology and enables new functionalities that traditional 2D imaging does not directly support. Current display devices, however, are not prepared to process this type of content, which means rendering algorithms are needed to present it as 2D images or as multiview 3D. The visual quality perceived by the user is highly dependent on the rendering approach adopted; LF rendering technology therefore requires proper quality assessment with real viewers, since there is no better or more reliable way to evaluate such algorithms. In this context, this dissertation studies, implements, and compares several LF rendering algorithms and approaches. Their performance is evaluated through subjective quality tests, to understand which algorithm performs best in particular situations and how certain input parameters influence subjective quality. A comparison between rendering focused on a single plane and all-in-focus rendering is also carried out.
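Subjective tests of this kind are typically summarized by mean opinion scores; a minimal sketch (the 1-5 scale and the scores below are illustrative):

```python
import numpy as np

def mean_opinion_score(ratings):
    """MOS and 95% confidence interval from subjective scores (1-5 scale),
    the usual way rendering algorithms are ranked in such tests."""
    r = np.asarray(ratings, dtype=float)
    mos = r.mean()
    ci95 = 1.96 * r.std(ddof=1) / np.sqrt(r.size)
    return mos, ci95

print(mean_opinion_score([4, 5, 3, 4, 4, 5, 3, 4]))
```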
Dini, Fabrizio. "Target detection and tracking in video surveillance". Doctoral thesis, 2010. http://hdl.handle.net/2158/574120.
Gariepy, Ryan. "Quadrotor Position Estimation using Low Quality Images". Thesis, 2011. http://hdl.handle.net/10012/6274.
Alves, Styve da Conceicao. "Estimativa e diagnóstico da qualidade do ar obtida por dados de observação da Terra". Master's thesis, 2017. http://hdl.handle.net/10316/82952.
The air we breathe may contain several pollutants, depending on the many factors that contribute to its composition; at high levels, these pollutants can have serious effects on the environment and on public health. Emissions of the main air pollutants in Europe have declined since 1990. During the last decade, this reduction in emissions has, for some pollutants, improved air quality throughout the region. However, owing to the complex links between emissions and air quality, emission reductions do not always produce a corresponding fall in atmospheric concentrations. In this context, the present work studies the capacity to model pollutants, and their chemical/molecular relations, using statistical tools, and to obtain quantified information on the influence and cross-interdependence of the different facets of the chemical characterization of air quality. The work arose from a collaboration with Primelayer and Spacelayer, which provided data for February 2017 at Meco, in the municipality of Montemor-o-Velho: meteorological factors (relative humidity, temperature, wind direction, wind speed, and rainfall) and air quality indicators (CO, NO, NO2, NH3, SO2, O3, PM2.5, PM10, PANs, NMVOCs) characterizing the various chemical pollutants. The Copernicus SNAP code and multivariate analysis were used as tools to establish pollutant emission patterns, starting from a multivariate modelling strategy applied to the February 2017 data on the pollutants and quality indicators referred to above. The study showed a good description (modelling) of the PM2.5, NO2, CO, and NMVOCs levels.
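A hedged sketch of the multivariate-analysis step (the data matrix and variable names are illustrative stand-ins for the February 2017 dataset, not the actual measurements):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Toy stand-in for the dataset: rows are hourly samples, columns are
# pollutant/meteorological variables (e.g. CO, NO2, O3, PM2.5, temperature,
# relative humidity); the values here are random placeholders.
X = np.random.rand(500, 6)

# Standardize the variables, then extract the dominant emission patterns.
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Z)
print(pca.explained_variance_ratio_)   # share of variance per pattern
```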