To view other types of publications on this topic, follow the link: Image quality estimation.

Dissertations on the topic "Image quality estimation"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Browse the top 29 dissertations for research on the topic "Image quality estimation".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these are available in the record's metadata.

Browse dissertations across a wide variety of disciplines and compile your bibliography correctly.

1

Akinbola, Akintunde A. "Estimation of image quality factors for face recognition." Morgantown, W. Va. : [West Virginia University Libraries], 2005. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4308.

Abstract:
Thesis (M.S.)--West Virginia University, 2005.
Title from document title page. Document formatted into pages; contains vi, 56 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 52-56).
2

Istenič, Klemen. "Underwater image-based 3D reconstruction with quality estimation." Doctoral thesis, Universitat de Girona, 2021. http://hdl.handle.net/10803/672199.

Abstract:
This thesis addresses the development of resources for accurate scaling and uncertainty estimation of image-based 3D models for scientific purposes, based on data acquired with monocular or unsynchronized camera systems in difficult-to-access, GPS-denied (underwater) environments. The developed 3D reconstruction framework allows the creation of textured 3D models based on optical and navigation data and is independent of a specific platform, camera or mission. The dissertation presents two new methods for automatic scaling of SfM-based 3D models using laser scalers. Both were used to perform an in-depth scale error analysis of large-scale models of deep-sea underwater environments to determine the advantages and limitations of image-based 3D reconstruction strategies. In addition, a novel SfM-based system is proposed to demonstrate the feasibility of producing a globally consistent reconstruction, together with its uncertainty, while the robot is still in the water or shortly after.
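At its core, laser-scaler scaling reduces to the ratio between a known physical laser separation and the corresponding distance measured in the unscaled SfM model. A minimal sketch of that idea in Python (the coordinates and the 0.10 m spacing are hypothetical; the methods in the thesis additionally model the laser-camera geometry and the local surface):

```python
import numpy as np

def scale_from_laser_points(model_pts, laser_spacing_m):
    """Estimate a global scale factor for an SfM model from two triangulated
    3D points where a pair of parallel lasers (known separation) hits the scene."""
    p1, p2 = np.asarray(model_pts[0]), np.asarray(model_pts[1])
    model_dist = np.linalg.norm(p2 - p1)   # distance in arbitrary SfM units
    return laser_spacing_m / model_dist    # metres per model unit

# Hypothetical example: lasers mounted 0.10 m apart on the vehicle.
scale = scale_from_laser_points([(0.12, 0.40, 2.10), (0.19, 0.41, 2.10)], 0.10)
print(f"scale: {scale:.4f} m per model unit")
```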
Programa de Doctorat en Tecnologia
3

Cui, Lei. "Topics in image recovery and image quality assessment." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/368.

Abstract:
Image recovery, especially image denoising and deblurring, has been widely studied during the last decades. Variational models can preserve image edges well while restoring images from noise and blur, but some of these models are non-convex, and methods for non-convex optimization remain limited. This thesis applies a non-convex optimization method, the difference of convex algorithm (DCA), to solve different variational models for various kinds of noise removal problems. Depending on the imaging environment and imaging technique, the noise appearing in images can follow different distributions. Here we show how to apply DCA to Rician noise removal and Cauchy noise removal. Our experiments demonstrate that the proposed non-convex algorithms outperform existing ones, with better PSNR and less computation time. This progress can improve the precision of diagnostic techniques by reducing Rician noise more efficiently, and can improve synthetic aperture radar imaging precision by reducing Cauchy noise. When applying variational models to image denoising and deblurring, a significant issue is the choice of the regularization parameters. Few methods have been proposed for regularization parameter selection so far, and the numerical algorithms of existing methods are either complicated or implicit. To estimate regularization parameters more efficiently and easily, we introduce a new image sharpness metric, the SQ-Index, based on the theory of Global Phase Coherence. The new metric can be used to estimate parameters for a variety of variational models, and can also estimate the noise intensity under specific models. Our experiments show the noise estimation performance of this new metric, and extensive experiments address image denoising and deblurring under different kinds of noise and blur. The numerical results show the robust performance of image restoration when applying our metric to parameter selection for different variational models.
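For readers unfamiliar with DCA, the scheme applies whenever the objective splits into a difference of two convex functions: each iteration linearizes the concave part and solves a convex subproblem. A generic statement of the scheme (not the thesis's specific Rician or Cauchy data terms):

```latex
\min_{x}\; F(x) = g(x) - h(x), \qquad g,\ h\ \text{convex},
\qquad\text{DCA:}\quad
y^{k} \in \partial h(x^{k}), \quad
x^{k+1} \in \operatorname*{arg\,min}_{x} \bigl\{\, g(x) - \langle y^{k},\, x \rangle \,\bigr\}.
```

Under mild conditions the objective values F(x^k) are non-increasing, and each subproblem is convex, so standard solvers apply.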
4

Thomas, Graham A. "Motion estimation and its application in broadcast television." Thesis, University of Essex, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.258717.

5

Tseng, Hsin-Wu, Jiahua Fan, and Matthew A. Kupinski. "Assessing computed tomography image quality for combined detection and estimation tasks." SPIE - Society of Photo-Optical Instrumentation Engineers, 2017. http://hdl.handle.net/10150/626451.

Abstract:
Maintaining or even improving image quality while lowering patient dose is always the goal in clinical computed tomography (CT) imaging. Iterative reconstruction (IR) algorithms have been designed to allow a reduced dose while maintaining or even improving image quality. However, we have previously shown that the dose-saving capabilities of IR differ across clinical tasks. The channelized scanning linear observer (CSLO) was applied to study clinical tasks that combine detection and estimation when assessing CT image data. The purpose of this work is to illustrate the importance of task complexity when assessing dose savings and to move toward more realistic tasks in these types of studies. Human-observer validation of these methods will take place in a future publication. Low-contrast objects embedded in body-size phantoms were imaged multiple times and reconstructed by filtered back projection (FBP) and an IR algorithm. The task was to detect, localize, and estimate the size and contrast of low-contrast objects in the phantom. Independent signal-present and signal-absent regions of interest cropped from images were channelized by dense difference-of-Gauss channels for CSLO training and testing. Estimation receiver operating characteristic (EROC) curves and the areas under EROC curves (EAUC) were calculated by CSLO as the figure of merit. The one-shot method was used to compute the variance of the EAUC values. Results suggest that the IR algorithm studied in this work could efficiently reduce the dose by approximately 50% while maintaining image quality comparable to conventional FBP reconstruction, warranting further investigation using real patient data.
6

Ghosh, Roy Gourab. "A Simple Second Derivative Based Blur Estimation Technique." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366890068.

7

Zhang, Changjun. "Seismic absorption estimation and compensation." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2820.

Abstract:
As seismic waves travel through the earth, the visco-elasticity of the earth's medium will cause energy dissipation and waveform distortion. This phenomenon is referred to as seismic absorption or attenuation. The absorptive property of a medium can be described by a quality factor Q, which determines the energy decay and a velocity dispersion relationship. Four new ideas have been developed in this thesis to deal with the estimation and application of seismic absorption. By assuming that the amplitude spectrum of a seismic wavelet may be modeled by that of a Ricker wavelet, an analytical relation has been derived to estimate a quality factor from the seismic data peak frequency variation with time. This relation plays a central role in quality factor estimation problems. To estimate interval Q for reservoir description, a method called reflectivity guided seismic attenuation analysis is proposed. This method first estimates peak frequencies at a common midpoint location, then correlates the peak frequency with sparsely-distributed reflectivities, and finally calculates Q values from the peak frequencies at the reflectivity locations. The peak frequency is estimated from the prestack CMP gather using peak frequency variation with offset analysis which is similar to amplitude variation with offset analysis in implementation. The estimated Q section has the same layer boundaries of the acoustic impedance or other layer properties. Therefore, the seismic attenuation property obtained with the guide of reflectivity is easy to interpret for the purpose of reservoir description. To overcome the instability problem of conventional inverse Q filtering, Q compensation is formulated as a least-squares (LS) inverse problem based on statistical theory. The matrix of forward modeling is composed of time-variant wavelets. The LS de-absorption is solved by an iterative non-parametric approach. To compensate for absorption in migrated seismic sections, a refocusing technique is developed using non-stationary multi-dimensional deconvolution. A numerical method is introduced to calculate the blurring function in layered media, and a least squares inverse scheme is used to remove the blurring effect in order to refocus the migrated image. This refocusing process can be used as an alternative to regular migration with absorption compensation.
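The peak-frequency relation mentioned above can be reproduced from two standard ingredients: constant-Q amplitude decay and a Ricker amplitude spectrum. Assuming a source wavelet with dominant frequency f_m, the received amplitude spectrum after traveltime t is

```latex
A(f,t) \;\propto\; f^{2}\,
\exp\!\left(-\frac{f^{2}}{f_m^{2}}\right)
\exp\!\left(-\frac{\pi f t}{Q}\right),
```

and setting the derivative with respect to f to zero at the received peak frequency f_p gives

```latex
\frac{2}{f_p} - \frac{2 f_p}{f_m^{2}} = \frac{\pi t}{Q}
\quad\Longrightarrow\quad
Q = \frac{\pi t\, f_p\, f_m^{2}}{2\,\bigl(f_m^{2} - f_p^{2}\bigr)},
```

so Q follows directly from the observed downshift of the peak frequency with traveltime. This is a standard form of the relation under the stated assumptions, not necessarily the exact expression derived in the thesis.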
8

Nezhadarya, Ehsan. "Image derivative estimation and its applications to edge detection, quality monitoring and copyright protection." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44504.

Abstract:
Multi-order image derivatives are used in many image processing and computer vision applications, such as edge detection, feature extraction, image enhancement, segmentation, matching, watermarking and quality assessment. In some applications, the image derivatives are modified and then inverse-transformed to the image domain. For example, one approach to image denoising is to keep the significant image derivatives and shrink the non-significant ones; the denoised image is then reconstructed from the modified derivatives. The main challenge here is how to inverse-transform the derivatives to the image domain. This thesis proposes different algorithms to estimate image derivatives and applies them to image denoising, watermarking and quality assessment. For noisy color images, we present a method that yields accurate and robust estimates of the gradient magnitude and direction. This method obtains the gradient in a certain direction by applying a prefilter and a postfilter in the perpendicular direction. Simulation results show that the proposed method outperforms state-of-the-art methods. We also present a multi-scale derivative transform, MSDT, that obtains the gradient at a given image scale using the detail horizontal, vertical and diagonal wavelet coefficients of the image at that scale. The inverse transform is designed such that any change in the image derivative results in the minimum possible change in the image. The MSDT is used to derive a novel multi-scale image watermarking method that embeds the watermark bits in the angles of the significant gradient vectors at different image scales. Experimental results show that the proposed method outperforms other watermarking methods in terms of robustness to attacks, imperceptibility of the watermark and watermark capacity. The MSDT is then used to obtain a semi-blind method for video quality assessment. The method embeds pseudo-random binary watermarks in the derivative vectors of the original undistorted video, and the quality of the distorted video is estimated from the similarity between the embedded and extracted watermarks. Simulation results on video distorted by compression/decompression show that the proposed method can accurately estimate the quality of a video and its frames for a wide range of compression ratios.
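The prefilter/postfilter construction described above is, in spirit, a separable derivative-of-Gaussian filter: smooth along the direction perpendicular to the derivative, differentiate along the other. A minimal grayscale sketch with SciPy (illustrative only; the author's filter pair is designed specifically for noisy color images):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def gradient_dog(img, sigma=1.5):
    """Gradient via separable derivative-of-Gaussian filters: a Gaussian
    prefilter perpendicular to each derivative direction, then a first-order
    Gaussian derivative (the postfilter) along that direction."""
    gx = gaussian_filter1d(gaussian_filter1d(img, sigma, axis=0),
                           sigma, axis=1, order=1)
    gy = gaussian_filter1d(gaussian_filter1d(img, sigma, axis=1),
                           sigma, axis=0, order=1)
    return np.hypot(gx, gy), np.arctan2(gy, gx)   # magnitude, direction

mag, ang = gradient_dog(np.random.rand(64, 64))   # stand-in for a noisy image
```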
9

Fuin, N. "Estimation of the image quality in emission tomography : application to optimization of SPECT system design." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1417803/.

Abstract:
In emission tomography, the design of the imaging system has a great influence on the quality of the output image. Optimization of the system design is a difficult problem due to the computational complexity and the challenges in its mathematical formulation. In order to compare different system designs, an efficient and effective method to calculate image quality is needed. In this thesis, statistical and deterministic methods for calculating the uncertainty in the reconstruction are presented. In the deterministic case, the Fisher information matrix (FIM) formalism can be employed to characterize such uncertainty. Unfortunately, computing, storing and inverting the FIM is not feasible with 3D imaging systems. In order to tackle the computational load of calculating the inverse of the FIM, a novel approximation that relies on a sub-sampling of the FIM is proposed. The FIM is calculated over a subset of voxels arranged in a grid that covers the whole volume. This formulation reduces the computational complexity of inverting the FIM but nevertheless accounts for the global interdependence between the variables, for the acquisition geometry and for the object dependency. Using this approach, the noise properties as a function of the system geometry parameterization were investigated for three different cases. In the first study, the design of a parallel-hole collimator for SPECT is optimized; the new method can be applied to problems like trading off collimator resolution and sensitivity. In the second study, the reconstructed image quality was evaluated in the case of truncated projection data, showing how the sub-sampling approach is very accurate for evaluating the effects of missing data. Finally, the noise properties of a D-SPECT system were studied for varying acquisition protocols, showing how the new method is well suited to problems like optimizing adaptive data sampling schemes.
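For context, the deterministic analysis referred to here rests on the Fisher information matrix and the Cramér-Rao bound. For Poisson-distributed emission data with mean counts in detector bin i depending on the voxel values, the FIM takes the standard form

```latex
F_{jk}(\theta) \;=\; \sum_{i} \frac{1}{\bar{y}_i(\theta)}\,
\frac{\partial \bar{y}_i(\theta)}{\partial \theta_j}\,
\frac{\partial \bar{y}_i(\theta)}{\partial \theta_k},
\qquad
\operatorname{Cov}\bigl(\hat{\theta}\bigr) \;\succeq\; F^{-1}(\theta),
```

where i indexes detector bins and the parameter vector collects the voxel values; the thesis's contribution is to make the evaluation and inversion of F tractable by computing it on a sub-sampled voxel grid.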
10

Al, Chami Zahi. "Estimation de la qualité des données multimedia en temps réel." Thesis, Pau, 2021. http://www.theses.fr/2021PAUU3066.

Abstract:
Over the past decade, data providers have been generating and streaming a large amount of data, including images, videos, audio, etc. In this thesis, we focus on processing images, since they are the most commonly shared media between users on the global inter-network. In particular, treating images containing faces has received great attention due to its numerous applications, such as entertainment and social media apps. However, several challenges can arise during the processing and transmission phases: firstly, the enormous number of images shared and produced at a rapid pace requires a significant amount of time to be processed and delivered; secondly, images are subject to a wide range of distortions during processing, transmission, or through a combination of many factors that can damage the images' content. Two main contributions are developed. First, we introduce a Full-Reference Image Quality Assessment Framework in Real-Time, capable of: 1) preserving the images' content by ensuring that some useful visual information can still be extracted from the output, and 2) providing a way to process the images in real time in order to cope with the huge amount of images received at a rapid pace. The framework described here is limited to processing images that have access to their reference version (a.k.a. Full-Reference). Secondly, we present a No-Reference Image Quality Assessment Framework in Real-Time. It has the following abilities: a) assessing the distorted image without having its distortion-free version, b) preserving the most useful visual information in the images before publishing, and c) processing the images in real time, even though No-Reference image quality assessment models are considered very complex. Our framework offers several advantages over existing approaches, in particular: i. it locates the distortion in an image in order to directly assess the distorted parts instead of processing the whole image, ii. it has an acceptable trade-off between quality prediction accuracy and execution latency, and iii. it can be used in several applications, especially those that work in real time. The architecture of each framework is presented in the chapters, detailing their modules and components. A number of simulations are then made to show the effectiveness of our approaches in solving the stated challenges relative to existing approaches.
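As a point of reference for the full-reference setting, classical metrics such as PSNR and SSIM score a distorted image directly against its pristine version. A minimal sketch with scikit-image (illustrative only; the framework described above adds distortion localization and real-time constraints on top of such scoring):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))              # stand-in for the pristine image
distorted = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, distorted, data_range=1.0)
ssim = structural_similarity(reference, distorted, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```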
11

Arici, Tarik. "Single and multi-frame video quality enhancement." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29722.

Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Yucel Altunbasak; Committee Member: Brani Vidakovic; Committee Member: Ghassan AlRegib; Committee Member: James Hamblen; Committee Member: Russ Mersereau. Part of the SMARTech Electronic Thesis and Dissertation Collection.
12

Wang, Liang. "Novel Dense Stereo Algorithms for High-Quality Depth Estimation from Images." UKnowledge, 2012. http://uknowledge.uky.edu/cs_etds/4.

Abstract:
This dissertation addresses the problem of inferring scene depth information from a collection of calibrated images taken from different viewpoints via stereo matching. Although it has been heavily investigated for decades, depth from stereo remains a long-standing challenge and popular research topic for several reasons. First of all, in order to be of practical use for many real-time applications such as autonomous driving, accurate depth estimation in real-time is of great importance and one of the core challenges in stereo. Second, for applications such as 3D reconstruction and view synthesis, high-quality depth estimation is crucial to achieve photo realistic results. However, due to the matching ambiguities, accurate dense depth estimates are difficult to achieve. Last but not least, most stereo algorithms rely on identification of corresponding points among images and only work effectively when scenes are Lambertian. For non-Lambertian surfaces, the "brightness constancy" assumption is no longer valid. This dissertation contributes three novel stereo algorithms that are motivated by the specific requirements and limitations imposed by different applications. In addressing high speed depth estimation from images, we present a stereo algorithm that achieves high quality results while maintaining real-time performance. We introduce an adaptive aggregation step in a dynamic-programming framework. Matching costs are aggregated in the vertical direction using a computationally expensive weighting scheme based on color and distance proximity. We utilize the vector processing capability and parallelism in commodity graphics hardware to speed up this process over two orders of magnitude. In addressing high accuracy depth estimation, we present a stereo model that makes use of constraints from points with known depths - the Ground Control Points (GCPs) as referred to in stereo literature. Our formulation explicitly models the influences of GCPs in a Markov Random Field. A novel regularization prior is naturally integrated into a global inference framework in a principled way using the Bayes rule. Our probabilistic framework allows GCPs to be obtained from various modalities and provides a natural way to integrate information from various sensors. In addressing non-Lambertian reflectance, we introduce a new invariant for stereo correspondence which allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions - BRDFs). This invariant can be used to formulate a rank constraint on stereo matching when the scene is observed by several lighting configurations in which only the lighting intensity varies.
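The adaptive aggregation step follows the familiar support-weight idea: neighboring pixels contribute to a pixel's matching cost in proportion to their color similarity and spatial proximity. A compact grayscale sketch of vertical aggregation over a cost volume (parameters are hypothetical; the dissertation's version runs inside a dynamic-programming framework on graphics hardware):

```python
import numpy as np

def aggregate_vertical(cost, image, radius=3, gamma_c=10.0, gamma_d=5.0):
    """Aggregate an (H, W, D) matching-cost volume along the vertical axis,
    weighting each row by color similarity and distance to the center row.
    np.roll wraps at the borders, which is acceptable for a sketch."""
    out = np.zeros_like(cost)
    wsum = np.zeros(cost.shape[:2])
    for dy in range(-radius, radius + 1):
        w = np.exp(-np.abs(image - np.roll(image, dy, axis=0)) / gamma_c
                   - abs(dy) / gamma_d)
        out += w[..., None] * np.roll(cost, dy, axis=0)
        wsum += w
    return out / wsum[..., None]

image = np.random.rand(48, 64)             # grayscale stand-in
cost = np.random.rand(48, 64, 16)          # 16 disparity hypotheses
disparity = np.argmin(aggregate_vertical(cost, image), axis=2)
```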
13

Nawarathna, Ruwan D. "Detection of Temporal Events and Abnormal Images for Quality Analysis in Endoscopy Videos." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc283849/.

Abstract:
Recent reports suggest that measuring objective quality is essential to the success of colonoscopy. Several quality indicators (i.e., metrics) proposed in recent studies are implemented in software systems that compute real-time quality scores for routine screening colonoscopy. Most quality metrics are derived from various temporal events occurring during the colonoscopy procedure. The location of the phase boundary between the insertion and withdrawal phases and the amount of circumferential inspection are two such important temporal events. These two temporal events can be determined by analyzing various camera motions of the colonoscope. This dissertation puts forward a novel method to estimate X, Y and Z directional motions of the colonoscope using motion vector templates. Since abnormalities in a WCE or colonoscopy video can be found in a small number of frames (around 5% of total frames), it is very helpful if a computer system can decide whether a frame contains any mucosal abnormalities. The number of abnormal lesions detected during a procedure is also used as a quality indicator. The majority of existing abnormality detection methods focus on detecting only one type of abnormality, or their overall accuracies are somewhat low when the method tries to detect multiple abnormalities. Most abnormalities in endoscopy images have unique textures which are clearly distinguishable from normal textures. In this dissertation, a new method is proposed that achieves the objective of detecting multiple abnormalities with a higher accuracy using a multi-texture analysis technique. The multi-texture analysis method is designed by representing WCE and colonoscopy image textures as textons.
14

Ortiz, Cayón Rodrigo. "Amélioration de la vitesse et de la qualité d'image du rendu basé image." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4004/document.

Abstract:
Traditional photo-realistic rendering requires intensive manual and computational effort to create scenes and render realistic images. Thus, creation of content for high-quality digital imagery has been limited to experts, and highly realistic rendering still requires significant computational time. Image-Based Rendering (IBR) is an alternative with the potential of making high-quality content creation and rendering applications accessible to casual users, since they can generate high-quality photo-realistic imagery without the limitations mentioned above. We identified three important shortcomings of current IBR methods: First, each algorithm has different strengths and weaknesses, depending on 3D reconstruction quality and scene content, and often no single algorithm offers the best image quality everywhere in the image. Second, such algorithms present strong artifacts when rendering partially reconstructed or missing objects. Third, most methods still produce significant visual artifacts in image regions where reconstruction is poor. Overall, this thesis addresses significant shortcomings of IBR in both speed and image quality, offering novel and effective solutions based on selective rendering, learning-based model substitution, and depth error prediction and correction.
15

Cotte, Florian. "Estimation d’objets de très faible amplitude dans des images radiologiques X fortement bruitées." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT112.

Abstract:
In the field of X-ray radiology for medical diagnostics, progress in the computer, electronics and materials industries over the past three decades has led to the development of digital sensors that improve image quality. This CIFRE thesis, prepared in collaboration between the Gipsa-Lab laboratory and the company Trixell, a manufacturer of digital flat-panel detectors for radiological imaging, takes place in an industrial context of improving the image quality of X-ray sensors. More specifically, various technological causes can generate disturbances, called "artifacts". Fine knowledge of these technological causes (internal or external to the sensor) makes it possible to model these artifacts and to eliminate them from the images. The chosen approach models the image as a sum of three terms, Y = C + S + B: the clinical content, the signal or artifact to be modeled, and the noise. The problem is then to recover the artifact from Y and from knowledge about the clinical content and the noise. To solve this ill-posed inverse problem, several Bayesian approaches using various types of prior knowledge are developed. Unlike existing estimation methods that are specific to a particular artifact, our approach is generic, and our models account for spatially varying but locally stationary artifact shapes and characteristics. They also provide feedback on the quality of the estimate, validating or invalidating the model. The methods are evaluated and compared on synthetic images for two types of artifacts. On real images, these methods are illustrated on the removal of anti-scatter grids. The performance of the developed algorithms is superior to that of methods dedicated to a given artifact, at the cost of greater complexity. The latest results open interesting perspectives, in particular for artifacts that are non-stationary in space and time.
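In Bayesian terms, the decomposition above leads to a joint MAP estimate: assuming Gaussian noise B with variance sigma_B^2 and priors p(C), p(S) encoding what is known about the clinical content and the artifact, one solves

```latex
(\hat{C}, \hat{S}) \;=\; \operatorname*{arg\,min}_{C,\,S}\;
\frac{\|\,Y - C - S\,\|^{2}}{2\sigma_B^{2}} \;-\; \log p(C) \;-\; \log p(S).
```

This is a generic statement of the formulation; the thesis's specific priors model spatially varying but locally stationary artifact shapes.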
16

Harouna, Seybou Aboubacar. "Analyse d'images couleurs pour le contrôle qualité non destructif." Thesis, Poitiers, 2016. http://www.theses.fr/2016POIT2282/document.

Abstract:
Color is a major criterion in many sectors for identifying, comparing or simply controlling the quality of products. This task is generally carried out by a human operator who performs a visual inspection. Unfortunately, this method is unreliable and not repeatable due to the subjectivity of the operator. To avoid these limitations, an RGB camera can be used to capture and extract photometric properties. This method is simple to deploy and permits high-speed control; however, it is very sensitive to metamerism effects. Reflectance measurement is therefore the more reliable solution for ensuring colorimetric conformity between samples and a reference. Thus, in the printing industry, spectrophotometers are used to measure uniform color patches printed on a lateral band. To control an entire printed surface, multispectral cameras are used to estimate the reflectance of each pixel; however, they are very expensive compared to conventional cameras. In this thesis, we study the use of an RGB camera for spectral reflectance estimation in the context of printing. We propose a complete spectral description of the reproduction chain to reduce the number of measurements in the training stages and to compensate for the acquisition limitations. Our first main contribution concerns the consideration of colorimetric limitations in the spectral characterization of a camera. The second main contribution is the exploitation of the spectral printer model in the reflectance estimation methods.
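A common baseline for reflectance estimation from camera responses is a linear mapping learned by least squares from training patches with known spectra. A minimal sketch under that assumption (all data here are synthetic stand-ins; the thesis goes further by injecting the spectral models of the camera and the printer):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical training set: 100 patches, 31-band reflectances (400-700 nm)
R_train = rng.random((100, 31))
C_train = R_train @ rng.random((31, 3))     # simulated linear camera responses

# Least-squares mapping from camera space back to spectral space
M, *_ = np.linalg.lstsq(C_train, R_train, rcond=None)   # shape (3, 31)

def estimate_reflectance(rgb):
    """Estimate a 31-band reflectance from one camera response triplet."""
    return np.asarray(rgb) @ M

r_hat = estimate_reflectance(C_train[0])    # should be close to R_train[0]
```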
17

Jiang, Shiguo. "Estimating Per-pixel Classification Confidence of Remote Sensing Images." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354557859.

18

Kaller, Ondřej. "Pokročilé metody snímání a hodnocení kvality 3D videa." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-369744.

Abstract:
The doctoral thesis deals with methods for capturing 3D images and videos and assessing their quality. After a brief summary of the physiology of spatial perception, it reviews the state of the art on the adaptive parallax problem and on camera configurations for capturing a classical stereo pair. It also summarizes current options for depth map estimation: both active and passive methods are covered, with profilometric scanning explained in more detail. Selected technical parameters of two current 3D display technologies, polarization-separating and time-multiplexed displays, were measured, including the crosstalk between the left and right images. The core of the thesis is a new method, designed and tested by the author, for building a depth map while capturing a 3D scene. The novelty of the approach lies in a clever combination of current active and passive scene-depth sensing methods that exploits the advantages of both. Finally, the results of subjective 3D video quality tests are presented; the main contribution here is a proposed metric that models the outcomes of these subjective tests.
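The crosstalk measurement mentioned here is commonly defined, per eye, as the ratio of leaked to intended luminance after black-level subtraction; a standard form (assumed here, since the abstract does not give the exact protocol) is

```latex
\mathrm{crosstalk}_{L} \;=\;
\frac{L\bigl(\text{black}_{L},\,\text{white}_{R}\bigr) - L\bigl(\text{black}_{L},\,\text{black}_{R}\bigr)}
     {L\bigl(\text{white}_{L},\,\text{black}_{R}\bigr) - L\bigl(\text{black}_{L},\,\text{black}_{R}\bigr)}
\times 100\%,
```

where L(.,.) is the luminance measured through the left-eye channel for the stated left/right test patterns.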
19

Belgued, Youssef. "Amélioration de la qualité géométrique des images spatiales radar : méthodes de localisation et restitution du relief par radargrammétrie." Toulouse, INPT, 2000. http://www.theses.fr/2000INPT019H.

Abstract:
Earth observation from space with radar sensors has opened new perspectives both in image exploitation techniques and in the applications served by these radar products. This thesis is concerned with the geometric quality of spaceborne synthetic aperture radar images. This aspect is of great importance when integrating such data into systems with heterogeneous data sources, and when applying methods based on the geometric models of the images, such as localization and relief reconstruction. We begin by describing the modelling of the physical acquisition process of a radar image, which underlies localization methods and applications tied to image geometry. We then show that errors contaminate the values of the acquisition-model parameters and analyse all potential sources of inaccuracy in order, first, to establish the state vector of the parameters to be estimated and, second, through a modelling and simulation process, to provide an expert tool for assessing the localization and relief-reconstruction capabilities of existing or future spaceborne radar systems. The adjustment of the acquisition models, posed as a parameter estimation problem in the presence of noise in the ground control measurements, is then solved simultaneously for a block of overlapping images. Finally, we study radargrammetry, which consists in generating digital terrain models from stereoscopic radar images. A new radargrammetric processing chain is developed, with an original epipolar-geometry resampling module and an analysis of the influence of image filtering upstream of the matching step.
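The simultaneous adjustment described here is, in essence, a weighted least-squares estimation over all overlapping images: the sensor-model parameters are refined by minimizing reprojection residuals of ground control and tie points. In generic notation (assumed, not taken from the thesis):

```latex
\hat{p} \;=\; \operatorname*{arg\,min}_{p,\,X}\;
\sum_{i,j} \bigl(m_{ij} - \mathcal{F}_i(p_i;\,X_j)\bigr)^{\!\top}
\Sigma_{ij}^{-1}
\bigl(m_{ij} - \mathcal{F}_i(p_i;\,X_j)\bigr),
```

where F_i is the acquisition model of image i, p_i its parameters, X_j the ground coordinates of point j, m_ij the corresponding image measurement, and Sigma_ij its covariance.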
20

Conze, Pierre-Henri. "Estimation de mouvement dense long-terme et évaluation de qualité de la synthèse de vues. Application à la coopération stéréo-mouvement." PhD thesis, INSA de Rennes, 2014. http://tel.archives-ouvertes.fr/tel-00992940.

Abstract:
New digital video technologies are moving toward the production, transmission and distribution of very high quality content, whether monoscopic or stereoscopic, and have evolved enormously in recent years to give the viewer the most realistic experience possible. For artistic or technical reasons related to content acquisition and transmission, it is sometimes necessary to combine the captured video with synthetic information while maintaining a highly photo-realistic rendering. To ease the task of production and post-production operators, the combined processing of captured and synthetic content requires sophisticated automatic tools. Among these, our research focused on quality assessment of view synthesis and on strategies for dense long-term motion estimation. Obtaining synthesized images of good quality is essential for autostereoscopic 3D displays. Because of poor disparity estimation or interpolation, views synthesized by DIBR sometimes exhibit artifacts; we therefore proposed and validated a new objective metric for assessing the visual quality of images obtained by view synthesis. Like segmentation or dynamic scene analysis techniques, video editing requires dense long-term motion estimation to propagate synthetic information across an entire sequence. As the state of the art in this area is almost exclusively limited to pairs of consecutive images, we propose several contributions aimed at estimating dense long-term motion. These contributions rely on robust manipulation of multi-step optical flow vectors. In this framework, a sequential fusion method and a trajectory-based spatio-temporal multilateral filter were proposed to generate long-term displacement fields robust to temporary occlusions. An alternative method based on combinatorial integration and statistical selection was also implemented. Finally, multiple-reference-frame strategies were studied in order to combine trajectories from reference images selected according to motion quality criteria. These contributions open broad perspectives, notably in the context of stereo-motion cooperation, for which we addressed disparity correction using dense long-term displacement fields.
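Long-term displacement fields of the kind discussed here are typically built by accumulating elementary flows, e.g. composing the field from frame t to t+1 with the field from t+1 to t+2 by sampling the latter at warped positions. A minimal sketch of that accumulation step with SciPy (occlusion handling and the multi-step fusion proposed in the thesis are omitted):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_flows(flow_ab, flow_bc):
    """Compose dense flows a->b and b->c into a->c.
    Flows are (H, W, 2) arrays of (dy, dx) displacements."""
    H, W, _ = flow_ab.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    yb, xb = ys + flow_ab[..., 0], xs + flow_ab[..., 1]   # positions in frame b
    dyc = map_coordinates(flow_bc[..., 0], [yb, xb], order=1, mode='nearest')
    dxc = map_coordinates(flow_bc[..., 1], [yb, xb], order=1, mode='nearest')
    return flow_ab + np.stack([dyc, dxc], axis=-1)

f_ac = compose_flows(np.zeros((32, 32, 2)), np.ones((32, 32, 2)))
# here simply a uniform (1, 1) displacement for every pixel
```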
21

Delvit, Jean-Marc. "Évaluation de la résolution d'un instrument optique par une méthode neuronale : application à une image quelconque de télédétection." Toulouse, ENSAE, 2003. http://www.theses.fr/2003ESAE0010.

Abstract:
Knowing the resolution of an instrument makes it possible to compare the characteristics of several imagers, cooperative or not, and to improve the quality of the images they produce. The term resolution, however, remains rather vague and has been given many definitions. Resolution characterizes the ability of an imaging system to deliver an image in which more or less small details can be distinguished. We define resolution as the triplet {sampling, noise, Modulation Transfer Function}. In this work, we propose to estimate the Modulation Transfer Function (MTF) and the noise, for a given sampling step, from an arbitrary image without using a reference image. Note that two arbitrary images will in general have two different resolutions, hence two different triplets, but also two different landscapes. This is one of the major difficulties of this study, which requires modelling an arbitrary landscape. The phenomena to be modelled are complex and non-linear; for these reasons we chose to use artificial neural networks (ANNs). ANNs are simple non-linear models with few parameters, and they are also excellent interpolators. In practice, the images are first sorted according to their landscape type: highly structured (urban) landscapes are useful for estimating the MTF, while weakly structured (rural) landscapes are useful for estimating the noise. It is then essential to characterize each component of the triplet. Using known images, the ANN learns to associate the characterization of each component of the triplet with the resolution of the image under consideration. This characterization is an essential step for the method to work well: relevant parameters must be found for estimating the resolution triplet. To this end, we use a characterization of the landscape, certain frequency-domain properties of the images, and properties derived from wavelet-packet analysis of the images. Finally, the ANN can be used autonomously on unknown images to estimate their resolution triplet. The result is an MTF estimate with mean errors of 5% and a noise estimate with errors of about 1/4 of a quantization step (on the noise standard deviation) for 8-bit images.
22

Leão Junior, Emerson [UNESP]. "Análise da qualidade da informação produzida por classificação baseada em orientação a objeto e SVM visando a estimativa do volume do reservatório Jaguari-Jacareí." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/152234.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
This study aims to extract information from multispectral images and to analyse the quality of that information when estimating the water volume of the Jaguari-Jacareí reservoir. The study of changes in the reservoir's volume was motivated by the critical situation of the reservoirs of the Cantareira System in São Paulo State caused by the water crisis in 2014. The reservoir area was extracted, through land cover classification, from RapidEye multispectral images acquired before and during the water crisis (2013 and 2014, respectively). First, the image classification was carried out with two distinct approaches: object-based (Object-based Image Analysis, OBIA) and pixel-based (Support Vector Machine, SVM). Classification quality was evaluated through thematic accuracy, in which for each technique the user's accuracy expressed the error for the class representing water in 2013 and 2014. Second, we estimated the volume of the reservoir's water body using a numerical terrain model generated from two additional data sources: topographic data from a bathymetric survey, available from Sabesp, and the elevation model AW3D30 (to complement the information in the area where data from Sabesp was not available). Comparing the two classification techniques, SVM slightly outperformed OBIA for both 2013 and 2014. In the volume calculation considering the water level estimated from the generated DTM, the result obtained by the SVM approach was better in 2013, whereas the OBIA approach was more accurate in 2014. Regarding the quality of the information produced in the volume estimation, both approaches presented similar values of uncertainty, with the OBIA method slightly less uncertain than SVM. In conclusion, the classification methods used in this dissertation produced information accurate enough to monitor water resources, with SVM performing subtly better in the classification of land cover types, in the volume estimation, and in some of the scenarios considered in the uncertainty propagation.
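For orientation, the pixel-based branch of such a comparison boils down to training an SVM on labeled spectral samples and predicting a class per pixel. A minimal sketch with scikit-learn (band values and classes are hypothetical stand-ins; the study itself used RapidEye imagery and an OBIA counterpart):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X_train = rng.random((200, 5))           # 200 labeled pixels, 5 spectral bands
y_train = rng.integers(0, 3, 200)        # 0 = water, 1 = vegetation, 2 = soil

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)

image = rng.random((100, 100, 5))        # stand-in for a 5-band scene
labels = clf.predict(image.reshape(-1, 5)).reshape(100, 100)
water_pixels = int((labels == 0).sum())  # water surface area in pixels
```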
23

Chou, Xinhan (周昕翰). "Defocus Blur Identification for Depth Estimation and Image Quality Assessment." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/51786532470766991315.

Abstract:
Master's thesis, National Chung Cheng University, Graduate Institute of Electrical Engineering, academic year 101 (2012).
In this thesis, we present a defocus blur identification technique based on histogram analysis of an image. The image defocus process is formulated by incorporating a non-linear camera response and an intensity-dependent noise model. Histogram matching between the synthesized and real defocused regions is then carried out with intensity-dependent filtering. By iteratively changing the point-spread function parameters, the best blur extent is identified from the histogram comparison. The presented technique is first applied to depth measurement using the defocus information. It is also used for image quality assessment applications, specifically those associated with optical defocus blur. We have performed experiments on real scene images, and the results demonstrate the robustness and feasibility of the proposed technique.
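Stripped to its essentials, the search compares the histogram of a synthetically re-blurred sharp region against that of the observed defocused region over a range of PSF widths. A simplified sketch with a Gaussian PSF stand-in (the thesis additionally models the nonlinear camera response and intensity-dependent noise, and applies intensity-dependent filtering before matching):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def histogram_distance(a, b, bins=64):
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 1.0), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0), density=True)
    return np.abs(ha - hb).sum()

def identify_blur(sharp, blurred, sigmas=np.linspace(0.5, 5.0, 19)):
    """Return the PSF width whose synthetic blur best matches the observation."""
    d = [histogram_distance(gaussian_filter(sharp, s), blurred) for s in sigmas]
    return sigmas[int(np.argmin(d))]

sharp = np.random.rand(64, 64)
observed = gaussian_filter(sharp, 2.0)    # simulate a defocused observation
print(identify_blur(sharp, observed))     # expected to recover ~2.0
```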
24

Tian, Xiaoyu. "Prospective Estimation of Radiation Dose and Image Quality for Optimized CT Performance." Diss., 2016. http://hdl.handle.net/10161/12818.

Abstract:

X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].

Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a responsibility to optimize the amount of radiation dose for CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. On this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models in clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.

More specifically, Chapter 3 aims to develop an organ dose-prediction method for body CT examinations under constant tube current. The study modeled anatomical diversity and complexity using the largest set of patient models to date, with representative age, weight-percentile, and body mass index (BMI) ranges, and further evaluated how the organ dose coefficients depend on patient size and scanner model.
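
As a hedged illustration of the size dependence mentioned above, organ dose coefficients (organ dose normalized by CTDIvol) are often summarized in the CT dosimetry literature by an exponential fit against patient body diameter; the sketch below fits such a model to synthetic placeholder data, not to values from the thesis.

```python
# Hedged sketch: fit h(d) = a * exp(-b * d) to organ dose coefficients
# versus patient body diameter. The data are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def dose_coefficient_model(diameter_cm, a, b):
    return a * np.exp(-b * diameter_cm)

diameters = np.array([18.0, 22.0, 26.0, 30.0, 34.0, 38.0])      # synthetic
coefficients = np.array([1.45, 1.18, 0.97, 0.80, 0.66, 0.55])   # synthetic

(a, b), _ = curve_fit(dose_coefficient_model, diameters,
                      coefficients, p0=(2.0, 0.05))
print(f"h(d) ~= {a:.2f} * exp(-{b:.3f} d)")
print("predicted coefficient at 28 cm:", dose_coefficient_model(28.0, a, b))
```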

With organ dose effectively quantified under constant tube current, Chapter 4 aims to extend the organ dose-prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, combines a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with dose estimates from Monte Carlo simulations in which the TCM function is explicitly modeled.
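
The convolution-based idea can be sketched as follows: the dose accrued at each table position is approximated by convolving the tube-current (TCM) profile with a dose-spread kernel that accounts for scatter along z. The kernel shape and all numbers below are illustrative assumptions, not the thesis model.

```python
# Hedged sketch of convolution-based radiation field estimation under TCM.
import numpy as np

z = np.arange(0, 400, 5.0)                     # table positions [mm]
tcm_profile = 200 + 80 * np.sin(z / 60.0)      # synthetic mA modulation

kernel_z = np.arange(-50, 55, 5.0)
kernel = np.exp(-np.abs(kernel_z) / 20.0)      # exponential scatter tails
kernel /= kernel.sum()                         # normalize to preserve scale

dose_field = np.convolve(tcm_profile, kernel, mode="same")
print(dose_field[:5])
```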

Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). The first phase of the study focused on body CT examinations, so the patient's major body landmarks were extracted from the scout image to match each clinical patient against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size, as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.

With effective methods to predict and monitor organ dose in place, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines a method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this work accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and expected clinical image quality as a function of patient size and scanner attributes.
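
One common way to measure noise directly in clinical images, in the spirit of the above, is a "global noise" style metric: compute a local standard-deviation map and take a robust statistic over soft-tissue pixels. The sketch below is a hedged approximation; the window size, HU range, and percentile are assumptions, not the thesis parameters.

```python
# Hedged sketch of a "global noise" style quantum-noise measurement.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std_map(image_hu, size=7):
    """Per-pixel standard deviation in a size x size neighbourhood."""
    mean = uniform_filter(image_hu, size)
    mean_sq = uniform_filter(image_hu * image_hu, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def global_noise_level(image_hu, tissue_range=(0.0, 100.0)):
    std_map = local_std_map(image_hu)
    tissue = (image_hu > tissue_range[0]) & (image_hu < tissue_range[1])
    return np.percentile(std_map[tissue], 10)   # robust low percentile

rng = np.random.default_rng(1)
fake_slice = 50.0 + rng.normal(0.0, 12.0, (256, 256))   # synthetic HU slice
print(global_noise_level(fake_slice))                   # ~12 HU (biased low)
```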

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
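
The image-based noise addition in step (1) can be sketched under a simple model: quantum noise standard deviation scales roughly as 1/sqrt(dose), so simulating a scan at dose fraction f adds zero-mean noise with sigma_add = sigma0 * sqrt(1/f - 1). The sketch below ignores the noise-texture (NPS) shaping a clinical tool would apply; all numbers are illustrative.

```python
# Hedged sketch of image-based noise addition for low-dose simulation.
import numpy as np

def simulate_reduced_dose(image_hu, sigma0, dose_fraction, rng=None):
    """Return a hybrid image emulating a scan at dose_fraction of the
    original dose, given the original noise level sigma0 (HU)."""
    rng = rng or np.random.default_rng()
    sigma_add = sigma0 * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, sigma_add, image_hu.shape)

rng = np.random.default_rng(2)
full_dose = 40.0 + rng.normal(0.0, 10.0, (256, 256))    # sigma0 ~ 10 HU
half_dose = simulate_reduced_dose(full_dose, 10.0, 0.5, rng)
print(half_dose.std())   # ~14.1 HU, i.e. 10 * sqrt(2)
```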

Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.


Dissertation
Стилі APA, Harvard, Vancouver, ISO та ін.
25

LAPINI, ALESSANDRO. "Advanced multiresolution Bayesian methods and SAR image modelling for speckle removal." Doctoral thesis, 2014. http://hdl.handle.net/2158/843707.

Повний текст джерела
Анотація:
SAR imaging systems are widely used for observing the Earth's surface. This thesis considers despeckling methods based on Bayesian estimation in the undecimated wavelet transform domain, along with optimal image formats for despeckling, the despeckling of correlated speckle noise, and the problem of quality assessment for despeckled SAR images.
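
For context only, the following is a minimal local-statistics (Lee-type) despeckling sketch under a multiplicative speckle model; it is not the Bayesian undecimated-wavelet method developed in the thesis, and the window size and number of looks are assumptions.

```python
# Minimal Lee-type despeckling sketch for multiplicative speckle with a
# known equivalent number of looks (ENL). NOT the thesis's UWT method.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(intensity, size=7, enl=4.0):
    """Shrink each pixel toward the local mean according to the ratio of
    local signal variance to speckle variance."""
    mean = uniform_filter(intensity, size)
    mean_sq = uniform_filter(intensity * intensity, size)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    noise_var = (mean * mean) / enl          # multiplicative speckle model
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (intensity - mean)

rng = np.random.default_rng(3)
clean = np.ones((128, 128))
clean[32:96, 32:96] = 4.0
speckled = clean * rng.gamma(4.0, 1.0 / 4.0, clean.shape)   # 4-look speckle
print(np.var(speckled - clean), np.var(lee_filter(speckled) - clean))
```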
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Silva, Lourenço de Mértola Belford Correia da. "Quality assessment of 2D image rendering for 4D light field content." Master's thesis, 2018. http://hdl.handle.net/10071/18244.

Повний текст джерела
Анотація:
Light Field (LF) technology, comprising visual data representations with a huge amount of information, can be used to overcome some current 3D technology limitations while also enabling new image functionalities not straightforwardly supported by traditional 2D imaging. However, current displays are not ready to process this kind of content, so rendering algorithms are necessary to present it on 2D or 3D multi-view displays, and the visual quality experienced by the user depends heavily on the rendering approach adopted. LF rendering technology therefore requires appropriate quality assessment tests with real people, as there is no better or more reliable way to assess the quality of these algorithms. In this context, this dissertation aims to study, implement, improve, and compare various LF rendering algorithms and approaches. Performance evaluation is done through subjective quality assessment tests aiming to understand which algorithm performs better in certain situations and what subjective quality impact some algorithm parameters have. Additionally, a single-plane-of-focus rendering approach is compared against an all-in-focus one.
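
A baseline LF rendering approach of the kind evaluated here is classic shift-and-sum refocusing: each sub-aperture view is shifted in proportion to its offset from the central view and a chosen focal slope, then all views are averaged. The sketch below is illustrative; the array layout and names are assumptions, not the dissertation's implementation.

```python
# Hedged sketch of shift-and-sum light-field refocusing.
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(light_field, slope):
    """light_field: array of shape (U, V, H, W); returns an (H, W) image
    focused at the depth corresponding to `slope` (pixels per view)."""
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            out += nd_shift(light_field[u, v],
                            (slope * (u - cu), slope * (v - cv)),
                            order=1, mode="nearest")
    return out / (U * V)

lf = np.random.default_rng(4).random((5, 5, 64, 64))   # toy 4D light field
image = refocus(lf, slope=0.5)
print(image.shape)   # (64, 64)
```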
Стилі APA, Harvard, Vancouver, ISO та ін.
27

DINI, FABRIZIO. "Target detection and tracking in video surveillance." Doctoral thesis, 2010. http://hdl.handle.net/2158/574120.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Gariepy, Ryan. "Quadrotor Position Estimation using Low Quality Images." Thesis, 2011. http://hdl.handle.net/10012/6274.

Повний текст джерела
Анотація:
The use of unmanned systems is becoming widespread in commercial and military sectors. The ability of these systems to take on dull, dirty, and dangerous tasks which were formerly done by humans is encouraging their rapid adoption. In particular, a subset of these undesirable tasks are uniquely suited for small unmanned aerial vehicles such as quadrotor helicopters. Examples of such tasks include surveillance, mapping, and search and rescue. Many of these potential tasks require quadrotors to be deployed in environments where a degree of position estimation is required and traditional GPS-based positioning technologies are not applicable. Likewise, since unmanned systems in these environments are often intended to serve the purpose of scouts or first responders, no maps or reference beacons will be available. Additionally, there is no guarantee of clear features within the environment which an onboard sensor suite (typically made up of a monocular camera and inertial sensors) will be able to track to maintain an estimate of vehicle position. Up to 90% of the features detected in the environment may produce motion estimates which are inconsistent with the true vehicle motion. Thus, new methods are needed to compensate for these environmental deficiencies and measurement inconsistencies. In this work, a RANSAC-based outlier rejection technique is combined with an Extended Kalman Filter (EKF) to generate estimates of vehicle position in a 2-D plane. A low complexity feature selection technique is used in place of more modern techniques in order to further reduce processor load. The overall algorithm was faster than the traditional approach by a factor of 4. Outlier rejection allows the abundance of low quality, poorly tracked image features to be filtered appropriately, while the EKF allows a motion model of the quadrotor to be incorporated into the position estimate. The algorithm is tested in real-time on a quadrotor vehicle in an indoor environment with no clear features and found to be able to successfully estimate the position of the vehicle to within 40 cm, superior to estimates produced without outlier rejection. It is also found that the choice of simple feature selection approaches is valid, as complex feature selection approaches which may take over 10 times as long to run still result in outliers being present. When the algorithm is used for vehicle control, periodic synchronization to ground truth data was required due to nearly 1 second of latency present in the closed-loop system. However, the system as a whole is a valid proof of concept for the use of low quality images for quadrotor position control. The overall results from the work suggest that it is possible for unmanned systems to use visual data to estimate state even in operational environments which are poorly suited for visual estimation techniques. The filter algorithm described in this work can be seen as a useful tool for expanding the operational capabilities of small aerial vehicles.
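
The RANSAC-based rejection step can be illustrated with a toy translation-only model: fit a single 2-D image motion by random sampling and keep the hypothesis with the most inliers. A real pipeline would fit a homography or essential matrix and feed the inlier motion into the EKF; the sketch below only conveys the idea, and all names are illustrative.

```python
# Hedged sketch of RANSAC outlier rejection on feature motion vectors.
import numpy as np

def ransac_translation(flow_vectors, iters=200, tol=2.0, rng=None):
    """flow_vectors: (N, 2) per-feature image displacements [px].
    Returns the consensus translation and the inlier mask."""
    rng = rng or np.random.default_rng()
    best_t, best_inliers = None, np.zeros(len(flow_vectors), dtype=bool)
    for _ in range(iters):
        candidate = flow_vectors[rng.integers(len(flow_vectors))]
        inliers = np.linalg.norm(flow_vectors - candidate, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = flow_vectors[inliers].mean(axis=0), inliers
    return best_t, best_inliers

rng = np.random.default_rng(5)
true_motion = np.array([3.0, -1.0])
good = true_motion + rng.normal(0, 0.5, (10, 2))    # 10% consistent features
bad = rng.uniform(-20, 20, (90, 2))                 # 90% outliers
t, mask = ransac_translation(np.vstack([good, bad]), rng=rng)
print(t, mask.sum())
```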
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Alves, Styve da Conceicao. "Estimativa e diagnóstico da qualidade do ar obtida por dados de observação da Terra." Master's thesis, 2017. http://hdl.handle.net/10316/82952.

Повний текст джерела
Анотація:
Master's dissertation in Chemistry presented to the Faculdade de Ciências e Tecnologia
The air we breathe may contain several pollutants, depending on many contributing factors, and at high levels these can have serious effects on the environment and public health. Emissions of major air pollutants in Europe have declined since 1990. During the last decade, this reduction in emissions has, for some pollutants, improved air quality throughout the region. However, due to the complex links between emissions and air quality, emission reductions do not always produce a corresponding fall in atmospheric concentrations. In this sense, the present work studies the capacity to model pollutants and their chemical/molecular relations through statistical tools, and to obtain quantified information on the influence and cross-interdependence of the different aspects of the chemical characterization of air quality. This idea arose through a collaboration with Primelayer and Spacelayer, who provided data for February 2017 from Meco, in the municipality of Montemor-o-Velho: the meteorological factors relative humidity, temperature, wind direction, wind speed, and rainfall, together with the air quality indicators CO, NO, NO2, NH3, SO2, O3, PM2.5, PM10, PANs, and NMVOCs, used to characterize the various chemical pollutants. The Copernicus SNAP code and multivariate analysis were used as tools to establish pollutant emission patterns, starting from a multivariate modeling strategy applied to the February 2017 data on the pollutants and quality indicators referred to above. The present study showed a good description (modeling) of the PM2.5, NO2, CO, and NMVOC levels.
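
A hedged sketch of the multivariate strategy described above: standardize the pollutant and meteorological measurements and inspect the leading principal components to see which variables co-vary. The data below are synthetic placeholders, not the Meco measurements, and the variable list is a subset chosen for illustration.

```python
# Hedged sketch: PCA on standardized pollutant/meteorology data to group
# co-varying variables. Synthetic placeholder data, not the thesis dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

variables = ["CO", "NO", "NO2", "O3", "PM2.5", "PM10", "RH", "T"]
rng = np.random.default_rng(6)
X = rng.normal(size=(672, len(variables)))     # ~1 month of hourly data

pca = PCA(n_components=3).fit(StandardScaler().fit_transform(X))
for i, pc in enumerate(pca.components_, start=1):
    top = [variables[j] for j in np.argsort(np.abs(pc))[::-1][:3]]
    print(f"PC{i}: dominated by {top}")
```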
Стилі APA, Harvard, Vancouver, ISO та ін.