Dissertations / Theses on the topic 'Adaptive image processing'

To see the other types of publications on this topic, follow the link: Adaptive image processing.

Consult the top 50 dissertations / theses for your research on the topic 'Adaptive image processing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Yakoubian, Jeffrey Scott. "Adaptive histogram equalization for mammographic image processing." Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/16387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hadhoud, M. M. "The adaptive LMS algorithm in image processing." Thesis, University of Southampton, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wong, Hau San. "Adaptive image processing using computational intelligence techniques." Thesis, The University of Sydney, 1998. https://hdl.handle.net/2123/27658.

Full text
Abstract:
In this thesis, we illustrate the essential aspects of the adaptive image processing problem in terms of two applications: the adaptive assignment of the regularization parameters in image restoration, and the adaptive characterization of edges in feature detection applications. These two problems are representative of the general adaptive image processing paradigm in that the three requirements for its successful implementation are present: the segmentation of an image into its main feature types, the characterization of each of these features, and the optimization of the image model parameters corresponding to the individual features. In view of these requirements, we have adopted the three main approaches within the class of computational intelligence algorithms, namely neural network techniques, fuzzy set theory, and evolutionary computation, for solving the adaptive image processing problem, given the direct correspondence between some of the above requirements and the particular capabilities of specific computational intelligence approaches. We first applied neural network techniques to the adaptive regularization problem in image restoration. Instead of the usual approach of selecting the regularization parameter values by trial and error, we adopt a learning approach by treating the parameters in various local image regions as network weights of a model-based neural network with hierarchical architecture (HMBNN), such that they are adjustable through the supply of training examples specifying the desired image quality. In addition, we also applied the HMBNN to the problem
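The learning stance described above can be illustrated in miniature: instead of picking a regularization parameter by trial and error, choose it from a training example specifying the desired output. This toy 1-D Tikhonov sketch is a hedged stand-in for the HMBNN approach, not the thesis' method; `learn_lambda` and the grid values are hypothetical.

```python
import numpy as np

def regularized_restore(y, lam):
    """Tikhonov-regularised restoration of a 1-D signal:
    x = argmin ||x - y||^2 + lam * ||Dx||^2, solved in closed form.
    (A toy denoising stand-in for the thesis' image restoration.)"""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # first-difference operator
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)

def learn_lambda(y, target, grid=(0.01, 0.1, 1.0, 10.0, 100.0)):
    # "Learning" in miniature: pick the regularisation parameter from a
    # training example specifying the desired output, rather than by
    # trial and error. The grid values are illustrative assumptions.
    errors = [np.mean((regularized_restore(y, l) - target) ** 2) for l in grid]
    return grid[int(np.argmin(errors))]

rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 50)                # training target
noisy = clean + rng.normal(0.0, 0.05, 50)        # degraded observation
best_lam = learn_lambda(noisy, clean)
```

The closed-form solve stands in for the network forward pass; the HMBNN instead adjusts per-region parameters as trainable weights.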
APA, Harvard, Vancouver, ISO, and other styles
4

Riehle, Thomas J. "Adaptive bilateral extensor for image interpolation." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4555.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on February 23, 2007. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
5

Shen, Liang. "Region-based adaptive image processing techniques for mammography." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0011/NQ34701.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Coleman, Sonya. "Scalable operators for adaptive processing of digital images." Thesis, University of Ulster, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Podd, Frank J. W. "Medical X-ray dose reduction including adaptive image processing." Thesis, University of Surrey, 1997. http://epubs.surrey.ac.uk/842724/.

Full text
Abstract:
This thesis investigates possible methods for dose reduction for one of the main contributors to medical x-ray dose, that of fluoroscopic examinations. Background information is provided on the subjects of radiation interaction mechanisms, radiation dose measures, and the health risk from medical x-rays. This illuminates a running theme of the thesis, namely the compromise between image quality and low patient dose. Possible dose reduction methods using both spatial and temporal image processing techniques are investigated. Edge detection is one of the most important sub-components of the spatial image processing system. The commonly used edge detectors are investigated from a theoretical viewpoint and their performances under Poisson noise conditions are compared using receiver operating characteristic analysis. A new metric is suggested for the quantitative comparison of the edge operators under high detection and low false alarm probability conditions. An adaptive pulse dropping control system is created in order to use the image processing sub-systems with low-dose examinations. This control system determines the best x-ray tube pulse-rate based on the amount of movement present in the image. A method of distributing the dose so that areas of high clinical importance have a higher image quality than less important regions is discussed. This method uses a wedge-shaped x-ray beam filter. The problem of varying pixel intensity due to the differing filter thickness is countered by rescaling the image. The various image processing techniques are combined to create a low-dose imaging system. This system achieves a dose reduction of an order of magnitude.
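The wedge-filter compensation described above can be sketched briefly: if the filter's thickness profile and attenuation behaviour are known, the intensity variation it introduces can be divided out. A minimal numpy sketch under assumed Beer-Lambert attenuation; the thickness map, coefficient `mu`, and function names are illustrative, not taken from the thesis.

```python
import numpy as np

def compensate_wedge(image, thickness, mu=0.5):
    """Rescale pixel intensities to undo attenuation from a wedge-shaped
    beam filter, assuming Beer-Lambert attenuation exp(-mu * thickness).
    `mu` and the thickness map are assumptions for illustration."""
    transmission = np.exp(-mu * thickness)
    return image / transmission

# Simulated flat scene behind a linearly increasing wedge thickness
thickness = np.linspace(0.0, 1.0, 4)[None, :].repeat(4, axis=0)
flat_scene = np.full((4, 4), 100.0)
attenuated = flat_scene * np.exp(-0.5 * thickness)   # what the detector sees
restored = compensate_wedge(attenuated, thickness)   # intensity gradient removed
```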
APA, Harvard, Vancouver, ISO, and other styles
8

Arrowood, Joseph Louis Jr. "Theory and application of adaptive filter banks." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/15369.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Moussa, Badi M. S. "Adaptive transform coding for digital image communication." Thesis, Loughborough University, 1985. https://dspace.lboro.ac.uk/2134/27360.

Full text
Abstract:
The performance of transform image coding schemes can be improved substantially by adapting to changes in image statistics. Essentially, this is accomplished through adaptation of the transform, bit allocation, and/or quantization parameters according to time-varying image statistics. Additionally, adaptation can be used to achieve transmission-rate reduction whilst maintaining a given picture quality.
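Adaptive bit allocation of the kind mentioned above is commonly done with the textbook log-variance rule, which gives busier blocks more bits at a fixed average rate. A hedged numpy sketch; the rule and numbers are standard illustrations, not the thesis' scheme.

```python
import numpy as np

def adaptive_bit_allocation(block_variances, avg_bits=4.0):
    """Classic variance-based bit allocation for transform coding:
    b_k = avg_bits + 0.5 * log2(var_k / geometric_mean(var)).
    Blocks with higher variance (more detail) receive more bits,
    while the average rate stays at avg_bits (before clipping)."""
    v = np.asarray(block_variances, dtype=float)
    geo_mean = np.exp(np.mean(np.log(v)))
    bits = avg_bits + 0.5 * np.log2(v / geo_mean)
    return np.clip(bits, 0.0, None)     # no negative bit counts

# Four blocks with decreasing activity share a 4-bit average budget
bits = adaptive_bit_allocation([100.0, 10.0, 1.0, 0.1])
```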
APA, Harvard, Vancouver, ISO, and other styles
10

Price, Jeffery Ray. "A framework for adaptive image interpolation." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/13718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Kani, Bijan. "Enhanced logical adaptive systems for image processing and pattern recognition." Thesis, Brunel University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Caton, Simon James. "On-demand distributed image processing over an adaptive Campus-Grid." Thesis, Cardiff University, 2009. http://orca.cf.ac.uk/54950/.

Full text
Abstract:
This thesis explores how scientific applications based upon short jobs (seconds and minutes) can capitalize upon the idle workstations of a Campus-Grid. These resources are donated on a voluntary basis; consequently, the Campus-Grid is constantly adapting and the availability of workstations changes. Typically, a Condor system or equivalent would be used to utilize these resources. However, such systems are designed with different trade-offs and incentives in mind and therefore do not provide intrinsic support for short jobs. The motivation for creating a provisioning scenario for short jobs is that Image Processing, as well as other areas of scientific analysis, is typically composed of short running jobs, but still requires parallel solutions. Much of the literature in this area comments on the challenges of performing such analysis efficiently and effectively even when dedicated resources are in use. The main challenges are: latency and scheduling penalties, granularity, and the potential for very short jobs. A volunteer Grid retains these challenges but also adds further ones, which can be summarized as unpredictable resource availability and longevity, and multiple machine owners and administrators who directly affect the operating environment. Ultimately, this creates the requirement for well-conceived and effective fault management strategies. However, these are typically not in place to enable transparent, fault-free job administration for the user. This research demonstrates that these challenges can be met, and that in doing so opportunistically sourced Campus-Grid resources can host disparate applications constituted of short running jobs of as little as one second in length. This is demonstrated by the significant improvements in performance when the system presented here was compared to a well-established Condor system.
Here, improvements are increased job efficiency from 60–70% to 95%–100%, up to a 99% reduction in application makespan and up to a 13000% increase in the efficiency of resource utilization. The Condor pool in use is approximately 1,600 workstations distributed across 27 administrative domains of Cardiff University. The application domain of this research is Matlab-based image processing, and the application area used to demonstrate the approach is the analysis of Magnetic Resonance Imagery (MRI). However, the presented approach is generalizable to any application domain with similar characteristics.
APA, Harvard, Vancouver, ISO, and other styles
13

Nasiopoulos, Panagiotis. "Adaptive compression coding." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28508.

Full text
Abstract:
An adaptive image compression coding technique, ACC, is presented. This algorithm is shown to preserve edges and give better-quality decompressed pictures and better compression ratios than those of Absolute Moment Block Truncation Coding (AMBTC). Lookup tables are used to achieve better compression rates without affecting the visual quality of the reconstructed image. Regions with approximately uniform intensities are successfully detected by using the range, and these regions are approximated by their average. This procedure leads to a further reduction in the compressed data rates. A method for preserving edges is introduced. It is shown that as more details are preserved around edges, the pictorial results improve dramatically. The ragged appearance of the edges in AMBTC is reduced or eliminated, leading to images far superior to those of AMBTC. For most of the images, ACC yields a Root Mean Square Error smaller than that obtained by AMBTC. Decompression time is shown to be comparable to that of AMBTC for low threshold values and becomes significantly lower as the compression rate becomes smaller. An adaptive filter is introduced which helps recover lost texture at very low compression rates (0.8 to 0.6 b/p, depending on the degree of texture in the image). This algorithm is easy to implement since no special hardware is needed.
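For context, AMBTC, the baseline that ACC is compared against, encodes each block as a bitmap plus two reconstruction levels. A minimal sketch, with an illustrative range test for near-uniform regions; the threshold is an assumption, not the thesis' value.

```python
import numpy as np

def ambtc_block(block):
    """AMBTC encoding of one block: a bitmap marking pixels at or above the
    block mean, plus two reconstruction levels (the means of the pixels
    above and below the block mean)."""
    mean = block.mean()
    bitmap = block >= mean
    hi = block[bitmap].mean() if bitmap.any() else mean
    lo = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, lo, hi

def decode(bitmap, lo, hi):
    # Reconstruct the block from the bitmap and the two levels.
    return np.where(bitmap, hi, lo)

def is_uniform(block, range_threshold=8):
    # Range test flagging near-uniform regions that could be sent as a
    # single average instead (threshold is illustrative).
    return block.max() - block.min() <= range_threshold

# A block containing a sharp vertical edge between two flat regions
block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [ 9, 12, 199, 212],
                  [10, 14, 201, 209]], dtype=float)
bitmap, lo, hi = ambtc_block(block)
recon = decode(bitmap, lo, hi)
```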
Faculty of Applied Science, Department of Electrical and Computer Engineering, Graduate
APA, Harvard, Vancouver, ISO, and other styles
14

Chaiyaboonthanit, Thanit. "Image coding using wavelet transform and adaptive block truncation coding /." Online version of thesis, 1991. http://hdl.handle.net/1850/10913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Jónsson, Ragner H. "Adaptive subband coding of video using probability distribution models." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/14453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Perry, Stuart William. "Adaptive image restoration perception based neural network models and algorithms /." Connect to full text, 1998. http://hdl.handle.net/2123/389.

Full text
Abstract:
Thesis (Ph. D.)--University of Sydney, 1999.
Title from title screen (viewed Apr. 16, 2008). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Electrical and Information Engineering, Faculty of Engineering. Degree awarded 1999; thesis submitted 1998. Includes bibliography. Also available in print form.
APA, Harvard, Vancouver, ISO, and other styles
17

Klausutis, Timothy J. "Adaptive lapped transforms with applications to image coding." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15925.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Chung, Wilson C. "Adaptive subband video coding in a rate-distortion-constrained framework." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/15459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Lynch, Michael Richard. "Adaptive techniques in signal processing and connectionist models." Thesis, University of Cambridge, 1990. https://www.repository.cam.ac.uk/handle/1810/244884.

Full text
Abstract:
This thesis covers the development of a series of new methods and the application of adaptive filter theory, which are combined to produce a generalised adaptive filter system which may be used to perform such tasks as pattern recognition. Firstly, the relevant background adaptive filter theory is discussed in Chapter 1, and methods and results which are important to the rest of the thesis are derived or referenced. Chapter 2 of this thesis covers the development of a new adaptive algorithm which is designed to give faster convergence than the LMS algorithm but, unlike the Recursive Least Squares family of algorithms, does not require storage of a matrix with n² elements, where n is the number of filter taps. In Chapter 3 a new extension of the LMS adaptive notch filter is derived and applied which gives an adaptive notch filter the ability to lock and track signals of varying pitch without sacrificing notch depth. This application of the LMS filter is of interest as it demonstrates a time-varying filter solution to a stationary problem. The LMS filter is next extended to the multidimensional case, which allows the application of LMS filters to image processing. The multidimensional filter is then applied to the problem of image registration, and this new application of the LMS filter is shown to have significant advantages over current image registration methods. A consideration of the multidimensional LMS filter as a template matcher and pattern recogniser is given. In Chapter 5 a brief review of statistical pattern recognition is given, and in Chapter 6 a review of relevant connectionist models. In Chapter 7 the generalised adaptive filter is derived. This is an adaptive filter with the ability to model non-linear input-output relationships. The Volterra functional analysis of non-linear systems is given and this is combined with adaptive filter methods to give a generalised non-linear adaptive digital filter.
This filter is then considered as a linear adaptive filter operating in a non-linearly extended vector space. This new filter is shown to have desirable properties as a pattern recognition system. The performance and properties of the new filter is compared with current connectionist models and results demonstrated in Chapter 8. In Chapter 9 further mathematical analysis of the networks leads to suggested methods to greatly reduce network complexity for a given problem by choosing suitable pattern classification indices and allowing it to define its own internal structure. In Chapter 10 robustness of the network to imperfections in its implementation is considered. Chapter 11 finishes the thesis with some conclusions and suggestions for future work.
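The LMS update at the heart of the filters above can be stated compactly. A minimal 1-D system-identification sketch; the thesis' faster-converging algorithm and its multidimensional extension are not reproduced, and the step size and tap count are illustrative.

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.01):
    """Least-mean-squares adaptive filter: adjust tap weights w so the
    filter output tracks the desired signal d. Needs only O(n) storage
    per step, unlike RLS, which keeps an n x n matrix."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # most recent samples first
        e = d[k] - w @ u                    # prediction error
        w += 2 * mu * e * u                 # stochastic-gradient update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)               # white-noise input
true_w = np.array([0.5, -0.3, 0.2, 0.1])    # unknown FIR system to identify
d = np.convolve(x, true_w, mode="full")[:len(x)]
w = lms_filter(x, d)                        # converges toward true_w
```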
APA, Harvard, Vancouver, ISO, and other styles
20

Liu, Sam J. "Low bit-rate image and video compression using adaptive segmentation and quantization." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/14850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Druckmüllerová, Hana. "Application of Adaptive Filters in Processing of Solar Corona Images." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-234232.

Full text
Abstract:
Imaging the solar corona is among the most difficult tasks in astrophotography, and at the same time it is one of the key methods for studying the corona. This thesis provides a comprehensive summary of methods for observing the solar corona by means of images. It contains the necessary mathematical theory, a procedure for processing the images, and a survey of adaptive filters for visualising coronal structures in digital images. It further proposes new methods intended primarily for images with a higher noise content than is usual for white-light corona images taken during total solar eclipses, e.g. images taken through narrow-band filters. The Fourier normalizing-radial-graded filter, designed as part of this thesis, is based on approximating pixel values and their variability with trigonometric polynomials, making use of further properties of the image.
APA, Harvard, Vancouver, ISO, and other styles
22

Arroyo, Esquivel Esteban. "An adaptive image pre-processing system for quality control in production lines." Master's thesis, Instituto Politécnico de Bragança, Escola Superior de Tecnologia e Gestão, 2012. http://hdl.handle.net/10198/7984.

Full text
Abstract:
Adaptive and self-optimized behaviours in automated quality control systems based on computer vision, and hence on digital image processing, constitute an approach that may significantly enhance the robustness, efficiency and flexibility of industrial systems. This work presents the development of an adaptive software system designed for the pre-processing of digital images captured in industrial production scenarios. The system is capable of characterizing and distinguishing between different image scenes, evaluating their current global quality, determining the best enhancement strategy to be executed and auto-adapting its functional parameters according to the actual generated data. Real images captured in a washing-machine production line are presented and used to test and validate the system algorithms. Experimental results demonstrate the robust performance of the system, the image quality enhancements achieved and the improvement in the recognition effectiveness of the subsequent processing algorithms. The contribution of the present work consists in the implementation of a dynamic scene recognition algorithm, based on a neural network architecture and a parametric index extraction process, that allows the execution of customized quality correction strategies and is capable of considering the system's history in order to self-adapt. With this project it is intended to enter the field of automated industrial quality control by addressing a novel approach to adaptive pre-processing systems for quality enhancement purposes.
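A flavour of the parametric-index extraction and strategy selection described above, as a hedged sketch: the actual system feeds such indexes to a neural network, whereas here a simple threshold rule stands in, and the index names and thresholds are illustrative assumptions.

```python
import numpy as np

def quality_indexes(image):
    """Extract simple global quality indexes of the kind a scene-recognition
    stage might use. The specific indexes are illustrative."""
    return {"brightness": float(image.mean()),
            "contrast": float(image.std())}

def choose_strategy(idx, dark_below=60.0, flat_below=20.0):
    """Pick an enhancement strategy from the indexes. A threshold rule
    standing in for the neural-network classifier; thresholds and strategy
    names are hypothetical."""
    if idx["brightness"] < dark_below:
        return "gamma_brighten"       # under-exposed scene
    if idx["contrast"] < flat_below:
        return "histogram_stretch"    # flat, low-contrast scene
    return "none"                     # scene already acceptable
```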
APA, Harvard, Vancouver, ISO, and other styles
23

Stein, Andrew Neil. "Adaptive image segmentation and tracking : a Bayesian approach." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/13397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Lazareva, A. "An automated image processing system for the detection of photoreceptor cells in adaptive optics retinal images." Thesis, City, University of London, 2017. http://openaccess.city.ac.uk/19164/.

Full text
Abstract:
The rapid progress in Adaptive Optics (AO) imaging, in the last decades, has had a transformative impact on the entire approach underpinning the investigations of retinal tissues. Capable of imaging the retina in vivo at the cellular level, AO systems have revealed new insights into retinal structures, function, and the origins of various retinal pathologies. This has expanded the field of clinical research and opened a wide range of applications for AO imaging. The advances in image processing techniques contribute to a better observation of retinal microstructures and therefore more accurate detection of pathological conditions. The development of automated tools for processing images obtained with AO allows for objective examination of a larger number of images with time and cost savings and thus facilitates the use of AO imaging as a practical and efficient tool, by making it widely accessible to the clinical ophthalmic community. In this work, an image processing framework is developed that allows for enhancement of AO high-resolution retinal images and accurate detection of photoreceptor cells. The proposed framework consists of several stages: image quality assessment, illumination compensation, noise suppression, image registration, image restoration, enhancement and detection of photoreceptor cells. The visibility of retinal features is improved by tackling specific components of the AO imaging system, affecting the quality of acquired retinal data. Therefore, we attempt to fully recover AO retinal images, free from any induced degradation effects. A comparative study of different methods and evaluation of their efficiency on retinal datasets is performed by assessing image quality. In order to verify the achieved results, the cone packing density distribution was calculated and correlated with statistical histological data. 
From the performed experiments, it can be concluded that the proposed image processing framework can effectively improve photoreceptor cell image quality and thus can serve as a platform for further investigation of retinal tissues. Quantitative analysis of the retinal images obtained with the proposed image processing framework can be used for comparison with data related to pathological retinas, as well as for understanding the effect of age and retinal pathology on cone packing density and other microstructures.
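As a rough illustration of the final detection stage, cone centres in a well-restored AO image are often located as local intensity maxima. A minimal numpy sketch; the window size and threshold are assumptions, and the thesis' preceding pipeline (registration, restoration, enhancement) is not reproduced.

```python
import numpy as np

def detect_cones(image, window=3, threshold=0.5):
    """Detect photoreceptor (cone) centres as local intensity maxima:
    a pixel is a centre if it exceeds `threshold` and is the maximum of
    its window x window neighbourhood. Parameters are illustrative."""
    h, w = image.shape
    r = window // 2
    centres = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = image[y - r:y + r + 1, x - r:x + r + 1]
            if image[y, x] >= threshold and image[y, x] == patch.max():
                centres.append((y, x))
    return centres

# Synthetic mosaic: two bright "cones" on a dark background
img = np.zeros((9, 9))
img[2, 2] = 1.0
img[6, 6] = 0.9
cones = detect_cones(img)
```

Cone packing density, used above for validation against histological data, would then be the count of detected centres per unit retinal area.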
APA, Harvard, Vancouver, ISO, and other styles
25

Ang, P. Y. "Adaptive, curvature-sensitive mesh and medial axis construction." Thesis, Queen's University Belfast, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.273133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Maji, Suman Kumar. "Multiscale methods in signal processing for adaptive optics." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00909085.

Full text
Abstract:
In this thesis, we introduce a new approach to wavefront phase reconstruction in Adaptive Optics (AO) from the low-resolution gradient measurements provided by a wavefront sensor, using a non-linear method derived from the Microcanonical Multiscale Formalism (MMF). MMF comes from established concepts in statistical physics and is naturally suited to the study of multiscale properties of complex natural signals, mainly due to the precise numerical estimation of geometrically localized critical exponents, called singularity exponents. These exponents quantify the degree of predictability locally, at each point of the signal domain, and they provide information on the dynamics of the associated system. We show that multiresolution analysis carried out on the singularity exponents of a high-resolution turbulent phase (obtained from a model or from data) allows the low-resolution gradients obtained from the wavefront sensor to be propagated along the scales to a higher resolution. We compare our results with those obtained by linear approaches, which allows us to offer an innovative approach to wavefront phase reconstruction in Adaptive Optics.
APA, Harvard, Vancouver, ISO, and other styles
27

Messer, Kieron. "Automatic image database retrieval system using adaptive colour and texture descriptors." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/614/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Rio, Alexandre. "Investigation of intelligent adaptive image enhancement to aid night time driving." Thesis, Cranfield University, 1999. http://dspace.lib.cranfield.ac.uk/handle/1826/11091.

Full text
Abstract:
Driving at night is a difficult task. In an attempt to ease this task, most automotive companies are developing systems that aim to increase the safety of the driver and his/her passengers at night. Jaguar Cars Ltd have been involved in such a project for several years and have developed a Night Vision System (NVS) based upon the Near Infrared (NIR) and Head-Up Display (HUD) technologies. This thesis is concerned with the application of digital image enhancement algorithms to further increase the driver's visual range at night. The purpose of this research work is to provide the driver with a safe and non-disturbing, enhanced view of the road scene ahead, which is presented on a head-up display. In this automotive environment, specific requirements such as real-time processing, robustness and reliability must be kept in mind to design algorithms that will not compromise the safety of the driver, his/her passengers and other road users. To fulfil these requirements, we have developed a novel intelligent image enhancement scheme for night time driving that actively adapts to the road scene. This scheme results in the enhancement of the contrast in a portion of the projected HUD road scene, as if extra headlamps were directed to the region of the image that represents where the road is going. Human Factors studies have shown that this region is where the driver concentrates his attention when driving. The position of the region of interest is defined by the computation of an approximation of the vanishing point of the road, updated for every frame using a novel, reliable and optimised road edge detection algorithm. The enhancement of the contrast within the region of interest is obtained by applying several novel low-level algorithms based upon the grey level segmentation of the image into regions and the use of the global histogram equalisation and quantised bi-histogram equalisation algorithms.
These novel algorithms have all been implemented on the Matrox Genesis board based upon the multitasking, multiprocessor and parallel DSP TMS320C80 chip from Texas Instruments. All algorithms described in this thesis are able to sustain real-time processing at the NTSC frame rate of 30 frames per second. This new concept for a night time driving aid is an attractive solution that meets the numerous requirements driven by Human Factors research in an automotive environment, in particular safety requirements.
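The region-of-interest contrast boost described above can be sketched as a global histogram equalisation confined to an ROI. A minimal numpy version, assuming the ROI has already been placed from a vanishing-point estimate; the quantised bi-histogram variant and the road-edge detector are not reproduced.

```python
import numpy as np

def equalize_roi(image, top_left, size):
    """Apply global histogram equalisation only inside a rectangular region
    of interest, leaving the rest of the frame untouched. ROI placement
    (from a vanishing-point estimate) is assumed, not computed here."""
    y0, x0 = top_left
    h, w = size
    roi = image[y0:y0 + h, x0:x0 + w]
    hist, _ = np.histogram(roi, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    # Map grey levels through the normalised cumulative histogram
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    out = image.astype(float).copy()
    out[y0:y0 + h, x0:x0 + w] = cdf[roi.astype(int)]
    return out

# Low-contrast patch (levels 100..103) inside a uniform night frame
img = np.full((8, 8), 100.0)
img[2:6, 2:6] = 100 + np.arange(16).reshape(4, 4) % 4
out = equalize_roi(img, (2, 2), (4, 4))   # ROI contrast stretched to 0..255
```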
APA, Harvard, Vancouver, ISO, and other styles
29

Yin, Li. "Adaptive Background Modeling with Temporal Feature Update for Dynamic Foreground Object Removal." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/5040.

Full text
Abstract:
In the study of computer vision, background modeling is a fundamental and critical task in many conventional applications. This thesis presents an introduction to background modeling and various computer vision techniques for estimating the background model to achieve the goal of removing dynamic objects in a video sequence. The process of estimating the background model with temporal changes in the absence of foreground moving objects is called adaptive background modeling. In this thesis, three adaptive background modeling approaches are presented for the purpose of developing "teacher removal" algorithms. First, an adaptive background modeling algorithm based on linear adaptive prediction is presented. Second, an adaptive background modeling algorithm based on statistical dispersion is presented. Third, a novel adaptive background modeling algorithm based on low-rank and sparsity constraints is presented. The design and implementation of these algorithms are discussed in detail, and the experimental results produced by each algorithm are presented. Lastly, the results of this research are generalized and potential future research is discussed.
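The simplest form of adaptive background modelling, often used as a baseline, is an exponentially weighted running average: the model drifts toward each new frame, absorbing gradual temporal change while transient foreground averages away. A hedged sketch; none of the thesis' three algorithms is reproduced here, and `alpha` is an illustrative learning rate.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """One step of running-average background modelling: blend the current
    frame into the model with weight alpha. Larger alpha adapts faster
    but lets foreground objects bleed into the background sooner."""
    return (1 - alpha) * background + alpha * frame

rng = np.random.default_rng(1)
background = np.zeros((4, 4))               # cold-start model
scene = np.full((4, 4), 50.0)               # true static background
for _ in range(200):
    frame = scene + rng.normal(0, 1, (4, 4))   # static scene + sensor noise
    background = update_background(background, frame)
# After enough frames the model settles near the true scene intensity.
```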
APA, Harvard, Vancouver, ISO, and other styles
30

Yanbin, Yu. "Transform coding of images based on shape adaptive models." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/7485.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Bjerke, Benjamin A. "Noise and Degradation Reduction for Signal and Image Processing via Non-adaptive Convolution Filtering." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23700.

Full text
Abstract:
Noise and degradation reduction is of significant importance in virtually all systems where these phenomena are present, specifically in the fields of signal and image processing. The effect of image processing on target detection is of significant interest because noise and degradations can greatly reduce the effectiveness of detection algorithms, due to the presence of high intensity noise which is often mistaken for a target. In signal processing, noise in vibration data, or any time-series data, can reduce the accuracy of measurement and can prevent the passing of useful information.
Many existing filters, such as the Wiener and Frost filters, are designed to reduce a single class of noise. When these filters are applied to types of noise that they were not designed for, their effectiveness can be greatly reduced. The proposed Two-Stage Non-Adaptive Convolution (TSNAC) filter significantly reduces both additive and multiplicative noise in these two unique systems.
The performance of these filters is compared through several Image Quality (IQ) metrics.
It will be shown that the proposed TSNAC filter reduces noise and degradations more effectively in both SAR images and synthetic vibration data than the competing filters.  It will show higher IQ scores, greater computational efficiency in target detection, and significant improvement in signal restoration of simulated vibration data.

Master of Science
APA, Harvard, Vancouver, ISO, and other styles
32

Vantaram, Sreenath Rao. "Fast unsupervised multiresolution color image segmentation using adaptive gradient thresholding and progressive region growing /." Online version of thesis, 2009. http://hdl.handle.net/1850/9016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Perry, Stuart William. "Adaptive Image Restoration: Perception Based Neural Network Models and Algorithms." Thesis, The University of Sydney, 1999. http://hdl.handle.net/2123/389.

Full text
Abstract:
This thesis describes research into the field of image restoration. Restoration is a process by which an image suffering some form of distortion or degradation can be recovered to its original form. Two primary concepts within this field have been investigated. The first concept is the use of a Hopfield neural network to implement the constrained least square error method of image restoration. In this thesis, the author reviews previous neural network restoration algorithms in the literature and builds on these algorithms to develop a new, faster version of the Hopfield neural network algorithm for image restoration. The versatility of the neural network approach is then extended by the author to deal with the cases of spatially variant distortion and adaptive regularisation. It is found that using the Hopfield-based neural network approach, an image suffering spatially variant degradation can be accurately restored without a substantial penalty in restoration time. In addition, the adaptive regularisation restoration technique presented in this thesis is shown to produce superior results when compared to non-adaptive techniques and is particularly effective when applied to the difficult, yet important, problem of semi-blind deconvolution. The second concept investigated in this thesis is the difficult problem of incorporating concepts involved in human visual perception into image restoration techniques. In this thesis, the author develops a novel image error measure which compares two images based on the differences between local regional statistics rather than pixel level differences. This measure more closely corresponds to the way humans perceive the differences between two images. Two restoration algorithms are developed by the author based on versions of the novel image error measure.
It is shown that the algorithms which utilise this error measure have improved performance and produce visually more pleasing images in the cases of colour and grayscale images under high noise conditions. Most importantly, the perception based algorithms are shown to be extremely tolerant of faults in the restoration algorithm and hence are very robust. A number of experiments have been performed to demonstrate the performance of the various algorithms presented.
APA, Harvard, Vancouver, ISO, and other styles
34

Perry, Stuart William. "Adaptive Image Restoration: Perception Based Neural Network Models and Algorithms." University of Sydney, School of Electrical and Information Engineering, 1999. http://hdl.handle.net/2123/389.

Full text
Abstract:
This thesis describes research into the field of image restoration. Restoration is a process by which an image suffering some form of distortion or degradation can be recovered to its original form. Two primary concepts within this field have been investigated. The first concept is the use of a Hopfield neural network to implement the constrained least square error method of image restoration. In this thesis, the author reviews previous neural network restoration algorithms in the literature and builds on these algorithms to develop a new, faster version of the Hopfield neural network algorithm for image restoration. The versatility of the neural network approach is then extended by the author to deal with the cases of spatially variant distortion and adaptive regularisation. It is found that using the Hopfield-based neural network approach, an image suffering spatially variant degradation can be accurately restored without a substantial penalty in restoration time. In addition, the adaptive regularisation restoration technique presented in this thesis is shown to produce superior results when compared to non-adaptive techniques and is particularly effective when applied to the difficult, yet important, problem of semi-blind deconvolution. The second concept investigated in this thesis is the difficult problem of incorporating concepts involved in human visual perception into image restoration techniques. In this thesis, the author develops a novel image error measure which compares two images based on the differences between local regional statistics rather than pixel level differences. This measure more closely corresponds to the way humans perceive the differences between two images. Two restoration algorithms are developed by the author based on versions of the novel image error measure.
It is shown that the algorithms which utilise this error measure have improved performance and produce visually more pleasing images in the cases of colour and grayscale images under high noise conditions. Most importantly, the perception based algorithms are shown to be extremely tolerant of faults in the restoration algorithm and hence are very robust. A number of experiments have been performed to demonstrate the performance of the various algorithms presented.
APA, Harvard, Vancouver, ISO, and other styles
35

Hu, Li. "Low power CMOS image sensor using adaptive address event representation /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?ECED%202007%20HU.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Yuk, Shun-cho Jacky, and 郁順祖. "Adaptive video defogging based on background modeling." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/208174.

Full text
Abstract:
The performance of intelligent video surveillance systems is always degraded under complicated scenarios, such as dynamically changing backgrounds and extreme weather. Dynamically changing backgrounds make foreground/background segmentation, which is often the first step in vision-based algorithms, unreliable. Bad weather, such as fog, not only degrades the visual quality of the monitoring videos, but also seriously affects the accuracy of vision-based algorithms. In this thesis, a fast and robust texture-based background modeling technique is first presented for tackling the problem of foreground/background segmentation under dynamic backgrounds. An adaptive multi-modal framework is proposed which uses a novel texture feature known as scale invariant local states (SILS) to model an image pixel. A pattern-less probabilistic measurement (PLPM) is also derived to estimate the probability of a pixel being background from its SILS. Experimental results show that texture-based background modeling is more robust than illumination-based approaches under dynamic backgrounds and lighting changes. Furthermore, the proposed background modeling technique can run much faster than the existing state-of-the-art texture-based method, without sacrificing output quality. Two fast adaptive defogging techniques, namely 1) foreground decremental preconditioned conjugate gradient (FDPCG), and 2) adaptive guided image filtering, are next introduced for removing the foggy effects on video scenes. These two methods allow the estimation of the background transmissions to converge over consecutive video frames; the video sequences are then background-defogged using the background transmission map. Results show that foreground/background segmentation can be improved dramatically with such background-defogged video frames.
With reliable foreground/background segmentation results, the foreground transmissions can then be recovered by the proposed 1) foreground incremental preconditioned conjugate gradient (FIPCG), or 2) on-demand guided image filtering. Experimental results show that the proposed methods can effectively improve the visual quality of surveillance videos under heavy fog and bad weather. Compared with state-of-the-art image defogging methods, the proposed methods are shown to be much more efficient.
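The defogging step described above rests on the standard atmospheric scattering model I = J·t + A·(1 − t). Estimating the transmission map t and airlight A robustly is the thesis's actual contribution; given both, inverting the model is straightforward. A minimal sketch with synthetic t and A (all values assumed for illustration):

```python
import numpy as np

def defog(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    t is the per-pixel transmission map and A the airlight; t is clamped
    to t_min so dense-fog regions do not amplify noise unboundedly.
    """
    return (I - A) / np.maximum(t, t_min) + A

# Synthetic round trip: fog a known scene, then recover it exactly.
rng = np.random.default_rng(1)
J = rng.uniform(0.2, 0.8, (16, 16))      # clear scene radiance
t = rng.uniform(0.4, 0.9, (16, 16))      # transmission map (above t_min here)
A = 0.95                                  # airlight
I = J * t + A * (1 - t)                   # foggy observation
J_rec = defog(I, t, A)
```

Because the synthetic transmissions stay above the clamp, the recovery is exact; on real footage the accuracy depends entirely on how well t and A are estimated.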
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
37

Jiang, Yugang. "Large scale semantic concept detection, fusion, and selection for domain adaptive video search /." access full-text access abstract and table of contents, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-cs-b23749957f.pdf.

Full text
Abstract:
Thesis (Ph.D.)--City University of Hong Kong, 2009.
"Submitted to Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves 145-161)
APA, Harvard, Vancouver, ISO, and other styles
38

Linares, Irving. "Theory and applications of the estimated spectrum adaptive postfilter." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/14962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Bulpitt, Andrew J. "A multiple adaptive resonance theory architecture applied to motion recognition tasks." Thesis, University of York, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Cheeseman, Bevan. "The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234245.

Full text
Abstract:
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated, and designed, as a replacement representation for pixel images to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluorescence Microscopy (LSFM) data. The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space, and implicitly define a piecewise constant Implied Resolution Function R∗(y) and particle sampling locations. As an adaptive data representation, the APR can be used to provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y), which we call the Resolution Bound. The Resolution Bound relates R∗(y) to a local maximum of the absolute value of the function derivatives within a distance R∗(y) of y. Given restrictions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition.
In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function for general problems in the form of the Resolution Bound using Particle Cells, via an algorithm we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to represent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can also be combined with anisotropic neighborhoods to provide adaptation in both space and time. The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed that the Implied Resolution Function satisfies the Resolution Bound, then the APR converges to a biased estimate (constant factor of E) at the optimal statistical rate. The APR continues a long tradition of adaptive data representations and represents a unique trade-off between the level of adaptation of the representation and simplicity, both regarding the APR's structure and its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data. This is done using both synthetic and LSFM exemplar data. It is concluded from these results that the APR has the correct properties to provide a replacement for pixel images and address bottlenecks in processing for LSFM data. Removal of the bottleneck would be achieved by adapting to spatial, temporal and intensity scale variations in the data.
Further, we propose the simple structure of the general APR could provide benefit in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation for computer graphics.
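As a hypothetical one-dimensional illustration of the Reconstruction Condition (not the Pulling Scheme itself), one can choose the local sampling step from a bound on |f′| so that a nearest-sample reconstruction stays within the relative error threshold E, here with a flat Local Intensity Scale σ(y) = 1. The block partition, test function, and step rule are all illustrative assumptions:

```python
import numpy as np

E = 0.05                        # user-set relative error threshold
f, df = np.sin, np.cos          # test function and its derivative

# Partition [0, 2*pi] into blocks; in each block pick a sampling step h so
# that nearest-sample reconstruction error (h/2) * max|f'| stays below
# E * sigma, with sigma(y) = 1.
edges = np.linspace(0, 2 * np.pi, 9)
xs = []
for a, b in zip(edges[:-1], edges[1:]):
    L = np.abs(df(np.linspace(a, b, 64))).max() + 1e-9   # local derivative bound
    h = min(2 * E / L, b - a)        # coarse where f is flat, fine where steep
    xs.append(np.arange(a, b, h))
xs = np.concatenate(xs + [np.array([2 * np.pi])])
vals = f(xs)

# Piecewise-constant ("nearest particle") reconstruction on a fine grid.
y = np.linspace(0, 2 * np.pi, 4001)
recon = vals[np.abs(y[:, None] - xs[None, :]).argmin(axis=1)]
err = np.max(np.abs(recon - f(y)))   # should respect the threshold E
```

Only a few dozen adaptive samples are needed to satisfy the error bound that a uniform fine grid would achieve with thousands, which is the memory argument made above.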
APA, Harvard, Vancouver, ISO, and other styles
41

Hasan, N. K. "An adaptive artificial vision system for recognition and handling of industrial parts." Thesis, Brunel University, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.374430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Maitree, Rapeepan, Gloria J. Guzman Perez-Carrillo, Joshua S. Shimony, H. Michael Gach, Anupama Chundury, Michael Roach, H. Harold Li, and Deshan Yang. "Adaptive anatomical preservation optimal denoising for radiation therapy daily MRI." SPIE-SOC PHOTO-OPTICAL INSTRUMENTATION ENGINEERS, 2017. http://hdl.handle.net/10150/626083.

Full text
Abstract:
Low-field magnetic resonance imaging (MRI) has recently been integrated with radiation therapy systems to provide image guidance for daily cancer radiation treatments. The main benefit of the low-field strength is minimal electron return effects. The main disadvantage of low-field strength is increased image noise compared to diagnostic MRIs conducted at 1.5 T or higher. The increased image noise affects both the discernibility of soft tissues and the accuracy of further image processing tasks for both clinical and research applications, such as tumor tracking, feature analysis, image segmentation, and image registration. An innovative method, adaptive anatomical preservation optimal denoising (AAPOD), was developed for optimal image denoising, i.e., to maximally reduce noise while preserving the tissue boundaries. AAPOD employs a series of adaptive nonlocal mean (ANLM) denoising trials with increasing denoising filter strength (i.e., the block similarity filtering parameter in the ANLM algorithm), and then detects the tissue boundary losses on the differences of sequentially denoised images using a zero-crossing edge detection method. The optimal denoising filter strength per voxel is determined by identifying the denoising filter strength value at which boundary losses start to appear around the voxel. The final denoising result is generated by applying the ANLM denoising method with the optimal per-voxel denoising filter strengths. The experimental results demonstrated that AAPOD was capable of reducing noise adaptively and optimally while avoiding tissue boundary losses. AAPOD is useful for improving the quality of MRIs with low-contrast-to-noise ratios and could be applied to other medical imaging modalities, e.g., computed tomography. (C) 2017 Society of Photo-Optical Instrumentation Engineers (SPIE)
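The escalating-strength loop can be sketched with stand-ins: Gaussian blur in place of the ANLM filter, and a simple threshold on the difference of successive results in place of the zero-crossing boundary-loss test. The filter choice, strength schedule, and threshold here are illustrative assumptions, not the published method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def aapod_sketch(img, strengths=(0.5, 1.0, 2.0, 4.0), loss_thresh=0.1):
    """Escalating-strength denoising with per-pixel stopping.

    Gaussian blur stands in for the paper's ANLM filter, and a threshold
    on the difference of successive results stands in for the
    zero-crossing boundary-loss test.  Each pixel keeps the strongest
    denoising applied before a boundary loss is detected at it.
    """
    prev = gaussian_filter(img, strengths[0])
    best = prev.copy()
    frozen = np.zeros(img.shape, dtype=bool)
    for s in strengths[1:]:
        cur = gaussian_filter(img, s)
        frozen |= np.abs(cur - prev) > loss_thresh   # boundary starts to wash out
        best = np.where(frozen, best, cur)           # frozen pixels keep old value
        prev = cur
    return best

# Noisy step edge: flat areas should get strong denoising, the edge should not.
rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + rng.normal(0, 0.05, clean.shape)
out = aapod_sketch(noisy)
heavy = gaussian_filter(noisy, 4.0)                  # uniform strongest filter
edge_out = np.abs(out[:, 16] - out[:, 15]).mean()
edge_heavy = np.abs(heavy[:, 16] - heavy[:, 15]).mean()
```

The per-pixel stopping rule keeps the step edge far sharper than uniformly applying the strongest filter, while flat regions still receive heavy smoothing.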
APA, Harvard, Vancouver, ISO, and other styles
43

Doshi, Punit Rameshchandra. "Adaptive prefetching for visual data exploration." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0131103-203307.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Adaptive prefetching; Large-scale multivariate data visualization; Semantic caching; Hierarchical data exploration; Exploratory data analysis. Includes bibliographical references (p.66-70).
APA, Harvard, Vancouver, ISO, and other styles
44

Gastal, Eduardo Simões Lopes. "Efficient high-dimensional filtering for image and video processing." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/118258.

Full text
Abstract:
Filtering is one of the most important operations in image and video processing. In particular, high-dimensional filters are fundamental tools for many applications and have recently received significant attention from researchers in the field. Unfortunately, naive implementations of this important class of filters are too slow for many practical uses, especially in view of the continuing increase in the resolution of digitally captured images. This dissertation describes three novel approaches to efficient high-dimensional filtering: the domain transform, the adaptive manifolds, and a mathematical formulation for applying recursive filters to non-uniformly sampled signals. The domain transform represents the state of the art in filtering algorithms with a geodesic metric. The innovation of this approach is the use of a simple dimensionality-reduction procedure to implement high-dimensional filters efficiently. This enables the first demonstration of real-time edge-preserving filtering of high-resolution (full HD) color video. The adaptive manifolds represent the state of the art in filtering algorithms with a Euclidean metric. The innovation of this approach is the idea of subdividing the high-dimensional space into non-linear, lower-dimensional slices, which are filtered independently and finally interpolated to obtain a high-dimensional filtering with a Euclidean metric. This yields several advances over previous techniques, such as faster filtering with lower memory requirements, as well as the derivation of the first Euclidean filter with cost linear both in the number of pixels of the image (or video) and in the dimensionality of the space in which the filter operates. Finally, we introduce a mathematical formulation that describes the application of a recursive filter to non-uniformly sampled signals.
This formulation extends the idea of geodesic filtering to arbitrary recursive filters (low-pass as well as high-pass and band-pass). This extension provides greater control over the desired filter responses, which can then be better tailored to specific applications. As examples, we demonstrate, for the first time in the literature, geodesic filters with Gaussian, Laplacian-of-Gaussian, Butterworth, and Cauer responses, among others. By making it possible to work with arbitrary filters, our method enables a new variety of effects for image and video applications.
Filtering is arguably the single most important operation in image and video processing. In particular, high-dimensional filters are a fundamental building block for several applications, having recently received considerable attention from the research community. Unfortunately, naive implementations of such an important class of filters are too slow for many practical uses, specially in light of the ever increasing resolution of digitally captured images. This dissertation describes three novel approaches to efficiently perform high-dimensional filtering: the domain transform, the adaptive manifolds, and a mathematical formulation for recursive filtering of non-uniformly sampled signals. The domain transform defines an isometry between curves on the 2D image manifold in 5D and the real line. It preserves the geodesic distance between points on these curves, adaptively warping the input signal so that high-dimensional geodesic filtering can be efficiently performed in linear time. Its computational cost is not affected by the choice of the filter parameters; and the resulting filters are the first to work on color images at arbitrary scales in real time, without resorting to subsampling or quantization. The adaptive manifolds compute the filter’s response at a reduced set of sampling points, and use these for interpolation at all input pixels, so that high-dimensional Euclidean filtering can be efficiently performed in linear time. We show that for a proper choice of sampling points, the total cost of the filtering operation is linear both in the number of pixels and in the dimension of the space in which the filter operates. As such, ours is the first high-dimensional filter with such a complexity. We present formal derivations for the equations that define our filter, providing a sound theoretical justification. Finally, we introduce a mathematical formulation for linear-time recursive filtering of non-uniformly sampled signals. 
This formulation enables, for the first time, geodesic edge-aware evaluation of arbitrary recursive infinite impulse response filters (not only low-pass), which allows practically unlimited control over the shape of the filtering kernel. By providing the ability to experiment with the design and composition of new digital filters, our method has the potential to enable a greater variety of image and video effects. The high-dimensional filters we propose provide the fastest performance (both on CPU and GPU) for a variety of real-world applications. Thus, our filters are a valuable tool for the image and video processing, computer graphics, computer vision, and computational photography communities.
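For the 1D case, the domain transform's recursive filter (Gastal & Oliveira, 2011) admits a compact sketch. The per-iteration sigma schedule follows the paper; the parameter values and the noisy step test signal are illustrative choices:

```python
import numpy as np

def domain_transform_rf(I, sigma_s=20.0, sigma_r=0.3, iterations=3):
    """1D edge-aware smoothing via the domain transform (recursive filter).

    Distances in the warped domain include intensity differences:
    dt_i = 1 + (sigma_s / sigma_r) * |I_i - I_{i-1}|.  A first-order
    recursive filter with feedback a**dt smooths heavily inside flat
    regions but barely across strong edges, where dt is large.
    """
    J = I.astype(float).copy()
    dIdx = np.abs(np.diff(J, prepend=J[0]))          # |I_i - I_{i-1}|, 0 at i=0
    dt = 1.0 + (sigma_s / sigma_r) * dIdx
    for it in range(iterations):
        # per-iteration sigma schedule from Gastal & Oliveira (2011)
        sig = sigma_s * np.sqrt(3.0) * 2.0 ** (iterations - it - 1) \
            / np.sqrt(4.0 ** iterations - 1.0)
        w = np.exp(-np.sqrt(2.0) / sig) ** dt        # feedback a ** dt
        for i in range(1, len(J)):                   # left-to-right pass
            J[i] += w[i] * (J[i - 1] - J[i])
        for i in range(len(J) - 2, -1, -1):          # right-to-left pass
            J[i] += w[i + 1] * (J[i + 1] - J[i])
    return J

# Noisy step: the flat parts should be cleaned while the edge survives.
rng = np.random.default_rng(3)
truth = np.concatenate([np.zeros(100), np.ones(100)])
noisy = truth + rng.normal(0, 0.05, 200)
smooth = domain_transform_rf(noisy)
```

Because the warped-domain distance is huge across the step, the feedback weight there collapses toward zero and the edge passes through almost untouched, which is the geodesic behaviour described above.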
APA, Harvard, Vancouver, ISO, and other styles
45

Sjögren, Thomas. "Synthetic Aperture Radar Signal and Image Processing for Moving Target Indication and Side Lobe Suppression." Doctoral thesis, Blekinge Tekniska Högskola, Avdelningen för elektroteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00542.

Full text
Abstract:
The thesis summarizes a selection of my research within Synthetic Aperture Radar (SAR). The research is mainly aimed at applying and developing signal processing methods for single-channel and multi-channel wideband SAR systems. SAR systems can generate images looking very similar to optical pictures, i.e. photos, and sometimes with much finer resolution compared to optical systems orbiting Earth. SAR has, for instance, been used to obtain fine-resolution images of the Moon, Venus and the satellites of Saturn. Other applications of SAR include detecting changes in ice sheets and deforestation. In this thesis, SAR systems capable of very high resolution imaging are considered, and data from such systems, namely the VHF system CARABAS-II and the UHF system LORA, is used. High resolution imaging in this thesis refers to high resolution with regard to wavelength, independent of the system's operating frequency. Two of the topics in this thesis are related to detection and parameter estimation of moving objects in SAR, the first one using CARABAS-II data and the second using LORA data. On the CARABAS-II data, a speed estimation and refocusing method is introduced and applied to single-channel CARABAS-II data. The results show good estimation accuracy as well as good ability to focus the object and suppress forest clutter by applying the refocusing algorithm. The results on LORA data are satisfactory, especially with regard to forest clutter suppression. The ability to detect and focus images of ships allows for surveillance of coastal areas and helps in the rescue of ships lost at sea. Detection and location of cars and trucks allow for traffic monitoring, to obtain statistics of how many cars travel the roads and their speeds. In the thesis, two more important aspects of SAR processing are presented. One paper presents windowing of UWB SAR images. A strong object such as a power line in a SAR image causes ringing on both sides of the power line.
This ringing can cause a small house to be covered by these so-called side lobes. Applying a window strongly suppresses these side lobes in the image; if the windowing is too aggressive, however, the power line smears over the image, covering the small house. The last topic in the thesis concerns the theoretical limits for the measurement accuracy of parameters of a moving object in a SAR image. These parameters are position, velocity, radar cross section and phase. The theoretical expressions are verified using simulations of a single-channel system for the estimation accuracy of target speed and relative speed.
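The side-lobe trade-off described above is generic apodization and can be illustrated with a synthetic point-target response. The band size, FFT length, and Hann window below are illustrative choices, not the thesis's specific apodization:

```python
import numpy as np

def point_target_response(window, n_fft=4096):
    """Image-domain response of a band-limited point target whose
    occupied band is shaped by `window` (uniform = no apodization)."""
    spec = np.zeros(n_fft)
    spec[:len(window)] = window
    h = np.fft.fftshift(np.abs(np.fft.ifft(spec)))   # peak moved to the center
    return h / h.max()

n_fft, band = 4096, 64
uniform = point_target_response(np.ones(band), n_fft)
hann = point_target_response(np.hanning(band), n_fft)

center = n_fft // 2
guard = 140        # just outside the (wider) Hann mainlobe

def peak_sidelobe(h):
    return max(h[:center - guard].max(), h[center + guard:].max())

psl_uniform = peak_sidelobe(uniform)   # strong sinc-like side lobes
psl_hann = peak_sidelobe(hann)         # much lower side lobes
```

The windowed response trades a wider mainlobe (the "smearing" of the power line) for far lower side lobes (the ringing that covers the small house).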
APA, Harvard, Vancouver, ISO, and other styles
46

Pope, Benjamin. "Vision and revision: wavefront sensing from the image domain." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/12890.

Full text
Abstract:
An ideal telescope with no optical aberrations can achieve a resolution and contrast limited by the wave nature of light, such that the finest detail that can be resolved is of the order of the angle subtended by one wavelength over the diameter of the telescope. For telescopes operating close to this ideal case, however, it is rare that the full performance of the diffraction limit is achieved, as small optical imperfections cause speckles to appear in the image. These are difficult to calibrate, as they are often caused by thermal and mechanical variations in the optical path which vary slowly with time. The quasi-static speckles that they impose can mimic the real signal of a faint star or planet orbiting the primary target, and these therefore impose the principal limitation on the angular resolution and contrast of instruments designed to detect exoplanets and faint companions. These aberrations can be corrected by active optics, where a wavefront sensor is used to reconstruct a map of the distortions which can then be compensated for by a deformable mirror, but there is a problem with this also: differential aberrations between the wavefront sensor and science camera are not detected. In this thesis, I will discuss a successful laboratory implementation of a recently proposed technique for reconstructing a wavefront map using only the image taken with the science camera, which can be used to calibrate this non-common path error. This approach, known as the asymmetric pupil Fourier wavefront sensor, requires that the pupil not be centrosymmetric, which is easily achieved with a mask, with segment tilting, or with judiciously placed spiders to support the secondary mirror, and represents a promising way forward for characterizing and correcting segment misalignments on future missions such as the James Webb Space Telescope.
APA, Harvard, Vancouver, ISO, and other styles
47

Man, Hong. "On efficiency and robustness of adaptive quantization for subband coding of images and video sequences." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/15003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lind, Fredrik, and Escalante Andrés Diaz. "Maximizing performance gain of Variable Rate Shading tier 2 while maintaining image quality : Using post processing effects to mask image degradation." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21868.

Full text
Abstract:
Background. Performance optimization is of great importance for games, as it constrains the possible content and complexity of systems. Modern games support high resolution rendering, but higher resolutions require more pixels to be computed, and solutions are needed to reduce this workload. Currently used methods include uniformly lowering the shading rate across the whole screen to reduce the number of pixels needing computation. Variable Rate Shading is a new hardware-supported technique with several functionality tiers. Tier 1 is similar to previous methods in that it lowers the shading rate for the whole screen. Tier 2 supports screen space image shading. With tier 2 screen space image shading, different shading rates can be set across the screen, which gives developers the choice of where and when to set specific shading rates. Objectives. The aim of this thesis is to examine how close Variable Rate Shading tier 2 screen space shading can come to the performance gains of Variable Rate Shading tier 1 while trying to maintain an acceptable image quality with the help of commonly used post processing effects. Methods. A lightweight scene is set up and Variable Rate Shading tier 2 methods are set to an acceptable image quality as a baseline. Evaluation of performance is done by measuring the times of specific passes required by and affected by Variable Rate Shading. Image quality is measured by capturing sequences of images with no Variable Rate Shading as reference, then with Variable Rate Shading tier 1 and several methods with tier 2, and comparing them with the Structural Similarity Index. Results. The highest measured performance gain from tier 2 was 28.0%. The result came from using edge detection to create the shading rate image at 3840x2160 resolution. This translates to 36.7% of the performance gains of tier 1 but with better image quality, with SSIM values of 0.960 against tier 1's 0.802, which correspond to good and poor image quality respectively.
Conclusions. Variable Rate Shading tier 2 shows great potential for increasing performance while maintaining image quality, especially with edge detection. Post processing effects are effective at maintaining a good image quality. Performance gains also scale well, as they increase with higher resolutions.
Background. Performance optimization is very important for games, as it can limit the possible content or the complexity of systems. Modern games support rendering at high resolutions, but high resolutions require computations for more pixels, and solutions are needed to reduce the workload. Currently used methods include uniformly lowering the shading rate across the whole screen to reduce the number of pixels that need computation. Variable Rate Shading is a new hardware-supported technique with several functionality tiers. Tier 1 is similar to earlier methods in that the shading rate is lowered uniformly across the whole screen. Tier 2 supports screen-space image shading, with which shading rates can be varied across the screen, giving developers the choice of where and when specific shading rates are set. Objectives. The aim of this thesis is to investigate how close Variable Rate Shading tier 2 screen-space image shading can come to the performance gains of Variable Rate Shading tier 1 while maintaining acceptable image quality with the help of commonly used post-processing effects. Methods. A simple scene was created, and Variable Rate Shading tier 2 methods were set to an acceptable image quality as a baseline. Performance was evaluated by measuring the times of the specific passes required by and affected by Variable Rate Shading. Image quality was measured by capturing image sequences without Variable Rate Shading as references, then with Variable Rate Shading tier 1 and several tier 2 methods, and comparing them with the Structural Similarity Index. Results. The highest measured performance gain for tier 2 was 28.0%. The result came from using edge detection to create the shading-rate image, at a resolution of 3840x2160.
This corresponds to 36.7% of the performance gain of tier 1, but with much better image quality, with an SSIM value of 0.960 against 0.802 for tier 1, which correspond to good and poor image quality respectively. Conclusions. Variable Rate Shading tier 2 shows great potential for performance gains with preserved image quality, especially with edge detection. Post-processing effects are effective at maintaining good image quality. Performance gains also scale well, as they increase at higher resolutions.
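The SSIM comparison used in the experiments above can be illustrated with a single-window version of the index (Wang et al., 2004). The full metric averages over local windows; global statistics are enough here to show why identical images score 1.0 and degraded ones score lower. The test images are synthetic assumptions:

```python
import numpy as np

def global_ssim(x, y, L=1.0):
    """Single-window SSIM computed from global image statistics.

    L is the dynamic range; C1 and C2 are the usual stabilizing
    constants from Wang et al. (2004)."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    luminance = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
    structure = (2 * cov + C2) / (vx + vy + C2)
    return luminance * structure

rng = np.random.default_rng(4)
ref = rng.uniform(0, 1, (64, 64))
degraded = np.clip(ref + rng.normal(0, 0.2, ref.shape), 0, 1)
s_same = global_ssim(ref, ref)        # identical images score 1.0
s_deg = global_ssim(ref, degraded)    # noise lowers the score
```

In the thesis's terms, a tier 2 rendering whose SSIM against the reference stays near 1.0 is visually close to full-rate shading, while a low score like tier 1's 0.802 signals visible degradation.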
APA, Harvard, Vancouver, ISO, and other styles
49

KHAN, JAWAD BASIT. "iPACE-V1: A PORTABLE ADAPTIVE COMPUTING ENGINE." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1025721991.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Indra, Isara. "Very low bit rate video coding using adaptive nonuniform sampling and matching pursuit." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/15779.

Full text
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography