Doctoral dissertations on the topic "Image compression"


Create a correct reference in APA, MLA, Chicago, Harvard and many other citation styles.


Consult the top 50 doctoral dissertations on the topic "Image compression".


You can also download the full text of a publication in ".pdf" format and read its abstract online, when the relevant information is available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile the corresponding bibliographies.

1

Hawary, Fatma. "Light field image compression and compressive acquisition". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S082.

Abstract:
By capturing a scene from several points of view, a light field provides a rich representation of the scene geometry that brings a variety of novel post-capture applications and enables immersive experiences. The objective of this thesis is to study the compressibility of light field contents in order to propose novel solutions for higher-resolution light field imaging. Two main aspects were studied in this work. Since the compression performance of current coding schemes on light fields is still limited, there is a need for approaches better adapted to light field structures. We propose a scalable coding scheme that encodes only a subset of light field views and reconstructs the remaining views via a sparsity-based method. A residual coding stage then enhances the final quality of the decoded light field. Acquiring very large-scale light fields is still not feasible with current capture and storage facilities; a possible alternative is to reconstruct the densely sampled light field from a subset of acquired samples. We propose an automatic reconstruction method to recover a compressively sampled light field, which exploits its sparsity in the Fourier domain. No geometry estimation is needed, and an accurate reconstruction is achieved even with a very low number of captured samples. A further study is conducted on the full scheme, including compressive sensing of a light field and its transmission via the proposed coding approach, and the distortion introduced by the different processing steps is measured. The results show performance comparable to depth-based view synthesis methods.
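To make the Fourier-sparsity idea concrete, the sketch below recovers a toy 1-D signal with only a few active Fourier components from a small random subset of its samples, by alternating enforcement of the known samples with hard thresholding in the Fourier domain. It is only a minimal illustration of the principle under assumed toy parameters (signal length, sparsity, sample count), not the light-field reconstruction method of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 256, 4, 96            # signal length, number of cosines, number of kept samples

# Real signal that is sparse in the Fourier domain (2*k nonzero FFT bins).
freqs = rng.choice(np.arange(1, n // 2), k, replace=False)
x = sum(np.cos(2 * np.pi * f * np.arange(n) / n + rng.uniform(0, 2 * np.pi))
        for f in freqs)

# Compressive acquisition: keep only m randomly chosen samples.
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, m, replace=False)] = True
y = x[mask]

# Iterative hard thresholding: enforce the known samples, then keep only
# the 2*k largest-magnitude Fourier coefficients.
x_hat = np.zeros(n)
for _ in range(300):
    x_hat[mask] = y                          # data consistency on acquired samples
    s = np.fft.fft(x_hat)
    small = np.argsort(np.abs(s))[:-2 * k]   # indices of all but the 2*k largest bins
    s[small] = 0                             # hard threshold
    x_hat = np.fft.ifft(s).real

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```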
2

Obaid, Arif. "Range image compression". Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10131.

Abstract:
Range images, which are a representation of the surface of a 3-D object, are gaining popularity in many applications including CAD/CAM, multimedia and virtual reality. There is thus a need for compression of these 3-D images. Current standards for still image compression, such as JPEG, are not appropriate for such images because they have been designed specifically for intensity images. This has led us to develop a new compression method for range images. It first scans the image so that the pixels are arranged into a sequence. It then approximates this sequence by straight-line segments within a user-specified maximum tolerance level. The extremities of the straight-line segments are non-redundant points (NRPs). Huffman coding, with a fixed Huffman tree, is used to encode the distance between NRPs and their altitudes. A plane-filling scanning technique, known as Peano scanning, is used to improve performance. The algorithm's performance is assessed on range images acquired from the Institute for Information Technology of the National Research Council of Canada. The proposed method performs better than JPEG for any given maximum tolerance level. The adaptive mode of the algorithm is also presented along with its performance assessment.
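A minimal sketch of the piecewise-linear idea described above: scan the range values into a sequence, greedily extend straight-line segments while the fit stays within a user-specified tolerance, and keep only the segment end points (the non-redundant points). The scan order, the greedy segmentation rule and the toy data below are illustrative assumptions rather than the exact algorithm of the thesis.

```python
import numpy as np

def nrp_segments(scan, tol):
    """Greedy piecewise-linear fit: return indices of non-redundant points (NRPs)
    such that linear interpolation between consecutive NRPs deviates from the
    scanned values by at most `tol`."""
    nrps, start = [0], 0
    for end in range(2, len(scan)):
        xs = np.arange(start, end + 1)
        line = np.interp(xs, [start, end], [scan[start], scan[end]])
        if np.max(np.abs(line - scan[start:end + 1])) > tol:
            nrps.append(end - 1)        # previous point closes the current segment
            start = end - 1
    nrps.append(len(scan) - 1)
    return nrps

# Toy "range image" scan line: a ramp followed by a flat plateau, with mild noise.
rng = np.random.default_rng(1)
row = np.concatenate([np.linspace(0, 50, 128), np.full(128, 80.0)])
row += rng.normal(0, 0.3, row.size)

nrps = nrp_segments(row, tol=2.0)
print(f"{row.size} samples -> {len(nrps)} non-redundant points")
```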
3

Lacroix, Bruno. "Fractal image compression". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ36939.pdf.

4

Aydinoğlu, Behçet Halûk. "Stereo image compression". Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15447.

5

Abdul-Amir, Said. "Digital image compression". Thesis, De Montfort University, 1985. http://hdl.handle.net/2086/10681.

Abstract:
Due to the rapid growth in information handling and transmission, there is a serious demand for more efficient data compression schemes. Compression schemes address themselves to speech, visual and alphanumeric coded data. This thesis is concerned with the compression of visual data given in the form of still or moving pictures. Such data is highly correlated spatially and in the context domain. A detailed study of some existing data compression systems is presented; in particular, the performance of DPCM was analysed by computer simulation, and the results examined both subjectively and objectively. The adaptive form of the prediction encoder is discussed and two new algorithms are proposed, which increase the definition of the compressed image and reduce the overall mean square error. Two novel systems are proposed for image compression. The first is a bit-plane image coding system based on a hierarchic quadtree structure in a transform domain, using the Hadamard transform as a kernel. Good compression has been achieved with this scheme, particularly for images with low detail. The second scheme uses a learning automaton to predict the probability distribution of the grey levels of an image related to its spatial context and position. An optimal reward/punishment function is proposed such that the automaton converges to its steady state within 4000 iterations. Such a high speed of convergence, together with Huffman coding, results in efficient compression for images and is shown to be applicable to other types of data. The performance of all the proposed systems has been evaluated by computer simulation and the results presented both quantitatively and qualitatively. The advantages and disadvantages of each system are discussed and suggestions for improvement given.
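For context on the transform kernel mentioned above, the sketch below applies a separable 2-D Hadamard transform to an 8 × 8 block and inverts it exactly. The Sylvester construction and the block size are illustrative assumptions; the hierarchic quadtree bit-plane coding built on top of the transform is not modeled here.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

# Forward and inverse 2-D Hadamard transform of an 8x8 image block.
block = np.arange(64, dtype=float).reshape(8, 8)   # illustrative block
H = hadamard(8)
coeffs = H @ block @ H.T / 8.0                     # scaling uses H @ H.T = 8 * I
restored = H.T @ coeffs @ H / 8.0

print(np.allclose(restored, block))                # exact reconstruction
print(coeffs[:2, :2])                              # low-order (coarse) coefficients
```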
6

Hallidy, William H. Jr, and Michael Doerr. "HYPERSPECTRAL IMAGE COMPRESSION". International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/608744.

Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Systems & Processes Engineering Corporation (SPEC) compared compression and decompression algorithms and developed optimal forms of lossless and lossy compression for hyperspectral data. We examined the relationship between compression-induced distortion and additive noise, determined the effect of errors on the compressed data, and showed that the data could separate targets from clutter after more than 50:1 compression.
7

Hernández-Cabronero, Miguel. "DNA Microarray Image Compression". Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/297706.

Abstract:
In DNA microarray experiments, two grayscale images are produced. It is convenient to save these images for future, more accurate re-analysis. Thus, image compression emerges as a particularly useful tool to alleviate the associated storage and transmission costs. This dissertation aims at improving the state of the art of the compression of DNA microarray images. A thorough investigation of the characteristics of DNA microarray images has been performed as a part of this work. Results indicate that algorithms not adapted to DNA microarray images typically attain only mediocre lossless compression results due to the image characteristics. By analyzing the first-order and conditional entropy present in these images, it is possible to determine approximate limits to their lossless compressibility. Even though context-based coding and segmentation provide modest improvements over generic-purpose algorithms, conceptual breakthroughs in data coding are arguably required to achieve compression ratios exceeding 2:1 for most images. Prior to the start of this thesis, several lossless coding algorithms with performance close to the aforementioned limit had been published. However, none of them is compliant with existing image compression standards. Hence, the availability of decoders in future platforms - a requisite for future re-analysis - is not guaranteed. Moreover, adhesion to standards is usually a requisite in clinical scenarios. To address these problems, a fast reversible transform compatible with the JPEG2000 standard - the Histogram Swap Transform (HST) - is proposed. The HST improves the average compression performance of JPEG2000 for all tested image corpora, with gains ranging from 1.97% to 15.53%. Furthermore, this transform can be applied with only negligible time complexity overhead. With the HST, JPEG2000 becomes arguably the most competitive standard alternative to microarray-specific, non-standard compressors. The similarities among sets of microarray images have also been studied as a means to improve the compression performance of standard and microarray-specific algorithms. An optimal grouping of the images which maximizes the inter-group correlation is described. Average correlations between 0.75 and 0.92 are observed for the tested corpora. Thorough experimental results suggest that spectral decorrelation transforms can improve some lossless coding results by up to 0.6 bpp, although no single transform is effective for all corpora. Lossy coding algorithms can yield almost arbitrary compression ratios at the cost of modifying the images and, thus, of distorting subsequent analysis processes. If the introduced distortion is smaller than the inherent experimental variability, it is usually considered acceptable. Hence, the use of lossy compression is justified on the assumption that the analysis distortion is assessed. In this work, a distortion metric for DNA microarray images is proposed to predict the extent of this distortion without needing a complete re-analysis of the modified images. Experimental results suggest that this metric is able to tell apart image changes that affect subsequent analysis from image modifications that do not. Although some lossy coding algorithms were previously described for this type of images, none of them is specifically designed to minimize the impact on subsequent analysis for a given target bitrate.
In this dissertation, a lossy coder - the Relative Quantizer (RQ) coder - that improves upon the rate-distortion results of previously published methods is proposed. Experiments suggest that compression ratios exceeding 4.5:1 can be achieved while introducing distortions smaller than half the inherent experimental variability. Furthermore, a lossy-to-lossless extension of this coder - the Progressive RQ (PRQ) coder - is also described. With the PRQ, images can be compressed once and then reconstructed at different quality levels, including lossless reconstruction. In addition, the competitive rate-distortion results of the RQ and PRQ coders can be obtained with computational complexity slightly smaller than that of the best-performing lossless coder for DNA microarray images.
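The first-order entropy limit mentioned above can be made concrete: the entropy of the pixel-value histogram lower-bounds, in bits per pixel, what any memoryless lossless code can achieve. Below is a minimal sketch on a synthetic 16-bit image that merely stands in for real microarray data, so the printed numbers do not reproduce the thesis results.

```python
import numpy as np

def first_order_entropy(img):
    """Empirical entropy of the pixel value distribution, in bits per pixel."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic stand-in for a 16-bit microarray image: dark background plus sparse bright spots.
rng = np.random.default_rng(0)
img = rng.poisson(200, size=(512, 512)).astype(np.uint16)          # background
spots = rng.random((512, 512)) < 0.01
img[spots] += rng.poisson(20000, size=spots.sum()).astype(np.uint16)

h = first_order_entropy(img)
print(f"first-order entropy: {h:.2f} bpp "
      f"-> best memoryless ratio about {16 / h:.2f}:1 for 16-bit pixels")
```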
8

Agostini, Luciano Volcan. "Projeto de arquiteturas integradas para a compressão de imagens JPEG". Biblioteca Digital de Teses e Dissertações da UFRGS, 2002. http://hdl.handle.net/10183/11431.

Abstract:
This dissertation presents the design of architectures for JPEG image compression. Architectures for a grayscale-image JPEG compressor are presented, and the work also addresses a color-image JPEG compressor and a color space converter. The designed architectures are described in detail; they were completely described in VHDL, with synthesis directed at the Altera Flex10KE family of FPGAs. The integrated architecture for the grayscale-image JPEG compressor has a minimum latency of 237 clock cycles and processes a 640 × 480 pixel image in 18.5 ms, allowing a processing rate of 54 images per second. The compression ratio, according to estimates, would be about 6.2 times, or 84% in terms of bit reduction. The integrated architecture for color-image JPEG compression was generated through incremental changes to the grayscale compressor architecture. This architecture also has a minimum latency of 237 clock cycles and can process a 640 × 480 pixel color image in 54.4 ms, allowing a processing rate of 18.4 images per second. The compression ratio, according to estimates, would be about 14.4 times, or 93% in terms of bit reduction. The architecture for the RGB to YCbCr color space converter has a latency of 6 clock cycles and is able to process a 640 × 480 pixel color image in 84.6 ms, allowing a processing rate of 11.8 images per second. This architecture was not integrated with the color-image compressor architecture, but some suggestions, alternatives and estimates were made in this direction.
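The color space converter described above performs an RGB to YCbCr conversion. The sketch below shows the standard full-range conversion commonly used in JPEG (JFIF) pipelines as a floating-point reference; the fixed-point arithmetic and exact coefficients of the hardware architecture in the dissertation are not modeled.

```python
import numpy as np

# Standard full-range RGB -> YCbCr matrix used in JPEG (JFIF) pipelines.
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """rgb: H x W x 3 array of 8-bit values; returns Y, Cb, Cr in [0, 255]."""
    ycc = rgb.astype(float) @ M.T
    ycc[..., 1:] += 128.0            # center the chroma channels
    return np.clip(np.round(ycc), 0, 255).astype(np.uint8)

# 640 x 480 test frame, the image size quoted in the abstract.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
ycbcr = rgb_to_ycbcr(frame)
print(ycbcr.shape, ycbcr.dtype)
```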
9

Nicholl, Peter Nigel. "Feature directed spiral image compression : (a new technique for lossless image compression)". Thesis, University of Ulster, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339326.

10

Mandal, Mrinal Kumar. "Wavelets for image compression". Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10277.

Abstract:
Wavelets are becoming increasingly important in image compression applications because of their flexibility in representing nonstationary signals. To achieve a high compression ratio, the wavelet has to be adapted to the image. Current techniques use computationally intensive exhaustive search procedures to find the optimal basis (type/order/tree) for the image to be coded. In this thesis, we have carried out an extensive performance analysis of various wavelets on a wide variety of images. Based on this investigation, we propose guidelines for selecting the optimal wavelet (type/order) based on the overall activity (measured by the spectral flatness) of the image to be coded. These guidelines also indicate the degree of improvement that can be achieved by using the "optimal" rather than "standard" wavelets. The proposed guidelines can be used to find a good initial guess for faster convergence when a search for the optimal wavelet is essential. We propose a wave packet decomposition algorithm based on the local transform gain of the wavelet-decomposed bands. The proposed algorithm provides good coding performance at significantly reduced complexity. Most practical coders are designed to minimize the mean square error (MSE) between the original and reconstructed image. It is known that at high compression ratios, MSE does not correspond well to the subjective quality of the image. In this thesis, we propose an image-adaptive coding algorithm which tries to minimize the MSE weighted by the visual importance of the various wavelet bands. It has been observed that the proposed algorithm provides better coding performance for a wide variety of images.
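Since the proposed guidelines key on the overall activity of the image as measured by spectral flatness, a minimal sketch of one common definition of spectral flatness (geometric over arithmetic mean of the power spectrum) is given below; the exact activity measure used in the thesis may differ.

```python
import numpy as np

def spectral_flatness(img):
    """Spectral flatness measure: ratio of the geometric to the arithmetic mean
    of the image power spectrum (near 1 = noise-like, near 0 = highly correlated)."""
    psd = np.abs(np.fft.fft2(img - img.mean())) ** 2
    psd = psd.ravel()
    psd = psd[psd > 0]
    geometric = np.exp(np.mean(np.log(psd)))
    return float(geometric / psd.mean())

rng = np.random.default_rng(0)
noise = rng.standard_normal((256, 256))                  # high-activity image
smooth = np.cumsum(np.cumsum(noise, axis=0), axis=1)     # strongly correlated image
print("noise :", spectral_flatness(noise))
print("smooth:", spectral_flatness(smooth))
```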
11

Jiang, Qin. "Stereo image sequence compression". Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/15634.

12

Fawcett, Roger James. "Efficient practical image compression". Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365711.

13

Rajpoot, Nasir Mahmood. "Adaptive wavelet image compression". Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/67099/.

Abstract:
In recent years, there has been an explosive increase in the amount of digital image data. The requirements for its storage and communication can be reduced considerably by compressing the data while maintaining their visual quality. The work in this thesis is concerned with the compression of still images using fixed and adaptive wavelet transforms. The wavelet transform is a suitable candidate for representing an image in a compression system, due to its being an efficient representation, having an inherent multiresolution nature, and possessing a self-similar structure which lends itself to efficient quantization strategies using zerotrees. The properties of wavelet transforms are studied from a compression viewpoint. A novel augmented zerotree wavelet image coding algorithm is presented whose compression performance is comparable to the best wavelet coding results published to date. It is demonstrated that a wavelet image coder performs much better on images consisting of smooth regions than on relatively complex images. The need thus arises to explore wavelet bases whose time-frequency tiling is adapted to a given signal, in such a way that the resulting waveforms closely resemble those present in the signal and consequently result in a sparse representation, suitable for compression purposes. Various issues related to a generalized wavelet basis adapted to the signal or image contents, the so-called best wavelet packet basis, and its selection are addressed. A new method for wavelet packet basis selection is presented, which aims to unite the basis selection process with the quantization strategy to achieve better compression performance. A general zerotree structure for any arbitrary wavelet packet basis, termed the compatible zerotree structure, is presented. The new basis selection method is applied to compatible zerotree quantization to obtain a progressive wavelet packet coder, which shows significant coding gains over its wavelet counterpart on test images of diverse nature.
14

Whitehouse, Steven John. "Error resilient image compression". Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621935.

15

Penrose, Andrew John. "Extending lossless image compression". Thesis, University of Cambridge, 1999. https://www.repository.cam.ac.uk/handle/1810/272288.

16

Shaban, Osama M. N. "Image compression using local image visual activities". Thesis, De Montfort University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391590.

17

Hague, Darren S. "Neural networks for image data compression : improving image quality for auto-associative feed-forward image compression networks". Thesis, Brunel University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262478.

18

Wyllie, Michael. "A comparative quantitative approach to digital image compression". Huntington, WV : [Marshall University Libraries], 2006. http://www.marshall.edu/etd/descript.asp?ref=719.

19

Lee, Jungwon. "Efficient image compression system using a CMOS transform imager". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31825.

Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Anderson, David; Committee Member: Dorsey, John; Committee Member: Hasler, Paul; Committee Member: Kang, Sung Ha; Committee Member: Romberg, Justin. Part of the SMARTech Electronic Thesis and Dissertation Collection.
20

Tummala, Sai Virali, and Veerendra Marni. "Comparison of Image Compression and Enhancement Techniques for Image Quality in Medical Images". Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15360.

21

Moreno, Escobar Jesús Jaime. "Perceptual Criteria on Image Compression". Doctoral thesis, Universitat Autònoma de Barcelona, 2011. http://hdl.handle.net/10803/51428.

Abstract:
Nowadays, digital images are used in many areas of everyday life, but they tend to be big. This increasing amount of information leads us to the problem of image data storage. For example, it is common to represent a color pixel as a 24-bit number, where the red, green, and blue channels employ 8 bits each. In consequence, this kind of color pixel can specify one of 2^24 ≈ 16.78 million colors. Therefore, an image at a resolution of 512 × 512 that allocates 24 bits per pixel occupies 786,432 bytes. That is why image compression is important. An important feature of image compression is that it can be lossy or lossless. A compressed image is acceptable provided the losses of image information are not perceived by the eye. It is possible to assume that a portion of this information is redundant. Lossless image compression is defined as mathematically decoding the same image which was encoded. Lossy image compression needs to identify two features inside the image: the redundancy and the irrelevancy of information. Thus, lossy compression modifies the image data in such a way that when they are encoded and decoded, the recovered image is similar enough to the original one. How similar the recovered image is in comparison to the original image is defined prior to the compression process, and it depends on the implementation to be performed. In lossy compression, current image compression schemes remove information considered irrelevant by using mathematical criteria. One of the problems of these schemes is that although the numerical quality of the compressed image is low, it shows a high visual image quality, e.g. it does not show a lot of visible artifacts. This is because the mathematical criteria used to remove information do not take into account the visual information perceived by the Human Visual System. Therefore, the aim of an image compression scheme designed to obtain images that do not show artifacts, although their numerical quality can be low, is to eliminate the information that is not visible to the Human Visual System. Hence, this Ph.D. thesis proposes to exploit the visual redundancy existing in an image by reducing those features that are unperceivable by the Human Visual System. First, we define an image quality assessment which is highly correlated with the psychophysical experiments performed by human observers. The proposed CwPSNR metric weights the well-known PSNR by using a particular perceptual low-level model of the Human Visual System, the Chromatic Induction Wavelet Model (CIWaM). Second, we propose an image compression algorithm (called Hi-SET), which exploits the high correlation and self-similarity of pixels in a given area or neighborhood by means of a fractal function. Hi-SET possesses the main features that modern image compressors have, that is, it is an embedded coder, which allows a progressive transmission. Third, we propose a perceptual quantizer (½SQ), which is a modification of the uniform scalar quantizer. The ½SQ is applied to a pixel set in a certain wavelet sub-band, that is, a global quantization. In contrast, the proposed modification allows a local pixel-by-pixel forward and inverse quantization, introducing into this process a perceptual distortion which depends on the surrounding spatial information of the pixel. Combining the ½SQ method with the Hi-SET image compressor, we define a perceptual image compressor, called ©SET.
Finally, a coding method for Region of Interest areas is presented, ½GBbBShift, which perceptually weights pixels in these areas and maintains only the most important perceivable features in the rest of the image. Results presented in this report show that CwPSNR is the best-ranked image quality method when it is applied to the most common image compression distortions such as JPEG and JPEG2000. CwPSNR shows the best correlation with the judgement of human observers, which is based on the results of psychophysical experiments obtained for relevant image quality databases such as TID2008, LIVE, CSIQ and IVC. Furthermore, the Hi-SET coder obtains better results both for compression ratios and perceptual image quality than the JPEG2000 coder and other coders that use a Hilbert fractal for image compression. Hence, when the proposed perceptual quantization is introduced into the Hi-SET coder, our compressor improves its numerical and perceptual efficiency. When the ½GBbBShift method applied to Hi-SET is compared against the MaxShift method applied to the JPEG2000 standard and to Hi-SET, the images coded by our ROI method get the best results when the overall image quality is estimated. Both the proposed perceptual quantization and the ½GBbBShift method are generalized algorithms that can be applied to other wavelet-based image compression algorithms such as JPEG2000, SPIHT or SPECK.
22

Zhang, Kui. "Knowledge based image sequence compression". Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/843195/.

Abstract:
In this thesis, most commonly encountered video compression techniques and international coding standards are studied. The study leads to the idea of a reconfigurable codec which can adapt itself to the specific requirements of diverse applications so as to achieve improved performance. Firstly, we propose a multiple layer affine motion compensated codec which acts as a basic building block of the reconfigurable multiple tool video codec. A detailed investigation of the properties of the proposed codec is carried out. The experimental results reveal that the gain in coding efficiency from improved motion prediction and segmentation is proportional to the spatial complexity of the sequence being encoded. Secondly, a framework for the reconfigurable multiple tool video codec is developed and its key parts are discussed in detail. Two important concepts virtual codec and virtual tool are introduced. A prototype of the proposed reconfigurable multiple tool video codec is implemented. The codec structure and the constituent tools of the codec included in the prototype are extensively tested and evaluated to prove the concept. The results confirm that different applications require different codec configurations to achieve optimum performance. Thirdly, a knowledge based tool selection system for the reconfigurable codec is proposed and developed. Human knowledge as well as sequence properties are taken into account in the tool selection procedure. It is shown that the proposed tool selection mechanism gives promising results. Finally, concluding remarks are offered and future research directions are suggested.
23

Lin, Huawu. "Fractal image compression using pyramids". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ27682.pdf.

24

Gandhi, Sonia. "ENO interpolation for image compression". Diss., Connect to online resource, 2005. http://wwwlib.umi.com/cr/colorado/fullcit?p1425778.

25

Gorley, Paul Ward. "Metrics for stereoscopic image compression". Thesis, Durham University, 2012. http://etheses.dur.ac.uk/3471/.

Abstract:
Metrics for automatically predicting the compression settings for stereoscopic images are investigated, with the aim of minimizing file size while still maintaining an acceptable level of image quality. This research evaluates whether symmetric or asymmetric compression produces a better quality of stereoscopic image. Initially, it was investigated how Peak Signal to Noise Ratio (PSNR) measures the quality of variously compressed stereoscopic image pairs. Two trials with human subjects, following the ITU-R BT.500-11 Double Stimulus Continuous Quality Scale (DSCQS), were undertaken to measure the quality of symmetric and asymmetric stereoscopic image compression. Computational models of the Human Visual System (HVS) were then investigated and a new stereoscopic image quality metric designed and implemented. The metric point-matches regions of high spatial frequency between the left and right views of the stereo pair and accounts for HVS sensitivity to contrast and luminance changes in these regions. The PSNR results show that symmetric, as opposed to asymmetric, stereo image compression produces significantly better results. The human factors trial suggested that, in general, symmetric compression of stereoscopic images should be used. The new metric, Stereo Band Limited Contrast, has been demonstrated to be a better predictor of human image quality preference than PSNR and can be used to predict a perceptual threshold level for stereoscopic image compression. The threshold is the maximum compression that can be applied without the perceived image quality being altered. Overall, it is concluded that symmetric, as opposed to asymmetric, stereo image encoding should be used for stereoscopic image compression. As PSNR measures of image quality are rightly criticized for correlating poorly with perceived visual quality, the new HVS-based metric was developed. This metric produces a useful threshold providing a practical starting point for deciding the level of compression to use.
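PSNR is the baseline metric in the study above. A minimal sketch of the standard definition, evaluated separately on the left and right views of an illustrative synthetic stereo pair with different amounts of coding noise per view, mimicking asymmetric compression:

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Synthetic stereo pair: the right view carries stronger "coding" noise than the left.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (480, 640), dtype=np.uint8)
right = left.copy()
left_coded = np.clip(left + rng.normal(0, 2, left.shape), 0, 255).astype(np.uint8)
right_coded = np.clip(right + rng.normal(0, 8, right.shape), 0, 255).astype(np.uint8)

print("left  PSNR:", round(psnr(left, left_coded), 2), "dB")
print("right PSNR:", round(psnr(right, right_coded), 2), "dB")
```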
26

Byrne, James. "Texture synthesis for image compression". Thesis, University of Bristol, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574259.

Abstract:
Still image compression methods have changed little over the last ten years. Meanwhile, the quantity of content transmitted over limited bandwidth channels has increased dramatically. The currently available methods are content agnostic: that is, they use the same compression process independent of the content at any given spatial location. Region-specific coding provides one possible route to increased compression performance. Texture regions in particular are usually not conceptually important to a viewer of an image, but the high-frequency nature of such regions consumes many bits when encoding. Texture synthesis is the process of generating textures from a sample or parameter set, and thus if these texture regions can be encoded by specifying texture synthesis at the decoder, it may be possible to save large amounts of data without detriment to the decoded image quality. This thesis presents a number of adaptations to the Graphcut patch-based texture synthesis method to make it suitable for constrained synthesis of texture regions in natural images. These include a colour matching process to account for luminance and chrominance changes over the texture region, and a modification to allow constrained synthesis of an arbitrarily shaped region. This architecture is then integrated into two complete image compression by synthesis systems, based on JPEG and JPEG2000 respectively. In each case the image is segmented and analysed, and synthesis occurs at the decoder to fill in removed texture regions. In the system based on JPEG2000, a feedback loop is included which makes some assessment of the quality of the synthesis at the encoder, in order to adapt the synthesis parameters to improve the result quality, or to skip synthesis entirely if deemed necessary. The results of these systems show some promise in that substantial savings can be made over transform-coded images coded at the same Q value as the residual image. However, it is observed that synthesis can be detrimental to the quality of the image in comparison to an equivalent traditionally coded image at the same bitrate. Two methods of texture orientation analysis for non-homogeneous textures are presented. One of these in particular produces a good assessment of the texture orientation; this method uses a Steerable Pyramid transform to analyse the orientations. Then, two methods of sample selection and synthesis using the analysed texture orientation are presented. These methods aim to recreate the original texture's orientation variation from a smaller texture sample and the orientation map. The best of these methods selects one or more samples containing multiple orientations and selects texture patches appropriately oriented to the current location of synthesis.
27

Sahandi, M. R. "Image compression using vector encoding". Thesis, University of Bradford, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379796.

28

Karki, Maya, H. N. Shivashankar and R. K. Rajangam. "IMAGE DATA COMPRESSION (USING DPCM)". International Foundation for Telemetering, 1991. http://hdl.handle.net/10150/612163.

Abstract:
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Advances in computer technology and mass storage have paved the way for implementing advanced data compression techniques to improve the efficiency of transmission and storage of images. The present paper deals with the development of a data compression algorithm suitable for images received from satellites. A compression ratio of 1.91:1 is achieved with the proposed technique. The technique used is 1-D DPCM coding. Hardware relevant to the coder has also been proposed.
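A minimal sketch of first-order 1-D DPCM with previous-sample prediction and uniform quantization of the prediction error is shown below; the abstract does not specify the exact predictor or quantizer of the proposed coder, so these choices are illustrative assumptions.

```python
import numpy as np

def dpcm_encode(row, step=4):
    """First-order 1-D DPCM: predict each pixel from the previous reconstructed
    pixel and uniformly quantize the prediction error (closed-loop encoder)."""
    residuals = np.empty(row.size, dtype=int)
    prev = 0.0
    for i, x in enumerate(row.astype(float)):
        q = int(round((x - prev) / step))        # quantized prediction error
        residuals[i] = q
        prev = np.clip(prev + q * step, 0, 255)  # track the decoder's reconstruction
    return residuals

def dpcm_decode(residuals, step=4):
    out = np.empty(residuals.size)
    prev = 0.0
    for i, q in enumerate(residuals):
        prev = np.clip(prev + q * step, 0, 255)
        out[i] = prev
    return out.astype(np.uint8)

row = (128 + 60 * np.sin(np.linspace(0, 6, 512))).astype(np.uint8)   # toy scan line
rec = dpcm_decode(dpcm_encode(row))
print("max reconstruction error:",
      int(np.max(np.abs(rec.astype(int) - row.astype(int)))))
```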
29

Iyer, Lakshmi Ramachandran. "Image Compression Using Balanced Multiwavelets". Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/33748.

Abstract:
The success of any transform coding technique depends on how well the basis functions represent the signal features. The discrete wavelet transform (DWT) performs a multiresolution analysis of a signal; this enables an efficient representation of smooth and detailed signal regions. Furthermore, computationally efficient algorithms exist for computing the DWT. For these reasons, recent image compression standards such as JPEG2000 use the wavelet transform. It is well known that orthogonality and symmetry are desirable transform properties in image compression applications. It is also known that the scalar wavelet transform does not possess both properties simultaneously. Multiwavelets overcome this limitation; the multiwavelet transform allows orthogonality and symmetry to co-exist. However recently reported image compression results indicate that the scalar wavelets still outperform the multiwavelets in terms of peak signal-to-noise ratio (PSNR). In a multiwavelet transform, the balancing order of the multiwavelet is indicative of its energy compaction efficiency (usually a higher balancing order implies lower mean-squared-error, MSE, in the compressed image). But a high balancing order alone does not ensure good image compression performance. Filter bank characteristics such as shift-variance, magnitude response, symmetry and phase response are important factors that also influence the MSE and perceived image quality. This thesis analyzes the impact of these multiwavelet characteristics on image compression performance. Our analysis allows us to explain---for the first time---reasons for the small performance gap between the scalar wavelets and multiwavelets. We study the characteristics of five balanced multiwavelets (and 2 unbalanced multiwavelets) and compare their image compression performance for grayscale images with the popular (9,7)-tap and (22,14)-tap biorthogonal scalar wavelets. We use the well-known SPIHT quantizer in our compression scheme and utilize PSNR and subjective quality measures to assess performance. We also study the effect of incorporating a human visual system (HVS)-based transform model in our multiwavelet compression scheme. Our results indicate those multiwavelet properties that are most important to image compression. Moreover, the PSNR and subjective quality results depict similar performance for the best scalar wavelets and multiwavelets. Our analysis also shows that the HVS-based multiwavelet transform coder considerably improves perceived image quality at low bit rates.
Master of Science
30

Mellin, Fredrik. "Introduction to Fractal Image Compression". Thesis, Uppsala universitet, Analys och sannolikhetsteori, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444400.

31

Hong, Edwin S. "Group testing for image compression /". Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/6900.

32

Xiao, Panrong. "Image compression by wavelet transform". [Johnson City, Tenn. : East Tennessee State University], 2001. http://etd-submit.etsu.edu/etd/theses/available/etd-0711101-121206/unrestricted/xiaop0720.pdf.

33

Oh, Han. "Perceptual Image Compression using JPEG2000". Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/202996.

Abstract:
Image sizes have increased exponentially in recent years. The resulting high-resolution images are typically encoded in a lossy fashion to achieve high compression ratios. Lossy compression can be categorized into visually lossless and visually lossy compression depending on the visibility of compression artifacts. This dissertation proposes visually lossless coding methods as well as a visually lossy coding method with perceptual quality control. All resulting codestreams are JPEG2000 Part-I compliant. Visually lossless coding is increasingly considered as an alternative to numerically lossless coding. In order to hide compression artifacts caused by quantization, visibility thresholds (VTs) are measured and used for quantization of subbands in JPEG2000. In this work, VTs are experimentally determined from statistically modeled quantization distortion, which is based on the distribution of wavelet coefficients and the dead-zone quantizer of JPEG2000. The resulting VTs are adjusted for locally changing background through a visual masking model, and then used to determine the minimum number of coding passes to be included in a codestream for visually lossless quality under desired viewing conditions. The proposed coding scheme successfully yields visually lossless images at competitive bitrates compared to those of numerically lossless coding and visually lossless algorithms in the literature. This dissertation also investigates changes in VTs as a function of display resolution and proposes a method which effectively incorporates multiple VTs for various display resolutions into the JPEG2000 framework. The proposed coding method allows for visually lossless decoding at resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely, this method can significantly reduce bandwidth usage. Contrary to images encoded in the visually lossless manner, highly compressed images inevitably have visible compression artifacts. To minimize these artifacts, many compression algorithms exploit the varying sensitivity of the human visual system (HVS) to different frequencies, which is typically obtained at the near-threshold level where distortion is just noticeable. However, it is unclear whether the same frequency sensitivity applies at the supra-threshold level where distortion is highly visible. In this dissertation, the sensitivity of the HVS for several supra-threshold distortion levels is measured based on the JPEG2000 quantization distortion model. Then, a low-complexity JPEG2000 encoder using the measured sensitivity is described. The proposed visually lossy encoder significantly reduces encoding time while maintaining superior visual quality compared with conventional JPEG2000 encoders.
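The quantizer referenced above is the dead-zone uniform scalar quantizer of JPEG2000, whose step size would be set per subband from a visibility threshold. A minimal sketch of dead-zone quantization and mid-point dequantization follows; the visibility-threshold measurement itself is not modeled, and the Laplacian coefficient model and step size are illustrative assumptions.

```python
import numpy as np

def deadzone_quantize(coeffs, step):
    """Dead-zone scalar quantization: sign-magnitude indices with a central
    dead zone of width 2*step around zero."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def deadzone_dequantize(indices, step, r=0.5):
    """Mid-point (r = 0.5) reconstruction of dead-zone quantized coefficients."""
    return np.sign(indices) * (np.abs(indices) + r) * step * (indices != 0)

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=5.0, size=10000)   # wavelet-like coefficient statistics
step = 2.0                                    # in practice set from a visibility threshold
q = deadzone_quantize(coeffs, step)
rec = deadzone_dequantize(q, step)

# Error stays below `step` inside the dead zone and below step/2 elsewhere.
print("max abs error:", np.max(np.abs(rec - coeffs)))
```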
34

Wakefield, Paul D. "Aspects of fractal image compression". Thesis, University of Bath, 1999. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285300.

35

Dumas, Thierry. "Deep learning for image compression". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S029/document.

Abstract:
Over the last twenty years, the amount of transmitted images and videos has increased noticeably, mainly driven by Facebook and Netflix. Even though broadcast capacities improve, this growing amount of transmitted images and videos requires increasingly efficient compression methods. This thesis aims at improving, via learning, two critical components of modern image compression standards: the transform and the intra prediction. More precisely, deep neural networks are used for this task as they exhibit a high power of approximation, which is needed for learning a reliable approximation of an optimal transform (or an optimal intra prediction filter) applied to image pixels. Regarding the learning of a transform for image compression via neural networks, a challenge is to learn a unique transform that is efficient in terms of rate-distortion while keeping this efficiency when compressing at different rates. That is why two approaches are proposed to take on this challenge. In the first approach, the neural network architecture imposes a sparsity constraint on the transform coefficients. The level of sparsity gives direct control over the compression rate. To force the transform to adapt to different compression rates, the level of sparsity is stochastically driven during the training phase. In the second approach, the rate-distortion efficiency is obtained by minimizing a rate-distortion objective function during the training phase. During the test phase, the quantization step sizes are gradually increased according to a schedule to compress at different rates using the single learned transform. Regarding the learning of an intra prediction filter for image compression via neural networks, the issue is to obtain a learned filter that is adaptive with respect to the size of the image block to be predicted, with respect to missing information in the context of prediction, and with respect to the variable quantization noise in this context. A set of neural networks is designed and trained so that the learned prediction filter has this adaptability.
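The second approach above trains against a rate-distortion objective and then sweeps quantization step sizes at test time. The sketch below evaluates the same kind of objective, J = D + lambda * R, for several step sizes on the DCT coefficients of a synthetic image, using the empirical entropy of the quantization indices as a rate proxy; the fixed DCT stands in for the learned transform, and lambda and the step sizes are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def entropy_bits(symbols):
    """Empirical entropy of quantization indices, in bits per coefficient."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
img = np.cumsum(np.cumsum(rng.standard_normal((256, 256)), 0), 1)  # smooth test image
C = dct_matrix(256)
coeffs = C @ img @ C.T

lam = 10.0                                   # illustrative rate-distortion weight (lambda)
for step in (0.5, 2.0, 8.0, 32.0):           # sweeping the quantization step size
    q = np.round(coeffs / step)
    rec = C.T @ (q * step) @ C
    D = np.mean((rec - img) ** 2)            # distortion (MSE)
    R = entropy_bits(q)                      # rate proxy (bits per coefficient)
    print(f"step={step:5.1f}  D={D:9.3f}  R={R:5.2f} bpp  J={D + lam * R:9.3f}")
```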
Style APA, Harvard, Vancouver, ISO itp.
36

Nolte, Ernst Hendrik. "Image compression quality measurement : a comparison of the performance of JPEG and fractal compression on satellite images". Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51796.

Pełny tekst źródła
Streszczenie:
Thesis (MEng)--Stellenbosch University, 2000.
The purpose of this thesis is to investigate the nature of digital image compression and the calculation of the quality of compressed images. The work focuses on greyscale images in the domain of satellite images and aerial photographs. Two compression techniques are studied in detail, namely the JPEG and fractal compression methods. Implementations of both techniques are applied to a set of test images. The rest of the thesis is dedicated to measuring the loss of quality introduced by the compression. A general method for quality measurement (the Signal-to-Noise Ratio) is discussed, as well as a technique presented in the literature fairly recently (the Grey Block Distance). A new measure is then presented, followed by a means of comparing the performance of these measures. It was found that the new measure for image quality estimation performed marginally better than the SNR algorithm. Lastly, some possible improvements to this technique are mentioned and the validity of the method used for comparing the quality measures is discussed.
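For reference, a minimal sketch of the classical Signal-to-Noise Ratio (and the related PSNR) that the thesis uses as its baseline quality measure; the new measure and the Grey Block Distance are not reproduced here, and the noisy test image is only a stand-in for a compressed one.

```python
# Minimal sketch of the classical quality measures the thesis benchmarks against.
import numpy as np

def snr_db(original, degraded):
    """Signal-to-Noise Ratio in dB between two greyscale images."""
    noise_power = np.mean((original - degraded) ** 2)
    signal_power = np.mean(original ** 2)
    return 10.0 * np.log10(signal_power / noise_power)

def psnr_db(original, degraded, peak=255.0):
    mse = np.mean((original - degraded) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, size=(64, 64))
degraded = img + rng.normal(0, 5, size=img.shape)   # stand-in for compression loss
print(f"SNR  = {snr_db(img, degraded):.1f} dB")
print(f"PSNR = {psnr_db(img, degraded):.1f} dB")
```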
Style APA, Harvard, Vancouver, ISO itp.
37

Almshaal, Rashwan M. "Sparse Signal Processing Based Image Compression and Inpainting". VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4286.

Pełny tekst źródła
Streszczenie:
In this thesis, we investigate the application of compressive sensing and sparse signal processing techniques to image compression and inpainting problems. Considering that many signals are sparse in certain transform domains, a natural question to ask is: can an image be represented by as few coefficients as possible? We propose a new model for image compression/decompression based on sparse representation. We construct an overcomplete dictionary by combining two transform matrices, the discrete cosine transform (DCT) matrix and the Hadamard-Walsh transform (HWT) matrix, instead of the single transform matrix used by common compression techniques such as JPEG and JPEG2000. We analyze the Structural Similarity Index (SSIM) versus the number of coefficients, measured by the Normalized Sparse Coefficient Rate (NSCR), for our approach. We observe that, at the same NSCR, the SSIM of images compressed with the proposed approach is between 4% and 17% higher than with JPEG. Several algorithms have been used for sparse coding; based on experimental results, Orthogonal Matching Pursuit (OMP) proves the most efficient in terms of computational time and the quality of the decompressed image. In addition, based on compressive sensing techniques, we propose an image inpainting approach that can fill in missing pixels and reconstruct damaged images. In this approach, we use the Gradient Projection for Sparse Reconstruction (GPSR) algorithm and a wavelet transform with Daubechies filters to reconstruct damaged images from the information available in the original image. Experimental results show that our approach outperforms existing image inpainting techniques in terms of computational time, with reasonably good image reconstruction performance.
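A rough sketch of the dictionary construction and OMP coding described above, simplified to flattened 8x8 blocks with a 1D DCT basis; the block data, the sparsity budget and the use of scipy/scikit-learn are illustrative assumptions rather than the thesis' implementation.

```python
# Sketch of an overcomplete DCT + Hadamard-Walsh dictionary with OMP coding,
# simplified to flattened 8x8 blocks (64-vectors) and a 1D DCT basis.
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard
from sklearn.linear_model import orthogonal_mp

N = 64                                                    # flattened 8x8 block
dct_atoms = dct(np.eye(N), axis=0, norm='ortho').T        # orthonormal DCT atoms as columns
hwt_atoms = hadamard(N) / np.sqrt(N)                      # unit-norm Hadamard-Walsh atoms
dictionary = np.hstack([dct_atoms, hwt_atoms])            # 64 x 128 overcomplete dictionary

rng = np.random.default_rng(2)
block = rng.uniform(0, 255, size=N)                       # stand-in image block

k = 10                                                    # sparsity budget (the NSCR knob)
coeffs = orthogonal_mp(dictionary, block, n_nonzero_coefs=k)
reconstruction = dictionary @ coeffs
print("nonzeros:", np.count_nonzero(coeffs),
      "MSE:", np.mean((block - reconstruction) ** 2))
```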
Style APA, Harvard, Vancouver, ISO itp.
38

Nelson, Christopher. "Contour encoded compression and transmission /". Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1613.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

Roman-Gonzalez, Avid. "Compression Based Analysis of Image Artifacts: Application to Satellite Images". PhD thesis, Telecom ParisTech, 2013. http://tel.archives-ouvertes.fr/tel-00935029.

Pełny tekst źródła
Streszczenie:
This thesis aims at the automatic detection of artifacts in optical satellite images, such as aliasing, A/D conversion problems, striping, and compression noise; in short, all blemishes that would not appear in an undistorted image. Artifact detection in Earth observation images becomes increasingly difficult as image resolution improves. For images of low, medium or high resolution, the artifact signatures differ sufficiently from the useful signal to be characterized as distortions; when the resolution improves, however, the artifacts have, in terms of signal theory, a signature similar to that of the interesting objects in an image. Although detecting artifacts in very high resolution images is harder, we need analysis tools that work properly without impeding the extraction of objects from an image. Furthermore, detection should be as automatic as possible, given the ever-increasing volumes of images that make manual inspection illusory. Finally, experience shows that artifacts are neither all predictable nor easily modeled in advance; artifact detection should therefore be as generic as possible, without requiring a model of their origin or of their impact on an image. Outside the field of Earth observation, similar detection problems have arisen in multimedia image processing: the evaluation of image quality, compression, watermarking, attack detection, image tampering, photograph montage, steganalysis, etc. In general, the techniques used to address these problems are based on direct or indirect measurement of intrinsic information and mutual information. This thesis therefore aims to translate these approaches to artifact detection in Earth observation images, building in particular on the theories of Shannon and Kolmogorov, including rate-distortion measurement and pattern-recognition-based compression. The results from these theories are then used to detect abnormally low or high complexities, or redundant patterns. The test images come from the satellite instruments SPOT, MERIS, etc. We propose several methods for artifact detection. The first uses the rate-distortion (RD) function obtained by compressing an image with different compression factors, and examines how an artifact can produce a degree of regularity or irregularity that affects the attainable compression rate. The second uses the Normalized Compression Distance (NCD) and examines whether artifacts share similar patterns. The third uses different RD-related approaches, such as the Kolmogorov Structure Function and the Complexity-to-Error Migration (CEM), to examine how artifacts appear in compression-decompression error maps. Finally, we compare the proposed methods with an existing method based on image quality metrics. The results show that artifact detection depends on the artifact intensity and the type of surface cover in the satellite image.
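A minimal sketch of the Normalized Compression Distance used by the second method, with zlib standing in for the compressor and short byte strings standing in for image patches; the thesis' actual compressors and satellite data are not reproduced here.

```python
# Sketch of the Normalized Compression Distance:
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with zlib as C.
import zlib

def ncd(x: bytes, y: bytes) -> float:
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

clean = bytes(range(256)) * 16                        # stand-in for an artifact-free patch
striped = bytes([i % 8 for i in range(4096)])         # stand-in for a striping artifact
print("NCD(clean, clean)   =", round(ncd(clean, clean), 3))
print("NCD(clean, striped) =", round(ncd(clean, striped), 3))
```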
Style APA, Harvard, Vancouver, ISO itp.
40

Taylor, Ty. "Compression of Cartoon Images". Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1301319148.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Tokdemir, Serpil. "Digital compression on GPU". unrestricted, 2006. http://etd.gsu.edu/theses/available/etd-12012006-154433/.

Pełny tekst źródła
Streszczenie:
Thesis (M.S.)--Georgia State University, 2006.
Title from dissertation title page. Saeid Belkasim, committee chair; Ying Zhu, A.P. Preethy, committee members. Electronic text (90 p. : ill. (some col.)). Description based on contents viewed May 2, 2007. Includes bibliographical references (p. 78-81).
Style APA, Harvard, Vancouver, ISO itp.
42

Henriques, Marco António Silva. "Facial recognition based on image compression". Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17207.

Pełny tekst źródła
Streszczenie:
Master's degree in Electronics and Telecommunications Engineering
Facial recognition has received considerable research attention, especially in recent years, and can be considered one of the most successful applications of image analysis and understanding, as the many conferences and new articles published on the subject attest. This research interest stems from the large number of applications facial recognition can support, helping with many everyday human tasks. Although many algorithms exist to perform facial recognition, several of them quite accurate, the problem is not completely solved: several obstacles related to the conditions of the environment alter image acquisition and therefore affect recognition. This thesis presents a new solution to the face recognition problem that uses similarity metrics between images obtained through data compression, namely by means of Finite Context Models. Some approaches in the literature relate facial recognition and data compression, mainly through transform-based methods. The method proposed in this thesis takes an innovative approach based on Finite Context Models to estimate the number of bits needed to encode an image of a subject, using a model trained on a database. The thesis studies this approach to the facial recognition problem with a view to its possible use in a real authentication system. Detailed experimental results on well-known databases prove the effectiveness of the proposed approach.
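A small sketch of the underlying idea, assuming an order-2 finite context model over byte strings with Laplace smoothing: train the model on one subject and estimate the number of bits needed to encode another sequence under it. The data, model order and smoothing are illustrative choices, not the thesis' model.

```python
# Sketch of a finite context model used as a similarity measure: a lower
# estimated code length means the new data resembles the training data.
import math
from collections import defaultdict

ORDER, ALPHABET = 2, 256

def train_fcm(data: bytes):
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(ORDER, len(data)):
        counts[data[i - ORDER:i]][data[i]] += 1
    return counts

def bits_to_encode(data: bytes, counts) -> float:
    """Estimated code length of `data` under the trained model (Laplace smoothing)."""
    total_bits = 0.0
    for i in range(ORDER, len(data)):
        ctx = counts[data[i - ORDER:i]]
        p = (ctx[data[i]] + 1) / (sum(ctx.values()) + ALPHABET)
        total_bits += -math.log2(p)
    return total_bits

subject_a = b"abracadabra" * 50            # stand-ins for image data of two subjects
subject_b = b"xyzzyxyzzyx" * 50
model_a = train_fcm(subject_a)
print("A under A's model:", round(bits_to_encode(subject_a, model_a)), "bits")
print("B under A's model:", round(bits_to_encode(subject_b, model_a)), "bits")
```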
Style APA, Harvard, Vancouver, ISO itp.
43

Williams, Saunya Michelle. "Effects of image compression on data interpretation for telepathology". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42762.

Pełny tekst źródła
Streszczenie:
When geographical distance poses a barrier, telepathology offers pathologists the opportunity to replicate their normal activities through an alternative means of practice. Rapid progress in technology has greatly increased the appeal of telepathology and its use in multiple domains. To that end, telepathology systems help provide teleconsultation services for remote locations, improve workload distribution in clinical environments, support quality assurance, and enhance educational programs. While telepathology is attractive to many potential users, the resource requirements for digitizing microscopic specimens have hindered widespread adoption. The use of image compression is therefore critical to advancing the pervasiveness of digital images in pathology. In this research, we characterize two different methods for assessing the compression of pathology images. In the first, image quality assessment is human-based and entirely subjective in its interpretation. In the second, image analysis provides machine-based interpretation and thus objective results; these objective outcomes may also be used to help confirm tumor classification. With these two methods in mind, the purpose of this dissertation is to quantify the effects of image compression on data interpretation as seen by human experts and by a computerized algorithm for use in telepathology.
Style APA, Harvard, Vancouver, ISO itp.
44

Kucherov, Dmytro, D. P. Kucherov i Д. П. Кучеров. "A computer system for images compression". Thesis, Національний авіаційний університет, 2019. http://er.nau.edu.ua/handle/NAU/38657.

Pełny tekst źródła
Streszczenie:
A new approach to image compression is proposed. The approach uses elements of tensor analysis based on the singular value decomposition. A feature of this approach is the representation of the image by a matrix triad comprising the tensor core and a pair of unitary matrices containing the right and left singular vectors, respectively. Compression is achieved by a single recurrent procedure that lowers the rank of the triad to the level of allowable error while maintaining the original image size. Results of semi-natural modeling of the system components are provided.
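A minimal numpy sketch of rank truncation with an SVD triad on a greyscale matrix, in the spirit of the approach above; the synthetic test image and the error tolerance are illustrative assumptions, and the recurrent tensor-core procedure of the paper is not reproduced.

```python
# Sketch of rank-truncation compression: keep the smallest rank of the
# (U, singular values, V^T) triad whose reconstruction stays within a tolerance.
import numpy as np

def svd_compress(image, max_rmse):
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    for rank in range(1, len(s) + 1):
        approx = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
        if np.sqrt(np.mean((image - approx) ** 2)) <= max_rmse:
            return rank, approx
    return len(s), image

ys, xs = np.mgrid[0:128, 0:128]
image = 128 + 60 * np.sin(xs / 9.0) * np.cos(ys / 13.0) + 0.3 * xs   # smooth stand-in image
rank, approx = svd_compress(image, max_rmse=5.0)
stored = rank * (image.shape[0] + image.shape[1] + 1)                # values in the truncated triad
print(f"rank {rank}: {stored} stored values instead of {image.size}")
```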
Style APA, Harvard, Vancouver, ISO itp.
45

Lim, Seng. "Image compression scheme for network transmission". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA294959.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
46

Bodine, Christopher J. "Psychophysical comparisons in image compression algorithms". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA362726.

Pełny tekst źródła
Streszczenie:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, March 1999.
Thesis advisor(s): William K. Krebs, Lyn R. Whitaker. "March 1999". Includes bibliographical references (p. 97-99). Also available online.
Style APA, Harvard, Vancouver, ISO itp.
47

Ritter, Jörg. "Wavelet based image compression using FPGAs". [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=967407710.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

Butt, Amar Majeed, i Rana Asif Sattar. "On Image Compression using Curve Fitting". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3144.

Pełny tekst źródła
Streszczenie:
Context: Uncompressed images contain redundant image data that compression can reduce in order to store or transmit images economically. Many techniques are used for this purpose, but the rapid growth of digital media calls for further research into making more efficient use of resources. Objectives: In this study we implement polynomial curve fitting of first and second order on non-overlapping 4x4 and 8x8 blocks. We investigate a selective quantization in which each parameter is assigned a priority, the first parameter receiving higher priority than the others. Finally, Huffman coding is applied. Three standard greyscale images, LENA, PEPPER and BOAT, are used in our experiments. Methods: We conducted a literature review, selecting articles from known libraries, i.e. IEEE, ACM Digital Library, ScienceDirect, SpringerLink, etc. We also performed two experiments: one with the first curve order using 4x4 and 8x8 block sizes, and a second with the second curve order using the same block sizes. Results: A comparison of 4x4 and 8x8 block sizes at first and second curve orders shows a large difference in compression ratio for the same Mean Square Error. The 4x4 block size gives better image quality than the 8x8 block size at the same curve order, but less compression. JPEG gives a higher PSNR at both low and high compression. Conclusions: Selective quantization is a good way to obtain better subjective image quality. The comparison shows that for a good compression ratio the 8x8 block size at first curve order should be used, while for good objective and subjective image quality the 4x4 block size at second order should be used. JPEG, which embodies a great deal of research, outperforms our proposed scheme in PSNR and compression ratio at both low and high compression ratios. Our proposed scheme gives objective quality (PSNR) at high compression ratios comparable to the previous curve fitting techniques implemented by Salah and Ameer, but we were unable to achieve comparable subjective image quality.
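A short sketch of the block-wise first-order fit underlying the scheme, as a plane a + b*x + c*y fitted by least squares to each non-overlapping 4x4 block; the selective quantization of the parameters and the Huffman stage are omitted, and the random test image is only a placeholder.

```python
# Sketch of block-wise first-order curve fitting: 3 plane parameters per 16 pixels.
import numpy as np

B = 4
ys, xs = np.mgrid[0:B, 0:B]
design = np.column_stack([np.ones(B * B), xs.ravel(), ys.ravel()])   # columns [1, x, y]

def fit_block(block):
    params, *_ = np.linalg.lstsq(design, block.ravel(), rcond=None)
    return params                                    # a, b, c

def reconstruct_block(params):
    return (design @ params).reshape(B, B)

rng = np.random.default_rng(4)
image = rng.uniform(0, 255, size=(16, 16))           # stand-in image
total_mse, n_blocks = 0.0, 0
for r in range(0, image.shape[0], B):
    for c in range(0, image.shape[1], B):
        block = image[r:r + B, c:c + B]
        rec = reconstruct_block(fit_block(block))
        total_mse += np.mean((block - rec) ** 2)
        n_blocks += 1
print("mean block MSE:", total_mse / n_blocks)
```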
Style APA, Harvard, Vancouver, ISO itp.
49

Ferdeen, Mats. "Reducing Energy Consumption Through Image Compression". Thesis, Linköpings universitet, Datorteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-134335.

Pełny tekst źródła
Streszczenie:
The energy consumption of off-chip memory writes and reads is a known problem. In the image processing field of structure from motion, simpler compression techniques could be used to save energy. The balance between detected features, such as corners, edges, etc., and the degree of compression then becomes a central question to investigate. In this thesis a deeper study of this balance is performed. A number of more advanced compression algorithms for still images, such as JPEG, are compared with a selection of simpler compression algorithms. The simpler algorithms fall into two categories: individual block-wise compression of each image, and compression with respect to all pixels in each image. In this study the image sequences are in greyscale and come from an earlier study on rolling shutters. Synthetic data sets from a further study on optical flow are also included to assess how reliable the other data sets are.
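A small sketch of the trade-off studied here: encode a synthetic greyscale frame as JPEG at several quality settings and count the corners that survive. OpenCV and the detector parameters are assumptions made for illustration; the thesis uses its own image sequences and simpler block-wise codecs alongside JPEG.

```python
# Sketch of the features-versus-compression balance: compressed size and
# surviving corner count as JPEG quality drops.
import cv2
import numpy as np

ys, xs = np.mgrid[0:240, 0:320]
frame = (((xs // 20 + ys // 20) % 2) * 255).astype(np.uint8)   # checkerboard stand-in frame

for quality in (90, 50, 10):
    ok, packet = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
    decoded = cv2.imdecode(packet, cv2.IMREAD_GRAYSCALE)
    corners = cv2.goodFeaturesToTrack(decoded, maxCorners=500,
                                      qualityLevel=0.01, minDistance=5)
    n = 0 if corners is None else len(corners)
    print(f"quality {quality:2d}: {len(packet):6d} bytes, {n:3d} corners")
```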
Style APA, Harvard, Vancouver, ISO itp.
50

Quesnel, Ronny. "Image compression using subjective vector quantization". Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=60714.

Pełny tekst źródła
Streszczenie:
The goal of this research is to improve the subjective quality of real-world imagery encoded with spatial vector quantization (VQ). Improved subjective quality means that a human perceives less visually objectionable distortion when looking at the coded images. Through the study of several basic VQ schemes, the issues fundamental to achieving good subjective quality are identified and addressed in this work. Vector quantization is very good at reproducing quasi-uniform textures in an image, but has difficulty reproducing abrupt changes in texture (edges) and fine detail, and can cause a subjectively annoying block effect. A second-generation coding scheme is developed which takes certain properties of the human visual system into account. A promising method is developed that utilizes omniscient finite-state VQ, a new quadratic distortion measure which penalizes the misrepresentation of edges, and brightness compensation based on Stevens' power law. The proposed subjective VQ is compared with several classical, first-generation VQ methods.
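For context, a minimal sketch of plain spatial VQ with a k-means codebook over 4x4 blocks; the thesis' actual contributions (omniscient finite-state VQ, the edge-penalizing distortion measure, brightness compensation) are not reproduced, and the random image and codebook size are illustrative choices.

```python
# Sketch of plain spatial VQ: each 4x4 block is replaced by its nearest
# entry in a k-means codebook, so only one index per block is transmitted.
import numpy as np
from scipy.cluster.vq import kmeans2, vq

B, K = 4, 64                                     # block size and codebook size
rng = np.random.default_rng(6)
image = rng.uniform(0, 255, size=(64, 64))       # stand-in greyscale image

# Cut the image into non-overlapping 4x4 blocks flattened to 16-vectors.
blocks = (image.reshape(64 // B, B, 64 // B, B)
               .swapaxes(1, 2)
               .reshape(-1, B * B))

codebook, _ = kmeans2(blocks, K, minit='++')     # train the codebook
indices, _ = vq(blocks, codebook)                # encode: one index per block
decoded_blocks = codebook[indices]               # decode: table lookup
mse = np.mean((blocks - decoded_blocks) ** 2)
print(f"{K}-entry codebook, {np.log2(K):.0f} bits per block, MSE = {mse:.1f}")
```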
Style APA, Harvard, Vancouver, ISO itp.