To see the other types of publications on this topic, follow the link: Compression.

Dissertations / Theses on the topic 'Compression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Compression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Hawary, Fatma. "Light field image compression and compressive acquisition." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S082.

Full text
Abstract:
By capturing a scene from several points of view, a light field provides a rich representation of the scene geometry, which enables a variety of novel post-capture applications as well as immersive experiences. The objective of this thesis is to study the compressibility of light field content in order to propose novel solutions for higher-resolution light field imaging. Two main aspects were studied in this work. Since the compression performance of current coding schemes on light fields is still limited, approaches better adapted to light field structures need to be introduced. We propose a scalable coding scheme that encodes only a subset of the light field views and reconstructs the remaining views via a sparsity-based method; a residual coding layer then enhances the final quality of the decoded light field. Acquiring a light field at very high spatial and angular resolution is still not feasible with current capture and storage facilities, so a possible alternative is to reconstruct the densely sampled light field from a subset of acquired samples. We propose an automatic reconstruction method to recover a compressively sampled light field that exploits its sparsity in the Fourier domain. No scene geometry estimation is needed, and an accurate reconstruction is achieved even with a very low number of captured samples. A further study of the full scheme, combining compressive sensing of a light field with its transmission via the proposed coding approach, measures the distortion introduced by the different processing steps. The results show performance comparable to depth-based view synthesis methods.
APA, Harvard, Vancouver, ISO, and other styles
2

Yell, M. D. "Steam compression in the single screw compressor." Thesis, University of Leeds, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.372575.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nóbrega, Fernando Antônio Asevêdo. "Sumarização Automática de Atualização para a língua portuguesa." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30072018-090806/.

Full text
Abstract:
The huge amount of textual data available on the web is the main motivation for many Natural Language Processing tasks, such as Update Summarization (US), which aims to produce a summary from a collection of related texts under the assumption that the reader has some prior knowledge about their subject. A good update summary must therefore convey the most relevant, new and updated content with respect to that prior knowledge. The task presents many research challenges, mainly in the content selection and synthesis stages. Although there are several approaches to US, with different levels of theoretical and computational complexity, most of them do not use deep linguistic knowledge that could help identify the most relevant and updated content. Furthermore, US methods frequently apply an extractive synthesis approach, in which the summary is produced by selecting sentences from the source texts without rewriting them. Since some segments of the selected sentences may contain redundant or irrelevant content, this synthesis process can limit the informativeness of the summary. Recent efforts have therefore focused on compressive synthesis, in which segments of the selected sentences are removed before they are inserted into the summary. Against this background, this PhD research investigated the use of linguistic knowledge, such as Cross-document Structure Theory (CST), subtopic segmentation and Named Entity Recognition, in distinct content selection approaches for US using both extractive and compressive synthesis, in order to produce more informative update summaries. Focusing on the Portuguese language, we compiled three new corpora: CSTNews-Update, which enables US experiments for this language, and PCSC-Pares and G1-Pares, which contain pairs of original and compressed sentences for developing and evaluating sentence compression methods. Summarization experiments were also carried out for English. The results show that subtopic segmentation was the most effective resource for producing more informative summaries, although only for a few content selection approaches. We also proposed simplifications of the DualSum method based on the distribution of subtopics; these require less computational power than DualSum and achieved very satisfactory results. Aiming at compressive summaries, we developed several sentence compression methods based on machine learning; our best method outperformed a state-of-the-art system based on deep learning algorithms. Before this work, most research on automatic summarization for Portuguese focused on producing summaries from one text (single-document) or several related texts (multi-document) through extractive synthesis, largely because of the lack of resources for this language. Thus, the contributions of this work span three areas: the proposed US methods based on linguistic knowledge, the sentence compression methods, and the resources developed for the Portuguese language.
APA, Harvard, Vancouver, ISO, and other styles
4

Blais, Pascal. "Pattern compression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ38737.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

SANCHEZ, FERNANDO ZEGARRA. "COMPRESSION IGNITION OF ETHANOL-POWERED IN RAPID COMPRESSION MACHINE." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=29324@1.

Full text
Abstract:
Pontifícia Universidade Católica do Rio de Janeiro; Coordenação de Aperfeiçoamento do Pessoal de Ensino Superior; Programa de Excelência Acadêmica.
Over time, humanity has developed a greater reliance on power generation, used to provide comfort, transportation and other services. To meet this increased demand, new efficient sources, preferably renewable, are being investigated. Transportation is one of the activities that relies most heavily on fossil fuels, as well as being one of the largest generators of greenhouse gases. Therefore, in many parts of the world, new renewable energy sources are being sought that can replace the ones currently used in transport. It is known that diesel engines are more efficient than Otto-cycle engines. For this reason, compression-ignition systems powered by renewable fuels have been researched and developed for more than 30 years, in order to reduce the dependence on fossil fuels and the emission of greenhouse gases. Ethanol is a viable candidate to replace diesel oil, but some modifications must be considered before it is used in diesel engines, such as increasing the compression ratio or adding auto-ignition improvers. On this basis, this thesis presents a new proposal: the use of n-butanol as an auto-ignition improver for ethanol. For this purpose, several tests were carried out with various compression ratios, mass percentages of additive in the ethanol blend, and injection timings. The tests were performed in a rapid compression machine (RCM) with mixtures of ethanol and polyethylene glycol 400 and 600, and n-butanol, in addition to reference tests with diesel oil and ED95. The results show that n-butanol, at a 10 per cent share of the mixture, can be used as an auto-ignition improver for ethanol in compression-ignition systems.
APA, Harvard, Vancouver, ISO, and other styles
6

Agostini, Luciano Volcan. "Projeto de arquiteturas integradas para a compressão de imagens JPEG." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2002. http://hdl.handle.net/10183/11431.

Full text
Abstract:
This dissertation presents the design of architectures for JPEG image compression: a JPEG compressor for grayscale images, a JPEG compressor for color images, and an RGB-to-YCbCr color space converter. The designed architectures are presented in detail; they were fully described in VHDL, with synthesis directed at the Altera Flex10KE family of FPGAs. The integrated architecture of the grayscale JPEG compressor has a minimum latency of 237 clock cycles and processes a 640x480-pixel image in 18.5 ms, allowing a processing rate of 54 images per second. The estimated compression ratio is approximately 6.2:1, or 84% in terms of bit reduction. The integrated architecture of the color JPEG compressor was derived from incremental changes to the grayscale compressor. It also has a minimum latency of 237 clock cycles and can process a 640x480-pixel color image in 54.4 ms, allowing a processing rate of 18.4 images per second. The estimated compression ratio is approximately 14.4:1, or 93% in terms of bit reduction. The RGB-to-YCbCr color space converter has a latency of 6 clock cycles and processes a 640x480-pixel color image in 84.6 ms, allowing a processing rate of 11.8 images per second. This converter was not integrated with the color image compressor architecture, but some suggestions and estimates were made in this direction.
APA, Harvard, Vancouver, ISO, and other styles
7

Khan, Jobaidur Rahman. "Fog Cooling, Wet Compression and Droplet Dynamics In Gas Turbine Compressors." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/908.

Full text
Abstract:
During hot days, gas turbine power output deteriorates significantly. Among the various means of augmenting gas turbine output, inlet air fog cooling is considered the simplest and most cost-effective method. During fog cooling, water is atomized into micro-scale droplets and introduced into the inlet airflow. In addition to cooling the inlet air, overspray can further enhance output power by intercooling the compressor. However, there are concerns that the water droplets might damage the compressor blades and that the increased mass might cause compressor operation instability due to a reduced safety margin. Furthermore, the two-phase flow thermodynamics of wet compression in a rotating system has not been fully established, so continued research and development of wet compression theory and prediction models are required. The objective of this research is to improve existing wet compression theory and the associated models to accurately predict compressor and overall gas turbine system performance for gas turbine inlet fog cooling. The following achievements have been accomplished: (a) At the system level, a global gas turbine inlet fog cooling theory and algorithm have been developed, and a system performance code, FogGT, has been written according to the developed theory. (b) At the component level, a stage-stacking wet compression theory for the compressor has been developed with known airfoil configurations. (c) Both equilibrium and non-equilibrium water droplet thermal-fluid dynamic models have been developed, including droplet drag forces, evaporation rate, breakup and coalescence; a liquid erosion model has also been developed and incorporated. (d) A computational fluid dynamics (CFD) model has been developed to simulate multiphase wet compression in the rotating compressor stage. In addition, with the continued increase in the volatility of natural gas prices as well as concerns regarding national energy security, this research has also investigated applying inlet fogging to gas turbine systems fired with alternative fuels such as low calorific value synthetic gases. The key results include the finding that saturated fogging can reduce compressor power consumption, but that overspray, against conventional intuition, actually increases compressor power. Nevertheless, inlet fogging does increase overall net power output.
APA, Harvard, Vancouver, ISO, and other styles
8

Day, Benjamin Marc. "An Evaluation and Redesign of a Thermal Compression Evaporator." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/926.

Full text
Abstract:
Evaporators separate liquids from solutions. For maximum efficiency, designers reduce the temperature difference between the heating and heated media using multiple-stage evaporators, but this efficiency requires increased size and bulk. A vendor claimed its thermal compression evaporator achieved high efficiency with only two stages; it did not function as claimed. This project investigated the evaporator's design to identify its problems and propose an alternative design with a minimized footprint. The analysis showed theoretical flaws and design weaknesses in the evaporator, including a violation of the first law of thermodynamics. An alternative thermal compressor design was created through computational fluid dynamics using spreadsheet methods developed in-house, aided by the software product FLUENT. Detailed component sizing was done using the software product HYSYS. The proposed redesign achieved four-to-one efficiency with two-stage thermal compression, using half the space of a traditional system of similar performance.
APA, Harvard, Vancouver, ISO, and other styles
9

Hernández-Cabronero, Miguel. "DNA Microarray Image Compression." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/297706.

Full text
Abstract:
In DNA microarray experiments, two grayscale images are produced, and it is convenient to store them for future, more accurate re-analysis. Image compression therefore emerges as a particularly useful tool to reduce the associated storage and transmission costs. This dissertation aims at improving the state of the art in the compression of DNA microarray images. A thorough investigation of the characteristics of DNA microarray images was performed as part of this work. The results indicate that algorithms not adapted to DNA microarray images typically attain only mediocre lossless compression results because of the image characteristics. By analyzing the first-order and conditional entropy present in these images, it is possible to determine approximate limits to their lossless compressibility. Even though context-based coding and segmentation provide modest improvements over general-purpose algorithms, conceptual breakthroughs in data coding are arguably required to achieve compression ratios exceeding 2:1 for most images. Before this thesis, several lossless coding algorithms with performance close to the aforementioned limit had been published; however, none of them is compliant with existing image compression standards. Hence, the availability of decoders on future platforms, a requisite for future re-analysis, is not guaranteed; moreover, adherence to standards is usually required in clinical scenarios. To address these problems, a fast reversible transform compatible with the JPEG2000 standard, the Histogram Swap Transform (HST), is proposed. The HST improves the average compression performance of JPEG2000 for all tested image corpora, with gains ranging from 1.97% to 15.53%, and it can be applied with negligible time overhead. With the HST, JPEG2000 becomes arguably the most competitive standard alternative to microarray-specific, non-standard compressors. The similarities among sets of microarray images were also studied as a means to further improve compression performance. An optimal grouping of the images that maximizes the correlation within groups is described; average correlations between 0.75 and 0.92 are observed for the tested corpora. Experimental results suggest that spectral decorrelation transforms can improve some lossless coding results by up to 0.6 bpp, although no single transform is effective for all corpora. Lossy coding algorithms can yield almost arbitrary compression ratios at the cost of modifying the images and thus distorting subsequent analysis processes. If the introduced distortion is smaller than the inherent experimental variability, it is usually considered acceptable, so the use of lossy compression is justified provided that the analysis distortion is assessed. In this work, a distortion metric for DNA microarray images is proposed to predict the extent of this distortion without requiring a complete re-analysis of the modified images; experimental results suggest that it can tell apart image changes that affect subsequent analysis from modifications that do not. Although some lossy coding algorithms had previously been described for this type of image, none of them is specifically designed to minimize the impact on subsequent analysis for a given target bitrate. This dissertation proposes a lossy coder, the Relative Quantizer (RQ) coder, that improves upon the rate-distortion results of previously published methods. Experiments suggest that compression ratios exceeding 4.5:1 can be achieved while introducing distortions smaller than half the inherent experimental variability. Furthermore, a lossy-to-lossless extension, the Progressive RQ (PRQ) coder, is also described; with it, images can be compressed once and then reconstructed at different quality levels, including lossless reconstruction. The competitive rate-distortion results of the RQ and PRQ coders are obtained with computational complexity slightly lower than that of the best-performing lossless coder for DNA microarray images.
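The lossless compressibility limits discussed above are derived from entropy estimates of the images. As a rough, self-contained illustration of that idea (a first-order entropy estimate on synthetic data, not the corpora or procedure used in the dissertation), one might compute:

```python
import numpy as np

def first_order_entropy(image: np.ndarray) -> float:
    """Estimate the first-order entropy of an integer-valued image, in bits per pixel."""
    _, counts = np.unique(image, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy 16-bit microarray-like image (synthetic, for illustration only).
rng = np.random.default_rng(0)
img = rng.poisson(lam=200, size=(64, 64)).astype(np.uint16)
print(f"first-order entropy: {first_order_entropy(img):.2f} bits/pixel")
```

The conditional entropies used to tighten the bound would condition each pixel on a causal neighbourhood rather than treating pixels independently.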
APA, Harvard, Vancouver, ISO, and other styles
10

Grün, Alexander. "Nonlinear pulse compression." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284879.

Full text
Abstract:
In this thesis I investigate two methods for generating ultrashort laser pulses in spectral regions that are ordinarily difficult to reach with existing techniques. Such pulses are especially attractive for studying ultrafast (few-femtosecond) atomic and molecular dynamics. The first method involves Optical Parametric Amplification (OPA) mediated by four-wave mixing in gas and supports the generation of ultrashort pulses from the Near-InfraRed (NIR) to the Mid-InfraRed (MIR) spectral region. By combining pulses at a centre wavelength of 800 nm and their second harmonic in an argon-filled hollow-core fibre, we demonstrate near-infrared pulses, peaked at 1.4 µm, with 5 µJ energy and 45 fs duration at the fibre output. The four-wave-mixing process involved in the OPA is expected to yield carrier-envelope-phase-stable pulses, which is of great importance for applications in extreme nonlinear optics. These NIR to MIR pulses can be used directly for nonlinear light-matter interactions that exploit their long-wavelength characteristics. The second method allows the compression of intense femtosecond pulses in the ultraviolet (UV) region by sum-frequency mixing two bandwidth-limited NIR pulses in a noncollinear phase-matching geometry under particular conditions of group-velocity mismatch. Specifically, the crystal has to be chosen such that the group velocities of the NIR pump pulses, v1 and v2, and of the sum-frequency generated pulse, vSF, satisfy the condition v1 < vSF < v2. In the case of strong energy exchange and an appropriate pre-delay between the pump waves, the leading edge of the faster pump pulse and the trailing edge of the slower one are depleted. In this way the temporal overlap region of the pump pulses remains narrow, resulting in the shortening of the upconverted pulse. The noncollinear beam geometry allows the relative group velocities to be controlled while maintaining the phase-matching condition. To ensure parallel wavefronts inside the crystal and that the sum-frequency generated pulses emerge untilted, pre-compensation of the NIR pulse-front tilts is essential. I show that these pulse-front tilts can be achieved using a very compact setup based on transmission gratings and a more complex setup based on prisms combined with telescopes. UV pulses as short as 32 fs (25 fs) have been generated by noncollinear nonlinear pulse compression in a type-II phase-matching BBO crystal, starting from NIR pulses of 74 fs (46 fs) duration. This is of interest because no crystal can be used for nonlinear pulse compression at wavelengths near 800 nm in a collinear geometry. Compared with state-of-the-art compression techniques based on self-phase modulation, pulse compression by sum-frequency generation is free of aperture limitations and is thus scalable in energy. Such femtosecond pulses in the visible and in the ultraviolet are strongly desired for studying the ultrafast dynamics of a variety of (bio)molecular systems.
APA, Harvard, Vancouver, ISO, and other styles
11

Nasiopoulos, Panagiotis. "Adaptive compression coding." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28508.

Full text
Abstract:
An adaptive image compression coding technique, ACC, is presented. This algorithm is shown to preserve edges and to give better-quality decompressed pictures and better compression ratios than Absolute Moment Block Truncation Coding (AMBTC). Lookup tables are used to achieve better compression rates without affecting the visual quality of the reconstructed image. Regions with approximately uniform intensities are detected using the range and are approximated by their average, which further reduces the compressed data rate. A method for preserving edges is introduced; it is shown that as more detail is preserved around edges, the pictorial results improve dramatically. The ragged appearance of edges in AMBTC is reduced or eliminated, leading to images far superior to those of AMBTC. For most images, ACC yields a smaller root mean square error than AMBTC. Decompression time is shown to be comparable to that of AMBTC for low threshold values and becomes significantly lower as the compression rate becomes smaller. An adaptive filter is introduced which helps recover lost texture at very low compression rates (0.8 to 0.6 bits per pixel, depending on the degree of texture in the image). This algorithm is easy to implement since no special hardware is needed.
Faculty of Applied Science, Department of Electrical and Computer Engineering (Graduate).
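The baseline against which ACC is compared, Absolute Moment Block Truncation Coding, reduces each block to a bit plane plus two reconstruction levels. A minimal sketch of AMBTC on a single block, written from the standard textbook formulation rather than from this thesis, is:

```python
import numpy as np

def ambtc_encode(block: np.ndarray):
    """Encode a block as a bit plane plus low/high reconstruction levels (standard AMBTC)."""
    mean = block.mean()
    bitplane = block >= mean
    high = block[bitplane].mean() if bitplane.any() else mean
    low = block[~bitplane].mean() if (~bitplane).any() else mean
    return bitplane, low, high

def ambtc_decode(bitplane: np.ndarray, low: float, high: float) -> np.ndarray:
    return np.where(bitplane, high, low)

block = np.array([[121, 114,  56,  47],
                  [ 37, 200, 247, 255],
                  [ 16,   0,  12, 169],
                  [ 43,   5,   7, 251]], dtype=float)
bits, lo, hi = ambtc_encode(block)
print(ambtc_decode(bits, lo, hi).round())
```

For a 4x4 block of 8-bit pixels this stores 16 bits plus two levels, roughly 2 bits per pixel before any further coding.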
APA, Harvard, Vancouver, ISO, and other styles
12

Obaid, Arif. "Range image compression." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10131.

Full text
Abstract:
Range images, which are a representation of the surface of a 3-D object, are gaining popularity in many applications including CAD/CAM, multimedia and virtual reality. There is thus a need for compression of these 3-D images. Current standards for still image compression, such as JPEG, are not appropriate for such images because they have been designed specifically for intensity images. This has led us to develop a new compression method for range images. It first scans the image so that the pixels are arranged into a sequence. It then approximates this sequence by straight-line segments within a user-specified maximum tolerance level. The extremities of the straight-line segments are non-redundant points (NRPs). Huffman coding, with a fixed Huffman tree, is used to encode the distance between NRPs and their altitudes. A plane-filling scanning technique, known as Peano scanning, is used to improve performance. The algorithm's performance is assessed on range images acquired from the Institute for Information Technology of the National Research Council of Canada. The proposed method performs better than JPEG for any given maximum tolerance level. The adaptive mode of the algorithm is also presented along with its performance assessment.
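The core step, approximating the scanned pixel sequence by straight-line segments whose endpoints (the NRPs) keep the approximation within a user-specified tolerance, can be sketched as a greedy pass over the sequence. This is an illustrative reconstruction of the idea, not the exact algorithm of the thesis:

```python
import numpy as np

def extract_nrps(signal: np.ndarray, tol: float) -> list:
    """Greedily select non-redundant points (NRPs) so that linear interpolation
    between consecutive NRPs deviates from the signal by at most `tol`."""
    n = len(signal)
    nrps, start, end = [0], 0, 1
    while end < n:
        xs = np.arange(start, end + 1)
        line = np.interp(xs, [start, end], [signal[start], signal[end]])
        if np.max(np.abs(line - signal[start:end + 1])) > tol:
            nrps.append(end - 1)      # the previous endpoint was the last valid one
            start = end - 1
        end += 1
    nrps.append(n - 1)
    return nrps

row = np.array([10, 10, 11, 12, 30, 31, 32, 31, 5, 4, 4, 4], dtype=float)
print(extract_nrps(row, tol=1.0))     # indices of the NRPs along the scan
```

Only the NRP positions and altitudes would then be entropy coded; the thesis uses a fixed Huffman tree for the distances between NRPs and their altitudes.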
APA, Harvard, Vancouver, ISO, and other styles
13

Hansson, Erik, and Stefan Karlsson. "Lossless Message Compression." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-21434.

Full text
Abstract:
In this thesis we investigated whether using compression when sending inter-process communication (IPC) messages can be beneficial. A literature study on lossless compression resulted in a compilation of algorithms and techniques. Using this compilation, the algorithms LZO, LZFX, LZW, LZMA, bzip2 and LZ4 were selected to be integrated into LINX as an extra layer to support lossless message compression. The testing involved sending messages containing real telecom data between two nodes on a dedicated network, with different network configurations and message sizes. The round-trip time was measured in order to calculate the effective throughput for each algorithm. We concluded that the fastest algorithms, i.e. LZ4, LZO and LZFX, were the most efficient in our tests.
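The evaluation idea, trading compression ratio against speed to find the best effective throughput, can be reproduced in miniature. The sketch below uses the Python standard-library codecs zlib, bz2 and lzma with local timing on synthetic data, instead of the LINX-integrated LZO/LZFX/LZ4 layer and the two-node telecom setup of the thesis:

```python
import bz2
import lzma
import time
import zlib

payload = b"telecom-style message payload, repeated " * 200   # synthetic stand-in data

for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    t0 = time.perf_counter()
    out = compress(payload)
    dt = time.perf_counter() - t0
    ratio = len(payload) / len(out)
    print(f"{name:5s}  ratio {ratio:5.1f}:1   compression speed {len(payload) / dt / 1e6:6.1f} MB/s")
```

On a real link, effective throughput is roughly uncompressed_size / (compression_time + compressed_size / link_bandwidth + decompression_time), which is consistent with the thesis's finding that the fastest codecs were the most efficient even without the highest ratios.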
APA, Harvard, Vancouver, ISO, and other styles
14

Williams, Ross Neil. "Adaptive data compression." Adelaide, 1989. http://web4.library.adelaide.edu.au/theses/09PH/09phw7262.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Lacroix, Bruno. "Fractal image compression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ36939.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Aydinoğlu, Behçet Halûk. "Stereo image compression." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Abdul-Amir, Said. "Digital image compression." Thesis, De Montfort University, 1985. http://hdl.handle.net/2086/10681.

Full text
Abstract:
Due to the rapid growth in information handling and transmission, there is a serious demand for more efficient data compression schemes. Compression schemes address speech, visual and alphanumeric coded data. This thesis is concerned with the compression of visual data given in the form of still or moving pictures; such data is highly correlated spatially and in the context domain. A detailed study of some existing data compression systems is presented. In particular, the performance of DPCM was analysed by computer simulation, and the results examined both subjectively and objectively. The adaptive form of the prediction encoder is discussed and two new algorithms are proposed which increase the definition of the compressed image and reduce the overall mean square error. Two novel systems are proposed for image compression. The first is a bit-plane image coding system based on a hierarchic quadtree structure in a transform domain, using the Hadamard transform as a kernel. Good compression has been achieved with this scheme, particularly for images with low detail. The second scheme uses a learning automaton to predict the probability distribution of the grey levels of an image according to its spatial context and position. An optimal reward/punishment function is proposed such that the automaton converges to its steady state within 4000 iterations; such a high speed of convergence, together with Huffman coding, results in efficient compression for images and is shown to be applicable to other types of data. The performance of all the proposed systems has been evaluated by computer simulation and the results are presented both quantitatively and qualitatively. The advantages and disadvantages of each system are discussed and suggestions for improvement are given.
APA, Harvard, Vancouver, ISO, and other styles
18

Zhang, Fan. "Parametric video compression." Thesis, University of Bristol, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574421.

Full text
Abstract:
Advances in communication and compression technologies have facilitated the transmission of high quality video content across a broad range of networks to numerous terminal types. Challenges for video coding continue to increase due to the demands on bandwidth from increased frame rates, higher resolutions and complex formats. In most cases, the target of any video coding algorithm is, for a given bitrate, to provide the best subjective quality rather than simply produce the most similar pictures to the originals. Based on this premise, texture analysis and synthesis can be utilised to provide higher performance video codecs. This thesis describes a novel means of parametric video compression based on texture warping and synthesis. Instead of encoding whole images or prediction residuals after translational motion estimation, this approach employs a perspective motion model to warp static textures and utilises texture synthesis to create dynamic textures. Texture regions are segmented using features derived from the complex wavelet transform and further classified according to their spatial and temporal characteristics. A compatible artefact-based video metric (AVM) has been designed to evaluate the quality of the reconstructed video. Its enhanced version is further developed as a generic perception-based video metric offering improved performance in correlation with subjective opinions. It is unique in being able to assess both synthesised and conventionally coded content. The AVM is accordingly employed in the coding loop to prevent warping and synthesis artefacts, and a local RQO strategy is then developed based on it to make a trade-off between waveform coding and texture warping/synthesis. In addition, these parametric texture models have been integrated into an H.264 video coding framework whose results show significant coding efficiency improvement, up to 60% bitrate savings over H.264/AVC, on diverse video content.
APA, Harvard, Vancouver, ISO, and other styles
19

Steinruecken, Christian. "Lossless data compression." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Santurkar, Shibani (Shibani Vinay). "Towards generative compression." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112048.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Graceful degradation is a metric of system functionality which guarantees that performance declines gradually as resource constraints increase or components fail. In the context of data compression, this translates to providing users with intelligible data even in the presence of bandwidth bottlenecks and noisy channels. Traditional image and video compression algorithms rely on hand-crafted encoder/decoder pairs (codecs) that lack adaptability and are agnostic to the data being compressed; as a result, they do not degrade gracefully. Further, these traditional techniques have been customized for bitmap images and cannot easily be extended to the variety of new media formats, such as stereoscopic data, VR data and 360 video, that are becoming increasingly prevalent. New compression algorithms must address the dual constraints of increased flexibility while demonstrating improvement on traditional measures of compression quality. In this work, we propose a data-aware compression technique leveraging a class of machine learning models called generative models. These are trained to approximate the true data distribution, and hence can be used to learn an intelligent low-dimensional representation of the data. Using these models, we describe the concept of generative compression and show its potential to produce more accurate and visually pleasing reconstructions at much deeper compression levels for both image and video data. We also demonstrate that generative compression is orders of magnitude more resilient to bit error rates (e.g. from noisy wireless channels) than traditional variable-length entropy coding schemes.
APA, Harvard, Vancouver, ISO, and other styles
21

Feizi, Soheil (Feizi-Khankandi). "Network functional compression." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/60163.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
In this thesis, we consider different aspects of the functional compression problem. In functional compression, the computation of a function (or several functions) of the sources is desired at the receiver(s). The rate region of this problem has been considered in the literature under certain restrictive assumptions. In Chapter 2 of this thesis, we consider this problem for an arbitrary tree network and asymptotically lossless computations. In particular, for one-stage tree networks, we compute a rate region, and for an arbitrary tree network, we derive a rate lower bound based on the graph entropy. We introduce a new condition on colorings of source random variables' characteristic graphs called the coloring connectivity condition (C.C.C.). We show that, unlike the condition mentioned in Doshi et al., this condition is necessary and sufficient for any achievable coding scheme based on colorings. We also show that, unlike entropy, graph entropy does not satisfy the chain rule. For one-stage trees with correlated sources, and general trees with independent sources, we propose a modularized coding scheme based on graph colorings that performs arbitrarily closely to the derived rate lower bound. We show that in a general tree network with independent sources, to achieve the rate lower bound, intermediate nodes should perform some computations. However, for a family of functions and random variables called chain rule proper sets, it is sufficient to have intermediate nodes act as relays to perform arbitrarily closely to the rate lower bound. In Chapter 3 of this thesis, we consider a multi-functional version of this problem with side information, where the receiver wants to compute several functions with different side information random variables and zero distortion. Our results are applicable to the case of several receivers computing different desired functions. We define a new concept named multi-functional graph entropy, which is an extension of the graph entropy defined by Körner. We show that the minimum achievable rate for this problem is equal to the conditional multi-functional graph entropy of the source random variable given the side information. We also propose a coding scheme based on graph colorings to achieve this rate. In these proposed coding schemes, one needs to compute the minimum entropy coloring (a coloring random variable which minimizes the entropy) of a characteristic graph. In general, finding this coloring is an NP-hard problem. However, in Chapter 4, we show that, depending on the characteristic graph's structure, there are some interesting cases where finding the minimum entropy coloring is not NP-hard, but tractable and practical. In one of these cases, we show that, under a non-zero joint probability condition on the random variables' distributions, finding the minimum entropy coloring can be solved in polynomial time for any desired function. In another case, we show that if the desired function is a quantization function, this problem is also tractable. We also consider this problem in the general case. By using Huffman or Lempel-Ziv coding notions, we show that finding the minimum entropy coloring is heuristically equivalent to finding the maximum independent set of a graph. While the minimum-entropy coloring problem has only recently been studied, there are heuristic algorithms to approximately solve the maximum independent set problem. Next, in Chapter 5, we consider the effect of feedback on the rate region of the functional compression problem. If the function at the receiver is the identity function, the problem reduces to Slepian-Wolf compression with feedback, in which case feedback provides no benefit in terms of rate. However, this is not the case for a general function at the receiver: with feedback, one may outperform the rate bounds of the case without feedback. We finally consider the problem of distributed functional compression with distortion. The objective is to compress correlated discrete sources such that an arbitrary deterministic function of those sources can be computed up to a distortion level at the receiver. In this case, we compute a rate-distortion region and then propose a simple coding scheme with a non-trivial performance guarantee.
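Several of the schemes above encode a coloring of a characteristic graph rather than the source itself, and the achievable rate is governed by the entropy of that coloring. A toy illustration of computing the entropy of a (greedy, not minimum-entropy) coloring, written here for exposition and not taken from the thesis:

```python
from collections import Counter
from math import log2

def greedy_coloring(vertices, edges):
    """Assign each vertex the smallest color not used by any of its neighbours."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for v in vertices:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(vertices)) if c not in used)
    return color

def coloring_entropy(color, p):
    """Entropy (in bits) of the color of a vertex drawn with probabilities p."""
    mass = Counter()
    for v, c in color.items():
        mass[c] += p[v]
    return -sum(q * log2(q) for q in mass.values() if q > 0)

# Toy characteristic graph: an edge joins source symbols that the receiver
# must be able to distinguish in order to compute its function.
verts = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
p = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
col = greedy_coloring(verts, edges)
print(col, round(coloring_entropy(col, p), 3))
```

Finding the coloring that minimizes this entropy is the NP-hard step discussed in Chapter 4; the greedy coloring above is only a stand-in.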
APA, Harvard, Vancouver, ISO, and other styles
22

Stampleman, Joseph Bruce. "Scalable video compression." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/70216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Zheng, L. "Lossy index compression." Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1302556/.

Full text
Abstract:
This thesis primarily investigates lossy compression of an inverted index. Two approaches to lossy compression are studied in detail: (i) term frequency quantization, and (ii) document pruning. In addition, a technique for document pruning, the entropy-based method, is applied to re-rank retrieved documents as query-independent knowledge. Based on quantization theory, we examine how the number of quantization levels used to code the term frequencies affects retrieval performance. Three methods are then proposed to reduce the quantization distortion: (i) a non-uniform quantizer; (ii) an iterative technique; and (iii) term-specific quantizers. Experiments based on standard TREC test sets demonstrate that allocating only 2 or 3 bits to the quantized term frequency values causes nearly no degradation of retrieval performance. This is comparable to lossless coding techniques such as unary, γ and δ-codes. Furthermore, if lossless coding is applied to the quantized term frequency values, then around 26% (or 12%) savings can be achieved over lossless coding alone, with less than 2.5% (or no measurable) degradation in retrieval performance. Prior work on index pruning considered posting pruning and term pruning. In this thesis, an alternative pruning approach, document pruning, is investigated, in which unimportant documents are removed from the document collection. Four algorithms for scoring document importance are described, two of which depend on the score function of the retrieval system, while the other two are independent of the retrieval system. Experimental results suggest that document pruning is comparable to existing pruning approaches, such as posting pruning. Note that document pruning affects the global statistics of the indexed collection. We therefore examine whether retrieval performance is superior when based on statistics derived from the full or the pruned collection; our results indicate that keeping statistics derived from the full collection performs slightly better. Document pruning scores documents and then discards those that fall outside a threshold. An alternative is to re-rank documents based on these scores. The entropy-based score, which is independent of the retrieval system, provides query-independent knowledge of document specificity, analogous to PageRank. We investigate the utility of document specificity in the context of intranet search, where hypertext information is sparse or absent. Our results are comparable to those of a previous algorithm that induced a graph link structure based on a measure of similarity between documents. However, further analysis indicates that our method is superior in terms of computational complexity.
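The first lossy approach above replaces exact term frequencies with a handful of quantization levels. A minimal sketch of one such quantizer (a uniform-in-log mapping chosen here for illustration; the thesis studies uniform, non-uniform, iterative and term-specific designs) is:

```python
import math

def quantize_tf(tf: int, bits: int = 2, tf_max: int = 255) -> int:
    """Map a raw term frequency to one of 2**bits levels, uniform on a log scale."""
    levels = 2 ** bits
    x = math.log1p(min(tf, tf_max)) / math.log1p(tf_max)   # normalise to [0, 1]
    return min(int(x * levels), levels - 1)

def reconstruct_tf(level: int, bits: int = 2, tf_max: int = 255) -> float:
    """Reconstruction value at the centre of the quantization cell."""
    levels = 2 ** bits
    return math.expm1((level + 0.5) / levels * math.log1p(tf_max))

for tf in (1, 3, 10, 40, 200):
    q = quantize_tf(tf)
    print(f"tf={tf:3d} -> level {q} -> reconstructed {reconstruct_tf(q):6.1f}")
```

With 2 or 3 bits per posting, the quantized levels can still be losslessly coded afterwards, which is the combination reported above to give around 26% (or 12%) further savings.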
APA, Harvard, Vancouver, ISO, and other styles
24

Hallidy, William H. Jr, and Michael Doerr. "HYPERSPECTRAL IMAGE COMPRESSION." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/608744.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Systems & Processes Engineering Corporation (SPEC) compared compression and decompression algorithms and developed optimal forms of lossless and lossy compression for hyperspectral data. We examined the relationship between compression-induced distortion and additive noise, determined the effect of errors on the compressed data, and showed that targets could still be separated from clutter after more than 50:1 compression.
APA, Harvard, Vancouver, ISO, and other styles
25

Cilke, Tom. "Video Compression Techniques." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615075.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
This paper will attempt to present algorithms commonly used for video compression, and their effectiveness in aerospace applications where size, weight, and power are of prime importance. These techniques will include samples of one-, two-, and three-dimensional algorithms. Implementation of these algorithms into usable hardware is also explored but limited to monochrome video only.
APA, Harvard, Vancouver, ISO, and other styles
26

Lindsay, Robert A., and B. V. Cox. "UNIVERSAL DATA COMPRESSION." International Foundation for Telemetering, 1985. http://hdl.handle.net/10150/615552.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1985 / Riviera Hotel, Las Vegas, Nevada
Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information, but have the disadvantage of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different sizes of data files are graphically presented and discussed in the paper. The adjustments required for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
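A ratio-versus-runtime comparison of the kind described above can be sketched with standard library coders; here zlib (Lempel-Ziv based) and bz2 merely stand in for the Adaptive Huffman and Lempel-Ziv implementations evaluated in the paper, which are not reproduced.

```python
import bz2
import sys
import time
import zlib

def benchmark(name, compress, data):
    """Compress `data`, report compression ratio and wall-clock time."""
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(out)
    print(f"{name:>4}: ratio {ratio:5.2f}:1  time {elapsed * 1000:7.1f} ms")

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else __file__
    with open(path, "rb") as f:
        data = f.read()
    benchmark("zlib", lambda d: zlib.compress(d, 9), data)
    benchmark("bz2", lambda d: bz2.compress(d, 9), data)
```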
APA, Harvard, Vancouver, ISO, and other styles
27

Lima, Jose Paulo Rodrigues de. "Representação compressiva de malhas." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-17042014-151933/.

Full text
Abstract:
A compressão de dados é uma área de muito interesse em termos computacionais devido à necessidade de armazená-los e transmiti-los. Em particular, a compressão de malhas possui grande interesse em função do crescimento de sua utilização em jogos tridimensionais e modelagens diversas. Nos últimos anos, uma nova teoria de aquisição e reconstrução de sinais foi desenvolvida, baseada no conceito de esparsidade na minimização da norma L1 e na incoerência do sinal, chamada Compressive Sensing (CS). Essa teoria possui algumas características marcantes, como a aleatoriedade de amostragem e a reconstrução via minimização, de modo que a própria aquisição do sinal é feita considerando somente os coeficientes significativos. Qualquer objeto que possa ser interpretado como um sinal esparso permite sua utilização. Assim, ao se representar esparsamente um objeto (sons, imagens) é possível aplicar a técnica de CS. Este trabalho verifica a viabilidade da aplicação da teoria de CS na compressão de malhas, de modo que seja possível um sensoreamento e representação compressivos na geometria de uma malha. Nos experimentos realizados, foram utilizadas variações dos parâmetros de entrada e técnicas de minimização da Norma L1. Os resultados obtidos mostram que a técnica de CS pode ser utilizada como estratégia de compressão da geometria das malhas.
Data compression is an area of major interest in computational terms due to the issues of storage and transmission. In particular, mesh compression has wide usage due to the increase of its application in games and three-dimensional modeling. In recent years, a new theory of signal acquisition and reconstruction was developed, based on the concept of sparsity, the minimization of the L1 norm and the incoherence of the signal, called Compressive Sensing (CS). This theory has some remarkable features, such as random sampling and reconstruction by minimization, in such a way that the signal acquisition itself considers only the significant coefficients. Any object that can be interpreted as a sparse signal is amenable to it; thus, by representing an object (sounds, images) sparsely, the CS technique can be applied. This work verifies the viability of applying CS theory to mesh compression, so that compressive sensing and representation of the mesh geometry become possible. In the experiments performed, variations of the input parameters and different L1-norm minimization strategies were used. The results show that CS can be used as a strategy for compressing mesh geometry.
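To make the CS recovery step concrete, here is a minimal sketch under toy assumptions: a synthetic sparse vector is measured with a random Gaussian matrix and recovered greedily. Orthogonal matching pursuit stands in for the L1-norm minimization used in the thesis, and the sizes and sparsity level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse "signal" (e.g., mesh geometry in a sparsifying basis).
n, k, m = 128, 5, 40                     # length, sparsity, measurements
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# Random Gaussian measurement matrix and compressive measurements.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy stand-in for L1 minimization."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coeffs
    return x_hat

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```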
APA, Harvard, Vancouver, ISO, and other styles
28

Fgee, El-Bahlul. "A comparison of voice compression using wavelets with other compression schemes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ39651.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Nicholl, Peter Nigel. "Feature directed spiral image compression : (a new technique for lossless image compression)." Thesis, University of Ulster, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Lee, Joshua Ka-Wing. "A model-adaptive universal data compression architecture with applications to image compression." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111868.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-61).
In this thesis, I designed and implemented a model-adaptive data compression system for the compression of image data. The system is a realization and extension of the Model-Quantizer-Code-Separation Architecture for universal data compression, which uses Low-Density Parity-Check codes for encoding and probabilistic graphical models and message-passing algorithms for decoding. We implement a lossless bi-level image data compressor as well as a lossy greyscale image compressor and explain how these compressors can rapidly adapt to changes in source models. Using these implementations, we then show that Restricted Boltzmann Machines are an effective source model for compressing image data, by comparing the compression performance obtained with this source model against other compression methods on various image datasets.
by Joshua Ka-Wing Lee.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
31

Hooshmand, Mohsen. "Sensing and Compression Techniques for Environmental and Human Sensing Applications." Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3425724.

Full text
Abstract:
In this doctoral thesis, we devise and evaluate a variety of lossy compression schemes for Internet of Things (IoT) devices such as those utilized in environmental wireless sensor networks (WSNs) and Body Sensor Networks (BSNs). We are especially concerned with the efficient acquisition of the data sensed by these systems and to this end we advocate the use of joint (lossy) compression and transmission techniques. Environmental WSNs are considered first. For these, we present an original compressive sensing (CS) approach for the spatio-temporal compression of data. In detail, we consider temporal compression schemes based on linear approximations as well as Fourier transforms, whereas spatial and/or temporal dynamics are exploited through compression algorithms based on distributed source coding (DSC) and several algorithms based on compressive sensing (CS). To the best of our knowledge, this is the first work presenting a systematic performance evaluation of these (different) lossy compression approaches. The selected algorithms are framed within the same system model, and a comparative performance assessment is carried out, evaluating their energy consumption vs the attainable compression ratio. Hence, as a further main contribution of this thesis, we design and validate a novel CS-based compression scheme, termed covariogram-based compressive sensing (CB-CS), which combines a new sampling mechanism with an original covariogram-based approach for the online estimation of the covariance structure of the signal. As a second main research topic, we focus on modern wearable IoT devices which enable the monitoring of vital parameters such as heart or respiratory rates (RESP), electrocardiography (ECG), and photo-plethysmographic (PPG) signals within e-health applications. These devices are battery operated and communicate the vital signs they gather through a wireless communication interface. A common issue of this technology is that signal transmission is often power-demanding and this poses serious limitations to the continuous monitoring of biometric signals. To ameliorate this, we advocate the use of lossy signal compression at the source: this considerably reduces the size of the data that has to be sent to the acquisition point, in turn boosting the battery life of the wearables and allowing for fine-grained and long-term monitoring. Considering one-dimensional biosignals such as ECG, RESP and PPG, which are often available from commercial wearable devices, we first provide a thorough review of existing compression algorithms. Hence, we present novel approaches based on online dictionaries, elucidating their operating principles and providing a quantitative assessment of the compression, reconstruction and energy consumption performance of all schemes. As part of this first investigation, dictionaries are built using a suboptimal but lightweight, online and best-effort algorithm. Surprisingly, the obtained compression scheme is found to be very effective both in terms of compression efficiency and reconstruction accuracy at the receiver. This approach is, however, not yet amenable to practical implementation as its memory usage is rather high. Also, our systematic performance assessment reveals that the most efficient compression algorithms allow reductions in the signal size of up to 100 times, which entail similar reductions in the energy demand, while still keeping the reconstruction error within 4 % of the peak-to-peak signal amplitude.
Based on what we have learned from this first comparison, we finally propose a new subject-specific compression technique called SURF (Subject-adaptive Unsupervised ECG compressor for weaRable Fitness monitors). In SURF, dictionaries are learned and maintained using suitable neural network structures. Specifically, learning is achieved through the use of neural maps such as self-organizing maps and growing neural gas networks, in a totally unsupervised manner, adapting the dictionaries to the signal statistics of the wearer. As our results show, SURF: i) reaches high compression efficiencies (reduction in the signal size of up to 96 times), ii) allows for reconstruction errors well below 4 % (peak-to-peak RMSE; errors of 2 % are generally achievable), iii) gracefully adapts to changing signal statistics due to switching to a new subject or changing their activity, iv) has low memory requirements (lower than 50 kbytes) and v) allows for further reduction in the total energy consumption (processing plus transmission). These facts make SURF a very promising algorithm, delivering the best performance among all the solutions proposed so far.
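To make the evaluation methodology concrete, here is a minimal sketch of how compression ratio and peak-to-peak reconstruction error are typically measured. The piecewise-linear compressor and the synthetic waveform are stand-ins chosen for brevity, not the dictionary-based or SURF schemes of the thesis.

```python
import numpy as np

def piecewise_linear_compress(signal, step):
    """Keep every `step`-th sample; reconstruct by linear interpolation.

    A deliberately simple temporal compressor used only to show how
    compression ratio and peak-to-peak error are measured.
    """
    idx = np.arange(0, len(signal), step)
    kept = signal[idx]
    recon = np.interp(np.arange(len(signal)), idx, kept)
    return kept, recon

# Toy quasi-periodic "biosignal" (illustrative only).
t = np.linspace(0, 10, 2000)
sig = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 7.0 * t)

kept, recon = piecewise_linear_compress(sig, step=8)
ratio = sig.size / kept.size
# Error as a percentage of the peak-to-peak amplitude.
p2p_rmse = 100 * np.sqrt(np.mean((sig - recon) ** 2)) / np.ptp(sig)
print(f"compression ratio {ratio:.1f}:1, peak-to-peak RMSE {p2p_rmse:.2f} %")
```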
APA, Harvard, Vancouver, ISO, and other styles
32

Schenkel, Birgit. "Supercontinuum generation and compression /." [S.l.] : [s.n.], 2004. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=15570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

García, Sobrino Francisco Joaquín. "Sounder spectral data compression." Doctoral thesis, Universitat Autònoma de Barcelona, 2018. http://hdl.handle.net/10803/663984.

Full text
Abstract:
IASI (Infrared Atmospheric Sounding Interferometer) es un espectrómetro basado en la transformada de Fourier diseñado para medir radiación infrarroja emitida por La Tierra. A partir de estas mediciones se generan datos con una precisión y resolución espectral sin precedentes. Esta información es útil para obtener perfiles de temperatura y humedad, así como concentraciones de gases traza, que son esenciales para la comprensión y monitorización del clima y para realizar previsiones atmosféricas. La alta resolución espectral, espacial y temporal de los datos producidos por el instrumento implica generar productos con un tamaño considerablemente grande, lo que demanda el uso de técnicas de compresión eficientes para mejorar tanto las capacidades de transmisión como las de almacenamiento. En esta tesis se realiza un exhaustivo análisis de la compresión de datos IASI y se proporcionan recomendaciones para generar datos reconstruidos útiles para el usuario final. En este análisis se utilizan datos IASI transmitidos a las estaciones de recepción (productos IASI L0) y datos destinados a usuarios finales que son distribuidos a centros de predicción numérica y a la comunidad científica en general (productos IASI L1C). Para comprender mejor la naturaleza de los datos capturados por el instrumento se analizan las estadísiticas de la información y el rendimiento de varias técnicas de compresión en datos IASI L0. Se estudia la entropía de orden-0 y las entropías contextuales de orden-1, orden-2 y orden-3. Este estudio revela que el tamaño de los datos se podría reducir considerablemente explotando la entropía de orden-0. Ganancias más significativas se podrían conseguir si se utilizaran modelos contextuales. También se investiga el rendimiento de varias técnicas de compresión sin pérdida. Los resultados experimentales sugieren que se puede alcanzar un ratio de compresión de 2,6:1, lo que implica que sería posible transmitir más datos manteniendo la tasa de transmisión original o, como alternativa, la tasa de transmisión del instrumento se podría reducir. También se realiza un exhaustivo análisis de la compresión de datos IASI L1C donde se evalúa el rendimiento de varias transformadas espectrales y técnicas de compresión. La experimentación abarca compresión sin pérdida, compresión casi sin pérdida y compresión con pérdida sobre una amplia gama de productos IASI-A e IASI-B. Para compresión sin pérdida es posible conseguir ratios de compresión superiores a 2,5:1. Para compresión casi sin pérdida y compresión con pérdida se pueden alcanzar ratios de compresión mayores a la vez que se producen espectros reconstruidos de calidad. Aunque la compresión casi sin pérdida y la compresión con pérdida producen ratios de compresión altos, la utilidad de los espectros reconstruidos se puede ver comprometida ya que cierta información es eliminada durante la etapa de compresión. En consecuencia, se estudia el impacto de la compresión casi sin pérdida y la compresión con pérdida en aplicaciones de usuario final. Concretamente, se evalúa el impacto de la compresión en datos IASI L1C cuando algoritmos estadísticos se utilizan posteriormente para predecir información física a partir de los espectros reconstruidos. Los resultados experimentales muestran que el uso de espectros reconstruidos puede conseguir resultados de predicción competitivos, mejorando incluso los resultados que se obtienen cuando se utilizan datos sin comprimir. 
A partir del análisis previo se estudia el origen de los beneficios que produce la compresión obteniendo dos observaciones principales. Por un lado, la compresión produce eliminación de ruido y filtrado de la señal lo que beneficia a los métodos de predicción. Por otro lado, la compresión es una forma indirecta de producir regularización espectral y espacial entre píxeles vecinos lo que beneficia a algoritmos que trabajan a nivel de píxel.
The Infrared Atmospheric Sounding Interferometer (IASI) is a Fourier Transform Spectrometer implemented on the MetOp satellite series. The instrument is intended to measure infrared radiation emitted from the Earth. IASI produces data with unprecedented accuracy and spectral resolution. Notably, the sounder harvests spectral information to derive temperature and moisture profiles, as well as concentrations of trace gases, essential for the understanding of weather, for climate monitoring, and for atmospheric forecasts. The large spectral, spatial, and temporal resolution of the data collected by the instrument involves generating products with a considerably large size, about 16 Gigabytes per day by each of the IASI-A and IASI-B instruments currently operated. The amount of data produced by IASI demands efficient compression techniques to improve both the transmission and the storage capabilities. This thesis supplies a comprehensive analysis of IASI data compression and provides effective recommendations to produce useful reconstructed spectra. The study analyzes data at different processing stages. Specifically, we use data transmitted by the instrument to the reception stations (IASI L0 products) and end-user data disseminated to the Numerical Weather Prediction (NWP) centres and the scientific community (IASI L1C products). In order to better understand the nature of the data collected by the instrument, we analyze the information statistics and the compression performance of several coding strategies and techniques on IASI L0 data. The order-0 entropy and the order-1, order-2, and order-3 context-based entropies are analyzed in several IASI L0 products. This study reveals that the size of the data could be considerably reduced by exploiting the order-0 entropy. More significant gains could be achieved if contextual models were used. We also investigate the performance of several state-of-the-art lossless compression techniques. Experimental results suggest that a compression ratio of 2.6:1 can be achieved, which means that more data could be transmitted at the original transmission rate or, alternatively, that the transmission rate of the instrument could be further decreased. A comprehensive study of IASI L1C data compression is performed as well. Several state-of-the-art spectral transforms and compression techniques are evaluated on IASI L1C spectra. Extensive experiments, which embrace lossless, near-lossless, and lossy compression, are carried out over a wide range of IASI-A and IASI-B orbits. For lossless compression, compression ratios over 2.5:1 can be achieved. For near-lossless and lossy compression, higher compression ratios can be achieved, while producing useful reconstructed spectra. Even though near-lossless and lossy compression produce higher compression ratios compared to lossless compression, the usefulness of the reconstructed spectra may be compromised because some information is removed during the compression stage. Therefore, we investigate the impact of near-lossless and lossy compression on end-user applications. Specifically, the impact of compression on IASI L1C data is evaluated when statistical retrieval algorithms are later used to retrieve physical information. Experimental results reveal that the reconstructed spectra can enable competitive retrieval performance, improving on the results achieved for the uncompressed data, even at high compression ratios.
We extend the previous study to a real scenario, where spectra from different disjoint orbits are used in the retrieval stage. Experimental results suggest that the benefits produced by compression are still significant. We also investigate the origin of these benefits. On the one hand, results illustrate that compression performs signal filtering and denoising, which benefits the retrieval methods. On the other hand, compression is an indirect way to produce spectral and spatial regularization, which helps pixel-wise statistical algorithms.
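The order-0 and context-based entropy figures discussed above can be estimated directly from sample counts; the following sketch does this for a toy symbol stream (not IASI data), showing how much a context model can lower the per-sample bit cost.

```python
import math
from collections import Counter

def order0_entropy(samples):
    """Empirical order-0 entropy in bits per sample."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def order1_context_entropy(samples):
    """Empirical conditional entropy H(X_i | X_{i-1}) in bits per sample."""
    pairs = Counter(zip(samples, samples[1:]))
    ctx = Counter(samples[:-1])
    n = len(samples) - 1
    return -sum(c / n * math.log2(c / ctx[a]) for (a, _), c in pairs.items())

# Toy stream standing in for instrument samples.
stream = [1, 1, 2, 1, 1, 3, 1, 1, 2, 2, 1, 1, 1, 3, 1, 2] * 50
print(f"order-0 entropy : {order0_entropy(stream):.3f} bits/sample")
print(f"order-1 entropy : {order1_context_entropy(stream):.3f} bits/sample")
```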
APA, Harvard, Vancouver, ISO, and other styles
34

Ibarria, Lorenzo. "Geometric Prediction for Compression." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16162.

Full text
Abstract:
This thesis proposes several new predictors for the compression of shapes, volumes and animations. To compress frames in triangle-mesh animations with fixed connectivity, we introduce the ELP (Extended Lorenzo Predictor) and the Replica predictors, which extrapolate the position of each vertex in frame i from the position of each vertex in frame i-1 and from the positions of its neighbors in both frames. For lossy compression we have combined these predictors with a segmentation of the animation into clips and a synchronized simplification of all frames in a clip. To compress 2D and 3D static or animated scalar fields sampled on a regular grid, we introduce the Lorenzo predictor, well suited for scanline traversal, and the family of Spectral predictors that accommodate any traversal and predict a sample value from known samples in a small neighborhood. Finally, to support the compressed streaming of isosurface animations, we have developed an approach that identifies all node-values needed to compute a given isosurface and encodes the unknown values using our Spectral predictor.
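For reference, the 2D instance of the Lorenzo predictor is the familiar parallelogram rule, which predicts each sample from its three causal neighbours. The sketch below computes its residuals on a toy grid (zero padding at the borders is an assumption of the example); these residuals are what a coder would then entropy-code.

```python
import numpy as np

def lorenzo_residuals_2d(field):
    """Residuals of the 2D Lorenzo (parallelogram) predictor.

    Each sample is predicted from its three causal neighbours:
        pred(i, j) = f(i-1, j) + f(i, j-1) - f(i-1, j-1)
    Borders use zeros outside the grid (illustrative choice).
    """
    padded = np.zeros((field.shape[0] + 1, field.shape[1] + 1))
    padded[1:, 1:] = field
    pred = padded[:-1, 1:] + padded[1:, :-1] - padded[:-1, :-1]
    return field - pred

# Toy smooth scalar field: the predictor should leave small residuals.
y, x = np.mgrid[0:64, 0:64]
field = np.sin(x / 10.0) + np.cos(y / 14.0)

res = lorenzo_residuals_2d(field)
print("mean |value|   :", np.abs(field).mean())
print("mean |residual|:", np.abs(res).mean())
```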
APA, Harvard, Vancouver, ISO, and other styles
35

Chin, Roger S. "Femtosecond laser pulse compression." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/29799.

Full text
Abstract:
Once the Spectra-Physics Femtosecond Laser System had arrived, it had to be characterized. For further pulse compression, various techniques had to be considered; the best of these were chosen considering our needs and limitations. First, the Spectra-Physics Femtosecond Laser System is described and its 616 nm laser pulses are characterized. By using an autocorrelation technique based on the nonlinear optical characteristics of a potassium dihydrogen phosphate (KDP) crystal and assuming a particular intensity pulse shape (such as that described by a symmetric exponential decay), the pulse width (full width at half maximum) could be obtained. Assuming a pulse shape described by a symmetric exponential decay function, the "exponential" pulse width was measured to be 338 ± 6 fs. The nominal average power of the 82-MHz modelocked pulse train was 225 mW. The "exponential" pulse energy was 2.7 nJ with a peak pulse power of 2.8 kW. Theoretical calculations for fibre-grating pulse compression are presented. Experimentally, I was able to produce 68 ± 1 fs (exponential) pulses at 616 nm. The average power was 55 mW. The "exponential" pulse energy was 0.67 nJ with a peak power of 3.4 kW. The pulse compressor consisted of a 30.8 ± 0.5 cm fibre and a grating compressor with an effective grating pair distance of 103.8 ± 1 cm. Various techniques were considered for further pulse compression. Fibre-grating pulse compression and hybrid mode locking appeared to be the most convenient and least expensive options while yielding moderate results. The theory of hybrid mode locking is presented. Experimentally, it was determined that with the current laser system tuned to 616 nm, DODCI is better than DQOCI based on pulse shape, power, stability and expense. The recommended DODCI concentration is 2-3 mmol/l. The shortest "exponential" pulse width was 250 fs. The average power was 185 mW. The exponential pulse energy was 2.3 nJ with a peak pulse power of 2.6 kW. An attempt to increase the bandwidth of the laser pulse by replacing the one-plate birefringent plate with a pellicle severely limited the tunability of the dye laser and introduced copious noise. Attempts to reduce group velocity dispersion (responsible for pulse broadening) with a grating compressor were inconclusive, but did result in a slightly better pulse shape. Interferometric autocorrelation is recommended for such a study. An increase or decrease from the nominal power output of the pulse compressor showed a decrease in pulse compression.
Science, Faculty of
Physics and Astronomy, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
36

Mandal, Mrinal Kumar. "Wavelets for image compression." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10277.

Full text
Abstract:
Wavelets are becoming increasingly important in image compression applications because of their flexibility in representing nonstationary signals. To achieve a high compression ratio, the wavelet has to be adapted to the image. Current techniques use computationally intensive exhaustive search procedures to find the optimal basis (type/order/tree) for the image to be coded. In this thesis, we have carried out extensive performance analysis of various wavelets on a wide variety of images. Based on the investigation, we propose some guidelines for searching for the optimal wavelet (type/order) based on the overall activity (measured by the spectral flatness) of the image to be coded. These guidelines also indicate the degree of improvement that can be achieved by using the "optimal" over "standard" wavelets. The proposed guidelines can be used to find a good initial guess for faster convergence when searching for the optimal wavelet is essential. We propose a wave packet decomposition algorithm based on the local transform gain of the wavelet decomposed bands. The proposed algorithm provides good coding performance at significantly reduced complexity. Most practical coders are designed to minimize the mean square error (MSE) between the original and reconstructed image. It is known that at high compression ratios, MSE does not correspond well to the subjective quality of the image. In this thesis, we propose an image-adaptive coding algorithm which tries to minimize the MSE weighted by the visual importance of various wavelet bands. It has been observed that the proposed algorithm provides better coding performance for a wide variety of images.
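A common way to quantify the overall activity mentioned above is the spectral flatness measure, the ratio of the geometric to the arithmetic mean of the power spectrum; the sketch below uses that generic definition, which may differ in detail from the measure used in the thesis, and toy images in place of real test data.

```python
import numpy as np

def spectral_flatness(image):
    """Spectral flatness measure: geometric / arithmetic mean of the
    power spectrum. Values near 1 indicate noise-like (high-activity)
    images; values near 0 indicate strongly correlated, smooth images.
    """
    power = np.abs(np.fft.fft2(image)) ** 2
    power = power.ravel()
    power = power[power > 0]          # avoid log(0)
    geometric = np.exp(np.mean(np.log(power)))
    arithmetic = np.mean(power)
    return geometric / arithmetic

rng = np.random.default_rng(1)
smooth = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 2, 64)))
noisy = rng.normal(size=(64, 64))
print("smooth image SFM:", spectral_flatness(smooth))
print("noisy  image SFM:", spectral_flatness(noisy))
```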
APA, Harvard, Vancouver, ISO, and other styles
37

Rambaruth, Ratna. "Region-based video compression." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843377/.

Full text
Abstract:
First-generation image coding standards are now well established and coders based on these standards are commercially available. However, for emerging applications, good quality at even lower bitrates is required. Ways of exploiting higher-level visual information are currently being explored by the research community in order to achieve high compression. Unfortunately, very high-level approaches are bound to be restrictive as they are highly dependent on the accuracy of lower-level vision operations. Region-based coding only relies on mid-level image processing and thus is viewed as a promising strategy. In this work, substantial advances to the field of region-based video compression are made by considering the complete scheme. Thus, improvements to the failure-region coding and motion compensation components have been devised. The failure-region coding component was improved by predicting the texture inside the failure region from the neighbourhood of the region. A significant gain over widely used techniques such as the SA-DCT was obtained. The accuracy of the motion compensation component was increased by keeping an accurate internal representation for each region both at the encoder and the decoder side. The proposed region-based coding system is also evaluated against other systems, including the MPEG-4 codec, which has recently been approved by the MPEG community.
APA, Harvard, Vancouver, ISO, and other styles
38

Tokdemir, Serpil. "Digital compression on GPU." unrestricted, 2006. http://etd.gsu.edu/theses/available/etd-12012006-154433/.

Full text
Abstract:
Thesis (M.S.)--Georgia State University, 2006.
Title from dissertation title page. Saeid Belkasim, committee chair; Ying Zhu, A.P. Preethy, committee members. Electronic text (90 p. : ill. (some col.)). Description based on contents viewed May 2, 2007. Includes bibliographical references (p. 78-81).
APA, Harvard, Vancouver, ISO, and other styles
39

Jiang, Qin. "Stereo image sequence compression." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/15634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Fawcett, Roger James. "Efficient practical image compression." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Rajpoot, Nasir Mahmood. "Adaptive wavelet image compression." Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/67099/.

Full text
Abstract:
In recent years, there has been an explosive increase in the amount of digital image data. The requirements for its storage and communication can be reduced considerably by compressing the data while maintaining their visual quality. The work in this thesis is concerned with the compression of still images using fixed and adaptive wavelet transforms. The wavelet transform is a suitable candidate for representing an image in a compression system, due to its being an efficient representation, having an inherent multiresolution nature, and possessing a self-similar structure which lends itself to efficient quantization strategies using zerotrees. The properties of wavelet transforms are studied from a compression viewpoint. A novel augmented zerotree wavelet image coding algorithm is presented whose compression performance is comparable to the best wavelet coding results published to date. It is demonstrated that a wavelet image coder performs much better on images consisting of smooth regions than on relatively complex images. The need thus arises to explore wavelet bases whose time-frequency tiling is adapted to a given signal, in such a way that the resulting waveforms closely resemble those present in the signal and consequently result in a sparse representation, suitable for compression purposes. Various issues related to a generalized wavelet basis adapted to the signal or image contents, the so-called best wavelet packet basis, and its selection are addressed. A new method for wavelet packet basis selection is presented, which aims to unite the basis selection process with the quantization strategy to achieve better compression performance. A general zerotree structure for any arbitrary wavelet packet basis, termed the compatible zerotree structure, is presented. The new basis selection method is applied to compatible zerotree quantization to obtain a progressive wavelet packet coder, which shows significant coding gains over its wavelet counterpart on test images of diverse nature.
APA, Harvard, Vancouver, ISO, and other styles
42

BREGA, LEONARDO SANTOS. "COMPRESSION USING PERMUTATION CODES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=4379@1.

Full text
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
Em um sistema de comunicações, procura-se representar a informação gerada de forma eficiente, de modo que a redundância da informação seja reduzida ou idealmente eliminada, com o propósito de armazenamento e/ou transmissão da mesma. Este interesse justifica portanto, o estudo e desenvolvimento de técnicas de compressão que vem sendo realizado ao longo dos anos. Este trabalho de pesquisa investiga o uso de códigos de permutação para codificação de fontes segundo um critério de fidelidade, mais especificamente de fontes sem memória, caracterizadas por uma distribuição uniforme e critério de distorção de erro médio quadrático. Examina-se os códigos de permutação sob a ótica de fontes compostas e a partir desta perspectiva, apresenta-se um esquema de compressão com duplo estágio. Realiza-se então uma análise desse esquema de codificação. Faz-se também uma extensão L- dimensional (L > 1) do esquema de permutação apresentado na literatura. Os resultados obtidos comprovam um melhor desempenho da versão em duas dimensões, quando comparada ao caso unidimensional, sendo esta a principal contribuição do presente trabalho. A partir desses resultados, busca-se a aplicação de um esquema que utiliza códigos de permutação para a compressão de imagens.
In communication systems, information must be represented in an efficient form, so that its redundancy is reduced or, ideally, eliminated, for the purposes of storage and/or transmission. This interest justifies the study and development of compression techniques that have been carried out over the years. This research investigates the use of permutation codes for source encoding with a fidelity criterion, more specifically for memoryless uniform sources under the mean-square-error fidelity criterion. We examine permutation codes from the viewpoint of composite sources and, from this perspective, present a two-stage compression scheme, which is then analyzed. An L-dimensional extension (L > 1) of the permutation scheme from previous research is also introduced. The results show a better performance of the two-dimensional version compared with the one-dimensional case, which is the main contribution of the present study. Building on these results, we investigate an application of permutation codes to image compression.
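As a hedged illustration of the permutation-code idea (not the optimized construction analyzed in the thesis), the sketch below implements a simple permutation source code: the codebook is the set of all orderings of a fixed vector of reproduction values, so the encoder only transmits the ranking of the source block, roughly log2(n!) bits per block of n samples. The reproduction values are an arbitrary choice for the example.

```python
import math
import numpy as np

def permutation_encode(block):
    """Transmit only the ranking (argsort) of the source block."""
    return np.argsort(block)

def permutation_decode(order, reproduction_values):
    """Place the k-th smallest reproduction value where the k-th
    smallest source sample was."""
    recon = np.empty(len(order))
    recon[order] = np.sort(reproduction_values)
    return recon

rng = np.random.default_rng(0)
n = 8
block = rng.uniform(-1, 1, n)                 # memoryless uniform source
values = np.linspace(-0.875, 0.875, n)        # illustrative codebook vector

recon = permutation_decode(permutation_encode(block), values)
rate = math.log2(math.factorial(n)) / n       # bits per sample for the ranking
mse = float(np.mean((block - recon) ** 2))
print(f"rate ~ {rate:.2f} bits/sample, MSE = {mse:.4f}")
```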
APA, Harvard, Vancouver, ISO, and other styles
43

MELLO, CLAUDIO GOMES DE. "CRYPTO-COMPRESSION PREFIX CODING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2006. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9932@1.

Full text
Abstract:
Cifragem e compressão de dados são funcionalidades essencias quando dados digitais são armazenados ou transmitidos através de canais inseguros. Geralmente, duas operações sequencias são aplicadas: primeiro, compressão de dados para economizar espaço de armazenamento e reduzir custos de transmissão, segundo, cifragem de dados para prover confidencialidade. Essa solução funciona bem para a maioria das aplicações, mas é necessário executar duas operações caras, e para acessar os dados, é necessário primeiro decifrar e depois descomprimir todo o texto cifrado para recuperar a informação. Neste trabalho são propostos algoritmos que realizam tanto compressão como cifragem de dados. A primeira contribuição desta tese é o algoritmo ADDNULLS - Inserção Seletiva de Nulos. Este algoritmo usa a técnica da esteganografia para esconder os símbolos codificados em símbolos falsos. É baseado na inserção seletiva de um número variável de símbolos nulos após os símbolos codificados. É mostrado que as perdas nas taxas de compressão são relativamente pequenas. A segunda contribuição desta tese é o algoritmo HHC - Huffman Homofônico-Canônico. Este algoritmo cria uma nova árvore homofônica baseada na árvore de Huffman canônica original para o texto de entrada. Os resultados dos experimentos são mostrados. A terceira contribuição desta tese é o algoritmo RHUFF - Huffman Randomizado. Este algoritmo é uma variante do algoritmo de Huffman que define um procedimento de cripto-compressão que aleatoriza a saída. O objetivo é gerar textos cifrados aleatórios como saída para obscurecer as redundâncias do texto original (confusão). O algoritmo possui uma função de permutação inicial, que dissipa a redundância do texto original pelo texto cifrado (difusão). A quarta contribuição desta tese é o algoritmo HSPC2 - Códigos de Prefixo baseados em Substituição Homofônica com 2 homofônicos. No processo de codificação, o algoritmo adiciona um bit de sufixo em alguns códigos. Uma chave secreta e uma taxa de homofônicos são parâmetros que controlam essa inserção. É mostrado que a quebra do HSPC2 é um problema NP- Completo.
Data compression and encryption are essential features when digital data is stored or transmitted over insecure channels. Usually, two sequential operations are applied: first, data compression to save disk space and reduce transmission costs, and second, data encryption to provide confidentiality. This solution works fine for most applications, but two expensive operations have to be executed, and to access the data one must first decipher and then decompress the whole ciphertext to restore the information. In this work we propose algorithms that produce data that is both compressed and encrypted. The first contribution of this thesis is the algorithm ADDNULLS - Selective Addition of Nulls. This algorithm uses a steganographic technique to hide the real symbols of the encoded text among fake ones. It is based on the selective insertion of a variable number of null symbols after the real ones. It is shown that the loss in coding and decoding rates is small; the disadvantage is ciphertext expansion. The second contribution of this thesis is the algorithm HHC - Homophonic-Canonical Huffman. This algorithm creates a new homophonic tree based upon the original canonical Huffman tree for the input text. Experimental results are presented, showing that adding security does not significantly decrease performance. The third contribution of this thesis is the algorithm RHUFF - Randomized Huffman. This algorithm is a variant of Huffman codes that defines a crypto-compression scheme with randomized output. The goal is to generate random ciphertexts as output to obscure the redundancies in the plaintext (confusion). The algorithm uses homophonic substitution, canonical Huffman codes and a secret key for ciphering. The secret key is based on an initial permutation function, which dissipates the redundancy of the plaintext over the ciphertext (diffusion). The fourth contribution of this thesis is the algorithm HSPC2 - Homophonic Substitution Prefix Codes with 2 homophones. A provably secure algorithm is proposed by using a homophonic substitution algorithm and a key. In the encoding process, the HSPC2 function appends a one-bit suffix to some codes. A secret key and a homophonic rate parameter control this insertion. It is shown that breaking HSPC2 is an NP-Complete problem.
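The sketch below only illustrates the frequency-flattening idea behind homophonic substitution, which the HHC, RHUFF and HSPC2 algorithms build on: frequent symbols get several codewords, chosen with a keyed pseudo-random generator, so codeword counts in the output are much flatter than symbol counts in the input. The homophone table, the keyed choice, and the fixed 3-bit codewords are illustrative assumptions; this is not the thesis's construction and makes no claim of compression or provable security.

```python
import random
from collections import Counter

# Toy homophone table: frequent symbols get more codewords so that every
# codeword appears with roughly equal probability in the output.
HOMOPHONES = {
    "a": ["000", "011", "101"],   # most frequent symbol, three homophones
    "b": ["001", "110"],
    "c": ["010"],
    "d": ["111"],
}
DECODE = {cw: sym for sym, cws in HOMOPHONES.items() for cw in cws}

def encode(text, key):
    """Substitute each symbol by one of its homophones, chosen with a
    keyed pseudo-random generator (the 'confusion' step)."""
    rng = random.Random(key)
    return "".join(rng.choice(HOMOPHONES[ch]) for ch in text)

def decode(bits):
    """Decoding needs no key here: every homophone maps back to one symbol."""
    return "".join(DECODE[bits[i:i + 3]] for i in range(0, len(bits), 3))

plaintext = "aaabacaabdacab"
cipher = encode(plaintext, key=1234)
assert decode(cipher) == plaintext
print(Counter(plaintext))                                            # skewed
print(Counter(cipher[i:i + 3] for i in range(0, len(cipher), 3)))    # flatter
```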
APA, Harvard, Vancouver, ISO, and other styles
44

Whitehouse, Steven John. "Error resilient image compression." Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621935.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Stephens, Charles R. "Video Compression Standardization Issues." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615077.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
This paper discusses the development of a standard for compressed digital video. The benefits and applications of compressed digital video are reviewed, and some examples of compression techniques are presented. A hardware implementation of a differential pulse code modulation approach is examined.
APA, Harvard, Vancouver, ISO, and other styles
46

Sato, Diogo Mululo. "EEG Analysis by Compression." Master's thesis, Faculdade de Medicina da Universidade do Porto, 2011. http://hdl.handle.net/10216/63767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Penrose, Andrew John. "Extending lossless image compression." Thesis, University of Cambridge, 1999. https://www.repository.cam.ac.uk/handle/1810/272288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Du, Toit Benjamin David. "Data Compression and Quantization." Diss., University of Pretoria, 2014. http://hdl.handle.net/2263/79233.

Full text
Abstract:
Data Compression: Due to limitations in data storage and bandwidth, data of all types has often required compression. This need has spawned many different methods of compressing data. In certain situations the fidelity of the data can be compromised and unnecessary information can be discarded, while in other situations the fidelity of the data is necessary for the data to be useful, thereby requiring methods of reducing the data storage requirements without discarding any information. The theory of data compression has received much attention over the past half century, with some of the most important work done by Claude E. Shannon in the 1940s and 1950s, and at present topics such as Information and Coding Theory, which encompass a wide variety of sciences, continue to make headway into the interesting and highly applicable topic of data compression. Quantization: Quantization is a broad notion used in several fields, especially in the sciences, including signal processing, quantum physics, computer science, geometry, music and others. The concept of quantization is related to the idea of grouping, dividing or approximating some physical quantity by a set of small discrete measurements. Data quantization involves the discretization of data, or the approximation of large data sets by smaller data sets. This mini-dissertation considers how data of a statistical nature can be quantized and compressed.
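As a small illustration of the quantization half of the topic, a uniform scalar quantizer and its mean-square distortion can be sketched as follows; the bit depths, the clipping range and the Gaussian test data are arbitrary choices for the example.

```python
import numpy as np

def uniform_quantize(x, n_bits, lo, hi):
    """Uniform scalar quantizer: map x in [lo, hi] to 2**n_bits levels
    and return the reconstruction (level midpoints)."""
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

rng = np.random.default_rng(0)
data = rng.normal(0, 1, 100_000)

for bits in (2, 4, 8):
    recon = uniform_quantize(data, bits, -4, 4)
    mse = np.mean((data - recon) ** 2)
    print(f"{bits} bits -> distortion (MSE) {mse:.5f}")
```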
Dissertation (MSc)--University of Pretoria, 2014.
Statistics
MSc
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
49

Prieto, Guerrero Alfonso. "Compression de signaux biomédicaux." Toulouse, INPT, 1999. http://www.theses.fr/1999INPT032H.

Full text
Abstract:
The objective of this thesis is to propose compression methods for biomedical signals [. . . ] The signals studied fall into three categories: ElectroMyoGrams (EMG), ElectroCardioGrams (ECG) and ElectroEncephaloGrams (EEG). These signals were chosen because of their wide use in patient diagnosis and monitoring, both in hospital and ambulatory settings. The study is divided into four chapters. Chapter 1 presents the signals under study and describes their main characteristics. In Chapter 2, a comparative study of various compression methods is carried out. The comparison divides the techniques into two broad classes: predictive methods (DPCM, multi-pulse modelling and the CELP coder) and transform-based methods (the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT)). The performance of these two classes of methods is compared with that of methods developed specifically for biomedical signals. We show that transform-based compression techniques clearly give the best results in terms of signal-to-noise ratio and compression ratio [. . . ]. Chapter 3 focuses more specifically on ECG compression, but the results presented can be extended to a whole class of ECG-like signals exhibiting a structure of periodised patterns. This chapter presents a new compression method based on modelling the ECG with a "user" wavelet built from real data. The results [. . . ] open up a field of applications, not only in data compression but also in diagnostic support. Finally, Chapter 4 addresses the compression problem from a multi-dimensional point of view, taking into account the parallel recording of several biomedical signals, such as the different ECG leads. Two of the techniques presented are multi-dimensional extensions of the methods presented in Chapter 2 (DPCM and the DCT). A new compression technique is also presented; it exploits the physical link between the signals that make up the simultaneous recording. This method is based on the identification of FIR filters linking the different recorded signals [. . . ]. The original method based on filter identification yields very promising results.
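Chapter 2's predictive class includes DPCM; as a hedged illustration of that building block (not the thesis's tuned coder, and using a synthetic waveform rather than real biomedical data), a first-order DPCM loop with in-loop quantization can be sketched as:

```python
import numpy as np

def dpcm_encode(signal, quant_step):
    """First-order DPCM: predict each sample by the previous
    reconstructed sample and quantize the prediction error."""
    residuals = np.empty(len(signal), dtype=int)
    recon = np.empty(len(signal))
    prev = 0.0
    for i, s in enumerate(signal):
        err = s - prev                       # prediction error
        q = int(np.round(err / quant_step))  # quantized residual (to encode)
        residuals[i] = q
        prev = prev + q * quant_step         # decoder-side reconstruction
        recon[i] = prev
    return residuals, recon

# Toy slowly varying waveform (illustrative only).
t = np.linspace(0, 2, 1000)
sig = np.sin(2 * np.pi * 3 * t) + 0.1 * np.sin(2 * np.pi * 25 * t)

residuals, recon = dpcm_encode(sig, quant_step=0.02)
snr = 10 * np.log10(np.mean(sig ** 2) / np.mean((sig - recon) ** 2))
print(f"residual range {residuals.min()}..{residuals.max()}, SNR {snr:.1f} dB")
```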
APA, Harvard, Vancouver, ISO, and other styles
50

DAABOUL, AHMAD. "Compression d'images par prediction." Paris 7, 1996. http://www.theses.fr/1996PA077191.

Full text
Abstract:
This thesis deals with lossless image compression. We propose three types of coding to perform the compression: prefix coding, predictive coding and prediction-tree coding. In the first technique, the current pixel is replaced by a pair of integers. The pairs thus obtained are encoded with arithmetic coding under an order-0 probability model, using an automaton. We propose two different methods for predictive coding: predictors and optimal prediction schemes. In the predictor technique, the original image is processed by the predictors we have defined in order to compute the predicted image. The difference between the original image and the predicted image is called the error image. The main idea of the optimal prediction scheme technique is to split the original image into blocks or lines and then to search, for each of these blocks (or lines), for the predictor that provides the best prediction. The error image computed by these methods is encoded with arithmetic coding under an order-0 probability model. In the prediction-tree technique, the original image is represented by a graph. We search this graph for a spanning tree whose encoding has minimal size. We then encode the edge weights of the computed tree with arithmetic coding under an order-0 probability model. The results obtained with these different methods are superior to those obtained with general-purpose and image-specific compression methods.
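As a hedged illustration of the prediction-tree idea (not the thesis's coder), the sketch below builds a 4-neighbour grid graph whose edge weights are absolute pixel differences, extracts a minimum spanning tree with SciPy, and compares the order-0 entropy of the tree's edge weights with that of the raw pixels. The toy gradient image and the tiny epsilon (which keeps zero-difference edges from being dropped by the sparse representation) are assumptions of the example.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def grid_graph(image):
    """4-neighbour grid graph; edge weight = absolute pixel difference
    (plus a tiny epsilon so zero-difference edges are kept)."""
    h, w = image.shape
    g = lil_matrix((h * w, h * w))
    for i in range(h):
        for j in range(w):
            a = i * w + j
            if j + 1 < w:
                g[a, a + 1] = abs(int(image[i, j]) - int(image[i, j + 1])) + 1e-9
            if i + 1 < h:
                g[a, a + w] = abs(int(image[i, j]) - int(image[i + 1, j])) + 1e-9
    return g

def entropy(values):
    """Order-0 entropy in bits per symbol."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Toy smooth 8-bit image: a gradient plus mild noise (illustrative only).
img = (np.add.outer(np.arange(32), np.arange(32)) * 2
       + rng.integers(0, 4, (32, 32))).astype(np.uint8)

mst = minimum_spanning_tree(grid_graph(img))
weights = np.rint(mst.data).astype(int)          # edge weights to be encoded
print("order-0 entropy of raw pixels   :", round(entropy(img.ravel()), 2), "bits")
print("order-0 entropy of MST residuals:", round(entropy(weights), 2), "bits")
```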
APA, Harvard, Vancouver, ISO, and other styles
