Dissertations / Theses on the topic 'Compression'
Consult the top 50 dissertations / theses for your research on the topic 'Compression.'
Hawary, Fatma. "Light field image compression and compressive acquisition." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S082.
By capturing a scene from several points of view, a light field provides a rich representation of the scene geometry that enables a variety of novel post-capture applications and immersive experiences. The objective of this thesis is to study the compressibility of light field contents in order to propose novel solutions for higher-resolution light field imaging. Two main aspects were studied in this work. First, since the compression performance of current coding schemes on light fields is still limited, approaches better adapted to the light field structure are needed. We propose a scalable coding scheme that encodes only a subset of the light field views and reconstructs the remaining views via a sparsity-based method; residual coding then enhances the final quality of the decoded light field. Second, acquiring very large-scale light fields is still not feasible with current capture and storage facilities; a possible alternative is to reconstruct the densely sampled light field from a subset of acquired samples. We propose an automatic reconstruction method that recovers a compressively sampled light field by exploiting its sparsity in the Fourier domain. No geometry estimation is needed, and an accurate reconstruction is achieved even with a very low number of captured samples. A further study covers the full scheme, including compressive sensing of a light field and its transmission via the proposed coding approach, and measures the distortion introduced by the different processing steps. The results show performance comparable to depth-based view synthesis methods.
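The Fourier-domain sparse recovery described above can be illustrated with a toy example. The sketch below is a hedged illustration, not the thesis's actual light-field algorithm; the signal, sampling rate and iteration count are all made up. It applies iterative hard thresholding to a 1-D signal that is sparse in the Fourier domain, reconstructing it from a small subset of random time-domain samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-in for a light-field scanline: sparse in the Fourier domain.
n, k = 256, 4
freqs = rng.choice(n // 2 - 1, size=k, replace=False) + 1
x_true = sum(np.cos(2 * np.pi * f * np.arange(n) / n + rng.uniform(0, 2 * np.pi))
             for f in freqs)

# Compressive acquisition: keep only m random time-domain samples.
m = 80
idx = np.sort(rng.choice(n, size=m, replace=False))
y = x_true[idx]

# Iterative hard thresholding: enforce the measurements, then keep the
# 2k largest Fourier coefficients (each real cosine occupies two bins).
x = np.zeros(n)
for _ in range(200):
    residual = np.zeros(n)
    residual[idx] = y - x[idx]           # gradient step on the sampled entries
    spectrum = np.fft.fft(x + residual)
    keep = np.argsort(np.abs(spectrum))[-2 * k:]
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    spectrum[~mask] = 0                  # hard-threshold the spectrum
    x = np.fft.ifft(spectrum).real

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"sampling rate {m/n:.0%}, relative reconstruction error {err:.3f}")
```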
Yell, M. D. "Steam compression in the single screw compressor." Thesis, University of Leeds, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.372575.
Nóbrega, Fernando Antônio Asevêdo. "Sumarização Automática de Atualização para a língua portuguesa." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30072018-090806/.
The huge amount of data available online is the main motivation for many Natural Language Processing tasks, such as Update Summarization (US), which aims to produce a summary from a collection of related texts under the assumption that the user/reader has some previous knowledge of their subject. A good update summary must therefore convey the most relevant, new and updated content. This task presents many research challenges, mainly in content selection and in the synthesis of the summary. Although there are several approaches to US, most of them do not use linguistic information that may help identify relevant content for the summary/user. Furthermore, US methods frequently apply an extractive synthesis approach, in which the summary is produced by picking sentences from the source texts without rewriting operations. Since some segments of the picked sentences may contain redundant or irrelevant content, this synthesis process can reduce the summary's informativeness. Recent efforts in this field have thus focused on the compressive synthesis approach, in which sentences are compressed by token deletion or rewriting operations before being inserted into the output summary. Given this background, this PhD research investigated the use of linguistic information, such as Cross-document Structure Theory (CST), subtopic segmentation and named entity recognition, in distinct content selection approaches for US, using both extractive and compressive synthesis processes in order to produce more informative update summaries. Since we focused on the Portuguese language, we compiled three new resources for it: CSTNews-Update, which allows the investigation of US methods for Portuguese, and PCST-Pairs and G1-Pairs, which contain pairs of original and compressed sentences for building sentence compression methods. We also performed experiments for English, for which more resources exist. The results show that subtopic segmentation helps produce better summaries, although only for some content selection approaches. Furthermore, we proposed simplifications of the DualSum method based on subtopic segments; these require less computational power than DualSum and presented very satisfactory results. Aiming at the production of compressive summaries, we proposed different compression methods based on machine learning techniques; our best method presents quality similar to a state-of-the-art system based on deep learning. Before this investigation, most research on automatic summarization for Portuguese focused on traditional tasks, such as producing extractive summaries from one or many texts without considering user knowledge. Thus, besides our proposed US systems based on linguistic information, which were evaluated on English and Portuguese datasets, we produced several compression methods and three new resources that will assist the expansion of the automatic summarization field for Portuguese.
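As a rough illustration of the extractive update-summarization idea discussed above, the following toy baseline (our own sketch, not the thesis's CST/subtopic-based system; the scoring rule is invented for illustration) ranks candidate sentences by how much vocabulary they add beyond what the reader has already seen:

```python
from collections import Counter

def update_summary(background, new_docs, max_sentences=2):
    """Pick sentences that are novel w.r.t. the background the reader
    has already seen (a toy baseline, not the thesis's method)."""
    known = Counter(w for doc in background for w in doc.lower().split())
    sentences = [s.strip() for doc in new_docs
                 for s in doc.split('.') if s.strip()]

    def score(sent):
        words = sent.lower().split()
        novelty = sum(1 for w in words if w not in known)
        return novelty / len(words)   # fraction of unseen vocabulary

    return sorted(sentences, key=score, reverse=True)[:max_sentences]

background = ["The storm hit the coast on Monday causing floods."]
new_docs = ["The storm hit the coast on Monday. Rescue teams evacuated "
            "two thousand residents on Tuesday. Power remains out in the region."]
print(update_summary(background, new_docs))
```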
Blais, Pascal. "Pattern compression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ38737.pdf.
SANCHEZ, FERNANDO ZEGARRA. "COMPRESSION IGNITION OF ETHANOL-POWERED IN RAPID COMPRESSION MACHINE." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=29324@1.
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Over time, humanity has grown more dependent on energy generation, used to provide comfort, transportation and more. To meet this growing demand, new efficient and preferably renewable sources are being researched. Transportation is one of the activities most dependent on fossil fuels, as well as one of the largest emitters of greenhouse gases. For this reason, researchers around the world are looking for new renewable energy sources to replace the traditional fuels currently used in transportation. Diesel engines are known to be more efficient than Otto engines; because of this, compression ignition systems powered by renewable fuels, which reduce both the dependence on fossil fuels and greenhouse gas emissions, have been researched and developed for more than 30 years. Ethanol is a candidate to replace diesel oil, but some modifications (an increase in the compression ratio, the addition of auto-ignition improvers, etc.) must be considered before it is used in diesel engines. On this basis, this thesis presents a new proposal: using n-butanol as an auto-ignition improver for ethanol. To this end, several tests were carried out with various compression ratios, mass percentages of additive in the ethanol blend, and injection timings. The tests were performed in a rapid compression machine (RCM) with blends of ethanol and polyethylene glycol 400 and 600, and n-butanol, in addition to reference tests with diesel oil and ED95. The results show that n-butanol, at a 10 percent share of the blend, can be used as an auto-ignition improver for ethanol in compression ignition systems.
Agostini, Luciano Volcan. "Projeto de arquiteturas integradas para a compressão de imagens JPEG." Biblioteca Digital de Teses e Dissertações da UFRGS, 2002. http://hdl.handle.net/10183/11431.
This dissertation presents the design of architectures for JPEG image compression. The architectures developed for a grayscale-image JPEG compressor are presented, and a color-image JPEG compressor and a color space converter are also addressed. The designed architectures are described in detail; they were completely described in VHDL, with synthesis directed to the Altera Flex10KE family of FPGAs. The integrated architecture for the grayscale JPEG compressor has a minimum latency of 237 clock cycles and processes a 640x480-pixel image in 18.5 ms, allowing a processing rate of 54 images per second. The compression rate, according to estimates, would be 6.2 times, or 84% in percentage of bits compressed. The integrated architecture for color-image JPEG compression was generated through incremental changes to the grayscale compressor architecture. It also has a minimum latency of 237 clock cycles and can process a 640x480-pixel color image in 54.4 ms, allowing a processing rate of 18.4 images per second. The compression rate, according to estimates, would be 14.4 times, or 93% in percentage of bits compressed. The architecture for color space conversion from RGB to YCbCr has a latency of 6 clock cycles and is able to process a 640x480-pixel color image in 84.6 ms, allowing a processing rate of 11.8 images per second. This architecture was ultimately not integrated with the color-image compressor architecture, but suggestions, alternatives and estimates were made in this direction.
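A quick arithmetic check of the figures quoted in this abstract, converting frame times to images per second and compression factors to the percentage of bits removed:

```python
# Sanity-check the rates quoted above: frame time -> frames per second,
# and compression factor -> percent of bits removed.
for label, frame_ms, factor in [("gray", 18.5, 6.2), ("color", 54.4, 14.4)]:
    fps = 1000.0 / frame_ms          # 18.5 ms -> ~54, 54.4 ms -> ~18.4
    saved = 1.0 - 1.0 / factor       # 6.2x -> ~84%, 14.4x -> ~93%
    print(f"{label}: {fps:.1f} images/s, {saved:.0%} of the bits removed")
```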
Khan, Jobaidur Rahman. "Fog Cooling, Wet Compression and Droplet Dynamics In Gas Turbine Compressors." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/908.
Day, Benjamin Marc. "An Evaluation and Redesign of a Thermal Compression Evaporator." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/926.
Hernández-Cabronero, Miguel. "DNA Microarray Image Compression." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/297706.
In DNA microarray experiments, two grayscale images are produced. It is convenient to save these images for future, more accurate re-analysis, so image compression emerges as a particularly useful tool to alleviate the associated storage and transmission costs. This dissertation aims at improving the state of the art in the compression of DNA microarray images. A thorough investigation of the characteristics of DNA microarray images has been performed as a part of this work. Results indicate that algorithms not adapted to DNA microarray images typically attain only mediocre lossless compression results due to the image characteristics. By analyzing the first-order and conditional entropy present in these images, it is possible to determine approximate limits to their lossless compressibility. Even though context-based coding and segmentation provide modest improvements over general-purpose algorithms, conceptual breakthroughs in data coding are arguably required to achieve compression ratios exceeding 2:1 for most images. Prior to the start of this thesis, several lossless coding algorithms with performance close to the aforementioned limit had been published, but none of them is compliant with existing image compression standards. Hence, the availability of decoders on future platforms (a requisite for future re-analysis) is not guaranteed; moreover, adherence to standards is usually a requisite in clinical scenarios. To address these problems, a fast reversible transform compatible with the JPEG2000 standard, the Histogram Swap Transform (HST), is proposed. The HST improves the average compression performance of JPEG2000 for all tested image corpora, with gains ranging from 1.97% to 15.53%, and can be applied with only negligible time complexity overhead. With the HST, JPEG2000 becomes arguably the most competitive alternative to microarray-specific, non-standard compressors. The similarities among sets of microarray images have also been studied as a means to improve the compression performance of standard and microarray-specific algorithms. An optimal grouping of the images which maximizes the inter-group correlation is described; average correlations between 0.75 and 0.92 are observed for the tested corpora. Thorough experimental results suggest that spectral decorrelation transforms can improve some lossless coding results by up to 0.6 bpp, although no single transform is effective for all corpora. Lossy coding algorithms can yield almost arbitrary compression ratios at the cost of modifying the images and, thus, of distorting subsequent analysis processes. If the introduced distortion is smaller than the inherent experimental variability, it is usually considered acceptable, so the use of lossy compression is justified provided that the analysis distortion is assessed. In this work, a distortion metric for DNA microarray images is proposed to predict the extent of this distortion without needing a complete re-analysis of the modified images. Experimental results suggest that this metric is able to tell apart image changes that affect subsequent analysis from image modifications that do not. Although some lossy coding algorithms were previously described for this type of images, none of them is specifically designed to minimize the impact on subsequent analysis for a given target bitrate.
In this dissertation, a lossy coder, the Relative Quantizer (RQ) coder, is proposed that improves upon the rate-distortion results of previously published methods. Experiments suggest that compression ratios exceeding 4.5:1 can be achieved while introducing distortions smaller than half the inherent experimental variability. Furthermore, a lossy-to-lossless extension of this coder, the Progressive RQ (PRQ) coder, is also described. With the PRQ, images can be compressed once and then reconstructed at different quality levels, including lossless reconstruction. In addition, the competitive rate-distortion results of the RQ and PRQ coders can be obtained with computational complexity slightly smaller than that of the best-performing lossless coder of DNA microarray images.
Grün, Alexander. "Nonlinear pulse compression." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284879.
In this thesis I investigated two methods for generating ultrashort laser pulses in spectral regions that are typically difficult to reach with existing techniques. Such pulses are especially attractive for studying ultrafast (few-femtosecond) dynamics in atoms and molecules. The first technique involves Optical Parametric Amplification (OPA) by four-wave mixing in the gas phase and supports the generation of ultrashort pulses from the Near-Infrared (NIR) to the Mid-Infrared (MIR) spectral region. By combining pulses centered at a wavelength of 800 nm and their second harmonic in an argon-filled hollow-core fiber, we demonstrated the generation of NIR pulses at the fiber output, centered at 1.4 µm, with 5 µJ of energy and 45 fs duration. The four-wave mixing process involved in the OPA is expected to yield pulses with a stable carrier-envelope phase, which is of great importance for applications in extreme nonlinear optics. These NIR-to-MIR pulses can be used directly in nonlinear light-matter interactions, exploiting their long-wavelength characteristics. The second method allows the compression of intense femtosecond pulses in the ultraviolet (UV) region by sum-frequency mixing of two bandwidth-limited NIR pulses in a non-collinear phase-matching geometry under particular group-velocity mismatch conditions. Specifically, the crystal must be chosen such that the group velocities of the NIR pump pulses, v1 and v2, and of the generated sum-frequency pulse, vSF, satisfy the condition v1 < vSF < v2. In the case of strong energy exchange and a suitable pre-delay between the pump waves, the leading edge of the faster pump pulse and the trailing edge of the slower one are depleted. In this way the temporal overlap region of the pump pulses remains narrow, resulting in a shortening of the generated pulse. The non-collinear beam geometry makes it possible to control the relative group velocities while maintaining the phase-matching condition. To ensure parallel wavefronts inside the crystal, so that the sum-frequency pulses are generated without tilt, pre-compensation of the tilt of the NIR pulse fronts is essential. This thesis shows that these pulse-front tilts can be achieved using a very compact configuration based on transmission gratings and a more complex configuration based on prisms combined with telescopes. UV pulses as short as 32 fs (25 fs) have been generated by non-collinear nonlinear pulse compression in a type-II phase-matched BBO crystal, starting from NIR pulses of 74 fs (46 fs) duration. The interest of this method lies in the lack of crystals usable for nonlinear pulse compression at wavelengths around 800 nm in a collinear geometry. Compared with state-of-the-art compression techniques based on self-phase modulation, sum-frequency pulse compression is free of aperture restrictions on the pulses and is therefore energy-scalable. Such femtosecond pulses in the visible and ultraviolet are highly desirable for studying the ultrafast dynamics of a wide variety of (bio)molecular systems.
Nasiopoulos, Panagiotis. "Adaptive compression coding." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28508.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
Obaid, Arif. "Range image compression." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10131.
Hansson, Erik, and Stefan Karlsson. "Lossless Message Compression." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-21434.
In this thesis we investigated whether compression of messages for inter-process communication (IPC) can be beneficial. A literature study on lossless compression resulted in a compilation of algorithms and techniques. From this compilation, the algorithms LZO, LZFX, LZW, LZMA, bzip2 and LZ4 were selected for integration into LINX as an extra layer to support message compression. The algorithms were tested by sending messages containing real telecom data between two nodes on a dedicated network. This was done with different network settings and message sizes. The effective network throughput was calculated for each algorithm by measuring the round-trip time. The results showed that the fastest algorithms, i.e. LZ4, LZO and LZFX, were the most effective in our tests.
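The experiment described above can be approximated with standard-library codecs. The sketch below is an illustration only: zlib, bz2 and lzma stand in for LZO, LZFX and LZ4 (which are not in the Python standard library), the payload is synthetic, and the 100 Mbit/s link speed is an assumed figure, not one from the thesis. It models effective throughput as compression time plus wire time plus decompression time:

```python
import time, zlib, bz2, lzma

def effective_throughput(payload, compress, decompress, link_bytes_per_s=12.5e6):
    """Crude model of the experiment above: compress, ship the smaller
    payload over an assumed 100 Mbit/s link, then decompress."""
    t0 = time.perf_counter()
    blob = compress(payload)
    t1 = time.perf_counter()
    decompress(blob)
    t2 = time.perf_counter()
    wire = len(blob) / link_bytes_per_s
    total = (t1 - t0) + wire + (t2 - t1)
    return len(payload) / total, len(payload) / len(blob)

# Hypothetical telecom-like messages (the thesis used real telecom data).
payload = b"call-setup imsi=240081234567890 cell=4711 status=ok " * 2000
for name, c, d in [("zlib", zlib.compress, zlib.decompress),
                   ("bz2", bz2.compress, bz2.decompress),
                   ("lzma", lzma.compress, lzma.decompress)]:
    tput, ratio = effective_throughput(payload, c, d)
    print(f"{name:5s} ratio {ratio:5.1f}:1  effective {tput/1e6:6.1f} MB/s")
```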
Williams, Ross Neil. "Adaptive data compression." Adelaide, 1989. http://web4.library.adelaide.edu.au/theses/09PH/09phw7262.pdf.
Lacroix, Bruno. "Fractal image compression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ36939.pdf.
Aydinoğlu, Behçet Halûk. "Stereo image compression." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15447.
Abdul-Amir, Said. "Digital image compression." Thesis, De Montfort University, 1985. http://hdl.handle.net/2086/10681.
Zhang, Fan. "Parametric video compression." Thesis, University of Bristol, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574421.
Steinruecken, Christian. "Lossless data compression." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709134.
Santurkar, Shibani (Shibani Vinay). "Towards generative compression." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112048.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-51).
Graceful degradation is a metric of system functionality which guarantees that performance declines gradually as resource constraints increase or components fail. In the context of data compression, this translates to providing users with intelligible data, even in the presence of bandwidth bottlenecks and noisy channels. Traditional image and video compression algorithms rely on hand-crafted encoder/decoder pairs (codecs) that lack adaptability and are agnostic to the data being compressed; as a result, they do not degrade gracefully. Further, these traditional techniques have been customized for bitmap images and cannot easily be extended to the variety of new media formats, such as stereoscopic data, VR data and 360 videos, that are becoming increasingly prevalent. New compression algorithms must address the dual constraints of increased flexibility and improvement on traditional measures of compression quality. In this work, we propose a data-aware compression technique leveraging a class of machine learning models called generative models. These are trained to approximate the true data distribution, and hence can be used to learn an intelligent low-dimensional representation of the data. Using these models, we describe the concept of generative compression and show its potential to produce more accurate and visually pleasing reconstructions at much deeper compression levels for both image and video data. We also demonstrate that generative compression is orders of magnitude more resilient to bit error rates (e.g. from noisy wireless channels) than traditional variable-length entropy coding schemes.
by Shibani Santurkar.
S.M.
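The "learned low-dimensional representation" at the heart of the abstract above can be hinted at with a linear stand-in. The sketch below uses PCA via SVD in place of the deep generative models the thesis actually studies, so it is only an analogy, and the synthetic dataset is invented. It shows reconstruction error decaying gracefully as the code is allowed more dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "dataset": samples living near a low-dimensional subspace,
# mimicking structured data that a generative model would capture.
latent = rng.normal(size=(1000, 8))
mixing = rng.normal(size=(8, 64))
data = latent @ mixing + 0.05 * rng.normal(size=(1000, 64))

# "Learn" the representation (PCA via SVD here; a deep generative
# model plays this role in the thesis).
mean = data.mean(axis=0)
_, _, components = np.linalg.svd(data - mean, full_matrices=False)

# Graceful degradation: error shrinks smoothly as the code grows.
x = data[0]
for dims in (1, 2, 4, 8, 16):
    basis = components[:dims]
    code = (x - mean) @ basis.T           # the compressed representation
    x_hat = mean + code @ basis
    err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
    print(f"{dims:2d} dims -> relative error {err:.3f}")
```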
Feizi, Soheil (Feizi-Khankandi). "Network functional compression." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/60163.
Includes bibliographical references (p. 97-99).
In this thesis, we consider different aspects of the functional compression problem. In functional compression, the computation of a function (or some functions) of sources is desired at the receiver(s). The rate region of this problem has been considered in the literature under certain restrictive assumptions. In Chapter 2 of this thesis, we consider this problem for an arbitrary tree network and asymptotically lossless computations. In particular, for one-stage tree networks we compute a rate region, and for an arbitrary tree network we derive a rate lower bound based on the graph entropy. We introduce a new condition on colorings of source random variables' characteristic graphs called the coloring connectivity condition (C.C.C.). We show that, unlike the condition mentioned in Doshi et al., this condition is necessary and sufficient for any achievable coding scheme based on colorings. We also show that, unlike entropy, graph entropy does not satisfy the chain rule. For one-stage trees with correlated sources, and general trees with independent sources, we propose a modularized coding scheme based on graph colorings that performs arbitrarily closely to the derived rate lower bound. We show that in the general tree network case with independent sources, to achieve the rate lower bound, intermediate nodes should perform some computations. However, for a family of functions and random variables called chain-rule proper sets, it is sufficient to have intermediate nodes act like relays to perform arbitrarily closely to the rate lower bound. In Chapter 3 of this thesis, we consider a multi-functional version of this problem with side information, where the receiver wants to compute several functions with different side information random variables and zero distortion. Our results are applicable to the case with several receivers computing different desired functions. We define a new concept named multi-functional graph entropy, which is an extension of the graph entropy defined by Körner. We show that the minimum achievable rate for this problem is equal to the conditional multi-functional graph entropy of the source random variable given the side information. We also propose a coding scheme based on graph colorings to achieve this rate. In these proposed coding schemes, one needs to compute the minimum entropy coloring (a coloring random variable which minimizes the entropy) of a characteristic graph. In general, finding this coloring is an NP-hard problem. However, in Chapter 4, we show that depending on the characteristic graph's structure, there are some interesting cases where finding the minimum entropy coloring is not NP-hard, but tractable and practical. In one of these cases, we show that, given a non-zero joint probability condition on the random variables' distributions, for any desired function, finding the minimum entropy coloring can be solved in polynomial time. In another case, we show that if the desired function is a quantization function, this problem is also tractable. We also consider this problem in a general case. By using Huffman or Lempel-Ziv coding notions, we show that finding the minimum entropy coloring is heuristically equivalent to finding the maximum independent set of a graph. While the minimum-entropy coloring problem is a recently studied problem, there are some heuristic algorithms to approximately solve the maximum independent set problem. Next, in Chapter 5, we consider the effect of having feedback on the rate region of the functional compression problem.
If the function at the receiver is the identity function, this problem reduces to Slepian-Wolf compression with feedback, for which feedback provides no benefit in terms of rate. However, this is not the case for a general function at the receiver: with feedback, one may outperform the rate bounds of the case without feedback. We finally consider the problem of distributed functional compression with distortion, where the objective is to compress correlated discrete sources such that an arbitrary deterministic function of those sources can be computed up to a distortion level at the receiver. In this case, we compute a rate-distortion region and then propose a simple coding scheme with a non-trivial performance guarantee.
by Soheil Feizi.
S.M.
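The characteristic-graph coloring machinery in the abstract above can be made concrete with a small example. The sketch below is a toy of our own: the greedy coloring it uses is not the minimum-entropy coloring (which, as the abstract notes, is NP-hard in general). Here the receiver only needs the parity of x1 + x2, so the four source symbols collapse into two colors:

```python
import itertools, math
from collections import Counter

# Receiver wants f(x1, x2) = (x1 + x2) % 2, with x2 known at the receiver.
# Source symbols that never need to be distinguished can share a color.
X1, X2 = range(4), range(4)
support = [(a, b) for a in X1 for b in X2]   # uniform joint support
f = lambda a, b: (a + b) % 2

# Characteristic graph on X1: connect a, c if some x2 in the support
# forces different function values, so they must be told apart.
edges = {(a, c) for a, c in itertools.combinations(X1, 2)
         if any(f(a, b) != f(c, b)
                for b in X2 if (a, b) in support and (c, b) in support)}

# Greedy coloring (a heuristic stand-in for minimum-entropy coloring).
color = {}
for v in X1:
    taken = {color[u] for u in color if (u, v) in edges or (v, u) in edges}
    color[v] = next(c for c in itertools.count() if c not in taken)

def entropy(symbols):
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

print("colors:", color)   # even symbols share one color, odd the other
print(f"H(X1) = {entropy(list(X1)):.2f} bits, "
      f"H(coloring) = {entropy([color[v] for v in X1]):.2f} bits")
```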
Stampleman, Joseph Bruce. "Scalable video compression." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/70216.
Zheng, L. "Lossy index compression." Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1302556/.
Hallidy, William H. Jr, and Michael Doerr. "HYPERSPECTRAL IMAGE COMPRESSION." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/608744.
Systems & Processes Engineering Corporation (SPEC) compared compression and decompression algorithms and developed optimal forms of lossless and lossy compression for hyperspectral data. We examined the relationship between compression-induced distortion and additive noise, determined the effect of errors on the compressed data, and showed that targets could still be separated from clutter after more than 50:1 compression.
Cilke, Tom. "Video Compression Techniques." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615075.
This paper presents algorithms commonly used for video compression and their effectiveness in aerospace applications where size, weight, and power are of prime importance. The techniques include samples of one-, two-, and three-dimensional algorithms. Implementation of these algorithms in usable hardware is also explored, but limited to monochrome video.
Lindsay, Robert A., and B. V. Cox. "UNIVERSAL DATA COMPRESSION." International Foundation for Telemetering, 1985. http://hdl.handle.net/10150/615552.
Universal and adaptive data compression techniques have the capability to compress all types of data without loss of information, but have the disadvantages of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different data file sizes are graphically presented and discussed, and the adjustments needed for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
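For a feel of the Lempel-Ziv side of the comparison above, here is a minimal LZW encoder (a textbook sketch, not the paper's implementation, with a simplifying assumption of fixed 16-bit output codes) that reports compression ratio and run time on repetitive input:

```python
import time

def lzw_encode(data: bytes) -> list[int]:
    """Textbook LZW: grow a phrase dictionary while scanning, emit codes."""
    table = {bytes([i]): i for i in range(256)}
    phrase, out = b"", []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in table:
            phrase = candidate              # keep extending the match
        else:
            out.append(table[phrase])       # emit the longest known phrase
            table[candidate] = len(table)   # learn the new phrase
            phrase = bytes([byte])
    if phrase:
        out.append(table[phrase])
    return out

data = b"the quick brown fox jumps over the lazy dog " * 500
t0 = time.perf_counter()
codes = lzw_encode(data)
dt = time.perf_counter() - t0
ratio = len(data) / (len(codes) * 2)        # assume fixed 16-bit codes
print(f"{len(data)} -> {len(codes) * 2} bytes, "
      f"ratio {ratio:.1f}:1 in {dt * 1e3:.1f} ms")
```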
Lima, Jose Paulo Rodrigues de. "Representação compressiva de malhas." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-17042014-151933/.
Data compression is an area of major computational interest due to storage and transmission issues. Mesh compression in particular has seen wide use, given the growth of applications in games and three-dimensional modeling. In recent years, a new theory of signal acquisition and reconstruction was developed, based on the concepts of sparsity and incoherence and on the minimization of the L1 norm, called Compressive Sensing (CS). This theory has some remarkable features, such as random sampling and reconstruction by minimization, whereby signal acquisition considers only the significant coefficients. Any object that can be represented as a sparse signal (sounds, images) allows the application of CS. This work explores the viability of CS for mesh compression, so that representative and compressive sensing of the mesh geometry becomes possible. In the experiments performed, different parameters and L1-norm minimization strategies were used. The results show that CS can be used as a mesh geometry compression strategy.
Fgee, El-Bahlul. "A comparison of voice compression using wavelets with other compression schemes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ39651.pdf.
Nicholl, Peter Nigel. "Feature directed spiral image compression : (a new technique for lossless image compression)." Thesis, University of Ulster, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339326.
Lee, Joshua Ka-Wing. "A model-adaptive universal data compression architecture with applications to image compression." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111868.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-61).
In this thesis, I designed and implemented a model-adaptive data compression system for image data. The system is a realization and extension of the Model-Quantizer-Code-Separation Architecture for universal data compression, which uses Low-Density Parity-Check codes for encoding and probabilistic graphical models with message-passing algorithms for decoding. We implement a lossless bi-level image compressor as well as a lossy grayscale image compressor, and explain how these compressors can rapidly adapt to changes in source models. Using these implementations, we then show that Restricted Boltzmann Machines are an effective source model for compressing image data, by comparing compression performance against other methods on various image datasets.
by Joshua Ka-Wing Lee.
S.M.
Hooshmand, Mohsen. "Sensing and Compression Techniques for Environmental and Human Sensing Applications." Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3425724.
In this doctoral thesis, we devise and evaluate a variety of lossy compression schemes for Internet of Things (IoT) devices such as those utilized in environmental wireless sensor networks (WSNs) and body sensor networks (BSNs). We are especially concerned with the efficient acquisition of the data sensed by these systems, and to this end we advocate the use of joint (lossy) compression and transmission techniques. Environmental WSNs are considered first. For these, we present an original compressive sensing (CS) approach for the spatio-temporal compression of data. In detail, we consider temporal compression schemes based on linear approximations as well as Fourier transforms, whereas spatial and/or temporal dynamics are exploited through compression algorithms based on distributed source coding (DSC) and several algorithms based on CS. To the best of our knowledge, this is the first work presenting a systematic performance evaluation of these (different) lossy compression approaches. The selected algorithms are framed within the same system model, and a comparative performance assessment is carried out, evaluating their energy consumption vs the attainable compression ratio. As a further main contribution of this thesis, we design and validate a novel CS-based compression scheme, termed covariogram-based compressive sensing (CB-CS), which combines a new sampling mechanism with an original covariogram-based approach for the online estimation of the covariance structure of the signal. As a second main research topic, we focus on modern wearable IoT devices which enable the monitoring of vital parameters such as heart or respiratory rates (RESP), electrocardiography (ECG), and photo-plethysmographic (PPG) signals within e-health applications. These devices are battery operated and communicate the vital signs they gather through a wireless communication interface. A common issue of this technology is that signal transmission is often power-demanding, and this poses serious limitations to the continuous monitoring of biometric signals. To ameliorate this, we advocate the use of lossy signal compression at the source: this considerably reduces the size of the data that has to be sent to the acquisition point, in turn boosting the battery life of the wearables and allowing for fine-grained and long-term monitoring. Considering one-dimensional biosignals such as ECG, RESP and PPG, which are often available from commercial wearable devices, we first provide a thorough review of existing compression algorithms. We then present novel approaches based on online dictionaries, elucidating their operating principles and providing a quantitative assessment of the compression, reconstruction and energy consumption performance of all schemes. As part of this first investigation, dictionaries are built using a suboptimal but lightweight, online and best-effort algorithm. Surprisingly, the obtained compression scheme is found to be very effective both in terms of compression efficiency and reconstruction accuracy at the receiver. This approach is however not yet amenable to practical implementation, as its memory usage is rather high. Also, our systematic performance assessment reveals that the most efficient compression algorithms allow reductions in the signal size of up to 100 times, which entail similar reductions in the energy demand, while still keeping the reconstruction error within 4% of the peak-to-peak signal amplitude.
Based on what we learned from this first comparison, we finally propose a new subject-specific compression technique called SURF (Subject-adaptive Unsupervised ECG compressor for weaRable Fitness monitors). In SURF, dictionaries are learned and maintained using suitable neural network structures: learning is achieved through neural maps such as self-organizing maps and growing neural gas networks, in a totally unsupervised manner, adapting the dictionaries to the signal statistics of the wearer. As our results show, SURF: i) reaches high compression efficiencies (reduction in the signal size of up to 96 times), ii) allows for reconstruction errors well below 4% (peak-to-peak RMSE; errors of 2% are generally achievable), iii) gracefully adapts to changing signal statistics due to switching to a new subject or a change of activity, iv) has low memory requirements (less than 50 kbytes), and v) allows for a further reduction in the total energy consumption (processing plus transmission). These facts make SURF a very promising algorithm, delivering the best performance among all the solutions proposed so far.
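The online-dictionary idea behind this line of work can be caricatured in a few lines. The sketch below is our own toy: simple competitive learning on a synthetic quasi-periodic signal, not SURF's self-organizing maps or growing neural gas, and all sizes are invented. It adapts a codebook of signal windows and encodes each window as a single index:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy quasi-periodic "biosignal" (an ECG stand-in, not real data).
t = np.arange(20000)
signal = np.sin(2 * np.pi * t / 200) + 0.3 * np.sin(2 * np.pi * t / 40)

# Online dictionary of signal windows, adapted competitively toward
# the wearer's statistics (a crude stand-in for SURF's neural maps).
win, n_words, lr = 100, 16, 0.1
dictionary = rng.normal(scale=0.1, size=(n_words, win))
windows = signal[: len(signal) // win * win].reshape(-1, win)

indices = []
for w in windows:
    best = int(np.argmin(np.linalg.norm(dictionary - w, axis=1)))
    dictionary[best] += lr * (w - dictionary[best])   # adapt the winner
    indices.append(best)

# Each 100-sample window becomes one 4-bit index: ~200x fewer bits
# than 8-bit samples; fidelity depends on how regular the signal is.
recon = dictionary[indices].ravel()
rmse = np.sqrt(np.mean((recon - windows.ravel()) ** 2))
peak = windows.max() - windows.min()
print(f"windows {len(indices)}, RMSE {100 * rmse / peak:.1f}% of peak-to-peak")
```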
Schenkel, Birgit. "Supercontinuum generation and compression /." [S.l.] : [s.n.], 2004. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=15570.
García, Sobrino Francisco Joaquín. "Sounder spectral data compression." Doctoral thesis, Universitat Autònoma de Barcelona, 2018. http://hdl.handle.net/10803/663984.
The Infrared Atmospheric Sounding Interferometer (IASI) is a Fourier transform spectrometer implemented on the MetOp satellite series. The instrument is intended to measure infrared radiation emitted from the Earth, and produces data with unprecedented accuracy and spectral resolution. Notably, the sounder harvests spectral information to derive temperature and moisture profiles, as well as concentrations of trace gases, essential for the understanding of weather, for climate monitoring, and for atmospheric forecasts. The high spectral, spatial, and temporal resolution of the data collected by the instrument results in products of considerable size, about 16 Gigabytes per day for each of the IASI-A and IASI-B instruments currently operated. The amount of data produced by IASI demands efficient compression techniques to improve both transmission and storage capabilities. This thesis supplies a comprehensive analysis of IASI data compression and provides effective recommendations to produce useful reconstructed spectra. The study analyzes data at different processing stages; specifically, we use data transmitted by the instrument to the reception stations (IASI L0 products) and end-user data disseminated to the Numerical Weather Prediction (NWP) centres and the scientific community (IASI L1C products). In order to better understand the nature of the data collected by the instrument, we analyze the information statistics and the compression performance of several coding strategies and techniques on IASI L0 data. The order-0 entropy and the order-1, order-2, and order-3 context-based entropies are analyzed in several IASI L0 products. This study reveals that the size of the data could be considerably reduced by exploiting the order-0 entropy; more significant gains could be achieved if contextual models were used. We also investigate the performance of several state-of-the-art lossless compression techniques. Experimental results suggest that a compression ratio of 2.6:1 can be achieved, which means that more data could be transmitted at the original transmission rate or, alternatively, the transmission rate of the instrument could be further decreased. A comprehensive study of IASI L1C data compression is performed as well. Several state-of-the-art spectral transforms and compression techniques are evaluated on IASI L1C spectra. Extensive experiments, which embrace lossless, near-lossless, and lossy compression, are carried out over a wide range of IASI-A and IASI-B orbits. For lossless compression, compression ratios over 2.5:1 can be achieved. For near-lossless and lossy compression, higher compression ratios can be achieved while producing useful reconstructed spectra. Even though near-lossless and lossy compression produce higher compression ratios than lossless compression, the usefulness of the reconstructed spectra may be compromised because some information is removed during the compression stage. Therefore, we investigate the impact of near-lossless and lossy compression on end-user applications; specifically, the impact of compression on IASI L1C data is evaluated when statistical retrieval algorithms are later used to retrieve physical information. Experimental results reveal that the reconstructed spectra can enable competitive retrieval performance, improving the results achieved for the uncompressed data, even at high compression ratios.
We extend the previous study to a real scenario, where spectra from different disjoint orbits are used in the retrieval stage. Experimental results suggest that the benefits produced by compression are still significant. We also investigate the origin of these benefits. On the one hand, results illustrate that compression performs signal filtering and denoising, which benefits the retrieval methods. On the other hand, compression is an indirect way to produce spectral and spatial regularization, which helps pixel-wise statistical algorithms.
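The entropy analysis mentioned above (order-0 versus context-based entropy as bounds on lossless compressibility) is easy to reproduce on any byte stream. The sketch below uses a synthetic stand-in, not IASI data:

```python
import math
from collections import Counter

def order0_entropy(data: bytes) -> float:
    """Order-0 entropy in bits/byte: a bound for any memoryless coder."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def order1_entropy(data: bytes) -> float:
    """Entropy conditioned on one previous byte: the tighter bound
    that a context-based coder could approach."""
    pairs = Counter(zip(data, data[1:]))
    ctx = Counter(data[:-1])
    n = len(data) - 1
    return -sum(c / n * math.log2(c / ctx[a]) for (a, _), c in pairs.items())

sample = b"abracadabra" * 3000          # stand-in data, not IASI products
h0, h1 = order0_entropy(sample), order1_entropy(sample)
print(f"order-0: {h0:.2f} bits/byte -> ratio <= {8 / h0:.2f}:1")
print(f"order-1: {h1:.2f} bits/byte -> ratio <= {8 / h1:.2f}:1")
```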
Ibarria, Lorenzo. "Geometric Prediction for Compression." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16162.
Chin, Roger S. "Femtosecond laser pulse compression." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/29799.
Science, Faculty of
Physics and Astronomy, Department of
Graduate
Mandal, Mrinal Kumar. "Wavelets for image compression." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10277.
Rambaruth, Ratna. "Region-based video compression." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843377/.
Tokdemir, Serpil. "Digital compression on GPU." unrestricted, 2006. http://etd.gsu.edu/theses/available/etd-12012006-154433/.
Title from dissertation title page. Saeid Belkasim, committee chair; Ying Zhu, A.P. Preethy, committee members. Electronic text (90 p. : ill. (some col.)). Description based on contents viewed May 2, 2007. Includes bibliographical references (p. 78-81).
Jiang, Qin. "Stereo image sequence compression." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/15634.
Fawcett, Roger James. "Efficient practical image compression." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365711.
Rajpoot, Nasir Mahmood. "Adaptive wavelet image compression." Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/67099/.
BREGA, LEONARDO SANTOS. "COMPRESSION USING PERMUTATION CODES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=4379@1.
In communication systems, the generated information must be represented efficiently, so that its redundancy is reduced or ideally eliminated, for storage and/or transmission purposes. This interest justifies the study and development of compression techniques carried out over the years. This research investigates the use of permutation codes for source coding under a fidelity criterion, more specifically for memoryless uniform sources with a mean-square-error distortion criterion. Permutation codes are examined from the perspective of composite sources, and from this perspective a two-stage compression scheme is presented and analyzed. An L-dimensional (L > 1) extension of the permutation scheme presented in the literature is also introduced. The results show a better performance of the two-dimensional version when compared to the one-dimensional case, which is the main contribution of the present work. From these results, an application of permutation codes to image compression is pursued.
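The basic mechanics of a permutation source code can be sketched directly from the description above: the encoder transmits only the ordering of a block, and the decoder rearranges a fixed design vector. The toy below is our own illustration for a uniform memoryless source, not the thesis's two-dimensional scheme; the design vector uses the expected order statistics of Uniform(0,1):

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Encoder: transmit only the ORDERING of the block (log2(n!) bits).
n = 8
x = rng.uniform(0, 1, size=n)             # memoryless uniform source
order = np.argsort(x)                      # what we actually transmit

# Decoder: place the expected k-th order statistic, k/(n+1), at the
# position where the k-th smallest sample occurred.
design = np.arange(1, n + 1) / (n + 1)
x_hat = np.empty(n)
x_hat[order] = design

rate = math.log2(math.factorial(n)) / n    # bits per sample for the rank
mse = float(np.mean((x - x_hat) ** 2))
print(f"rate {rate:.2f} bits/sample, MSE {mse:.4f}")
```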
MELLO, CLAUDIO GOMES DE. "CRYPTO-COMPRESSION PREFIX CODING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2006. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9932@1.
Data compression and encryption are essential features when digital data is stored or transmitted over insecure channels. Usually, we apply two sequential operations: first, data compression to save disk space and to reduce transmission costs, and second, data encryption to provide confidentiality. This solution works fine for most applications, but we have to execute two expensive operations, and to access the data we must first decipher and then decompress the ciphertext to restore the information. In this work we propose algorithms that produce data that is both compressed and encrypted. The first contribution of this thesis is the algorithm ADDNULLS (Selective Addition of Nulls), which uses a steganographic technique to hide the real symbols of the encoded text within fake ones, based on the selective insertion of a variable number of null symbols after the real ones. It is shown that the losses in coding and decoding rates are small; the disadvantage is ciphertext expansion. The second contribution is the algorithm HHC (Homophonic-Canonical Huffman), which creates a new homophonic tree based upon the original canonical Huffman tree for the input text. Experimental results show that adding security has not significantly decreased performance. The third contribution is the algorithm RHUFF (Randomized Huffman), a variant of Huffman codes that defines a crypto-compression algorithm with randomized output. The goal is to generate random ciphertexts in order to obscure the redundancies in the plaintext (confusion). The algorithm uses homophonic substitution, canonical Huffman codes and a secret key for ciphering; the secret key is based on an initial permutation function, which dissipates the redundancy of the plaintext over the ciphertext (diffusion). The fourth contribution is the algorithm HSPC2 (Homophonic Substitution Prefix Codes with 2 homophones), a provably secure algorithm that uses a homophonic substitution algorithm and a key. In the encoding process, the HSPC2 function appends a one-bit suffix to some codes; a secret key and a homophonic-rate parameter control this appending. It is shown that breaking HSPC2 is an NP-complete problem.
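The homophonic substitution underlying these schemes can be demonstrated in isolation. The toy below is a heavily simplified sketch of the general idea, not the thesis's algorithms: there is no Huffman stage and no key schedule. It assigns frequent symbols more homophones so that the ciphertext distribution flattens:

```python
import math, random
from collections import Counter

random.seed(42)

# Homophonic substitution: give frequent symbols several code symbols
# ("homophones") chosen at random, flattening the output distribution
# so that plaintext redundancy is obscured.
text = "in communications systems the information must be represented"
freq = Counter(text)
homophones, reverse, next_id = {}, {}, 0
for ch, count in freq.items():
    k = max(1, round(8 * count / len(text)))   # more homophones if frequent
    homophones[ch] = list(range(next_id, next_id + k))
    for code in homophones[ch]:
        reverse[code] = ch
    next_id += k

encoded = [random.choice(homophones[ch]) for ch in text]
decoded = "".join(reverse[c] for c in encoded)
assert decoded == text                         # substitution is reversible

def entropy(counts):
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(f"plaintext entropy {entropy(freq):.2f} bits/symbol, "
      f"ciphertext entropy {entropy(Counter(encoded)):.2f} bits/symbol")
```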
Whitehouse, Steven John. "Error resilient image compression." Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621935.
Stephens, Charles R. "Video Compression Standardization Issues." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615077.
This paper discusses the development of a standard for compressed digital video. The benefits and applications of compressed digital video are reviewed, and some examples of compression techniques are presented. A hardware implementation of a differential pulse code modulation approach is examined.
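A minimal version of the differential pulse code modulation approach mentioned above (a generic textbook sketch with an arbitrary step size, not the paper's hardware design) looks like this:

```python
import numpy as np

def dpcm(signal, step=4):
    """Differential PCM with the quantizer inside the prediction loop:
    encode the quantized difference from the previous *reconstructed*
    sample so that encoder and decoder stay in lockstep."""
    codes, recon = [], []
    prev = 0.0
    for s in signal:
        q = round((s - prev) / step)   # quantized prediction error
        codes.append(q)
        prev = prev + q * step         # decoder's reconstruction
        recon.append(prev)
    return np.array(codes), np.array(recon)

line = 128 + 60 * np.sin(np.linspace(0, 4 * np.pi, 256))   # one video line
codes, recon = dpcm(line)
print(f"max |error| = {np.abs(line - recon).max():.1f} "
      f"(bounded by step/2 absent slope overload)")
print(f"code range {codes.min()}..{codes.max()} vs 8-bit raw samples")
```

The small code range is where the bit savings come from: slowly varying video lines need only a few difference levels instead of full 8-bit samples.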
Sato, Diogo Mululo. "EEG Analysis by Compression." Master's thesis, Faculdade de Medicina da Universidade do Porto, 2011. http://hdl.handle.net/10216/63767.
Penrose, Andrew John. "Extending lossless image compression." Thesis, University of Cambridge, 1999. https://www.repository.cam.ac.uk/handle/1810/272288.
Full textDu, Toit Benjamin David. "Data Compression and Quantization." Diss., University of Pretoria, 2014. http://hdl.handle.net/2263/79233.
Dissertation (MSc)--University of Pretoria, 2014.
Statistics
MSc
Unrestricted
Prieto, Guerrero Alfonso. "Compression de signaux biomédicaux." Toulouse, INPT, 1999. http://www.theses.fr/1999INPT032H.
DAABOUL, AHMAD. "Compression d'images par prediction." Paris 7, 1996. http://www.theses.fr/1996PA077191.