Theses on the topic « Compressione dati »
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles
Consult the top 50 theses for your research on the topic « Compressione dati ».
Next to each source in the list of references there is an « Add to bibliography » button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.
Browse theses on a wide variety of disciplines and organize your bibliography correctly.
Marconi, Chiara. « Tecniche di compressione senza perdita per dati unidimensionali e bidimensionali ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5394/.
Pizzolante, Raffaele. « Compression and protection of multidimensional data ». Doctoral thesis, Universita degli studi di Salerno, 2015. http://hdl.handle.net/10556/1943.
The main objective of this thesis is to explore and discuss novel techniques related to the compression and protection of multidimensional data (i.e., 3-D medical images, hyperspectral images, 3-D microscopy images and 5-D functional Magnetic Resonance Images). First, we outline a lossless compression scheme based on a predictive model, denoted as the Medical Images Lossless Compression algorithm (MILC). MILC provides a good trade-off between compression performance and reduced usage of hardware resources. Since, in medical and medical-related fields, the execution speed of an algorithm can be a "critical" parameter, we investigate the parallelization of the compression strategy of the MILC algorithm, denoted as Parallel MILC. Parallel MILC can be executed on heterogeneous devices (i.e., CPUs, GPUs, etc.) and provides significant speedup with respect to MILC. This is followed by the important aspects related to the protection of two sensitive typologies of multidimensional data: 3-D medical images and 3-D microscopy images. Regarding the protection of 3-D medical images, we outline a novel hybrid approach, which allows for the efficient compression of 3-D medical images as well as the embedding of a digital watermark at the same time. In relation to the protection of 3-D microscopy images, the simultaneous embedding of two watermarks is explained. It should be noted that 3-D microscopy images are often used in delicate tasks (i.e., forensic analysis, etc.). Subsequently, we review a novel predictive structure that is appropriate for the lossless compression of different typologies of multidimensional data... [edited by Author]
XIII n.s.
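The predictive scheme summarized above follows the general predict-then-encode-the-residual pattern of lossless coders. The sketch below is not the MILC algorithm itself, only a minimal illustration of that pattern using a simple previous-sample predictor; the function names and the choice of predictor are assumptions made for illustration.

```python
import numpy as np

def predictive_encode(samples: np.ndarray) -> np.ndarray:
    """Replace each sample by its residual w.r.t. the previous sample.

    A lossless predictive coder in miniature: the first value is kept
    verbatim, every later value is stored as (x[i] - x[i-1]).
    """
    residuals = np.empty_like(samples)
    residuals[0] = samples[0]
    residuals[1:] = samples[1:] - samples[:-1]
    return residuals

def predictive_decode(residuals: np.ndarray) -> np.ndarray:
    """Invert predictive_encode exactly (lossless)."""
    return np.cumsum(residuals)

if __name__ == "__main__":
    x = np.array([100, 102, 101, 105, 110, 110, 111], dtype=np.int64)
    r = predictive_encode(x)
    assert np.array_equal(predictive_decode(r), x)
    # Residuals cluster around zero, so a back-end entropy coder spends fewer bits.
    print("residuals:", r)
```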
Pesare, Stefano. « Sistemi di Backup e tecniche di conservazione dei dati digitali ». Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.
Williams, Ross Neil. « Adaptive data compression ». Adelaide, 1989. http://web4.library.adelaide.edu.au/theses/09PH/09phw7262.pdf.
Steinruecken, Christian. « Lossless data compression ». Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709134.
Lindsay, Robert A., et B. V. Cox. « UNIVERSAL DATA COMPRESSION ». International Foundation for Telemetering, 1985. http://hdl.handle.net/10150/615552.
Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information, but at the cost of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different size data files are graphically presented and discussed in the paper. Adjustments required for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
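The kind of evaluation described above, compression ratio against run time, can be reproduced with off-the-shelf codecs. In the sketch below, zlib (an LZ77-family coder) and bz2 stand in for the adaptive Huffman and Lempel-Ziv implementations evaluated in the paper; the sample data and codec choice are assumptions made purely for illustration.

```python
import bz2, time, zlib

def benchmark(data: bytes) -> None:
    """Report compression ratio and run time for two standard codecs.

    zlib is an LZ77-family coder and bz2 a block-sorting coder; they stand in
    for the adaptive Huffman and Lempel-Ziv implementations evaluated above.
    """
    for name, compress in (("zlib", zlib.compress), ("bz2", bz2.compress)):
        start = time.perf_counter()
        packed = compress(data)
        elapsed = time.perf_counter() - start
        print(f"{name}: ratio {len(data) / len(packed):.2f}:1 in {elapsed * 1e3:.1f} ms")

if __name__ == "__main__":
    sample = b"frame 0001 temp=21.5 pressure=1013 status=OK\n" * 2000
    benchmark(sample)
```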
Dušák, Petr. « Fractal application in data compression ». Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-201795.
Radhakrishnan, Radhika. « Genome data modeling and data compression ». Abstract and full text PDF (free order & download, UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1447611.
García, Sobrino Francisco Joaquín. « Sounder spectral data compression ». Doctoral thesis, Universitat Autònoma de Barcelona, 2018. http://hdl.handle.net/10803/663984.
The Infrared Atmospheric Sounding Interferometer (IASI) is a Fourier Transform Spectrometer implemented on the MetOp satellite series. The instrument is intended to measure infrared radiation emitted from the Earth. IASI produces data with unprecedented accuracy and spectral resolution. Notably, the sounder harvests spectral information to derive temperature and moisture profiles, as well as concentrations of trace gases, essential for the understanding of weather, for climate monitoring, and for atmospheric forecasts. The large spectral, spatial, and temporal resolution of the data collected by the instrument involves generating products of considerable size, about 16 gigabytes per day for each of the IASI-A and IASI-B instruments currently operated. The amount of data produced by IASI demands efficient compression techniques to improve both the transmission and the storage capabilities. This thesis supplies a comprehensive analysis of IASI data compression and provides effective recommendations to produce useful reconstructed spectra. The study analyzes data at different processing stages. Specifically, we use data transmitted by the instrument to the reception stations (IASI L0 products) and end-user data disseminated to the Numerical Weather Prediction (NWP) centres and the scientific community (IASI L1C products). In order to better understand the nature of the data collected by the instrument, we analyze the information statistics and the compression performance of several coding strategies and techniques on IASI L0 data. The order-0 entropy and the order-1, order-2, and order-3 context-based entropies are analyzed in several IASI L0 products. This study reveals that the size of the data could be considerably reduced by exploiting the order-0 entropy. More significant gains could be achieved if contextual models were used. We also investigate the performance of several state-of-the-art lossless compression techniques. Experimental results suggest that a compression ratio of 2.6:1 can be achieved, which means that more data could be transmitted at the original transmission rate or, alternatively, the transmission rate of the instrument could be further decreased. A comprehensive study of IASI L1C data compression is performed as well. Several state-of-the-art spectral transforms and compression techniques are evaluated on IASI L1C spectra. Extensive experiments, which embrace lossless, near-lossless, and lossy compression, are carried out over a wide range of IASI-A and IASI-B orbits. For lossless compression, compression ratios over 2.5:1 can be achieved. For near-lossless and lossy compression, higher compression ratios can be achieved, while producing useful reconstructed spectra. Even though near-lossless and lossy compression produce higher compression ratios compared to lossless compression, the usefulness of the reconstructed spectra may be compromised because some information is removed during the compression stage. Therefore, we investigate the impact of near-lossless and lossy compression on end-user applications. Specifically, the impact of compression on IASI L1C data is evaluated when statistical retrieval algorithms are later used to retrieve physical information. Experimental results reveal that the reconstructed spectra can enable competitive retrieval performance, improving the results achieved for the uncompressed data, even at high compression ratios.
We extend the previous study to a real scenario, where spectra from different disjoint orbits are used in the retrieval stage. Experimental results suggest that the benefits produced by compression are still significant. We also investigate the origin of these benefits. On the one hand, results illustrate that compression performs signal filtering and denoising, which benefits the retrieval methods. On the other hand, compression is an indirect way to produce spectral and spatial regularization, which helps pixel-wise statistical algorithms.
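The order-0 and context-based entropies mentioned above can be estimated directly from symbol counts. The following sketch computes them for an arbitrary byte stream; it is a generic illustration, not the exact estimator applied to IASI L0 products.

```python
import collections
import math

def order0_entropy(data: bytes) -> float:
    """Empirical order-0 entropy in bits per byte."""
    counts = collections.Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def conditional_entropy(data: bytes, order: int) -> float:
    """Order-k context-based entropy H(X | previous k bytes), in bits per byte."""
    ctx_counts = collections.Counter()
    pair_counts = collections.Counter()
    for i in range(order, len(data)):
        ctx = data[i - order:i]
        ctx_counts[ctx] += 1
        pair_counts[(ctx, data[i])] += 1
    n = len(data) - order
    return -sum(c / n * math.log2(c / ctx_counts[ctx])
                for (ctx, _), c in pair_counts.items())

if __name__ == "__main__":
    blob = open(__file__, "rb").read()          # any byte stream will do
    for k in range(4):
        h = order0_entropy(blob) if k == 0 else conditional_entropy(blob, k)
        print(f"order-{k} entropy: {h:.3f} bits/byte")
```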
Du, Toit Benjamin David. « Data Compression and Quantization ». Diss., University of Pretoria, 2014. http://hdl.handle.net/2263/79233.
Dissertation (MSc)--University of Pretoria, 2014.
Statistics
MSc
Unrestricted
Roguski, Łukasz 1987. « High-throughput sequencing data compression ». Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/565775.
Thanks to advances in sequencing technologies, biomedical research has undergone a revolution in recent years, one result of which has been the explosion in the volume of genomic data generated worldwide. The typical size of the sequencing data produced in a medium-scale experiment lies between ten and one hundred gigabytes, stored across several files in the different formats produced by each experiment. The current de facto standard formats for representing genomic data are textual. For practical reasons, the data need to be stored in compressed form. In most cases, these compression methods rely on general-purpose text compressors such as gzip. However, they cannot exploit the information models specific to sequencing data, so they provide limited functionality and insufficient savings in storage space. This explains why relatively basic operations, such as processing, storing and transferring genomic data, have become one of the main bottlenecks of current analysis pipelines. For these reasons, this thesis focuses on efficient storage and compression methods for data generated in sequencing experiments. First, we propose a novel general-purpose FASTQ file compressor. Unlike gzip, it significantly reduces the size of the compressed file, and it processes the data at high speed. Next, we present compression methods that exploit the high sequence redundancy present in sequencing data. These methods achieve the best compression ratio among current FASTQ compressors, without using any external reference. We also show lossy compression approaches for storing auxiliary sequencing data, which allow the data size to be reduced even further. Finally, we provide a flexible compression framework and data format. This framework makes it possible to generate, semi-automatically, compression solutions that are not tied to any specific genomic file format. To ease complex data management, multiple datasets in heterogeneous formats can be stored in configurable containers, with the option of running custom queries over the stored data. Moreover, we show that simple solutions based on our framework can achieve results comparable to state-of-the-art format-specific compressors. In summary, the solutions developed and described in this thesis can easily be incorporated into genomic data analysis pipelines and, taken together, they provide a solid basis for developing complete approaches to the efficient storage and management of genomic data.
Frimpong-Ansah, K. « Adaptive data compression with memory ». Thesis, Imperial College London, 1986. http://hdl.handle.net/10044/1/38008.
Kretzmann, Jane Lee. « Compression of bitmapped graphic data ». Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25761.
Barr, Kenneth C. (Kenneth Charles) 1978. « Energy aware lossless data compression ». Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87316.
Walker, Wendy Tolle 1959. « Video data compression for telescience ». Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276830.
Horan, Sheila. « DATA COMPRESSION STATISTICS AND IMPLICATIONS ». International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/608754.
Bandwidth is a precious commodity. In order to make the best use of what is available, better modulation schemes need to be developed, or less data needs to be sent. This paper will investigate the option of sending less data via data compression. The structure and the entropy of the data determine how much lossless compression can be obtained for a given set of data. This paper shows the data structure and entropy for several actual telemetry data sets and the resulting lossless compression obtainable using data compression techniques.
Açikel, Ömer Fatih, et William E. Ryan. « Lossless Compression of Telemetry Data ». International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611434.
Sandia National Laboratories is faced with the problem of losslessly compressing digitized data produced by various measurement transducers. Their interest is in compressing the data as it is created, i.e., in real time. In this work we examine a number of lossless compression schemes with an eye toward their compression efficiencies and compression speeds. The various algorithms are applied to data files supplied by Sandia containing actual vibration data.
Karki, Maya, H. N. Shivashankar et R. K. Rajangam. « IMAGE DATA COMPRESSION (USING DPCM) ». International Foundation for Telemetering, 1991. http://hdl.handle.net/10150/612163.
Advances in computer technology and mass storage have paved the way for implementing advanced data compression techniques to improve the efficiency of transmission and storage of images. The present paper deals with the development of a data compression algorithm suitable for images received from satellites. A compression ratio of 1.91:1 is achieved with the proposed technique. The technique used is 1-D DPCM coding. Hardware relevant to the coder has also been proposed.
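As a rough illustration of the 1-D DPCM coding mentioned above, the sketch below predicts each pixel from the reconstructed previous one and transmits only the quantized prediction error. The quantizer step and the first-order predictor are assumptions; the paper's actual coder and bit allocation may differ.

```python
import numpy as np

def dpcm_encode(line: np.ndarray, step: int = 8) -> np.ndarray:
    """1-D DPCM along an image line: quantize the difference between each pixel
    and the *reconstructed* previous pixel, so encoder and decoder stay in lock-step."""
    codes = np.empty(line.size, dtype=np.int32)
    prev = 0
    for i, pixel in enumerate(line.astype(np.int32)):
        diff = pixel - prev
        codes[i] = int(np.round(diff / step))      # quantized prediction error
        prev = int(np.clip(prev + codes[i] * step, 0, 255))
    return codes

def dpcm_decode(codes: np.ndarray, step: int = 8) -> np.ndarray:
    out = np.empty(codes.size, dtype=np.int32)
    prev = 0
    for i, code in enumerate(codes):
        prev = int(np.clip(prev + int(code) * step, 0, 255))
        out[i] = prev
    return out.astype(np.uint8)

if __name__ == "__main__":
    row = np.array([120, 122, 125, 130, 129, 90, 88, 87], dtype=np.uint8)
    rec = dpcm_decode(dpcm_encode(row))
    print("max reconstruction error:", int(np.abs(rec.astype(int) - row.astype(int)).max()))
```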
Thornton, Christopher James. « Concept learning as data compression ». Thesis, University of Sussex, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.278809.
Pötzelberger, Klaus, et Helmut Strasser. « Data Compression by Unsupervised Classification ». Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1997. http://epub.wu.ac.at/974/1/document.pdf.
Series: Forschungsberichte / Institut für Statistik
Deng, Mo Ph D. Massachusetts Institute of Technology. « On compression of encrypted data ». Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106100.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 93-96).
In this thesis, I took advantage of a model-free compression architecture, in which the encoder only makes decisions about coding and leaves it to the decoder to apply knowledge of the source during decoding, to attack the problem of compressing encrypted data. Results for compressing different sources encrypted by different classes of ciphers are shown and analyzed. Moreover, we generalize the problem from encryption schemes to operations, or data-processing techniques. We try to discover the key properties an operation should have in order to enable good post-operation compression performance.
by Mo Deng.
S.M. in Electrical Engineering
Al-Rababa'a, Ahmad. « Arithmetic bit recycling data compression ». Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/26759.
La compression des données est la technique informatique qui vise à réduire la taille de l'information pour minimiser l'espace de stockage nécessaire et accélérer la transmission des données dans les réseaux à bande passante limitée. Plusieurs techniques de compression telles que LZ77 et ses variantes souffrent d'un problème que nous appelons la redondance causée par la multiplicité d'encodages. La multiplicité d'encodages (ME) signifie que les données sources peuvent être encodées de différentes manières. Dans son cas le plus simple, ME se produit lorsqu'une technique de compression a la possibilité, au cours du processus d'encodage, de coder un symbole de différentes manières. La technique de compression par recyclage de bits a été introduite par D. Dubé et V. Beaudoin pour minimiser la redondance causée par ME. Des variantes de recyclage de bits ont été appliquées à LZ77 et les résultats expérimentaux obtenus conduisent à une meilleure compression (une réduction d'environ 9% de la taille des fichiers qui ont été compressés par Gzip en exploitant ME). Dubé et Beaudoin ont souligné que leur technique pourrait ne pas minimiser parfaitement la redondance causée par ME, car elle est construite sur la base du codage de Huffman qui n'a pas la capacité de traiter des mots de code (codewords) de longueurs fractionnaires, c'est-à-dire qu'elle permet de générer des mots de code de longueurs intégrales. En outre, le recyclage de bits s'appuie sur le codage de Huffman (HuBR) qui impose des contraintes supplémentaires pour éviter certaines situations qui diminuent sa performance. Contrairement aux codes de Huffman, le codage arithmétique (AC) peut manipuler des mots de code de longueurs fractionnaires. De plus, durant ces dernières décennies, les codes arithmétiques ont attiré plusieurs chercheurs vu qu'ils sont plus puissants et plus souples que les codes de Huffman. Par conséquent, ce travail vise à adapter le recyclage des bits pour les codes arithmétiques afin d'améliorer l'efficacité du codage et sa flexibilité. Nous avons abordé ce problème à travers nos quatre contributions (publiées). Ces contributions sont présentées dans cette thèse et peuvent être résumées comme suit. Premièrement, nous proposons une nouvelle technique utilisée pour adapter le recyclage de bits qui s'appuie sur les codes de Huffman (HuBR) au codage arithmétique. Cette technique est nommée recyclage de bits basé sur les codes arithmétiques (ACBR). Elle décrit le cadriciel et les principes de l'adaptation du HuBR à l'ACBR. Nous présentons aussi l'analyse théorique nécessaire pour estimer la redondance qui peut être réduite à l'aide de HuBR et ACBR pour les applications qui souffrent de ME. Cette analyse démontre que ACBR réalise un recyclage parfait dans tous les cas, tandis que HuBR ne réalise de telles performances que dans des cas très spécifiques. Deuxièmement, le problème de la technique ACBR précitée, c'est qu'elle requiert des calculs à précision arbitraire. Cela nécessite des ressources illimitées (ou infinies). Afin de bénéficier de cette dernière, nous proposons une nouvelle version à précision finie. Ladite technique devienne ainsi efficace et applicable sur les ordinateurs avec les registres classiques de taille fixe et peut être facilement interfacée avec les applications qui souffrent de ME. Troisièmement, nous proposons l'utilisation de HuBR et ACBR comme un moyen pour réduire la redondance afin d'obtenir un code binaire variable à fixe.
Nous avons prouvé théoriquement et expérimentalement que les deux techniques permettent d'obtenir une amélioration significative (moins de redondance). À cet égard, ACBR surpasse HuBR et fournit une classe plus étendue des sources binaires qui pouvant bénéficier d'un dictionnaire pluriellement analysable. En outre, nous montrons qu'ACBR est plus souple que HuBR dans la pratique. Quatrièmement, nous utilisons HuBR pour réduire la redondance des codes équilibrés générés par l'algorithme de Knuth. Afin de comparer les performances de HuBR et ACBR, les résultats théoriques correspondants de HuBR et d'ACBR sont présentés. Les résultats montrent que les deux techniques réalisent presque la même réduction de redondance sur les codes équilibrés générés par l'algorithme de Knuth.
Data compression aims to reduce the size of data so that it requires less storage space and less communication channel bandwidth. Many compression techniques (such as LZ77 and its variants) suffer from a problem that we call the redundancy caused by the multiplicity of encodings. The Multiplicity of Encodings (ME) means that the source data may be encoded in more than one way. In its simplest case, it occurs when a compression technique with ME has the opportunity at certain steps, during the encoding process, to encode the same symbol in different ways. The Bit Recycling compression technique has been introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied on LZ77 and the experimental results showed that bit recycling achieved better compression (a reduction of about 9% in the size of files that have been compressed by Gzip) by exploiting ME. Dubé and Beaudoin have pointed out that their technique could not minimize the redundancy caused by ME perfectly since it is built on Huffman coding, which does not have the ability to deal with codewords of fractional lengths; i.e. it is constrained to generating codewords of integral lengths. Moreover, Huffman-based Bit Recycling (HuBR) has imposed an additional burden to avoid some situations that affect its performance negatively. Unlike Huffman coding, Arithmetic Coding (AC) can manipulate codewords of fractional lengths. Furthermore, it has attracted researchers in the last few decades since it is more powerful and flexible than Huffman coding. Accordingly, this work aims to address the problem of adapting bit recycling to arithmetic coding in order to improve the code efficiency and the flexibility of HuBR. We addressed this problem through our four (published) contributions. These contributions are presented in this thesis and can be summarized as follows. Firstly, we propose a new scheme for adapting HuBR to AC. The proposed scheme, named Arithmetic-Coding-based Bit Recycling (ACBR), describes the framework and the principle of adapting HuBR to AC. We also present the necessary theoretical analysis that is required to estimate the average amount of redundancy that can be removed by HuBR and ACBR in the applications that suffer from ME, which shows that ACBR achieves perfect recycling in all cases whereas HuBR achieves perfect recycling only in very specific cases. Secondly, the problem of the aforementioned ACBR scheme is that it uses arbitrary-precision calculations, which requires unbounded (or infinite) resources. Hence, in order to benefit from ACBR in practice, we propose a new finite-precision version of the ACBR scheme, which makes it efficiently applicable on computers with conventional fixed-sized registers and can be easily interfaced with the applications that suffer from ME. Thirdly, we propose the use of both techniques (HuBR and ACBR) as the means to reduce the redundancy in plurally parsable dictionaries that are used to obtain a binary variable-to-fixed length code. We theoretically and experimentally show that both techniques achieve a significant improvement (less redundancy) in this respect, but ACBR outperforms HuBR and provides a wider class of binary sources that may benefit from a plurally parsable dictionary. Moreover, we show that ACBR is more flexible than HuBR in practice. Fourthly, we use HuBR to reduce the redundancy of the balanced codes generated by Knuth's algorithm.
In order to compare the performance of HuBR and ACBR, the corresponding theoretical results and analysis of HuBR and ACBR are presented. The results show that both techniques achieved almost the same significant reduction in the redundancy of the balanced codes generated by Knuth's algorithm.
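The core argument above, that Huffman coding is restricted to whole-bit codeword lengths while arithmetic coding can approach the fractional ideal of -log2(p) bits per symbol, can be illustrated numerically. The sketch below builds Huffman code lengths for a small skewed alphabet and compares them with the ideal lengths; it illustrates the motivation only and is not the HuBR or ACBR scheme itself.

```python
import heapq
import math

def huffman_lengths(probs):
    """Code length (in whole bits) assigned to each symbol by Huffman coding."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for s in s1 + s2:          # every symbol in the merged subtree gets one bit deeper
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

if __name__ == "__main__":
    probs = [0.8, 0.15, 0.05]
    huff = huffman_lengths(probs)
    for p, l in zip(probs, huff):
        print(f"p={p:.2f}  Huffman: {l} bits   ideal (arithmetic): {-math.log2(p):.3f} bits")
    avg_huff = sum(p * l for p, l in zip(probs, huff))
    entropy = -sum(p * math.log2(p) for p in probs)
    print(f"average: Huffman {avg_huff:.3f} bits/symbol vs entropy {entropy:.3f} bits/symbol")
```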
Singh, Inderpreet. « New wavelet transforms and their applications to data compression ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0011/NQ52773.pdf.
Jones, Greg 1963-2017. « RADIX 95n : Binary-to-Text Data Conversion ». Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc500582/.
Senecal, Joshua G. « Length-limited data transformation and compression / ». For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2005. http://uclibs.org/PID/11984.
Arbring, Joel, et Patrik Hedström. « On Data Compression for TDOA Localization ». Thesis, Linköping University, Information Coding, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57274.
This master thesis investigates different approaches to data compression on common types of signals in the context of localization by estimating time difference of arrival (TDOA). The thesis includes evaluation of the compression schemes using recorded data, collected as part of the thesis work. This evaluation shows that compression is possible while preserving localization accuracy.
The recorded data is backed up with more extensive simulations using a free space propagation model without attenuation. The signals investigated are flat spectrum signals, signals using phase-shift keying and single side band speech signals. Signals with low bandwidth are given precedence over high bandwidth signals, since they require more data in order to get an accurate localization estimate.
The compression methods used are transform-based schemes. The transforms utilized are the Karhunen-Loève transform and the discrete Fourier transform. Different approaches for quantization of the transform components are examined, one of them being zonal sampling.
Localization is performed in the Fourier domain by calculating the steered response power from the cross-spectral density matrix. The simulations are performed in Matlab using three recording nodes in a symmetrical geometry.
The performance of localization accuracy is compared with the Cramér-Rao bound for flat spectrum signals using the standard deviation of the localization error from the compressed signals.
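A minimal illustration of the transform-coding approach described above: the discrete Fourier transform is applied to each recorded signal, only a low-frequency zone of coefficients is retained (zonal sampling without the quantization step, for brevity), and the time difference of arrival is then estimated from the cross-correlation peak of the reconstructed signals. The signal model, zone size, and delay below are assumptions for illustration, not the thesis setup.

```python
import numpy as np

def zonal_compress(signal: np.ndarray, keep: int) -> np.ndarray:
    """Transform-coding sketch: keep only the `keep` lowest-frequency DFT
    coefficients (the 'zone'), zero the rest, and reconstruct."""
    spectrum = np.fft.rfft(signal)
    spectrum[keep:] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

def tdoa_estimate(received: np.ndarray, reference: np.ndarray) -> int:
    """Delay (in samples) of `received` relative to `reference`, from the
    cross-correlation peak."""
    corr = np.correlate(received, reference, mode="full")
    return int(np.argmax(corr)) - (reference.size - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.standard_normal(1024)
    delay = 37
    node_a, node_b = src, np.roll(src, delay)
    # Compress both recordings before correlating, as a receiving node might.
    a_hat = zonal_compress(node_a, keep=200)
    b_hat = zonal_compress(node_b, keep=200)
    print("true delay:", delay, "estimated:", tdoa_estimate(b_hat, a_hat))
```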
Røsten, Tage. « Seismic data compression using subband coding ». Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2000. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1487.
Watkins, Bruce E. « Data compression using artificial neural networks ». Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/25801.
This thesis investigates the application of artificial neural networks for the compression of image data. An algorithm is developed using the competitive learning paradigm which takes advantage of the parallel processing and classification capability of neural networks to produce an efficient implementation of vector quantization. Multi-Stage, tree searched, and classification vector quantization codebook design techniques are adapted to the neural network design to reduce the computational cost and hardware requirements. The results show that the new algorithm provides a substantial reduction in computational costs and an improvement in performance.
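A compact sketch of the competitive-learning approach to vector quantization described above: each training vector moves only its winning (nearest) codeword, and encoding then amounts to transmitting the index of the nearest codeword for each image block. The block size, codebook size, and learning rate are illustrative assumptions, not the thesis's design.

```python
import numpy as np

def train_codebook(vectors: np.ndarray, codewords: int, epochs: int = 20,
                   lr: float = 0.05, seed: int = 0) -> np.ndarray:
    """Competitive learning for vector quantization: for every training vector
    only the winning (closest) codeword is moved towards it."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codewords, replace=False)].astype(float)
    for _ in range(epochs):
        for v in vectors:
            winner = np.argmin(np.linalg.norm(codebook - v, axis=1))
            codebook[winner] += lr * (v - codebook[winner])
    return codebook

def quantize(vectors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Encode each vector as the index of its nearest codeword."""
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Stand-in for 4x4 image blocks flattened to 16-dimensional vectors.
    blocks = rng.integers(0, 256, size=(500, 16)).astype(float)
    cb = train_codebook(blocks, codewords=32)
    indices = quantize(blocks, cb)
    print("each 16-pixel block is now a 5-bit index:", indices[:8])
```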
Lewis, Michael. « Data compression for digital elevation models ». Thesis, University of South Wales, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265470.
Darakis, Emmanouil. « Advanced digital holographic data compression methods ». Thesis, University of Strathclyde, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.438460.
Reid, Mark Montgomery. « Path-dictated, lossless volumetric data compression ». Thesis, University of Ulster, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338194.
Gooch, Mark. « High performance lossless data compression hardware ». Thesis, Loughborough University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338008.
Nunez, Yanez Jose Luis. « Gbit/second lossless data compression hardware ». Thesis, Loughborough University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392516.
ROSA, JANAINA OLEINIK MOURA. « A STUDY OF BIOSEQUENCE DATA COMPRESSION ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2006. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9762@1.
A família de algoritmos BLAST é a mais utilizada pelos biólogos para a busca de similaridade entre biosseqüências, e por esta razão, melhoras nestes algoritmos, em suas estruturas de dados ou em seus métodos de acesso à memória secundária são muito importantes para o avanço das descobertas biológicas. Nesta dissertação, foi estudada detalhadamente uma versão do programa BLAST, analisando as suas estruturas de dados e os algoritmos que as manipulam. Além disso, foram realizadas medições de desempenho com o intuito de identificar os possíveis gargalos de processamento dentro das fases de execução do BLAST. A partir das informações obtidas, técnicas de compactação de dados foram utilizadas como uma estratégia para redução de acesso à memória secundária com o objetivo de melhorar o desempenho para a execução do BLAST. Finalmente, foi gerada uma versão modificada do BLAST no ambiente Windows, na qual foi alterado diretamente o código do programa. Os resultados obtidos foram comparados com os resultados obtidos na execução do algoritmo original.
BLAST is the sequence comparison strategy most widely used in computational biology. Therefore, research on data structures, secondary memory access methods and on the algorithm itself could bring important optimizations and, consequently, contributions to the area. In this work, we study an NCBI BLAST version by analyzing its data structures and its algorithms for data manipulation. In addition, we collect performance data to identify processing bottlenecks in all the BLAST execution phases. Based on this analysis, data compression techniques were applied as a strategy for reducing the number of secondary memory access operations. A modified version of BLAST was then implemented in the Microsoft Windows environment, where the program code was directly altered. Finally, the results of the modified BLAST were compared against those obtained with the original algorithm.
ANJOS, FLAVIA MEDEIROS DOS. « REORGANIZATION AND COMPRESSION OF SEISMIC DATA ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11337@1.
Dados sísmicos, utilizados principalmente na indústria de petróleo, costumam apresentar dimensões de dezenas de gigabytes e em alguns casos, centenas. Este trabalho apresenta propostas de manipulação destes dados que ajudem a contornar problemas enfrentados por aplicativos de processamento e interpretação sísmica ao trabalhar com arquivos deste porte. As propostas se baseiam em reorganização e compressão. O conhecimento do formato de utilização dos dados permite reestruturar seu armazenamento diminuindo o tempo gasto com a transferência entre o disco e a memória em até 90%. A compressão é utilizada para diminuir o espaço necessário para armazenamento. Para dados desta natureza os melhores resultados, em taxa de redução, são das técnicas de compressão com perda, entre elas as compressões por agrupamento. Neste trabalho apresentamos um algoritmo que minimiza o erro médio do agrupamento uma vez que o número de grupos tenha sido determinado. Em qualquer método desta categoria o grau de erro e a taxa de compressão obtidos dependem do número de grupos. Os dados sísmicos possuem uma coerência espacial que pode ser aproveitada para melhorar a compressão dos mesmos. Combinando-se agrupamento e o aproveitamento da coerência espacial conseguimos comprimir os dados com taxas variando de 7% a 25% dependendo do erro associado. Um novo formato é proposto utilizando a reorganização e a compressão em conjunto.
Seismic data, used mainly in the petroleum industry, commonly reach sizes of tens of gigabytes and, in some cases, hundreds. This work presents propositions for manipulating these data in order to help overcome the problems that applications for seismic processing and interpretation face while dealing with files of such magnitude. The propositions are based on reorganization and compression. Knowledge of the format in which the data will be used allows us to restructure storage, reducing disc-memory transfer time by up to 90%. Compression is used to save storage space. For data of such nature, the best results in terms of compression rates come from techniques that accept information loss, clustering being one of them. In this work we present an algorithm for minimizing the cost of clustering a set of data for a pre-determined number of clusters. Seismic data have a spatial coherence that can be used to improve their compression. Combining clustering with the use of spatial coherence, we were able to compress data sets at rates from 7% to 25%, depending on the associated error. A new file format is proposed using reorganization and compression together.
Tang, Wang-Rei 1975. « The optimization of data compression algorithms ». Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/81553.
Lovén, Johan. « Data Compression in a Vehicular Environment ». Thesis, KTH, Signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-155396.
Smith, Daniel D. « DATA COMPRESSION IN PCM TELEMETRY SYSTEMS ». International Foundation for Telemetering, 1986. http://hdl.handle.net/10150/615422.
The data capacity of aerospace PCM telemetry systems can be greatly improved by the application of data compression techniques, which exploit the redundancy in typical telemetry data. This paper describes the design concept for a compressed PCM (CPCM) telemetry system, and contrasts system design considerations to those involved in development of a conventional PCM system. Simulation results are presented depicting potential benefits from use of telemetry data compression on an upper-stage launch vehicle.
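One classic way telemetry compression exploits redundancy is a zero-order predictor with a tolerance aperture: a sample is transmitted only when it departs from the last transmitted value by more than the aperture. The sketch below illustrates that general idea only; it is not the CPCM design described in the paper, and the tolerance and sample values are assumptions.

```python
def zero_order_compress(samples, tolerance=2):
    """Classic telemetry redundancy reduction: transmit a sample only when it
    deviates from the last transmitted value by more than `tolerance`.

    Returns (index, value) pairs; untransmitted samples are reconstructed by
    holding the previous value."""
    sent = [(0, samples[0])]
    held = samples[0]
    for i, s in enumerate(samples[1:], start=1):
        if abs(s - held) > tolerance:
            sent.append((i, s))
            held = s
    return sent

def zero_order_decompress(sent, length):
    out, held = [], sent[0][1]
    points = dict(sent)
    for i in range(length):
        held = points.get(i, held)
        out.append(held)
    return out

if __name__ == "__main__":
    telemetry = [40, 40, 41, 40, 40, 55, 56, 56, 55, 55, 40, 40]
    packets = zero_order_compress(telemetry)
    print(f"{len(packets)} of {len(telemetry)} samples transmitted:", packets)
    print("reconstruction:", zero_order_decompress(packets, len(telemetry)))
```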
RAJYALAKSHMI, P. S., et R. K. RAJANGAM. « DATA COMPRESSION SYSTEM FOR VIDEO IMAGES ». International Foundation for Telemetering, 1986. http://hdl.handle.net/10150/615539.
In most transmission channels, bandwidth is at a premium and an important attribute of any good digital signalling scheme is to optimally utilise the bandwidth for transmitting the information. The Data Compression System in this way plays a significant role in the transmission of picture data from any Remote Sensing Satellite by exploiting the statistical properties of the imagery. The data rate required for transmission to ground can be reduced by using suitable compression technique. A data compression algorithm has been developed for processing the images of Indian Remote Sensing Satellite. Sample LANDSAT imagery and also a reference photo are used for evaluating the performance of the system. The reconstructed images are obtained after compression for 1.5 bits per pixel and 2 bits per pixel as against the original of 7 bits per pixel. The technique used is uni-dimensional Hadamard Transform Technique. The Histograms are computed for various pictures which are used as samples. This paper describes the development of such a hardware and software system and also indicates how hardware can be adopted for a two dimensional Hadamard Transform Technique.
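The uni-dimensional Hadamard transform technique mentioned above can be sketched as follows: a Sylvester-constructed Hadamard matrix transforms each pixel row, the signal energy concentrates in a few coefficients, and coarse uniform quantization of that spectrum yields far fewer bits per pixel than the original 7-bit samples. The quantization step below is an assumption; the paper's actual bit allocation and hardware details are not reproduced.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of the n x n Hadamard matrix (n a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def transform_code(block: np.ndarray, step: float = 8.0) -> np.ndarray:
    """1-D Hadamard transform of a pixel row followed by uniform quantization."""
    h = hadamard(block.size)
    coeffs = h @ block / block.size        # forward transform, scaled so h.T inverts it
    return np.round(coeffs / step)

def reconstruct(q: np.ndarray, step: float = 8.0) -> np.ndarray:
    h = hadamard(q.size)
    return h.T @ (q * step)                # inverse transform

if __name__ == "__main__":
    row = np.array([100, 104, 108, 112, 116, 120, 124, 128], dtype=float)
    q = transform_code(row)
    print("quantized coefficients:", q.astype(int))    # mostly zeros
    print("reconstructed row:     ", np.round(reconstruct(q)).astype(int))
```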
Toufie, Moegamat Zahir. « Real-time loss-less data compression ». Thesis, Cape Technikon, 2000. http://hdl.handle.net/20.500.11838/1367.
Data stored on disks generally contain significant redundancy. A mechanism or algorithm that recodes the data to lessen the data size could possibly double or triple the effective data that could be stored on the media. One mechanism of doing this is by data compression. Many compression algorithms currently exist, but each one has its own advantages as well as disadvantages. The objective of this study is to formulate a new compression algorithm that could be implemented in a real-time mode in any file system. The new compression algorithm should also execute as fast as possible, so as not to cause a lag in the file system's performance. This study focuses on binary data of any type, whereas previous articles such as (Huffman, 1952:1098), (Ziv & Lempel, 1977:337; 1978:530), (Storer & Szymanski, 1982:928) and (Welch, 1984:8) have placed particular emphasis on text compression in their discussions of compression algorithms for computer data. The resulting compression algorithm that is formulated by this study is Lempel-Ziv-Toufie (LZT). LZT is basically an LZ77 (Ziv & Lempel, 1977:337) encoder with a buffer size equal to that of the data block of the file system in question. LZT does not make this distinction; it discards the sliding-buffer principle and uses each data block of the entire input stream as one big buffer on which compression can be performed. LZT also handles the encoding of a match slightly differently to that of LZ77. An LZT match is encoded by two bit streams, the first specifying the position of the match and the other specifying the length of the match. This combination is commonly referred to as a
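A simplified sketch of the LZT idea described above: greedy LZ77-style parsing in which the search buffer is the entire block coded so far rather than a sliding window, and each match is represented by the (position, length) pair that LZT would write to its two bit streams. The minimum match length and the greedy, quadratic-time search are assumptions made for brevity.

```python
def lz_block_encode(block: bytes, min_match: int = 3):
    """Greedy LZ77-style parsing that, like LZT, searches the whole block seen
    so far (no sliding window). Tokens are either a literal byte or a
    (position, length) match pair."""
    tokens, i = [], 0
    while i < len(block):
        best_len, best_pos = 0, 0
        for j in range(i):                       # search everything already coded
            length = 0
            while (i + length < len(block)
                   and j + length < i
                   and block[j + length] == block[i + length]):
                length += 1
            if length > best_len:
                best_len, best_pos = length, j
        if best_len >= min_match:
            tokens.append(("match", best_pos, best_len))
            i += best_len
        else:
            tokens.append(("literal", block[i]))
            i += 1
    return tokens

def lz_block_decode(tokens) -> bytes:
    out = bytearray()
    for t in tokens:
        if t[0] == "literal":
            out.append(t[1])
        else:
            _, pos, length = t
            out.extend(out[pos:pos + length])
    return bytes(out)

if __name__ == "__main__":
    data = b"abracadabra abracadabra"
    toks = lz_block_encode(data)
    assert lz_block_decode(toks) == data
    print(toks)
```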
Lum, Randall M. G. « Differential pulse code modulation data compression ». Scholarly Commons, 1989. https://scholarlycommons.pacific.edu/uop_etds/2181.
Pratas, Diogo. « Compression and analysis of genomic data ». Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/16286.
Genomic sequences are large codified messages describing most of the structure of all known living organisms. Since the presentation of the first genomic sequence, a huge amount of genomics data have been generated, with diversified characteristics, rendering the data deluge phenomenon a serious problem in most genomics centers. As such, most of the data are discarded (when possible), while others are compressed using general purpose algorithms, often attaining modest data reduction results. Several specific algorithms have been proposed for the compression of genomic data, but unfortunately only a few of them have been made available as usable and reliable compression tools. From those, most have been developed for some specific purpose. In this thesis, we propose a compressor for genomic sequences of multiple natures, able to function in a reference or reference-free mode. Besides, it is very flexible and can cope with diverse hardware specifications. It uses a mixture of finite-context models (FCMs) and eXtended FCMs. The results show improvements over state-of-the-art compressors. Since the compressor can be seen as an unsupervised alignment-free method to estimate the algorithmic complexity of genomic sequences, it is the ideal candidate to perform analysis of and between sequences. Accordingly, we define a way to approximate directly the Normalized Information Distance, aiming to identify evolutionary similarities in intra- and inter-species. Moreover, we introduce a new concept, the Normalized Relative Compression, that is able to quantify and infer new characteristics of the data, previously undetected by other methods. We also investigate local measures, being able to locate specific events, using complexity profiles. Furthermore, we present and explore a method based on complexity profiles to detect and visualize genomic rearrangements between sequences, identifying several insights of the genomic evolution of humans. Finally, we introduce the concept of relative uniqueness and apply it to the Ebolavirus, identifying three regions that appear in all the virus outbreak sequences but nowhere in the human genome. In fact, we show that these sequences are sufficient to classify different sub-species. Also, we identify regions in human chromosomes that are absent from close primates DNA, specifying novel traits in human uniqueness.
As sequências genómicas podem ser vistas como grandes mensagens codificadas, descrevendo a maior parte da estrutura de todos os organismos vivos. Desde a apresentação da primeira sequência, um enorme número de dados genómicos tem sido gerado, com diversas características, originando um sério problema de excesso de dados nos principais centros de genómica. Por esta razão, a maioria dos dados é descartada (quando possível), enquanto outros são comprimidos usando algoritmos genéricos, quase sempre obtendo resultados de compressão modestos. Têm também sido propostos alguns algoritmos de compressão para sequências genómicas, mas infelizmente apenas alguns estão disponíveis como ferramentas eficientes e prontas para utilização. Destes, a maioria tem sido utilizada para propósitos específicos. Nesta tese, propomos um compressor para sequências genómicas de natureza múltipla, capaz de funcionar em modo referencial ou sem referência. Além disso, é bastante flexível e pode lidar com diversas especificações de hardware. O compressor usa uma mistura de modelos de contexto-finito (FCMs) e FCMs estendidos. Os resultados mostram melhorias relativamente a compressores estado-dearte. Uma vez que o compressor pode ser visto como um método não supervisionado, que não utiliza alinhamentos para estimar a complexidade algortímica das sequências genómicas, ele é o candidato ideal para realizar análise de e entre sequências. Em conformidade, definimos uma maneira de aproximar directamente a distância de informação normalizada (NID), visando a identificação evolucionária de similaridades em intra e interespécies. Além disso, introduzimos um novo conceito, a compressão relativa normalizada (NRC), que é capaz de quantificar e inferir novas características nos dados, anteriormente indetectados por outros métodos. Investigamos também medidas locais, localizando eventos específicos, usando perfis de complexidade. Propomos e exploramos um novo método baseado em perfis de complexidade para detectar e visualizar rearranjos genómicos entre sequências, identificando algumas características da evolução genómica humana. Por último, introduzimos um novo conceito de singularidade relativa e aplicamo-lo ao Ebolavirus, identificando três regiões presentes em todas as sequências do surto viral, mas ausentes do genoma humano. De facto, mostramos que as três sequências são suficientes para classificar diferentes sub-espécies. Também identificamos regiões nos cromossomas humanos que estão ausentes do ADN de primatas próximos, especificando novas características da singularidade humana.
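A minimal sketch of the finite-context modelling on which the compressor above is built: an order-k model estimates P(symbol | previous k bases) from adaptively updated counts, and the cost of a sequence is the sum of -log2 of those probabilities. This single fixed-order model with Laplace smoothing is only an illustration, not the thesis's mixture of FCMs and extended FCMs.

```python
import collections
import math

def fcm_bits(sequence: str, order: int = 3, alpha: float = 1.0) -> float:
    """Estimate the number of bits an order-k finite-context model would need
    to encode `sequence`, updating its counts adaptively as it goes.

    P(symbol | context) uses Laplace smoothing over the DNA alphabet.
    """
    alphabet = "ACGT"
    counts = collections.defaultdict(collections.Counter)
    bits = 0.0
    for i, symbol in enumerate(sequence):
        context = sequence[max(0, i - order):i]
        ctx_counter = counts[context]
        total = sum(ctx_counter.values()) + alpha * len(alphabet)
        p = (ctx_counter[symbol] + alpha) / total
        bits += -math.log2(p)
        ctx_counter[symbol] += 1        # adaptive: the model learns as it encodes
    return bits

if __name__ == "__main__":
    genome = "ACGT" * 500 + "ACGTTTACGT" * 50
    needed = fcm_bits(genome)
    print(f"{needed / len(genome):.3f} bits/base vs 2 bits/base for raw 2-bit packing")
```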
Chen, Zhenghao. « Deep Learning for Visual Data Compression ». Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29729.
Aydinoğlu, Behçet Halûk. « Stereo image compression ». Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15447.
Lee, Joshua Ka-Wing. « A model-adaptive universal data compression architecture with applications to image compression ». Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111868.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-61).
In this thesis, I designed and implemented a model-adaptive data compression system for the compression of image data. The system is a realization and extension of the Model-Quantizer-Code-Separation Architecture for universal data compression which uses Low-Density-Parity-Check Codes for encoding and probabilistic graphical models and message-passing algorithms for decoding. We implement a lossless bi-level image data compressor as well as a lossy greyscale image compressor and explain how these compressors can rapidly adapt to changes in source models. We then show using these implementations that Restricted Boltzmann Machines are an effective source model for compressing image data compared to other compression methods by comparing compression performance using these source models on various image datasets.
by Joshua Ka-Wing Lee.
S.M.
Nasiopoulos, Panagiotis. « Adaptive compression coding ». Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28508.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
Horan, Sheila B. « CURRENT STATUS OF DATA COMPRESSION IN TELEMETRY ». International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605065.
Reduction of bandwidth for signal transmission is of paramount concern to many in the telemetry and wireless industry. One way to reduce bandwidth is to reduce the amount of data being sent. There are several techniques available to reduce the amount of data. This paper will review the various types of data compression currently in use for telemetry data and how much compression is achieved.
Portell, i. de Mora Jordi. « Payload data handling, telemetry and data compression systems for Gaia ». Doctoral thesis, Universitat Politècnica de Catalunya, 2005. http://hdl.handle.net/10803/6585.
A mission like this demands major technological and design efforts, since hundreds of stars must be detected, selected and measured every second, and their data subsequently sent to Earth, more than a million and a half kilometres away. We have focused the work of this thesis on this side of the mission, proposing designs for the payload data handling, science telemetry and data compression systems. Our final goal is to make it possible to transmit to the ground station the immense amount of data generated by the instruments, taking into account the limited capacity of the communications channel. This requires the design of a lossless data compression system that offers the best compression ratios and guarantees the integrity of the transmitted data, which poses a major challenge for information theory methods and for the design of data compression systems.
These technological aspects had not yet been studied, or only preliminary drafts were available, since the mission itself was at a preliminary stage when this thesis was started. Our work has therefore been received with enthusiasm by the project's scientists and engineers.
We first review the operational environment of our study, described in the first part of the thesis. This includes the several reference systems and conventions that we have proposed in order to unify measurements, data references and designs. This proposal has been used as an initial reference in the mission and is currently being extended and improved by other scientists. We have also compiled the main characteristics of the astrometric instrument (on which we have focused our study) and reviewed its operational guidelines, which has also been taken into account by other teams.
In the second part of the thesis we describe our proposal for the Gaia payload data handling system, which has been used to present the scientific requirements to the industrial teams and is itself a viable (although simplified) implementation option. In the next part we study the science telemetry, compiling the data fields to be generated by the instruments and proposing an optimized coding and transmission scheme, which reduces the occupation of the communications channel and is ready to include an optimized data compression system. The latter is described in the fourth and final part of the thesis, where we show that our proposal almost completely fulfils the compression requirements, doubling the compression ratios offered by the best standard systems. Our design represents the best solution currently available for Gaia, and its performance has been adopted as the baseline design by other teams.
The results of our work go beyond the publication of a thesis report: they are complemented by software applications that we have developed to help us design, optimize and verify the operation of the systems proposed here. The complexity of our work was also increased by the need to continuously update it to the changes that the mission design underwent during the five years of the doctorate. Finally, we are satisfied with the results of our work, since most of them have been (or are being) taken into account by many of the teams involved in the mission and by the European Space Agency itself in the final design.
Haugen, Daniel. « Seismic Data Compression and GPU Memory Latency ». Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9973.
The gap between processing performance and the memory bandwidth is still increasing. To compensate for this gap various techniques have been used, such as using a memory hierarchy with faster memory closer to the processing unit. Other techniques that have been tested include the compression of data prior to a memory transfer. Bandwidth limitations exist not only at low levels within the memory hierarchy, but also between the central processing unit (CPU) and the graphics processing unit (GPU), suggesting the use of compression to mask the gap. Seismic datasets are often very large, e.g. several terabytes. This thesis explores compression of seismic data to hide the bandwidth limitation between the CPU and the GPU for seismic applications. The compression method considered is subband coding, with both run-length encoding (RLE) and Huffman encoding as compressors of the quantized data. These methods have been shown in CPU implementations to give very good compression ratios for seismic data. A proof-of-concept implementation for decompression of seismic data on GPUs is developed. It consists of three main components: first, the subband synthesis filter reconstructing the input data processed by the subband analysis filter; second, the inverse quantizer generating an output close to the input given to the quantizer; finally, the decoders decompressing the compressed data using Huffman and RLE. The results of our implementation show that the seismic data compression algorithm investigated is probably not suited to hide the bandwidth limitation between CPU and GPU. This is because the steps taken to do the decompression are likely slower than a simple memory copy of the uncompressed seismic data. It is primarily the decompressors that are the limiting factor, but in our implementation the subband synthesis is also limiting. The sequential nature of the decompression algorithms used makes them difficult to parallelize to make use of the processing units on the GPUs in an efficient way. Several suggestions for future work are then given, as well as results showing how our GPU implementation can be very useful for compression of data to be sent over a network. Our compression results give a compression factor between 27 and 32, and an SNR of 24.67 dB for a cube of dimension 64³. A speedup of 2.5 for the synthesis filter compared to the CPU implementation is achieved (2029.00/813.76 ≈ 2.5). Although not currently suited for GPU-CPU compression, our implementations indicate
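Run-length encoding, one of the two back-end coders evaluated above, is straightforward to sketch: quantized subband coefficients contain long zero runs, which is exactly what RLE exploits. The example below is a generic CPU-side illustration, not the GPU implementation from the thesis.

```python
def rle_encode(values):
    """Run-length encode a sequence as (value, run_length) pairs.

    Quantized subband coefficients contain long runs of zeros, which is why
    RLE is a natural first-stage compressor before Huffman coding.
    """
    if not values:
        return []
    runs, current, count = [], values[0], 1
    for v in values[1:]:
        if v == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = v, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

if __name__ == "__main__":
    coeffs = [0, 0, 0, 0, 5, 0, 0, -3, 0, 0, 0, 0, 0, 0, 1]
    packed = rle_encode(coeffs)
    assert rle_decode(packed) == coeffs
    print(packed)   # [(0, 4), (5, 1), (0, 2), (-3, 1), (0, 6), (1, 1)]
```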
Aqrawi, Ahmed Adnan. « Effects of Compression on Data Intensive Algorithms ». Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11797.