Academic literature on the topic "Lempel-Ziv decompression"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Lempel-Ziv decompression".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Lempel-Ziv decompression"

1

Shirakol, Shrikanth, Akshata Koparde, Sandhya, Shravan Kulkarni, and Yogesh Kini. "Performance optimization of dual stage algorithm for lossless data compression and decompression". International Journal of Engineering & Technology 7, no. 2.21 (April 20, 2018): 127. http://dx.doi.org/10.14419/ijet.v7i2.21.11849.

Full text
Abstract
In this paper, an optimized dual-stage architecture is proposed, combining the Lempel-Ziv-Welch (LZW) algorithm in the first phase with arithmetic coding as the later stage of the architecture. LZW is a lossless compression algorithm in which the code for each character is available in the dictionary, saving 5 bits per cycle compared with ASCII. In arithmetic coding, numbers are represented by an interval of real numbers from zero to one according to their probabilities; it is a form of entropy coding and is lossless in nature. Text passed through the proposed architecture is thus compressed at a higher rate.
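As context for the dictionary stage described in this abstract, the following minimal Python sketch shows classic LZW compression and decompression over bytes; it illustrates the general technique only, not the paper's dual-stage architecture.

    # Minimal LZW over bytes -- illustrative sketch, not the paper's design.
    def lzw_compress(data: bytes) -> list:
        dictionary = {bytes([i]): i for i in range(256)}
        w, codes = b"", []
        for c in data:
            wc = w + bytes([c])
            if wc in dictionary:
                w = wc
            else:
                codes.append(dictionary[w])
                dictionary[wc] = len(dictionary)  # next free code
                w = bytes([c])
        if w:
            codes.append(dictionary[w])
        return codes

    def lzw_decompress(codes: list) -> bytes:
        dictionary = {i: bytes([i]) for i in range(256)}
        w = dictionary[codes[0]]
        out = [w]
        for k in codes[1:]:
            entry = dictionary[k] if k in dictionary else w + w[:1]  # KwKwK case
            out.append(entry)
            dictionary[len(dictionary)] = w + entry[:1]
            w = entry
        return b"".join(out)

    assert lzw_decompress(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")) == b"TOBEORNOTTOBEORTOBEORNOT"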
2

King, G. R. Gnana, C. Christopher Seldev, and N. Albert Singh. "A Novel Compression Technique for Compound Images Using Parallel Lempel-Ziv-Welch Algorithm". Applied Mechanics and Materials 626 (August 2014): 44–51. http://dx.doi.org/10.4028/www.scientific.net/amm.626.44.

Full text
Abstract
A compound image is a combination of natural images, text, and graphics. This paper presents a compression technique for improving coding efficiency. The algorithm first decomposes the compound image using a 3-level biorthogonal wavelet transform; the transformed image is then further compressed by a parallel dictionary-based LZW algorithm called PDLZW. Instead of a single fixed-word-width dictionary, PDLZW uses a hierarchical variable-word-width dictionary set containing several dictionaries of small address space, with word widths that increase across the dictionaries used for compression and decompression. The experimental results show that the PSNR value increases and the mean squared error improves.
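The variable word-width idea can be illustrated with a small, hypothetical Python sketch that packs LZW output codes with a bit width that grows as the dictionary fills; PDLZW's actual hierarchical dictionary banks are more involved, so this shows only the principle.

    # Hypothetical sketch: emit LZW codes with a width that grows as the
    # dictionary fills, so early codes live in small-address-space "banks".
    # A decoder must mirror the same width schedule to unpack correctly.
    def pack_variable_width(codes, initial_entries=256, min_width=9):
        out, acc, nbits = bytearray(), 0, 0
        entries = initial_entries
        for code in codes:
            width = max(min_width, entries.bit_length())  # bits for current bank
            acc = (acc << width) | code
            nbits += width
            entries += 1            # each emitted code creates a dictionary entry
            while nbits >= 8:
                nbits -= 8
                out.append((acc >> nbits) & 0xFF)
        if nbits:
            out.append((acc << (8 - nbits)) & 0xFF)  # flush, zero-padded
        return bytes(out)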
3

Fitra Wijayanto, Erick, Muhammad Zarlis, and Zakarias Situmorang. "Increase the PSNR Of Image Using LZW and AES Algorithm With MLSB on Steganography". International Journal of Engineering & Technology 7, no. 2.5 (March 10, 2018): 119. http://dx.doi.org/10.14419/ijet.v7i2.5.13965.

Full text
Abstract
Much research has been done on hybrid approaches in which a text message is compressed with the Lempel-Ziv-Welch (LZW) algorithm and also encrypted. The text messages, in ciphertext form, are inserted into an image file using the LSB (Least Significant Bit) method. The results of this study indicate that the Peak Signal to Noise Ratio (PSNR) value is 0.94 times lower than the LSB method with a ratio of 20.33%, and 10.04% with Kekre's method. To improve the PSNR of the stego image, this research inserts samples using 5 bits to reduce the amount of inserted data, so that a low stego-image MSE can be obtained. Prior to insertion, the text file is compressed with the LZW algorithm and encrypted with the Advanced Encryption Standard (AES) algorithm. The compressed and encrypted text file is then inserted with the Modified Least Significant Bit (MLSB) algorithm. To test the reliability of the steganography, the stego image is evaluated by calculating the Mean Squared Error (MSE) and Peak Signal to Noise Ratio (PSNR). The extraction process applies the MLSB algorithm, followed by decryption with AES and decompression with LZW. The experimental results show that the MSE values obtained are lower and the proposed method's PSNR is better, by a factor (α) of 1.044, than the method of Kaur et al. The extraction of the embedded text file from the stego image works well, recovering the text file after decryption and decompression.
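For orientation, here is a minimal sketch of plain LSB embedding into cover bytes (e.g., grayscale pixel values); the paper's MLSB variant modifies which bits are used, so this is only the baseline idea, and all names are illustrative.

    # Baseline LSB embedding sketch; names and parameters are illustrative.
    def embed_lsb(cover: bytes, payload: bytes) -> bytearray:
        bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
        assert len(bits) <= len(cover), "cover too small for payload"
        stego = bytearray(cover)
        for i, bit in enumerate(bits):
            stego[i] = (stego[i] & 0xFE) | bit   # overwrite lowest bit only
        return stego

    def extract_lsb(stego: bytes, nbytes: int) -> bytes:
        bits = [stego[i] & 1 for i in range(nbytes * 8)]
        return bytes(sum(b << (7 - j) for j, b in enumerate(bits[k*8:(k+1)*8]))
                     for k in range(nbytes))

    cover = bytes(range(200))                  # stand-in for pixel data
    assert extract_lsb(embed_lsb(cover, b"LZW"), 3) == b"LZW"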
4

Anandita, Ida Bagus Gede, I. Gede Aris Gunadi, and Gede Indrawan. "Analisis Kinerja Dan Kualitas Hasil Kompresi Pada Citra Medis Sinar-X Menggunakan Algoritma Huffman, Lempel Ziv Welch Dan Run Length Encoding". SINTECH (Science and Information Technology) Journal 1, no. 1 (February 9, 2018): 7–15. http://dx.doi.org/10.31598/sintechjournal.v1i1.179.

Full text
Abstract
Technological progress in the medical area has meant that medical images such as X-rays are stored in digital files. Medical image files are relatively large, so the images need to be compressed. Lossless compression is image compression in which the decompression result is identical to the original, i.e., no information is lost in the compression process. Existing lossless algorithms include Run Length Encoding (RLE), Huffman, and Lempel Ziv Welch (LZW). This study compared the performance of the three algorithms in compressing medical images. The decompression results are compared objectively on compression ratio, compression time, MSE (Mean Square Error), and PSNR (Peak Signal to Noise Ratio). MSE and PSNR provide quantitative image-quality measurement; for the subjective assessment, three experts compared the original image with the decompressed image. In the objective assessment, the RLE algorithm showed the best performance, with ratio, time, MSE, and PSNR of 86.92%, 3.11 ms, 0, and 0 dB, respectively. Huffman yielded 12.26%, 96.94 ms, 0, and 0 dB, while LZW yielded -63.79%, 160 ms, 0.3, and 58.955 dB. In the subjective assessment, the experts judged that all images could be analyzed well.
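The objective metrics used in this comparison are standard; a short sketch, assuming flattened 8-bit image data, shows how MSE and PSNR are computed (PSNR is infinite when MSE is 0, i.e., when decompression is lossless).

    import math

    # MSE and PSNR for 8-bit samples; PSNR is infinite when MSE == 0.
    def mse(a: bytes, b: bytes) -> float:
        assert len(a) == len(b)
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    def psnr(a: bytes, b: bytes, peak: int = 255) -> float:
        m = mse(a, b)
        return math.inf if m == 0 else 10 * math.log10(peak * peak / m)

    original = bytes([10, 20, 30, 40])
    assert psnr(original, original) == math.inf   # lossless round trip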
5

Nunes, Daniel S. N., Felipe A. Louza, Simon Gog, Mauricio Ayala-Rincón, and Gonzalo Navarro. "Grammar Compression by Induced Suffix Sorting". ACM Journal of Experimental Algorithmics 27 (December 31, 2022): 1–33. http://dx.doi.org/10.1145/3549992.

Full text
Abstract
A grammar compression algorithm, called GCIS, is introduced in this work. GCIS is based on the induced suffix sorting algorithm SAIS, presented by Nong et al. in 2009. The proposed solution builds on the factorization performed by SAIS during suffix sorting. A context-free grammar is used to replace factors by non-terminals, and the algorithm is then applied recursively to the shorter sequence of non-terminals. The resulting grammar is encoded by exploiting redundancies such as common prefixes between the right-hand sides of rules, sorted according to SAIS. GCIS excels in the low space and time required for compression while obtaining competitive compression ratios. Our experiments on regular and repetitive, moderate and very large texts show that GCIS is a very convenient choice compared with well-known compressors such as Gzip and 7-Zip, RePair (the gold standard in grammar compression), and recent compressors such as SOLCA, LZRR, and LZD. In exchange, GCIS is slow at decompression. Yet grammar compressors are more convenient than Lempel-Ziv compressors in that one can access text substrings directly in compressed form, without ever decompressing the text. We demonstrate that GCIS is an excellent candidate for this scenario, as it is competitive with RePair-based alternatives. We also show that the relation with SAIS makes GCIS a good intermediate structure for building the suffix array and the LCP array during decompression of the text.
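The random-access property mentioned above can be demonstrated on a toy straight-line grammar: storing each rule's expanded length lets one extract a substring without decompressing the whole text. The sketch below is a generic illustration, not GCIS itself, and the rule set at the end is invented for the example.

    # Toy straight-line grammar with random access; not GCIS itself.
    def rule_len(rules, sym, memo):
        if isinstance(sym, int):                     # terminal byte value
            return 1
        if sym not in memo:
            memo[sym] = sum(rule_len(rules, s, memo) for s in rules[sym])
        return memo[sym]

    def extract(rules, sym, start, length, memo):
        """Return `length` bytes of sym's expansion, beginning at offset `start`."""
        if isinstance(sym, int):
            return bytes([sym]) if length > 0 else b""
        out = b""
        for s in rules[sym]:
            n = rule_len(rules, s, memo)
            if start < n and length > 0:
                piece = extract(rules, s, start, min(length, n - start), memo)
                out += piece
                length -= len(piece)
            start = max(0, start - n)
        return out

    rules = {"S": ["A", "A"], "A": [97, 98]}         # S expands to b"abab"
    assert extract(rules, "S", 1, 2, {}) == b"ba"    # substring without full expansion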
6

Belu, Sabin, and Daniela Coltuc. "A Hybrid Data-Differencing and Compression Algorithm for the Automotive Industry". Entropy 24, no. 5 (April 19, 2022): 574. http://dx.doi.org/10.3390/e24050574.

Full text
Abstract
We propose an innovative delta-differencing algorithm that combines software-updating methods with LZ77 data compression. This software-updating method relates to server-side software that creates binary delta files and to client-side software that performs software-update installations. The proposed algorithm creates binary-differencing streams that are already compressed from the initial phase. We present a software-updating method suitable for OTA software updates, and the method's basic strategies for achieving better performance in terms of speed, compression ratio, or a combination of both. A comparison with publicly available solutions is provided. Our test results show that our method, Keops, can outperform an LZMA (Lempel-Ziv-Markov chain algorithm) based binary-differencing solution in terms of compression ratio by more than 3% in two cases, while being two to five times faster in decompression. We also prove experimentally that the difference between Keops and other competing delta-creator software increases when larger history buffers are used. In one case, we achieve a delta rate three times better than competing delta rates.
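A generic copy/insert binary delta (not the Keops algorithm) can be sketched in a few lines: encode the new file as copy-from-old and insert-literal operations, which a further Lempel-Ziv pass can then compress. The sketch uses Python's difflib for the matching.

    import difflib

    # Copy/insert delta: encode `new` against `old`, then apply to reconstruct.
    def make_delta(old: bytes, new: bytes):
        ops = []
        sm = difflib.SequenceMatcher(a=old, b=new, autojunk=False)
        for tag, i1, i2, j1, j2 in sm.get_opcodes():
            if tag == "equal":
                ops.append(("copy", i1, i2 - i1))    # reuse bytes from old file
            elif tag in ("replace", "insert"):
                ops.append(("insert", new[j1:j2]))   # literal new bytes
        return ops                                   # 'delete' needs no output

    def apply_delta(old: bytes, ops) -> bytes:
        out = bytearray()
        for op in ops:
            if op[0] == "copy":
                _, pos, n = op
                out += old[pos:pos + n]
            else:
                out += op[1]
        return bytes(out)

    old, new = b"the quick brown fox", b"the quick red fox jumps"
    assert apply_delta(old, make_delta(old, new)) == new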
7

Di, Jinhong, Pengkun Yang, Chunyan Wang, and Lichao Yan. "Layered Lossless Compression Method of Massive Fault Recording Data". International Journal of Circuits, Systems and Signal Processing 16 (January 3, 2022): 17–25. http://dx.doi.org/10.46300/9106.2022.16.3.

Full text
Abstract
To overcome the large error and low precision of traditional power-fault record data compression, a new layered lossless compression method for massive fault recording data is proposed in this paper. The method starts from the LZW (Lempel Ziv Welch) algorithm, analyzes its existing problems, and improves it: dictionary index values replace the input string sequence, and unknown strings are added to the dictionary dynamically. For parallel search, the dictionary is divided into several small dictionaries with different bit widths, so that the dictionary can be searched in parallel; the improved algorithm builds the dictionary as a multi-tree and queries it globally with a multi-character parallel search. The dictionary size and update strategy of the LZW algorithm are analyzed, and optimized parameters are chosen to construct and update the dictionary, from which the optimal hardware compression effect of the LZW algorithm is obtained. Lossless dictionary compression then completes the layered lossless compression of the large-scale fault recording data. The experimental results show that, compared with the traditional compression method, this method effectively reduces the mean-square-error percentage and the compression error, ensuring the integrity of the fault recording data and achieving the expected compression effect in a short time.
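The "multi-tree" dictionary mentioned above has a simple software analogue: an LZW dictionary stored as a trie, so the longest match advances one node per input symbol. The sketch below illustrates that structure only; the paper's parallel hardware search across bit-width-partitioned dictionaries is not reproduced.

    # LZW dictionary as a trie: one node per phrase, one step per input byte.
    def lzw_trie_compress(data: bytes) -> list:
        children = [dict() for _ in range(256)]   # node id -> {next byte: child id}
        codes, node = [], -1
        for c in data:
            if node == -1:
                node = c                           # roots 0..255 are single bytes
                continue
            nxt = children[node].get(c)
            if nxt is not None:
                node = nxt                         # extend current match
            else:
                codes.append(node)                 # emit code for longest match
                children[node][c] = len(children)  # new phrase = new trie node
                children.append(dict())
                node = c
        if node != -1:
            codes.append(node)
        return codes                               # decodable by a standard LZW decoder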
8

Ratov, Denis. "DEVELOPMENT OF METHOD AND SOFTWARE FOR COMPRESSION AND ENCRYPTION OF INFORMATION". Journal of Automation and Information sciences 1 (January 1, 2022): 66–73. http://dx.doi.org/10.34229/1028-0979-2022-1-7.

Full text
Abstract
The subject areas of lossless and lossy information compression are surveyed, and data compression algorithms with minimal redundancy are considered: Shannon-Fano coding, Huffman coding, and compression using a dictionary: Lempel-Ziv coding. In the course of the work, the theoretical foundations of data compression were applied, various data compression methods were studied, and the best methods for archiving with encryption and storage of various kinds of data were identified. Data archiving is used here for the safe and rational placement of information on external media and for its protection from deliberate or accidental destruction or loss. In the Embarcadero RAD Studio XE8 integrated development environment, an archiver with code protection of information has been developed. The archiver's mechanism of operation is based on the creation and processing of streaming data; its core is the function for compressing and decompressing files using the Lempel-Ziv method. Poly-alphabetic substitution (the Vigenère cipher) is used as the means of protecting information in the archive. The results of the work, in particular the developed software, can be used in practice for archival storage of protected information; the data archiving and encryption mechanism can be used in information transmission systems to reduce network traffic and ensure data security. The resulting encryption and archiving software was used in a module of the software package «Diplomas SNU v.2.6.1», developed at the Volodymyr Dal East Ukrainian National University. This complex is designed to create a unified register of diplomas at the university and to automate the creation of higher-education diploma files in the multifunctional graphics editor Adobe Photoshop. The controller exports all data for the analysis and formation of diplomas from the parameters of the corresponding XML files, downloaded from the unified state education database in compressed zip archives. The developed module unzips and receives the XML files with parameters for the further work of the «Diplomas SNU v.2.6.1» complex.
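The archiver's compress-then-encrypt pipeline can be sketched generically; here zlib's DEFLATE (an LZ77-family coder) stands in for the Lempel-Ziv core, and a byte-wise Vigenère cipher supplies the polyalphabetic substitution. This is an assumption-laden illustration, not the developed software.

    import zlib

    # Compress-then-encrypt sketch. Vigenère is NOT secure by modern standards;
    # it appears here only because the paper names it.
    def vigenere(data: bytes, key: bytes, decrypt: bool = False) -> bytes:
        sign = -1 if decrypt else 1
        return bytes((b + sign * key[i % len(key)]) % 256 for i, b in enumerate(data))

    def archive(data: bytes, key: bytes) -> bytes:
        return vigenere(zlib.compress(data, level=9), key)   # LZ77-family coder

    def unarchive(blob: bytes, key: bytes) -> bytes:
        return zlib.decompress(vigenere(blob, key, decrypt=True))

    key = b"secret"
    assert unarchive(archive(b"attack at dawn" * 10, key), key) == b"attack at dawn" * 10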
9

Chandra, M. Edo. "IMPEMENTASI ALGORITMA LEMPEL ZIV STORER SZYMANSKI (LZSS) PADA APLIKASI BACAAN SHALAT BERBASIS ANDROID". KOMIK (Konferensi Nasional Teknologi Informasi dan Komputer) 3, no. 1 (November 25, 2019). http://dx.doi.org/10.30865/komik.v3i1.1624.

Full text
Abstract
Compression is a way to compress or modify data so that the required storage space is smaller and more efficient. In this study, the large file sizes in the prayer reading application make document storage demand considerable space, and larger files also slow the smartphone down. The purpose of this study is to design an Android-based prayer reading application that implements the Lempel-Ziv-Storer-Szymanski (LZSS) algorithm, and to build the compression application using the Java programming language with SQLite as the database. The results show that, after implementing the LZSS algorithm on the prayer readings, the decompression tests confirm that compressed text files can be restored: the decompressed text file is identical in size to the original text file before compression. Keywords: implementation, compression, Lempel Ziv Storer Szymanski (LZSS) algorithm.
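A minimal LZSS sketch conveys what the application implements: literals or (distance, length) back-references into a sliding window. Real codecs pack flag bits; tuples keep this illustration readable, and the window size and minimum match length below are arbitrary choices.

    WINDOW, MIN_MATCH = 4096, 3                      # arbitrary sketch parameters

    def lzss_compress(data: bytes):
        i, tokens = 0, []
        while i < len(data):
            best_len, best_dist = 0, 0
            for j in range(max(0, i - WINDOW), i):   # naive window search
                l = 0
                while i + l < len(data) and data[j + l] == data[i + l]:
                    l += 1
                if l > best_len:
                    best_len, best_dist = l, i - j
            if best_len >= MIN_MATCH:
                tokens.append((best_dist, best_len)) # back-reference
                i += best_len
            else:
                tokens.append(data[i])               # literal byte
                i += 1
        return tokens

    def lzss_decompress(tokens) -> bytes:
        out = bytearray()
        for t in tokens:
            if isinstance(t, tuple):
                dist, length = t
                for _ in range(length):
                    out.append(out[-dist])           # byte-wise copy allows overlap
            else:
                out.append(t)
        return bytes(out)

    msg = b"bacaan shalat bacaan shalat bacaan"
    assert lzss_decompress(lzss_compress(msg)) == msg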
10

Anto, Rincy Thayyalakkal, and Rajesh Ramachandran. "A Compression System for Unicode Files Using an Enhanced Lzw Method". Pertanika Journal of Science and Technology 28, no. 4 (October 21, 2020). http://dx.doi.org/10.47836/pjst.28.4.16.

Full text
Abstract
Data compression plays a vital and pivotal role in computing, as it reduces both the space occupied by a file and the time taken to access it. This work presents a method for compressing and decompressing a UTF-8 encoded stream of data based on the Lempel-Ziv-Welch (LZW) method. A special-purpose LZW compression scheme is worthwhile because many applications use Unicode text. The system comprises a compression module configured to compress the Unicode data by creating the dictionary entries in Unicode format. This is accomplished with adaptive data compression tables built from the data to be compressed, reflecting the characteristics of the most recent input. The decompression module is configured to decompress the compressed file with the help of the unique Unicode character table obtained from the compression module and the encoded output. A remarkable gain in compression can be achieved, as the knowledge gathered from the source is used to drive the decompression process.
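In the spirit of the method above (though not the paper's exact table scheme), LZW can operate over Unicode code points, seeding the dictionary with just the distinct characters of the input; the seed alphabet must then accompany the codes so the decoder can rebuild the table.

    # LZW over Unicode code points, dictionary seeded from the input alphabet.
    def lzw_unicode_compress(text: str):
        alphabet = sorted(set(text))
        dictionary = {ch: i for i, ch in enumerate(alphabet)}
        w, codes = "", []
        for ch in text:
            wc = w + ch
            if wc in dictionary:
                w = wc
            else:
                codes.append(dictionary[w])
                dictionary[wc] = len(dictionary)
                w = ch
        if w:
            codes.append(dictionary[w])
        return alphabet, codes          # the alphabet ships with the codes

    def lzw_unicode_decompress(alphabet, codes) -> str:
        dictionary = {i: ch for i, ch in enumerate(alphabet)}
        w = dictionary[codes[0]]
        out = [w]
        for k in codes[1:]:
            entry = dictionary.get(k, w + w[0])     # KwKwK case
            out.append(entry)
            dictionary[len(dictionary)] = w + entry[0]
            w = entry
        return "".join(out)

    text = "päivää päivää"
    alphabet, codes = lzw_unicode_compress(text)
    assert lzw_unicode_decompress(alphabet, codes) == text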

Theses on the topic "Lempel-Ziv decompression"

1

Rossi, Massimiliano. "Algorithms and Data Structures for Coding, Indexing, and Mining of Sequential Data". Doctoral thesis, 2020. http://hdl.handle.net/11562/1010405.

Full text
Abstract
In recent years, the production of sequential data has been rapidly increasing. This requires solving challenging problems about how to represent information, how to retrieve information, and how to extract knowledge from sequential data. These questions belong to the areas of coding, indexing, and mining, respectively. In this thesis, we investigate problems from those three areas. Coding refers to the way in which information is represented. Coding aims at generating optimal codes, that is, codes having minimum expected length. Codes can be generated for different purposes, from data compression to error detection/correction. The Lempel-Ziv 77 parsing produces an asymptotically optimal code in terms of compression. We study algorithms to efficiently decompress strings from the Lempel-Ziv 77 parsing, using memory proportional to the size of the parsing itself. We provide the first implementation of an algorithm by Bille et al., the only work we are aware of on this problem. We present a practical evaluation of this approach and several optimizations which improve the performance on all datasets we tested. Through the Ulam-Rényi game, it is possible to provide optimal adaptive error-correcting codes. The game consists of discovering an unknown m-bit number by asking membership questions, the answers to which can be erroneous. Questions are formulated knowing the answers to all previous ones. We want to find an optimal strategy, i.e., a strategy that can identify any m-bit number using the theoretical minimum number of questions. We studied the case where questions are a union of up to a fixed number of intervals, and up to three answers can be erroneous. We first show that for any sufficiently large m, there exists a strategy to identify an initially unknown m-bit number which uses at most four intervals per question. We further refine our main tool to turn the above asymptotic result into a complete characterization of those instances of the Ulam-Rényi game that admit optimal strategies. Indexing refers to the way in which information is retrieved. An index for texts permits finding all occurrences of any substring without traversing the whole text. Many applications require looking for approximate substrings. One of these is the problem of jumbled pattern matching, where two strings match if one is a permutation of the other. We study combinatorial aspects of prefix normal words, a class of binary words introduced in this context. These words can be used as indices for the Indexed Binary Jumbled Pattern Matching problem. We present a new recursive generation algorithm for prefix normal words that is competitive with the previous one but allows listing all prefix normal words sharing the same prefix. This sheds light on novel insights that may help solve the problem of counting the number of prefix normal words of a given length. We then introduce infinite prefix normal words, and we show that one of the operations used by the algorithm, when repeatedly applied to extend a word, produces an infinite prefix normal word. This motivates the search for other operations that produce infinite prefix normal words. We found that one of these operations establishes a connection between prefix normal words and Sturmian words. We also explored the relationship between prefix normal words and Abelian complexity, as well as between prefix normal words and lexicographic order. Mining refers to the way in which information is converted into knowledge.
The process of knowledge discovery covers several processing steps, including knowledge extraction. We analyze the problem of mining assertions for an embedded system from its simulation traces. This problem can be modeled as a pattern discovery problem on colored strings. We present two problems of pattern discovery on colored strings: patterns for one color only, or for all colors at the same time. We present two suffix-tree-based algorithms. The first algorithm solves both the one-color problem and the all-colors problem. We then introduce modifications which improve the performance of the algorithm on both synthetic and real data. We implemented and evaluated the proposed approaches, highlighting the time trade-offs that can be obtained. A different way of extracting knowledge is based on the information-theoretic perspective of Pearl's model of causality. It has been postulated that the true causality direction between two phenomena A and B is related to the problem of finding the minimum-entropy joint distribution between A and B. This problem is known to be NP-hard, and greedy algorithms have recently been proposed. We provide a novel analysis of one of the proposed heuristics, showing that this algorithm guarantees an additive approximation of 1 bit. We then provide a general criterion for guaranteeing an additive approximation factor of 1. This criterion may be of independent interest in other contexts where couplings are used.
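The decompression problem studied in the first part of the thesis starts from the LZ77 parse. A naive decoder, sketched below, uses memory proportional to the output; the point of the algorithm by Bille et al. is to run in space proportional to the parsing itself, which requires far more machinery than this illustration. The triple format (distance, length, next character) is one common convention.

    # Naive LZ77 decoding from (distance, length, next_char) triples.
    def lz77_decompress(triples) -> str:
        out = []
        for dist, length, ch in triples:
            start = len(out) - dist
            for i in range(length):
                out.append(out[start + i])   # copies may overlap the output
            if ch is not None:
                out.append(ch)
        return "".join(out)

    # A parse of "abababa": two literals, then a self-referential copy.
    assert lz77_decompress([(0, 0, "a"), (0, 0, "b"), (2, 4, "a")]) == "abababa"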

Conference proceedings on the topic "Lempel-Ziv decompression"

1

Puglisi, Simon J., and Massimiliano Rossi. "On Lempel-Ziv Decompression in Small Space". In 2019 Data Compression Conference (DCC). IEEE, 2019. http://dx.doi.org/10.1109/dcc.2019.00030.

Full text
2

Conrad, Kennon J., and Paul R. Wilson. "Grammatical Ziv-Lempel Compression: Achieving PPM-Class Text Compression Ratios with LZ-Class Decompression Speed". In 2016 Data Compression Conference (DCC). IEEE, 2016. http://dx.doi.org/10.1109/dcc.2016.119.

Full text
3

Bille, Philip, Mikko Berggren Ettienne, Travis Gagie, Inge Li Gørtz, and Nicola Prezza. "Decompressing Lempel-Ziv Compressed Text". In 2020 Data Compression Conference (DCC). IEEE, 2020. http://dx.doi.org/10.1109/dcc47342.2020.00022.

Full text
