A selection of scholarly literature on the topic "Parallel Lempel-Ziv"

Format your citations in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Parallel Lempel-Ziv".

Next to each work in the list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of a publication as a .pdf file and read its abstract online, provided these details are available in the metadata.

Journal articles on the topic "Parallel Lempel-Ziv"

1

Klein, Shmuel Tomi, and Yair Wiseman. "Parallel Lempel Ziv coding." Discrete Applied Mathematics 146, no. 2 (March 2005): 180–91. http://dx.doi.org/10.1016/j.dam.2004.04.013.

2

Han, Ling Bo, Bin Lao, and Ge Nong. "Succinct parallel Lempel–Ziv factorization on a multicore computer." Journal of Supercomputing 78, no. 5 (November 8, 2021): 7278–303. http://dx.doi.org/10.1007/s11227-021-04165-w.

3

De Agostino, Sergio. "Lempel–Ziv Data Compression on Parallel and Distributed Systems." Algorithms 4, no. 3 (September 14, 2011): 183–99. http://dx.doi.org/10.3390/a4030183.

4

Fujioka, Toyota, and Hirotomo Aso. "Parallel architecture for high-speed Lempel-Ziv data coding/decoding." Systems and Computers in Japan 29, no. 8 (July 1998): 28–37. http://dx.doi.org/10.1002/(sici)1520-684x(199807)29:8<28::aid-scj4>3.0.co;2-m.

5

King, G. R. Gnana, C. Christopher Seldev, and N. Albert Singh. "A Novel Compression Technique for Compound Images Using Parallel Lempel-Ziv-Welch Algorithm." Applied Mechanics and Materials 626 (August 2014): 44–51. http://dx.doi.org/10.4028/www.scientific.net/amm.626.44.

Abstract:
A compound image is a combination of natural images, text, and graphics. This paper presents a compression technique for improving coding efficiency. The algorithm first decomposes the compound image using a 3-level biorthogonal wavelet transform, and the transformed image is then further compressed with a parallel dictionary-based LZW algorithm called PDLZW. Instead of a single dictionary of fixed word width, PDLZW uses a hierarchical set of variable-word-width dictionaries, containing several dictionaries of small address space with increasing word widths, for both compression and decompression. The experimental results show that the PSNR value is increased and the mean square error is improved.
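As a rough sketch of the dictionary layout described in this abstract (not the authors' implementation), the following Python fragment encodes data with an LZW-style dictionary split into several small partitions of increasing code width; the function name pdlzw_encode and the partition sizes are assumptions made only for the example.

# Minimal sketch of a dictionary-based LZW encoder whose dictionary is split
# into several small partitions of increasing address space, loosely following
# the PDLZW idea of hierarchical variable-word-width dictionaries.
# Partition sizes below are illustrative assumptions, not the paper's values.

def pdlzw_encode(data: bytes, partition_bits=(9, 10, 11, 12)):
    """Encode `data` with LZW, growing the code width partition by partition."""
    dictionary = {bytes([i]): i for i in range(256)}   # all single bytes
    next_code = 256
    limits = [1 << b for b in partition_bits]          # capacity after each partition
    max_code = limits[-1]

    def current_width():
        # Smallest partition whose address space still covers the dictionary.
        return next(b for b, lim in zip(partition_bits, limits)
                    if len(dictionary) <= lim)

    output = []          # list of (code, code_width) pairs
    current = b""
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate
        else:
            output.append((dictionary[current], current_width()))
            if next_code < max_code:                   # stop growing when full
                dictionary[candidate] = next_code
                next_code += 1
            current = bytes([byte])
    if current:
        output.append((dictionary[current], current_width()))
    return output

if __name__ == "__main__":
    print(pdlzw_encode(b"abababababababab"))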
6

Ji, Guoli, Yong Zeng, Zijiang Yang, Congting Ye, and Jingci Yao. "A multiple sequence alignment method with sequence vectorization." Engineering Computations 31, no. 2 (February 25, 2014): 283–96. http://dx.doi.org/10.1108/ec-01-2013-0026.

Abstract:
Purpose – The time complexity of most multiple sequence alignment algorithms is O(N^2) or O(N^3) (N is the number of sequences). In addition, with the development of biotechnology, the amount of biological sequence data is growing significantly, and traditional methods have difficulty handling large-scale sequences. The proposed LemK_MSA method aims to reduce the time complexity, especially for large-scale sequences, while keeping an accuracy level similar to that of the traditional methods. Design/methodology/approach – LemK_MSA converts multiple sequence alignment into the alignment of corresponding 10-dimensional vectors using ten types of copy modes based on Lempel-Ziv. It then uses the k-means algorithm and the NJ algorithm to divide the sequences into several groups and to calculate a guide tree for each group. A complete guide tree for multiple sequence alignment is constructed by merging the guide trees of all groups. Moreover, for large-scale multiple sequences, LemK_MSA uses a GPU-based parallel method for distance matrix calculation. Findings – Under this approach, the time efficiency of multiple sequence alignment can be improved. High-throughput mouse antibody sequences are used to validate the proposed method. Compared with ClustalW, MAFFT, and Mbed, LemK_MSA is more than ten times more efficient while maintaining alignment accuracy. Originality/value – This paper proposes a novel method with sequence vectorization for multiple sequence alignment based on Lempel-Ziv. A GPU-based parallel method has been designed for large-scale distance matrix calculation. It provides a new way for multiple sequence alignment research.
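The abstract above describes building a distance matrix from Lempel-Ziv-based representations of the sequences before clustering and guide-tree construction. As a hedged stand-in (the paper's ten copy modes and 10-dimensional vectors are not reproduced here), the following sketch computes a generic Lempel-Ziv-complexity dissimilarity between two sequences, the kind of quantity such a distance matrix could be built from.

# Generic Lempel-Ziv-complexity distance between two sequences, often used to
# build distance matrices for guide trees. Only an illustrative stand-in, not
# LemK_MSA's vectorization.

def lz_complexity(s: str) -> int:
    """Number of phrases in a simple Lempel-Ziv parse of `s`."""
    i, phrases = 0, 0
    while i < len(s):
        length = 1
        # Extend the phrase while it still occurs in the prefix seen so far.
        while i + length <= len(s) and s[i:i + length] in s[:i]:
            length += 1
        phrases += 1
        i += length
    return phrases

def lz_distance(a: str, b: str) -> float:
    """Normalized LZ-based dissimilarity between sequences `a` and `b`."""
    ca, cb = lz_complexity(a), lz_complexity(b)
    cab, cba = lz_complexity(a + b), lz_complexity(b + a)
    return max(cab - ca, cba - cb) / max(ca, cb)

if __name__ == "__main__":
    print(lz_distance("ACGTACGTACGT", "ACGTTTGTACGA"))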
7

Franek, Frantisek, and Mei Jiang. "Crochemore's Repetitions Algorithm Revisited: Computing Runs." International Journal of Foundations of Computer Science 23, no. 02 (February 2012): 389–401. http://dx.doi.org/10.1142/s0129054112400199.

Abstract:
Crochemore's repetitions algorithm, introduced in 1981, was the first O(n log n) algorithm for computing repetitions. Since then, several linear-time worst-case algorithms for computing runs have been introduced. They all follow a similar strategy: first compute the suffix tree or suffix array, then use it to compute the Lempel-Ziv factorization, and finally use the Lempel-Ziv factorization to compute all the runs. It is conceivable that in practice an extension of Crochemore's repetitions algorithm may outperform the linear-time algorithms, at least for certain classes of strings. The nature of Crochemore's algorithm lends itself naturally to parallelization, while the linear-time algorithms are not easily conducive to parallelization. For all these reasons it is interesting to explore ways to extend the original repetitions algorithm to compute runs. We present three variants of such an extension: two with a worse complexity of O(n (log n)^2), and one with the same complexity as the original algorithm. The three variants are tested for speed of performance and their memory requirements are analyzed. The third variant is tested and analyzed for various memory-saving alterations. The purpose of this research is to identify the best extension of Crochemore's algorithm for further study, comparison with other algorithms, and parallel implementation.
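To make the pipeline mentioned above (suffix structure → Lempel-Ziv factorization → runs) concrete, here is a naive quadratic-time Lempel-Ziv factorization in Python. The linear-time algorithms cited derive the same factorization from a suffix tree or suffix array; this sketch is only an illustration, not any of the papers' algorithms.

# Naive (quadratic-time) Lempel-Ziv factorization, allowing overlapping
# (self-referencing) factors as in the standard LZ factorization used for runs.

def lz_factorize(s: str):
    """Return LZ factors: (char, 0) for a fresh letter, (source, length) for a copy."""
    factors = []
    i = 0
    while i < len(s):
        best_len, best_src = 0, -1
        for j in range(i):                       # candidate earlier start
            length = 0
            while i + length < len(s) and s[j + length] == s[i + length]:
                length += 1
            if length > best_len:
                best_len, best_src = length, j
        if best_len == 0:
            factors.append((s[i], 0))            # literal: letter not seen before
            i += 1
        else:
            factors.append((best_src, best_len))
            i += best_len
    return factors

if __name__ == "__main__":
    print(lz_factorize("abaababaabaab"))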
8

Di, Jinhong, Pengkun Yang, Chunyan Wang, and Lichao Yan. "Layered Lossless Compression Method of Massive Fault Recording Data." International Journal of Circuits, Systems and Signal Processing 16 (January 3, 2022): 17–25. http://dx.doi.org/10.46300/9106.2022.16.3.

Abstract:
To overcome the problems of large error and low precision in traditional compression of power fault recording data, this paper proposes a new layered lossless compression method for massive fault recording data. The method is based on the LZW (Lempel-Ziv-Welch) algorithm: it analyzes the LZW algorithm and its existing problems and improves it. The index value of a dictionary entry replaces the input string sequence, and unknown strings are dynamically added to the dictionary. For parallel search, the dictionary is divided into several small dictionaries with different bit widths so that they can be searched simultaneously. Based on the compression and decompression behavior of LZW, the optimal hardware compression effect of the LZW algorithm is obtained. The improved LZW algorithm constructs the dictionary with a multi-tree structure and queries it globally with a multi-character parallel search method. At the same time, the dictionary size and update strategy of the LZW algorithm are analyzed, and optimized parameters are designed to construct and update the dictionary. Through lossless dictionary compression, the layered lossless compression of large-scale fault recording data is completed. The experimental results show that, compared with the traditional compression method, the mean square error percentage is effectively reduced and the compression error is eliminated, ensuring the integrity of the fault recording data and achieving the expected compression effect in a short time.
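As a loose illustration of the multi-tree dictionary and multi-character search described above (the paper targets a hardware implementation, which is not reproduced here), the following sketch stores an LZW dictionary as a trie with one independent subtree per first byte; in hardware, each subtree could live in its own small dictionary and be probed in parallel.

# Illustrative LZW encoder whose dictionary is a trie (multi-way tree).
# The partition on the first byte only hints at how sub-dictionaries could be
# searched independently; this is not the paper's design.

class TrieNode:
    __slots__ = ("code", "children")
    def __init__(self, code):
        self.code = code
        self.children = {}

def lzw_trie_encode(data: bytes):
    roots = {i: TrieNode(i) for i in range(256)}   # one subtree per first byte
    next_code = 256
    out = []
    i = 0
    while i < len(data):
        node = roots[data[i]]
        j = i + 1
        # Multi-character walk: follow the trie as far as the input matches.
        while j < len(data) and data[j] in node.children:
            node = node.children[data[j]]
            j += 1
        out.append(node.code)
        if j < len(data):                          # add the unseen extension
            node.children[data[j]] = TrieNode(next_code)
            next_code += 1
        i = j
    return out

if __name__ == "__main__":
    print(lzw_trie_encode(b"TOBEORNOTTOBEORTOBEORNOT"))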

Dissertations on the topic "Parallel Lempel-Ziv"

1

Mathur, Milind. "Analysis of Parallel Lempel-Ziv Compression Using CUDA." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14425.

Abstract:
Data compression is a topic that has been researched for years, and we have standard formats such as zip, rar, gzip, and bz2 for generic data, and jpeg and gif for images. In this age, where data is abundant and the Internet is ubiquitous, there is a strong need for fast and efficient data compression algorithms. The Lempel-Ziv family of compression algorithms forms the basis of many commonly used formats, and modified forms of the LZ77 algorithm are still widely used for lossless encoding. Recently, Graphics Processing Units (GPUs) have been making headway into the scientific computing world. They are enticing to many because of the sheer promise of their hardware performance and energy efficiency. More often than not, these graphics cards with immense processing power sit idle while we do our everyday tasks rather than gaming. GPUs were mainly used for graphics rendering, but they are now used for general computing and follow a massively parallel architecture. In this dissertation, we discuss the hashing algorithm used in LZSS compression. We compare the use of DJB hash and Murmur hash in LZSS compression, and we compare both to the superior LZ4 algorithm. We also look at massively parallel, CUDA-enabled versions of these algorithms and the speedup achievable with them. We conclude that for very small files (on the order of kilobytes) the LZ4 algorithm should be used; LZ4 is also the best choice when no CUDA-capable device is available. However, the CUDA-enabled versions of these algorithms easily outperform all the others: a speedup of up to 10x is possible even with a 500-series GPU, and even more with newer GPUs.
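As a hedged illustration of the hash-based match search discussed in this dissertation (not its CUDA implementation), the sketch below uses the DJB2 hash over a short window, with an assumed 3-byte minimum match and 4 KB sliding window, to index chains of earlier positions for LZSS-style match finding; Murmur hash could be substituted for djb2_hash.

# Hash-chained match finder of the kind LZSS encoders use. Window and
# minimum-match sizes are illustrative assumptions.

MIN_MATCH = 3
WINDOW = 4096

def djb2_hash(window: bytes) -> int:
    h = 5381
    for b in window:
        h = ((h * 33) + b) & 0xFFFFFFFF
    return h

def find_longest_match(data: bytes, pos: int, chains: dict):
    """Return (offset, length) of the best earlier match for data[pos:]."""
    if pos + MIN_MATCH > len(data):
        return (0, 0)
    key = djb2_hash(data[pos:pos + MIN_MATCH])
    best_off, best_len = 0, 0
    for cand in chains.get(key, []):
        if pos - cand > WINDOW:
            continue
        length = 0
        while (pos + length < len(data)
               and data[cand + length] == data[pos + length]):
            length += 1
        if length >= MIN_MATCH and length > best_len:
            best_off, best_len = pos - cand, length
    chains.setdefault(key, []).append(pos)   # remember this position
    return (best_off, best_len)

if __name__ == "__main__":
    # Demo: query the match finder at every position; a real encoder would
    # skip over the matched text instead of advancing one byte at a time.
    chains = {}
    text = b"abcabcabcabc"
    for p in range(len(text)):
        print(p, find_longest_match(text, p, chains))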

Book chapters on the topic "Parallel Lempel-Ziv"

1

Klein, Shmuel Tomi, and Yair Wiseman. "Parallel Lempel Ziv Coding (Extended Abstract)." In Combinatorial Pattern Matching, 18–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-48194-x_2.


Conference papers on the topic "Parallel Lempel-Ziv"

1

Shun, Julian, and Fuyao Zhao. "Practical Parallel Lempel-Ziv Factorization." In 2013 Data Compression Conference (DCC). IEEE, 2013. http://dx.doi.org/10.1109/dcc.2013.20.

2

King, G. R. G., C. S. Christopher, and N. A. Singh. "Compound image compression using parallel Lempel-Ziv-Welch algorithm." In IET Chennai Fourth International Conference on Sustainable Energy and Intelligent Systems (SEISCON 2013). Institution of Engineering and Technology, 2013. http://dx.doi.org/10.1049/ic.2013.0364.

3

De Agostino, Sergio. "Lempel-Ziv Data Compression on Parallel and Distributed Systems." In 2011 First International Conference on Data Compression, Communications and Processing (CCP). IEEE, 2011. http://dx.doi.org/10.1109/ccp.2011.11.

4

Shukla, Sankalp, Maniram Ahirwar, Ritu Gupta, Sarthak Jain, and Dheeraj Singh Rajput. "Audio Compression Algorithm using Discrete Cosine Transform (DCT) and Lempel-Ziv-Welch (LZW) Encoding Method." In 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon). IEEE, 2019. http://dx.doi.org/10.1109/comitcon.2019.8862228.

