Ready-made bibliography on the topic "Hashing algorithm used in LZSS compression"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Hashing algorithm used in LZSS compression".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if the corresponding parameters are available in the metadata.

Journal articles on the topic "Hashing algorithm used in LZSS compression"

1

Hodovychenko, Mykola A., Svitlana G. Antoshchuk, and Varvara I. Kuvaieva. "Methodology for image retrieval based on binary space partitioning and perceptual image hashing". Applied Aspects of Information Technology 5, no. 2 (July 4, 2022): 136–46. http://dx.doi.org/10.15276/aait.05.2022.10.

Abstract:
The paper focuses on building content-based image retrieval systems. The main challenges in constructing such systems are considered, their components are reviewed, and a brief overview is given of the main methods and techniques that have been used to implement the main components of image search systems. As one option for solving this problem, an image retrieval methodology based on binary space partitioning and perceptual hashing is proposed. Binary space partitioning trees are data structures obtained as follows: the space is partitioned by a hyperplane into two half-spaces, and each half-space is then recursively partitioned until each node contains only a trivial part of the input features. Perceptual hashing algorithms make it possible to represent an image as a 64-bit hash value, with similar images represented by similar hash values. The Hamming distance, which counts the number of differing bits, is used as the metric for the distance between hash values. To organize the database of hash values, a vp-tree is used, which is an implementation of a binary space partitioning structure. For the experimental study of the methodology, the Caltech-256 data set was used, which contains 30,607 images divided into 256 categories; the Difference Hash, P-Hash, and Wavelet Hash algorithms were used as the perceptual hashing algorithms, and the study was carried out in the Google Colab environment. As part of the experimental study, the robustness of the hashing algorithms to modification, compression, blurring, noise, and image rotation was examined. In addition, the process of building a vp-tree and the process of searching for images in the tree were studied. The experiments showed that each of the hashing algorithms has its own advantages and disadvantages. The hashing algorithm based on the difference between adjacent pixel values in the image turned out to be the fastest, but it was not very robust to modification and image rotation. The P-Hash algorithm, based on the discrete cosine transform, showed better resistance to image blurring but turned out to be sensitive to image compression. The W-Hash algorithm, based on the Haar wavelet transform, made it possible to construct the most efficient tree structure and proved to be resistant to image modification and compression. The proposed technique is not recommended for general-purpose image retrieval systems; however, it can be useful for searching images in specialized databases. Possible ways to improve the methodology include improving the vp-tree structure and finding a more efficient method of image representation than perceptual hashing.
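For illustration, the difference-hash and Hamming-distance comparison described in this abstract can be sketched in a few lines of Python. This is a minimal example assuming NumPy and Pillow are available, with the common 9x8 resize convention for a 64-bit difference hash; it is not the authors' code and omits the vp-tree indexing step.

# Minimal difference-hash (dHash) sketch: 64-bit perceptual hash plus
# Hamming distance, as described in the abstract. Illustrative only.
import numpy as np
from PIL import Image

def dhash(path, hash_size=8):
    # Grayscale, resize to (hash_size+1) x hash_size, compare adjacent pixels.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = np.asarray(img, dtype=np.int16)
    diff = pixels[:, 1:] > pixels[:, :-1]          # hash_size x hash_size booleans
    value = 0
    for bit in diff.flatten():                     # pack 64 booleans into one integer
        value = (value << 1) | int(bit)
    return value

def hamming(h1, h2):
    # Number of differing bits between two 64-bit hash values.
    return bin(h1 ^ h2).count("1")

# Usage: images whose hashes differ in only a few bits are perceptually similar.
# h_a, h_b = dhash("a.jpg"), dhash("b.jpg")
# print(hamming(h_a, h_b))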
2

Sakan, Kairat, Saule Nyssanbayeva, Nursulu Kapalova, Kunbolat Algazy, Ardabek Khompysh, and Dilmukhanbet Dyusenbayev. "Development and analysis of the new hashing algorithm based on block cipher". Eastern-European Journal of Enterprise Technologies 2, no. 9 (116) (April 30, 2022): 60–73. http://dx.doi.org/10.15587/1729-4061.2022.252060.

Abstract:
This paper proposes the new hash algorithm HBC-256 (Hash Based on Block Cipher), built on a symmetric block cipher used as the compression function (CF). The algorithm uses the wide-pipe construction, a modified version of the Merkle-Damgård construction. To transform the CF block cipher into a one-way compression function, the Davies-Meyer scheme is used, which, according to published research, is recognized as a strong and secure way of constructing hash functions from block ciphers. The symmetric CF block cipher consists of three transformations (Stage-1, Stage-2, and Stage-3), which include modulo-two addition, circular shifts, and substitution boxes (four-bit S-boxes). The four substitution boxes are selected from the "golden" set of S-boxes, which have ideal cryptographic properties. The HBC-256 scheme is designed to strike an effective balance between computational speed and protection against preimage attacks. The CF algorithm uses an AES-like primitive as its internal transformation. The hash output was tested for randomness using the NIST (National Institute of Standards and Technology) statistical test suite, and the results were examined for the presence of an avalanche effect in both the CF encryption algorithm and the HBC-256 hash algorithm itself. The resistance of HBC-256 to near collisions has been tested in practice. Since classical block cipher key expansion algorithms slow down the hash function, the proposed algorithm is adapted for hardware and software implementation through parallel computing. The resulting hashing algorithm allows considerable freedom in selecting the sizes of the input blocks and the output hash digest, which makes it possible to create an almost universal hashing algorithm and to use it in any cryptographic protocol or electronic digital signature algorithm.
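The Davies-Meyer mode mentioned in this abstract turns a block cipher E into a one-way compression function via H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}. The sketch below is only a toy illustration of that chaining: the "block cipher" is a placeholder add/rotate/xor permutation, not the actual Stage-1/2/3 transformation of HBC-256, and length padding is omitted for brevity.

# Toy sketch of the Davies-Meyer / Merkle-Damgard chaining described above.
# The "block cipher" below is a placeholder, NOT the real HBC-256 round function.
def toy_block_cipher(key_block: int, state: int) -> int:
    # Placeholder permutation: a few rounds of add, rotate, xor on a 256-bit state.
    mask = (1 << 256) - 1
    for r in range(8):
        state = (state + key_block + r) & mask
        state = ((state << 19) | (state >> (256 - 19))) & mask
        state ^= key_block
    return state

def davies_meyer_hash(message: bytes, iv: int = 0x0123456789ABCDEF) -> int:
    # Pad to 32-byte blocks, then chain: H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}.
    # (Merkle-Damgard length strengthening is omitted in this toy version.)
    data = message + b"\x80" + b"\x00" * ((-len(message) - 1) % 32)
    state = iv
    for i in range(0, len(data), 32):
        block = int.from_bytes(data[i:i + 32], "big")
        state = toy_block_cipher(block, state) ^ state   # Davies-Meyer feed-forward
    return state

# print(hex(davies_meyer_hash(b"hello")))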
3

Algazy, Kunbolat, Kairat Sakan, and Nursulu Kapalova. "Evaluation of the strength and performance of a new hashing algorithm based on a block cipher". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 3 (June 1, 2023): 3124. http://dx.doi.org/10.11591/ijece.v13i3.pp3124-3130.

Abstract:
The article evaluates the reliability of the new HBC-256 hashing algorithm. To study its cryptographic properties, the algorithm was implemented in software in the Python and C programming languages. For the algebraic analysis of the HBC-256 algorithm, a system of Boolean equations was also built for one round using the Transalg tool. The program code implementing the hashing algorithm was converted into a program for generating equations. As a result, one round of the compression function was described in conjunctive normal form (CNF) using 82,533 equations and 16,609 variables. To search for a collision, the satisfiability (SAT) solver Lingeling was used, including a version supporting parallel computation. It is shown that each new round doubles the number of equations and variables, so the time to find a solution grows exponentially. Therefore, it is not possible to find solutions for the full HBC-256 hash function.
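To make the claimed growth concrete: the abstract reports 82,533 clauses and 16,609 variables for one round, both roughly doubling per round. The short projection below is only an illustration of that statement, not the authors' measurement, and assumes the doubling continues uniformly.

# Rough projection of CNF size per round, assuming the reported doubling holds.
# Round-1 figures (82,533 clauses, 16,609 variables) come from the abstract.
clauses, variables = 82_533, 16_609

for rounds in range(1, 9):
    print(f"rounds={rounds:2d}  clauses~{clauses:>12,}  variables~{variables:>12,}")
    clauses *= 2
    variables *= 2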
4

Zhang, LiMing, and XinGang Zhang. "New Authentication Method for Vector Geographic Data Based on Perceptual Hash". Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-431-2019.

Abstract:
Abstract. In the Internet era, vector data can conveniently be stored, distributed, and disseminated. This makes the data convenient to use, but it also makes it easier to edit or modify, which can reduce the credibility of geo-information. Traditionally, authentication is done by using a hash function to generate a digital signature for the data. However, this method is sensitive to a change of even one bit and is therefore suited to exact authentication of content such as text. Geographic data may undergo lossy compression, filtering distortion, geometric transformation, noise pollution, and so on during transmission and application while remaining perceptually consistent with the original data. Therefore, traditional cryptographic authentication is not applicable to the robust authentication of geographic data.

Among various authentication techniques, perceptual hashing is a promising solution. Perceptual hashing is a unidirectional mapping of multimedia data sets to perceptual digest sets: multimedia content with the same perceptual content is uniquely mapped to a short digital digest, and the mapping satisfies perceptual robustness and security. Since a perceptual hash value is a compact representation of the original content, it can be used for robust content authentication of vector geographic data. The advantage of perceptual hashing algorithms over traditional cryptographic hashing algorithms is that they can tolerate differences in quality and format: the same content is always mapped to the same hash value. This is very effective for robust authentication of geographic data.

In this work, we focus on the authentication of vector geographic data content using perceptual hash algorithms. Existing authentication methods for vector geographic data are usually based on statistics, rough image representations, or the extraction of mutation points via the wavelet transform. However, these methods are more or less sensitive to geometric transformation and suffer from weak attack resistance, high complexity, or poor robustness. To avoid the shortcomings of traditional authentication algorithms, this paper proposes a vector geographic data authentication algorithm combining the DCT and perceptual hashing. The algorithm is highly robust and secure against attacks including translation, rotation, and two types of collusion attacks.
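A DCT-based perceptual hash of the kind combined here with vector-data authentication can be sketched as follows. The snippet assumes the vector geometry has already been rasterised to a small grayscale NumPy array; it uses SciPy's DCT and keeps the low-frequency 8x8 corner, which is the usual pHash construction rather than the authors' exact algorithm.

# Sketch of a DCT-based perceptual hash (pHash-style), assuming the vector
# geographic data has already been rasterised into a 2-D grayscale array.
import numpy as np
from scipy.fft import dctn

def phash_from_raster(raster: np.ndarray, hash_size: int = 8) -> int:
    # 2-D DCT, keep the low-frequency hash_size x hash_size corner.
    coeffs = dctn(raster.astype(float), norm="ortho")
    low = coeffs[:hash_size, :hash_size]
    # Threshold against the median (excluding the DC term is a common variant).
    bits = (low > np.median(low)).flatten()
    value = 0
    for bit in bits:
        value = (value << 1) | int(bit)
    return value

# Two rasterisations of perceptually equivalent data should give hash values
# with a small Hamming distance: bin(h1 ^ h2).count("1").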
5

Chandra, M. Edo. "IMPEMENTASI ALGORITMA LEMPEL ZIV STORER SZYMANSKI (LZSS) PADA APLIKASI BACAAN SHALAT BERBASIS ANDROID". KOMIK (Konferensi Nasional Teknologi Informasi dan Komputer) 3, no. 1 (November 25, 2019). http://dx.doi.org/10.30865/komik.v3i1.1624.

Abstract:
Compression is a way of encoding data so that the required storage space is smaller and usage is more efficient. In this study, the large file sizes in a prayer-reading application make document storage demand a great deal of space, and larger files can also slow a smartphone down. The purpose of this study is to design an Android-based prayer-reading application that implements the Lempel-Ziv-Storer-Szymanski (LZSS) algorithm, and to build the compression application using the Java programming language with SQLite as the database. The results show that, after applying the LZSS algorithm to the prayer readings, the compressed text files can be decompressed correctly, and the size of the decompressed text file is identical to the original text file before compression. Keywords: implementation, compression, Lempel Ziv Storer Szymanski (LZSS) algorithm.
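For readers unfamiliar with LZSS itself, the core idea implemented here is a sliding-window search for the longest earlier match, emitting either a literal byte or an (offset, length) pair. The sketch below is a deliberately simple, byte-oriented illustration with a brute-force match search, not the Android implementation described in the abstract, and its token format is chosen for clarity rather than compactness.

# Minimal LZSS sketch: tokens are either ("lit", byte) or ("ref", offset, length).
# Brute-force match search; real implementations index the window with a hash table.
WINDOW = 4096
MIN_MATCH = 3
MAX_MATCH = 18

def lzss_compress(data: bytes):
    tokens, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - WINDOW), i):         # brute-force window scan
            length = 0
            while (length < MAX_MATCH and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= MIN_MATCH:
            tokens.append(("ref", best_off, best_len))
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def lzss_decompress(tokens) -> bytes:
    out = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            out.append(tok[1])
        else:
            _, off, length = tok
            for _ in range(length):                    # copy may overlap itself
                out.append(out[-off])
    return bytes(out)

# sample = b"abracadabra abracadabra"
# assert lzss_decompress(lzss_compress(sample)) == sample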
6

Mohan, Prakash, Manikandan Sundaram, Sambit Satpathy, and Sanchali Das. "An efficient technique for cloud storage using secured de-duplication algorithm". Journal of Intelligent & Fuzzy Systems, July 27, 2021, 1–12. http://dx.doi.org/10.3233/jifs-210038.

Abstract:
Data compression techniques involve de-duplication, which plays an important role in eliminating duplicate copies of information and has been widely employed in cloud storage to reduce storage capacity requirements and save bandwidth. A secure AES-based de-duplication system is proposed that detects semantically duplicate documents and stores the data in the cloud. To protect the privacy of sensitive information while supporting de-duplication, the AES encryption technique and the SHA-256 hashing algorithm are used to encrypt the information before outsourcing. Documents are pre-processed, then compared and verified with the use of WordNet. Cosine similarity is employed to measure the similarity between documents, using an efficient vector space model (VSM) data structure. The hierarchical WordNet corpus is used to capture syntax and semantics so that duplicates can be identified. NLTK, which provides a wide range of libraries and programs for symbolic and statistical natural language processing (NLP) in the Python programming language, is used here to handle words not covered by the cosine similarity step. In previous approaches, cloud storage was used heavily because similar files were allowed to be stored. By implementing the proposed system, storage space is reduced by up to 85%. Since AES and SHA-256 are employed, the system provides high security and efficiency.
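The two checks this abstract combines, exact duplicate detection via a SHA-256 digest and near-duplicate detection via cosine similarity over a vector-space model, can be sketched in a few lines. The snippet below is a simplified illustration (no AES encryption, no WordNet expansion) under the assumption that documents are plain-text strings; the function names and threshold are illustrative.

# Sketch of the two de-duplication checks described above: SHA-256 for exact
# duplicates and cosine similarity over term-frequency vectors for near-duplicates.
import hashlib
import math
from collections import Counter

def sha256_digest(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

store = {}   # digest -> document text

def add_document(text: str, threshold: float = 0.9) -> str:
    digest = sha256_digest(text)
    if digest in store:
        return "exact duplicate, not stored"
    if any(cosine_similarity(text, doc) >= threshold for doc in store.values()):
        return "near duplicate, not stored"
    store[digest] = text
    return "stored"

# print(add_document("the quick brown fox"))
# print(add_document("the quick brown fox"))        # exact duplicate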
7

Chaitanya, Kakarla, M. Prasanth, Saisrinivasreddy V. Saisrinivasreddy, and Senthil Kumar V. Kumar. "Securing of aes based on secure hash algorithm for image steganography". International journal of health sciences, March 26, 2022, 1489–99. http://dx.doi.org/10.53730/ijhs.v6ns2.5107.

Abstract:
In this paper we deal with steganography using the SHA-256 algorithm; in the past the LSB algorithm was used. LSB (Least Significant Bit) embedding gives different results for 1-bit and 2-bit models, and using more bits affects the smoothness of the stego image. The message is hard to recover if the image is subjected to attacks such as translation and rotation, yet it is easy for attackers other than the intended recipient to decode. The message is easily lost if the image undergoes compression such as JPEG, and only certain image types are accepted by the algorithm. Because of these drawbacks we move to SHA-256 (Secure Hash Algorithm), a cryptographic hash, also called a digest. It raises the level of security in concealing covert information exchange and data hiding. It generates a unique 256-bit (32-byte) code for a text and is more secure than other common hashing algorithms: it is fast to compute, resistant to preimage and second-preimage attacks, and collision resistant. The LSB algorithm is more likely to be decrypted, as it is coded with a prime-number combination and can therefore be decoded easily.
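A common way to combine SHA-256 with LSB steganography, which appears to be what this paper is aiming at, is to embed the message together with its digest so the receiver can verify integrity after extraction. The sketch below is a generic, illustrative scheme over a NumPy image array, not the authors' method; the payload layout (4-byte length, message, 32-byte digest) is an assumption made for the example.

# Illustrative sketch: embed message + SHA-256 digest in the least significant
# bits of a flattened image array, then verify integrity on extraction.
import hashlib
import numpy as np

def to_bits(data: bytes):
    return [(byte >> k) & 1 for byte in data for k in range(7, -1, -1)]

def embed(pixels: np.ndarray, message: bytes) -> np.ndarray:
    payload = len(message).to_bytes(4, "big") + message + hashlib.sha256(message).digest()
    bits = to_bits(payload)
    flat = pixels.flatten()
    if len(bits) > flat.size:
        raise ValueError("cover image too small for payload")
    for idx, bit in enumerate(bits):
        flat[idx] = (flat[idx] & 0xFE) | bit          # overwrite the LSB
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray) -> bytes:
    flat = pixels.flatten()

    def read_bytes(bit_start: int, n: int) -> bytes:
        out = bytearray()
        for i in range(n):
            byte = 0
            for k in range(8):                        # MSB first, mirroring to_bits
                byte = (byte << 1) | int(flat[bit_start + i * 8 + k] & 1)
            out.append(byte)
        return bytes(out)

    length = int.from_bytes(read_bytes(0, 4), "big")
    message = read_bytes(32, length)
    digest = read_bytes(32 + length * 8, 32)
    if hashlib.sha256(message).digest() != digest:
        raise ValueError("integrity check failed")
    return message

# cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
# assert extract(embed(cover, b"secret")) == b"secret"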

Doctoral dissertations on the topic "Hashing algorithm used in LZSS compression"

1

Mathur, Milind. "ANALYSIS OF PARALLEL LEMPEL-ZIV COMPRESSION USING CUDA". Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14425.

Abstract:
Data compression is a topic that has been researched for years, and we have standard formats such as zip, rar, gzip, and bz2 for generic data, and jpeg and gif for images. In an age when we have vast amounts of data and the internet is ubiquitous, there is a strong need for fast and efficient data compression algorithms. The Lempel-Ziv family of compression algorithms forms the basis of many commonly used formats, and modified forms of the LZ77 algorithm are still widely used for lossless compression. Recently, Graphics Processing Units (GPUs) have been making headway into the scientific computing world. They are enticing to many because of the sheer promise of the hardware's performance and energy efficiency. More often than not, these graphics cards with immense processing power sit idle while we do everyday tasks rather than gaming. GPUs were mainly used for graphics rendering, but they are now used for general computing and follow a massively parallel architecture. In this dissertation, we discuss the hashing algorithm used in LZSS compression. We compare the use of the DJB hash and MurmurHash in LZSS compression, and we compare both against the superior LZ4 algorithm. We also look at massively parallel, CUDA-enabled versions of these algorithms and the speedup that can be achieved with them. We conclude that for very small files (on the order of kilobytes) the LZ4 algorithm should be used, and LZ4 is also the best choice when no CUDA-capable device is available. However, the CUDA-enabled versions of these algorithms easily outperform all the others, with a speedup of up to 10x possible even on a 500-series GPU and better still on newer GPUs.
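The role a hash function such as DJB2 plays inside LZSS, which this dissertation compares against MurmurHash, is to index short prefixes in the sliding window so the longest-match search only probes positions whose prefix falls in the same bucket. The sketch below is a generic Python illustration of that idea (the thesis itself works in C/CUDA); the window size, match lengths, and bucket count are arbitrary assumptions.

# Sketch of hash-accelerated match search for LZSS: DJB2 over 3-byte prefixes
# maps each window position into a bucket, so only candidate positions whose
# prefix hashes to the same bucket are probed. Illustrative only.
from collections import defaultdict

WINDOW = 4096
MIN_MATCH = 3
MAX_MATCH = 18
BUCKETS = 1 << 13

def djb2(prefix: bytes) -> int:
    h = 5381
    for byte in prefix:
        h = (h * 33 + byte) & 0xFFFFFFFF               # classic DJB2 step
    return h % BUCKETS

def find_longest_match(data: bytes, pos: int, table) -> tuple:
    best_len, best_pos = 0, -1
    for cand in table[djb2(data[pos:pos + MIN_MATCH])]:
        if cand < pos - WINDOW:
            continue                                    # candidate left the window
        length = 0
        while (length < MAX_MATCH and pos + length < len(data)
               and data[cand + length] == data[pos + length]):
            length += 1
        if length > best_len:
            best_len, best_pos = length, cand
    return best_len, best_pos

# Index positions as they are consumed, querying the table before each insert.
# A full LZSS encoder would emit (offset, length) tokens and may skip ahead.
data = b"abracadabra abracadabra"
table = defaultdict(list)
for pos in range(len(data) - MIN_MATCH + 1):
    length, match_pos = find_longest_match(data, pos, table)
    table[djb2(data[pos:pos + MIN_MATCH])].append(pos)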
