Follow this link to see other types of publications on this topic: HASH METHODS.

Journal articles on the topic "HASH METHODS"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles.

Consult the top 50 journal articles for your research on the topic "HASH METHODS".

Next to every entry in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are provided in the work's metadata.

Browse journal articles on a wide variety of topics and build your bibliography accordingly.

1

Safaryan, Olga, Larissa Cherckesova, Nikita Lyashenko, Pavel Razumov, Vladislav Chumakov, Boris Akishin, and Andrey Lobodenko. "Modern Hash Collision CyberAttacks and Methods of Their Detection and Neutralization". Journal of Physics: Conference Series 2131, no. 2 (December 1, 2021): 022099. http://dx.doi.org/10.1088/1742-6596/2131/2/022099.

Full text of the source
Abstract:
This article discusses issues related to the realization of collision cyberattacks (attacks based on hash collisions). Since post-quantum cryptography has become relevant, classical cryptosystems no longer provide sufficient resistance to modern quantum cyberattacks, and systems based on outdated hashing algorithms become vulnerable to hash-collision attacks. As replacements for unreliable algorithms, such as the various modifications of MD5 and SHA-1, new algorithms have been created, for example the SHA-3 standard based on the Keccak function and AES-based hashing. The article surveys modern collision cyberattacks and possible methods of detecting them. As a result of this study, a theoretical description of hash-collision cyberattacks was developed; modern collision attacks and possible ways of detecting and countering them (weak-hash detection) are described; a software tool that detects vulnerable and unreliable hashes is implemented; and software testing is carried out. Based on the research conducted, the main advantages of the implemented tool are effective detection of vulnerable hashes, the ability to generate new collision-protected hashes, a convenient and user-friendly interface, small memory requirements, and a small program-code footprint.
APA, Harvard, Vancouver, ISO, and other styles
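The paper's detection tool is not described in implementable detail in the abstract; as a rough illustration of the weak-hash-detection idea, the following Python sketch flags stored digests whose length matches a collision-prone algorithm and re-hashes data with SHA3-256. The names `audit_hash` and `rehash` and the length-based heuristic are assumptions made for this sketch, not taken from the paper.

```python
import hashlib

# Digest sizes (in hex characters) of algorithms with known collision attacks.
WEAK_DIGEST_LENGTHS = {32: "MD5", 40: "SHA-1"}

def audit_hash(hex_digest: str) -> str:
    """Flag a stored digest whose length matches a collision-prone algorithm."""
    algo = WEAK_DIGEST_LENGTHS.get(len(hex_digest))
    return f"possibly {algo}: vulnerable" if algo else "no known-weak length match"

def rehash(data: bytes) -> str:
    """Produce a collision-resistant replacement digest with SHA3-256 (Keccak)."""
    return hashlib.sha3_256(data).hexdigest()

md5_digest = hashlib.md5(b"password").hexdigest()
print(audit_hash(md5_digest))   # 32 hex characters -> flagged as possibly MD5
print(rehash(b"password"))      # 64-hex-character SHA3-256 digest
```

A real audit would also inspect context (file formats, database schemas) rather than digest length alone, but the length check conveys the idea of the weak-hash scan the abstract mentions.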
2

Blackburn, Simon R. "Perfect Hash Families: Probabilistic Methods and Explicit Constructions". Journal of Combinatorial Theory, Series A 92, no. 1 (October 2000): 54–60. http://dx.doi.org/10.1006/jcta.1999.3050.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Jinnai, Yuu, and Alex Fukunaga. "On Hash-Based Work Distribution Methods for Parallel Best-First Search". Journal of Artificial Intelligence Research 60 (October 30, 2017): 491–548. http://dx.doi.org/10.1613/jair.5225.

Full text of the source
Abstract:
Parallel best-first search algorithms such as Hash Distributed A* (HDA*) distribute work among the processes using a global hash function. We analyze the search and communication overheads of state-of-the-art hash-based parallel best-first search algorithms, and show that although Zobrist hashing, the standard hash function used by HDA*, achieves good load balance for many domains, it incurs significant communication overhead since almost all generated nodes are transferred to a different processor than their parents. We propose Abstract Zobrist hashing, a new work distribution method for parallel search which, instead of computing a hash value based on the raw features of a state, uses a feature projection function to generate a set of abstract features with higher locality, thereby reducing communication overhead. We show that Abstract Zobrist hashing outperforms previous methods on search domains using hand-coded, domain-specific feature projection functions. We then propose GRAZHDA*, a graph-partitioning-based approach to automatically generating feature projection functions. GRAZHDA* seeks to approximate the partitioning of the actual search space graph by partitioning the domain transition graph, an abstraction of the state space graph. We show that GRAZHDA* outperforms previous methods on domain-independent planning.
APA, Harvard, Vancouver, ISO, and other styles
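For readers unfamiliar with the baseline method this abstract builds on, here is a minimal Python sketch of plain Zobrist hashing and HDA*-style work distribution (the paper's Abstract Zobrist hashing and GRAZHDA* add a feature projection step not shown here); the toy state encoding and table sizes are assumptions of the sketch.

```python
import random

random.seed(7)  # fixed seed so the feature table is reproducible

# Zobrist hashing: assign every (variable, value) feature a fixed random
# bitstring; a state's hash is the XOR of the bitstrings of its features.
N_VARS, N_VALUES, BITS = 4, 8, 64
TABLE = [[random.getrandbits(BITS) for _ in range(N_VALUES)] for _ in range(N_VARS)]

def zobrist(state):
    h = 0
    for var, value in enumerate(state):
        h ^= TABLE[var][value]
    return h

def owner(state, n_processes):
    # HDA*-style work distribution: the hash decides which process expands the node.
    return zobrist(state) % n_processes

s = (0, 3, 5, 1)
print("assigned to process", owner(s, 8))
# XOR makes updates incremental: changing one variable re-XORs only two entries.
s2 = (0, 3, 5, 2)
assert zobrist(s2) == zobrist(s) ^ TABLE[3][1] ^ TABLE[3][2]
```

The incremental-update property in the last line is exactly why Zobrist hashing is cheap per generated node, while the modulo in `owner` shows why nearly every successor can land on a different process, the communication overhead the paper targets.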
4

Liu, Xingbo, Xiushan Nie, Yingxin Wang, and Yilong Yin. "Jointly Multiple Hash Learning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9981–82. http://dx.doi.org/10.1609/aaai.v33i01.33019981.

Full text of the source
Abstract:
Hashing can compress heterogeneous high-dimensional data into compact binary codes while preserving similarity, thereby facilitating efficient retrieval and storage, and it has therefore received much attention from information retrieval researchers. Most existing hashing methods first predefine a fixed length (e.g., 32, 64, or 128 bits) for the hash codes and then learn codes of that fixed length. However, one sample can be represented by hash codes of various lengths, and there must therefore be associations and relationships among these different hash codes, because they represent the same sample. Harnessing these relationships can boost the performance of hashing methods. Inspired by this possibility, in this study we propose a new model, jointly multiple hash learning (JMH), which can learn hash codes with multiple lengths simultaneously. In the proposed JMH method, three types of information are used for hash learning: hash codes with different lengths, the original features of the samples, and the labels. In contrast to existing hashing methods, JMH can learn hash codes with different lengths in one step, and users can select appropriate hash codes for their retrieval tasks according to their accuracy and complexity requirements. To the best of our knowledge, JMH is one of the first attempts to learn multi-length hash codes simultaneously. In addition, discrete and closed-form solutions for the variables can be obtained by cyclic coordinate descent, making the proposed model much faster to train. Extensive experiments on three benchmark datasets demonstrate the superior performance of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
5

Fitas, Ricardo, Bernardo Rocha, Valter Costa, and Armando Sousa. "Design and Comparison of Image Hashing Methods: A Case Study on Cork Stopper Unique Identification". Journal of Imaging 7, no. 3 (March 8, 2021): 48. http://dx.doi.org/10.3390/jimaging7030048.

Full text of the source
Abstract:
Cork stoppers were shown to have unique characteristics that allow their use for authentication purposes in an anti-counterfeiting effort. This authentication process relies on the comparison between a user's cork image and all registered cork images in the database of genuine items. As the database grows, this one-to-many comparison becomes slower, and its usefulness therefore decreases. To tackle this problem, the present work designs and compares hashing-assisted image matching methods that can be used in cork stopper authentication. The analyzed approaches are the discrete cosine transform, the wavelet transform, the Radon transform, and other methods such as difference hash and average hash. The most successful approach uses a 1024-bit hash length with the difference hash method, providing a 98% accuracy rate. By turning image matching into a hash matching problem, the presented approach becomes almost 40 times faster than approaches reported in the literature.
APA, Harvard, Vancouver, ISO, and other styles
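The difference hash (dHash) mentioned in the abstract admits a compact sketch. The version below is an assumption-laden miniature rather than the paper's 1024-bit variant: it operates on an already-downscaled grayscale matrix, whereas real pipelines first resize the image to, e.g., 9×8 pixels.

```python
def dhash(gray):
    """Difference hash: compare horizontally adjacent pixels of a small
    grayscale image, emitting 1 where the left pixel is brighter."""
    bits = 0
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes; small distance = similar images."""
    return bin(a ^ b).count("1")

# Toy 2x3 "image" (a real dHash would use e.g. an 8x9 downscaled image).
img_a = [[10, 20, 30], [90, 80, 70]]
img_b = [[10, 20, 35], [90, 80, 70]]   # slight brightness change in one pixel
print(f"{dhash(img_a):04b}")           # 0011: row 1 ascending -> 00, row 2 descending -> 11
print(hamming(dhash(img_a), dhash(img_b)))  # 0: gradients unchanged, hash is identical
```

The point of encoding gradients rather than absolute intensities is visible here: the small brightness change between `img_a` and `img_b` leaves the hash untouched, which is what makes dHash usable for near-duplicate matching.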
6

Ma, Xian-Qin, Chong-Chong Yu, Xiu-Xin Chen, and Lan Zhou. "Large-Scale Person Re-Identification Based on Deep Hash Learning". Entropy 21, no. 5 (April 30, 2019): 449. http://dx.doi.org/10.3390/e21050449.

Full text of the source
Abstract:
Person re-identification in the image processing domain has been a challenging research topic due to the influence of pedestrian posture, background, lighting, and other factors. In this paper, hash learning is applied to person re-identification, and we propose a person re-identification method based on deep hash learning. Improving on the conventional approach, the method proposed in this paper uses an easy-to-optimize shallow convolutional neural network to learn the inherent implicit relationships of the image and then extracts the deep features of the image. A hash layer with a three-step calculation is then incorporated into the fully connected layer of the network. The hash function is learned and mapped into a hash code through the connections between the network layers. The hash code generation minimizes the sum of the quantization loss and the Softmax regression cross-entropy loss, achieving end-to-end generation of hash codes in the network. After obtaining the hash code through the network, the distance between the hash code of the pedestrian image to be retrieved and the hash codes in the pedestrian image library is calculated to implement person re-identification. Experiments conducted on multiple standard datasets show that our deep hashing network achieves comparable performance and outperforms other hashing methods by large margins on Rank-1 and mAP identification rates in pedestrian re-identification. Moreover, our method is superior in training and retrieval efficiency compared with other pedestrian re-identification algorithms.
APA, Harvard, Vancouver, ISO, and other styles
7

Ussatova, О., Ye Begimbayeva, S. Nyssanbayeva, and N. Ussatov. "ANALYSIS OF METHODS AND PRACTICAL APPLICATION OF HASH FUNCTIONS". SERIES PHYSICO-MATHEMATICAL 5, no. 339 (October 15, 2021): 100–110. http://dx.doi.org/10.32014/2021.2518-1726.90.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

YARMILKO, Andrii, Inna ROZLOMII, and Yuliya MYSIURA. "USAGE OF HASH METHODS IN THE CRYPTOGRAPHIC DATA ANALYSIS". Herald of Khmelnytskyi National University 303, no. 6 (December 2021): 49–54. http://dx.doi.org/10.31891/2307-5732-2021-303-6-49-54.

Full text of the source
Abstract:
The tasks of an information security system include identifying potential or actual sources of threats to the system's operation and minimizing the consequences of unauthorized influence on it. Solving these tasks raises the need to restore the initial state of the information system, and especially data integrity. In information message analysis, another task may be finding the differences between two data fragments or their instances. This paper offers methods for the complex solution of information security tasks and the analysis of data streams using the means of cryptography, and presents the experience of developing a reliable implementation of these methods. The developed methods allow detecting falsifications in the data part of a sent message and restoring the initial message. During the cryptographic analysis, the area of change in a data block is localized using cross hashing, which is performed by computing the hash of the information message block by block. The result is a program implementation of the offered method of information stream analysis, based on comparing three frames of input data. The effectiveness of detecting falsifications in a data stream depending on the algorithm's sensitivity was researched with the developed instrument. The dependence of the share of falsifications detected by the system in the information block on the established maximum allowable relative deviation from the median and on the properties of the input stream, in particular the division of the input data into frames, was experimentally revealed. The advantages of the method are expected to be greatest in the preliminary stage of data flow analysis, when the stream is segmented before the selected fragments are passed to more accurate and specialized algorithms.
APA, Harvard, Vancouver, ISO, and other styles
9

Long, Jun, Longzhi Sun, Liujie Hua, and Zhan Yang. "Discrete Semantics-Guided Asymmetric Hashing for Large-Scale Multimedia Retrieval". Applied Sciences 11, no. 18 (September 21, 2021): 8769. http://dx.doi.org/10.3390/app11188769.

Full text of the source
Abstract:
Cross-modal hashing is a key technology for real-time retrieval of large-scale multimedia data in real-world applications. Although existing cross-modal hashing methods have achieved impressive results, some limitations remain: (1) some methods do not fully consider the rich semantic information and noise information in labels, resulting in a large semantic gap, and (2) some methods adopt relaxation-based or discrete cyclic coordinate descent algorithms to solve the discrete constraint problem, resulting in large quantization error or time consumption. To overcome these limitations, in this paper we propose a novel method named Discrete Semantics-Guided Asymmetric Hashing (DSAH). Specifically, DSAH leverages both label information and a similarity matrix to enhance the semantic information of the learned hash codes, and the ℓ2,1 norm is used to increase the sparsity of the matrix to handle the inevitable noise and subjective factors in labels. Meanwhile, an asymmetric hash learning scheme is proposed to perform hash learning efficiently, and a discrete optimization algorithm is proposed to solve for the hash codes directly and discretely. During optimization, hash code learning and hash function learning interact: the learned hash codes guide the learning of the hash function, and the hash function simultaneously guides hash code generation. Extensive experiments on two benchmark datasets highlight the superiority of DSAH over several state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
10

Gabryel, Marcin, Konrad Grzanek, and Yoichi Hayashi. "Browser Fingerprint Coding Methods Increasing the Effectiveness of User Identification in the Web Traffic". Journal of Artificial Intelligence and Soft Computing Research 10, no. 4 (October 1, 2020): 243–53. http://dx.doi.org/10.2478/jaiscr-2020-0016.

Full text of the source
Abstract:
A web-based browser fingerprint (or device fingerprint) is a tool used to identify and track user activity in web traffic. It is also used to identify computers that are abusing online advertising and to prevent credit card fraud. A device fingerprint is created by extracting multiple parameter values from a browser API (e.g., operating system type or browser version). The acquired parameter values are then combined into a hash using a hash function. The disadvantage of this method is its excessive susceptibility to small, normally occurring changes (e.g., a change in the browser version number or screen resolution): minor changes in the input values generate a completely different fingerprint hash, making it impossible to find similar ones in the database. On the other hand, omitting these unstable values when creating a hash significantly limits the fingerprint's ability to distinguish between devices. This weak point is commonly exploited by fraudsters, who knowingly evade this form of protection by deliberately changing the values of device parameters. The paper presents methods that significantly limit this type of activity. New algorithms for coding and comparing fingerprints are presented, in which the values of parameters with low stability and low entropy are specifically taken into account. The fingerprint generation methods are based on the popular MinHash, LSH, and autoencoder methods. The coding and comparison effectiveness of each of the presented methods was also examined against the currently used hash generation method. Authentic data on the devices and browsers of users visiting 186 different websites were collected for the research.
APA, Harvard, Vancouver, ISO, and other styles
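To see why MinHash-style coding tolerates the small parameter changes that break a single cryptographic hash, consider this hedged Python sketch; the fingerprint keys are invented examples, and a seeded SHA-256 stands in for the random permutations of textbook MinHash.

```python
import hashlib

def minhash_signature(tokens, n_perm=16):
    """MinHash: for each of n_perm hash functions, keep the minimum hash
    over the set's tokens; similar sets get mostly identical signature slots."""
    sig = []
    for seed in range(n_perm):
        sig.append(min(
            int(hashlib.sha256(f"{seed}:{t}".encode()).hexdigest(), 16)
            for t in tokens))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

fp_a = {"os:Linux", "browser:Firefox 102", "screen:1920x1080", "tz:UTC+1"}
fp_b = {"os:Linux", "browser:Firefox 103", "screen:1920x1080", "tz:UTC+1"}  # minor version bump
print(estimated_jaccard(minhash_signature(fp_a), minhash_signature(fp_b)))
```

A plain hash of the concatenated parameters would change completely after the browser-version bump; here most signature slots survive, so the two fingerprints can still be recognized as near-duplicates in a database.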
11

He, Chao, Dalin Wang, Zefu Tan, Liming Xu, and Nina Dai. "Cross-Modal Discrimination Hashing Retrieval Using Variable Length". Security and Communication Networks 2022 (September 9, 2022): 1–12. http://dx.doi.org/10.1155/2022/9638683.

Full text of the source
Abstract:
Fast cross-modal retrieval based on hash coding has become a hot topic for rich multimodal data (text, image, audio, etc.), especially given the security and privacy challenges of the Internet of Things and mobile edge computing. However, most hash-coding methods only map data into a common hash coding space and relax the binary constraints of hash coding. As a result, the learned multimodal hash codes may not express the original multimodal data sufficiently and effectively, and the hash codes become less discriminative across categories. To solve these problems, this paper proposes a method that maps each modality's data to a hash coding space of optimal length, and then solves for each modality's hash encoding with a discrete cross-modal hash algorithm under binary constraints. Finally, the similarity of multimodal data is compared in the latent space. The experimental results of cross-modal retrieval based on variable-length hash coding are better than those of the comparison methods on the WIKI, NUS-WIDE, and MIRFlickr data sets, and the proposed method is shown to be feasible and effective.
APA, Harvard, Vancouver, ISO, and other styles
12

Cao, Mingwei, Haiyan Jiang, and Haifeng Zhao. "Hash Indexing-Based Image Matching for 3D Reconstruction". Applied Sciences 13, no. 7 (April 2, 2023): 4518. http://dx.doi.org/10.3390/app13074518.

Full text of the source
Abstract:
Image matching is a basic task in three-dimensional reconstruction which, in recent years, has attracted extensive attention in academic and industrial circles. However, when dealing with large-scale image datasets, existing methods suffer from low accuracy and slow speeds. To improve the effectiveness of modern image matching methods, this paper proposes an image matching method for 3D reconstruction. The proposed method obtains high matching accuracy through a hash index in a very short amount of time. The core of hash matching includes two parts: creating the hash table and the hash index. The former encodes local feature descriptors into hash codes, and the latter searches candidates for query feature points. In addition, the proposed method is extremely robust to image scaling and transformation through the use of various verifications. A comprehensive experiment was carried out on several challenging datasets to evaluate the performance of hash matching. Experimental results show that HashMatch delivers excellent results compared to state-of-the-art methods in both computational efficiency and matching accuracy.
APA, Harvard, Vancouver, ISO, and other styles
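The abstract's two-part scheme (creating the hash table, then querying the hash index) can be sketched generically. The code below uses random-hyperplane binary codes as a stand-in for the paper's unspecified descriptor encoding; all names, dimensions, and parameters are assumptions of the sketch.

```python
import random

random.seed(1)
DIM, BITS = 8, 6
# One random hyperplane per hash bit.
PLANES = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def hash_code(desc):
    """Encode a local feature descriptor into a short binary code:
    one bit per hyperplane, set by the sign of the projection."""
    return tuple(1 if sum(p * d for p, d in zip(plane, desc)) > 0 else 0
                 for plane in PLANES)

# "Creating the hash table": bucket database descriptors by their code.
database = {i: [random.gauss(0, 1) for _ in range(DIM)] for i in range(100)}
table = {}
for idx, desc in database.items():
    table.setdefault(hash_code(desc), []).append(idx)

# "Hash index": a query descriptor scans only its own bucket, not all 100.
query = database[42]
candidates = table[hash_code(query)]
assert 42 in candidates
print(len(candidates), "candidates instead of", len(database))
```

In a matching pipeline the shortlisted candidates would then be re-ranked by exact descriptor distance and geometric verification, which is where the robustness checks the abstract mentions would come in.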
13

Aysu, Aydin, and Patrick Schaumont. "Precomputation Methods for Hash-Based Signatures on Energy-Harvesting Platforms". IEEE Transactions on Computers 65, no. 9 (September 1, 2016): 2925–31. http://dx.doi.org/10.1109/tc.2015.2500570.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
14

Wang, Letian, Ziyu Meng, Fei Dong, Xiao Yang, Xiaoming Xi, and Xiushan Nie. "Attention-Oriented Deep Multi-Task Hash Learning". Electronics 12, no. 5 (March 4, 2023): 1226. http://dx.doi.org/10.3390/electronics12051226.

Full text of the source
Abstract:
Hashing is widely applied in large-scale image retrieval because it is an efficient approach to approximate nearest neighbor computation: it squeezes complex high-dimensional arrays into binary codes while maintaining the semantic properties of the original samples. Currently, most existing hashing methods predetermine a fixed hash code length before training the model. These methods inevitably incur extra computing time whenever changing task requirements call for a different code length, and a single hash code fails to reflect the full semantic relevance. To solve these issues, we put forward an attention-oriented deep multi-task hash learning (ADMTH) method, in which multiple hash codes of varying lengths can be learned simultaneously. Compared with existing methods, ADMTH is one of the first attempts to apply multi-task learning theory to the deep hashing framework to generate and explore multi-length hash codes. Meanwhile, it embeds an attention mechanism in the backbone network to further extract discriminative information. Experiments on two commonly used large-scale datasets prove its effectiveness. The proposed method substantially improves retrieval efficiency while assuring image-characterization quality.
APA, Harvard, Vancouver, ISO, and other styles
15

Abanga, Ellen Akongwin. "Symmetric, Asymmetric and Hash Functions". Advances in Multidisciplinary and scientific Research Journal Publication 10, no. 4 (November 30, 2022): 55–60. http://dx.doi.org/10.22624/aims/digital/v10n4p7.

Full text of the source
Abstract:
The field of cryptography offers numerous methods for transmitting data securely through networks. It builds and evaluates protocols that deal with the numerous facets of information security, such as confidentiality and integrity. Modern cryptography draws on a number of engineering and scientific areas, and everyday uses of cryptography include computer passwords, ATM cards, and electronic commerce. The term "cryptography" refers to a method of protecting data from unauthorized parties by converting readable, stable information into an unintelligible form, which can be deciphered at the other end to recover the needed information using the decoding method supplied by the message's creator. Keywords: hash function, encryption, symmetric key, cryptography.
APA, Harvard, Vancouver, ISO, and other styles
16

Tiwari, Harshvardhan. "Merkle-Damgård Construction Method and Alternatives". Journal of Information and Organizational Sciences 41, no. 2 (December 14, 2017): 283–304. http://dx.doi.org/10.31341/jios.41.2.9.

Full text of the source
Abstract:
A cryptographic hash function is an important cryptographic tool in the field of information security. The design of the most widely used hash functions, such as MD5 and SHA-1, is based on iterating a compression function via the Merkle-Damgård construction with a constant initialization vector. The Merkle-Damgård construction showed that the security of the hash function depends on the security of the compression function. Several attacks on Merkle-Damgård-based hash functions motivated researchers to propose different cryptographic constructions to strengthen hash functions against differential and generic attacks. The cryptographic community has been looking for replacements for these weak hash functions, and new hash functions based on different variants of the Merkle-Damgård construction have been proposed. As a result of an open competition, NIST announced Keccak as the SHA-3 standard. This paper provides a review of cryptographic hash functions, their security requirements, and different design methods for compression functions.
APA, Harvard, Vancouver, ISO, and other styles
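The Merkle-Damgård construction reviewed in this paper iterates a compression function over padded, fixed-size message blocks. A toy Python sketch (with SHA-256 standing in for a dedicated compression function, an assumption of this sketch, and deliberately small block and state sizes) looks like this:

```python
import hashlib

BLOCK = 16  # toy block size in bytes

def compress(state: bytes, block: bytes) -> bytes:
    # Toy compression function built from SHA-256; a real MD design
    # (MD5, SHA-1) uses a dedicated compression function instead.
    return hashlib.sha256(state + block).digest()[:16]

def md_hash(message: bytes, iv: bytes = b"\x00" * 16) -> bytes:
    """Merkle-Damgård: pad with 0x80, zeros, and the 8-byte message length
    (length strengthening), then iterate the compression function."""
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK)
    padded += len(message).to_bytes(8, "big")
    state = iv  # constant initialization vector
    for i in range(0, len(padded), BLOCK):
        state = compress(state, padded[i:i + BLOCK])
    return state

print(md_hash(b"abc").hex())
# The construction's security argument: any collision in md_hash implies a
# collision in some call to compress, so breaking the hash requires
# breaking the compression function.
```

The length-strengthening padding step is what rules out trivial collisions between messages of different lengths, which is part of the security reduction the abstract refers to.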
17

Zhou, Quan, Xiushan Nie, Yang Shi, Xingbo Liu, and Yilong Yin. "Focusing on Detail: Deep Hashing Based on Multiple Region Details (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13991–92. http://dx.doi.org/10.1609/aaai.v34i10.7268.

Full text of the source
Abstract:
Hashing, which aims to convert multimedia data into a set of short binary codes while preserving the similarity of the original data, offers fast retrieval and high performance and has been widely studied in recent years. The majority of existing deep supervised hashing methods only utilize the semantics of the whole image when learning hash codes and ignore local image details, which are important in hash learning. To fully utilize this detailed information, we propose a novel deep multi-region hashing (DMRH), which learns hash codes from local regions and obtains the final hash codes of the image by fusing the local hash codes corresponding to those regions. In addition, we propose a self-similarity loss term to address the imbalance problem (i.e., the number of dissimilar pairs is significantly larger than that of similar ones) of methods based on pairwise similarity.
APA, Harvard, Vancouver, ISO, and other styles
18

Lv, Fang, Yuliang Wei, Xixian Han, and Bailing Wang. "Semi-supervised hash learning method with consistency-based dimensionality reduction". Advances in Mechanical Engineering 11, no. 1 (January 2019): 168781401881917. http://dx.doi.org/10.1177/1687814018819170.

Full text of the source
Abstract:
With the explosive growth of surveillance data, exact-match queries become much more difficult because of their high dimensionality and volume. Owing to its good balance between retrieval performance and computational cost, hash learning is widely used for approximate nearest neighbor search. Dimensionality reduction plays a critical role in hash learning, as its goal is to preserve as much of the original information as possible in low-dimensional vectors. However, existing dimensionality reduction methods neglect to unify the diverse resources of the original space when learning a downsized subspace. In this article, we propose a numeric- and semantic-consistency semi-supervised hash learning method, which unifies numeric features and supervised semantic features in a low-dimensional subspace before hash encoding, and improves a multiple-table hash method with a complementary numeric local distribution structure. A consistency-based learning method, which confers semantic meaning on numeric features during dimensionality reduction, is presented. Experiments are conducted on two public datasets: the web image dataset NUS-WIDE and the text dataset DBLP. The results demonstrate that the semi-supervised hash learning method with the consistency-based information subspace is more effective at preserving useful information for hash encoding than state-of-the-art methods and achieves high-quality retrieval performance in a multi-table context.
APA, Harvard, Vancouver, ISO, and other styles
19

Shan, Xue, Pingping Liu, Yifan Wang, Qiuzhan Zhou, and Zhen Wang. "Deep Hashing Using Proxy Loss on Remote Sensing Image Retrieval". Remote Sensing 13, no. 15 (July 25, 2021): 2924. http://dx.doi.org/10.3390/rs13152924.

Full text of the source
Abstract:
With the improvement of various satellite imaging methods, the sources, scenes, and quantities of remote sensing data are increasing, so an effective and fast remote sensing image retrieval method is necessary, and many researchers have worked in this direction. Hashing retrieval has been proposed to improve retrieval speed while maintaining retrieval accuracy and greatly reducing memory consumption; at the same time, proxy-based metric learning losses can reduce convergence time. Accordingly, we present a proxy-based hash retrieval method, called DHPL (Deep Hashing using Proxy Loss), which combines hash code learning with proxy-based metric learning in a convolutional neural network. Specifically, we designed a novel proxy metric learning network and used a hash loss function to reduce quantization loss. On the University of California Merced (UCMD) dataset, DHPL achieved a mean average precision (mAP) of up to 98.53% with 16 hash bits, 98.83% with 32 hash bits, 99.01% with 48 hash bits, and 99.21% with 64 hash bits. On the aerial image dataset (AID), DHPL achieved an mAP of up to 93.53% with 16 hash bits, 97.36% with 32 hash bits, 98.28% with 48 hash bits, and 98.54% with 64 bits. Our experimental results on the UCMD and AID datasets show that DHPL generates excellent results compared with other state-of-the-art hash approaches.
APA, Harvard, Vancouver, ISO, and other styles
20

Ren, Yanduo, Jiangbo Qian, Yihong Dong, Yu Xin, and Huahui Chen. "AVBH: Asymmetric Learning to Hash with Variable Bit Encoding". Scientific Programming 2020 (January 21, 2020): 1–11. http://dx.doi.org/10.1155/2020/2424381.

Full text of the source
Abstract:
Nearest neighbour search (NNS) is at the core of large-scale data retrieval, and learning to hash is an effective way to address it by representing high-dimensional data as compact binary codes. However, existing learning-to-hash methods need long bit encodings to ensure query accuracy, and long encodings bring a large storage cost, which severely restricts their application to big data. An asymmetric learning-to-hash algorithm with variable bit encoding (AVBH) is proposed to solve this problem. The AVBH algorithm uses two types of hash mapping functions to encode the dataset and the query set into bit strings of different lengths. For the dataset, the frequencies of hash codes after random Fourier feature encoding are statistically analysed: high-frequency hash codes are compressed into longer encodings, and low-frequency hash codes into shorter ones. A query point is quantized to a long-bit hash code and compared with the cascade-concatenated data points of the same length. Experiments on public datasets show that the proposed algorithm effectively reduces storage cost and improves query accuracy.
APA, Harvard, Vancouver, ISO, and other styles
21

Zhu, Lei, Chaoqun Zheng, Xu Lu, Zhiyong Cheng, Liqiang Nie, and Huaxiang Zhang. "Efficient Multi-modal Hashing with Online Query Adaption for Multimedia Retrieval". ACM Transactions on Information Systems 40, no. 2 (April 30, 2022): 1–36. http://dx.doi.org/10.1145/3477180.

Full text of the source
Abstract:
Multi-modal hashing supports efficient multimedia retrieval. However, existing methods still suffer from two problems: (1) fixed multi-modal fusion: they combine multi-modal features with fixed weights for hash learning, which cannot adaptively capture the variations of online streaming multimedia content; and (2) the binary optimization challenge: to generate binary hash codes, existing methods adopt either two-step relaxed optimization, which causes significant quantization errors, or direct discrete optimization, which consumes considerable computation and storage. To address these problems, we first propose a Supervised Multi-modal Hashing with Online Query-adaption method. A self-weighted fusion strategy is designed to adaptively preserve the multi-modal features in hash codes by exploiting their complementarity. Besides, the hash codes are efficiently learned with the supervision of pair-wise semantic labels to enhance their discriminative capability while avoiding challenging symmetric similarity matrix factorization. Further, we propose an efficient Unsupervised Multi-modal Hashing with Online Query-adaption method with an adaptive multi-modal quantization strategy, in which the hash codes are directly learned without reliance on specific objective formulations. Finally, in both methods, we design a parameter-free online hashing module to adaptively capture query variations at the online retrieval stage. Experiments validate the superiority of the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
22

Thanalakshmi, P., R. Anitha, N. Anbazhagan, Woong Cho, Gyanendra Prasad Joshi, and Eunmok Yang. "A Hash-Based Quantum-Resistant Chameleon Signature Scheme". Sensors 21, no. 24 (December 16, 2021): 8417. http://dx.doi.org/10.3390/s21248417.

Full text of the source
Abstract:
Because a standard digital signature may be verified by anybody, it is unsuitable for personally or economically sensitive applications. The chameleon signature scheme was presented by Krawczyk and Rabin as a solution to this problem. It is based on a hash-then-sign model. The chameleon hash function enables the holder of the trapdoor information to compute a message digest collision. The trapdoor is held by the recipient of a chameleon signature, who can compute collisions on the hash value; this prevents the recipient from convincing a third party of the signature's validity and so preserves the privacy of the signature. The majority of extant chameleon signature schemes are built on computationally infeasible number-theoretic problems, such as integer factorization and the discrete logarithm. Unfortunately, the construction of quantum computers would render those schemes insecure, which creates a solid requirement for chameleon signatures suited to the quantum world. Hence, this paper proposes a novel quantum-secure chameleon signature scheme based on hash functions. As hash-based cryptosystems are essential candidates for post-quantum cryptography, the proposed hash-based chameleon signature scheme is a promising alternative to the number-theoretic methods. Furthermore, the proposed method is key-exposure free and satisfies security requirements such as semantic security, non-transferability, and unforgeability.
APA, Harvard, Vancouver, ISO, and other styles
23

Feng, Jiangfan, and Wenzheng Sun. "Improved Deep Hashing with Scalable Interblock for Tourist Image Retrieval". Scientific Programming 2021 (14.07.2021): 1–14. http://dx.doi.org/10.1155/2021/9937061.

Full text source
Abstract:
Tourist image retrieval has attracted increasing attention from researchers. Supervised deep hashing methods in particular have significantly boosted retrieval performance: they take hand-crafted features as inputs and map high-dimensional features to binary vectors to reduce feature-searching complexity. However, their performance depends on supervised labels, and little labeled temporal and discriminative information is available in tourist images. This paper proposes an improved deep hash to learn enhanced hash codes for tourist image retrieval. It jointly determines image representations and hash functions with deep neural networks and simultaneously enhances the discriminative capability of tourist image hash codes with refined semantics of the accompanying relationship. Furthermore, we have tuned the CNN to implement end-to-end training of the hash mapping, calculating the semantic distance between two samples from the obtained binary codes. Experiments on various datasets demonstrate the superiority of the proposed approach compared to state-of-the-art shallow and deep hashing techniques.
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Steinebach, Martin. "A Close Look at Robust Hash Flip Positions". Electronic Imaging 2021, no. 4 (18.01.2021): 345–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.4.mwsf-345.

Full text source
Abstract:
Images can be recognized by cryptographic or robust hashes during forensic investigation or content filtering. Cryptographic methods tend to be too fragile, while robust methods may leak information about the hashed images. Combining robust and cryptographic methods can solve both problems, but requires a good prediction of the hash bit positions most likely to break. Previous research shows the potential of this approach, but evaluation results still have rather high error rates, especially many false negatives. In this work we take a detailed look at the behavior of robust hashes under attacks and at the potential of prediction derived from distance from the median and from learning from attacks.
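The prediction idea above rests on tallying how often each bit of a robust hash flips when the image is attacked. A minimal sketch of that bookkeeping (illustrative only; the hash values, bit width, and function names are hypothetical, not taken from the paper):

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hash values."""
    return bin(a ^ b).count("1")

def flip_counts(original: int, attacked_hashes: list, bits: int = 16) -> list:
    # Per-position tally of how often an attacked copy flips that bit of the
    # original hash; high-count positions are the "likely to break" bits that
    # a combined robust/cryptographic scheme would exclude or predict.
    counts = [0] * bits
    for h in attacked_hashes:
        diff = original ^ h
        for pos in range(bits):
            counts[pos] += (diff >> pos) & 1
    return counts
```

Bits whose count stays near zero are stable enough to feed into a cryptographic hash.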
Styles: APA, Harvard, Vancouver, ISO, etc.
25

BARNA, Andrii, and Roman KAMINSKY. "ANALYSIS OF THE EFFICIENCY OF DATA CHUNKING METHODS FOR DATA DEDUBLICATION SYSTEMS". Herald of Khmelnytskyi National University. Technical sciences 315, no. 6(1) (29.12.2022): 24–27. http://dx.doi.org/10.31891/2307-5732-2022-315-6-24-27.

Full text source
Abstract:
There is a significant increase in the amount of data that needs to be stored worldwide. More and more companies are turning their attention to deduplication systems, which effectively increase data warehouse capacity and reduce storage costs. Deduplication not only reduces the overall amount of information in storage but also reduces the load on networks by eliminating the need to retransmit duplicate data. In this work, we considered the stages that any deduplication system includes, namely chunking, hashing, indexing, and mapping. The effectiveness of deduplication systems primarily depends on the choice of the method of dividing the data stream at the chunking stage. We considered the classic Two Threshold Two Divisor (TTTD) method, which is widely used in modern deduplication systems. This method uses Rabin's fingerprint to compute the hash value of a substring. The formula for calculating the hash of the first substring and the formula for calculating the remaining substrings are given. Another method we investigated is Content Based Two Threshold Two Divisor (CB-TTTD); it uses new hash functions to fragment the data stream, and the corresponding formulas for calculating the first and each subsequent substring are given. To test the effectiveness of these two methods, we developed a test deduplication system, implemented the two fragmentation methods, and tested their performance on two sets of text data. We modified these methods with the addition of a new string-splitting condition based on the content of the data we tested. The results of a comparison of the classical and modified methods are given. Using metrics to compare the efficiency of data fragmentation methods, we obtained experimental data from which we can draw conclusions about the feasibility of using CB-TTTD as an alternative to TTTD in new deduplication systems.
The obtained data can be used in the development of new highly efficient data deduplication systems and to improve existing solutions.
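The two-divisor idea behind TTTD can be pictured with a simple content-defined chunker: a window hash is tested against a main divisor, a weaker backup divisor supplies a fallback boundary, and minimum/maximum sizes bound each chunk. The sketch below uses a plain polynomial hash recomputed per position rather than Rabin's incrementally updated fingerprint, and every parameter value is hypothetical:

```python
WINDOW = 16            # bytes in the sliding window
MAIN_DIVISOR = 64      # primary boundary divisor D
BACKUP_DIVISOR = 32    # weaker divisor D' providing fallback boundaries
MIN_SIZE, MAX_SIZE = 128, 1024

def window_hash(window: bytes) -> int:
    # Plain polynomial hash; a real TTTD implementation updates a Rabin
    # fingerprint incrementally instead of rehashing the whole window.
    h = 0
    for b in window:
        h = (h * 257 + b) % (1 << 31)
    return h

def chunk(data: bytes) -> list:
    chunks, start = [], 0
    while start < len(data):
        end = min(len(data), start + MAX_SIZE)
        backup, cut = None, end  # forced cut at MAX_SIZE if nothing matches
        for i in range(start + MIN_SIZE, end):
            h = window_hash(data[max(i - WINDOW, start):i])
            if h % MAIN_DIVISOR == 0:
                cut = i          # strong boundary found
                break
            if h % BACKUP_DIVISOR == 0:
                backup = i       # remember a weaker boundary
        else:
            if backup is not None:
                cut = backup     # fall back to the weaker boundary
        chunks.append(data[start:cut])
        start = cut
    return chunks
```

Because boundaries depend only on local content, inserting bytes early in a stream shifts at most a few chunk boundaries, which is what lets deduplication re-find identical chunks.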
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Chen, Yaxiong, and Xiaoqiang Lu. "A Deep Hashing Technique for Remote Sensing Image-Sound Retrieval". Remote Sensing 12, no. 1 (25.12.2019): 84. http://dx.doi.org/10.3390/rs12010084.

Full text source
Abstract:
With the rapid progress of remote sensing (RS) observation technologies, cross-modal RS image-sound retrieval has attracted some attention in recent years. However, these methods perform cross-modal image-sound retrieval by leveraging high-dimensional real-valued features, which can require more storage than low-dimensional binary features (i.e., hash codes). Moreover, these methods cannot directly encode relative semantic similarity relationships. To tackle these issues, we propose a new deep cross-modal RS image-sound hashing approach, called deep triplet-based hashing (DTBH), to integrate hash code learning and relative semantic similarity relationship learning into an end-to-end network. Specifically, the proposed DTBH method designs a triplet selection strategy to select effective triplets. Moreover, in order to encode relative semantic similarity relationships, we propose an objective function which ensures that the anchor images are more similar to the positive sounds than to the negative sounds. In addition, a triplet regularized loss term leverages the approximate l1-norm between hash-like codes and hash codes and can effectively reduce the information loss between them. Extensive experimental results showed that the DTBH method achieves superior performance to other state-of-the-art cross-modal image-sound retrieval methods. For the sound-query-RS-image task, the proposed approach achieved a mean average precision (mAP) of up to 60.13% on the UCM dataset, 87.49% on the Sydney dataset, and 22.72% on the RSICD dataset. For the RS-image-query-sound task, the proposed approach achieved a mAP of 64.27% on the UCM dataset, 92.45% on the Sydney dataset, and 23.46% on the RSICD dataset. Future work will focus on how to consider the balance property of hash codes to improve image-sound retrieval performance.
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Kahri, Fatma, Hassen Mestiri, Belgacem Bouallegue and Mohsen Machhout. "High Speed FPGA Implementation of Cryptographic KECCAK Hash Function Crypto-Processor". Journal of Circuits, Systems and Computers 25, no. 04 (2.02.2016): 1650026. http://dx.doi.org/10.1142/s0218126616500262.

Full text source
Abstract:
Cryptographic hash functions are at the heart of many information security applications like message authentication codes (MACs), digital signatures and other forms of authentication. One of the methods to ensure information integrity is the use of hash functions, which generate a stream of bytes (the hash) that must be unique. However, many hash functions can no longer prevent malicious attacks or guarantee a unique hash for each piece of information. Because of the weakening of the widely used SHA-1 hash algorithm and concerns over the similarly-structured algorithms of the SHA-2 family, the US National Institute of Standards and Technology (NIST) initiated the SHA-3 contest in order to select a suitable drop-in replacement. The KECCAK hash function was submitted to the SHA-3 competition and was among the final five candidate functions. In this paper, we present the implementation details of the KECCAK hash algorithm; moreover, the proposed KECCAK design has been implemented on XILINX FPGAs. Its area, frequency, throughput and efficiency have been derived and compared, and it is shown that the proposed design allows a trade-off between the maximum frequency and the implementation area.
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Li, Hongming, Lilai Zhang, Hao Cao and Yirui Wu. "Hash Based DNA Computing Algorithm for Image Encryption". Applied Sciences 13, no. 14 (23.07.2023): 8509. http://dx.doi.org/10.3390/app13148509.

Full text source
Abstract:
Deoxyribonucleic Acid (DNA) computing has demonstrated great potential in data encryption due to its capability for parallel computation, minimal storage requirements, and unbreakable cryptography. Focusing on encrypting high-dimensional image data with DNA computing, we propose a novel hash encoding-based DNA computing algorithm, which consists of a DNA hash encoding module and a content-aware encrypting module. Inspired by the significant properties of the hash function, we build a set of hash mappings from image pixels to DNA computing bases, properly integrating the advantages of the hash function and DNA computing to boost performance. Modeling the correlation relationships of pixels and patches, a content-aware encrypting module is proposed to reorganize the image data structure, resisting cracking with the non-linear, high-dimensional complexity originating from these correlation relationships. The experimental results suggest that the proposed method performs better than most comparative methods in key space, histogram analysis, pixel correlation, information entropy, and sensitivity measurements.
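The pixel-to-base mapping described above can be pictured with the common convention of encoding each 2-bit pair of a byte as one DNA base. The rule below (00→A, 01→C, 10→G, 11→T) is one conventional DNA coding rule and is only a stand-in for the paper's hash-derived mappings:

```python
BASES = "ACGT"  # 00->A, 01->C, 10->G, 11->T (one of the standard rules)

def pixel_to_dna(pixel: int) -> str:
    # An 8-bit pixel becomes four bases, two bits per base, high bits first.
    return "".join(BASES[(pixel >> shift) & 0b11] for shift in (6, 4, 2, 0))

def dna_to_pixel(dna: str) -> int:
    # Inverse mapping: rebuild the byte two bits at a time.
    value = 0
    for base in dna:
        value = (value << 2) | BASES.index(base)
    return value

print(pixel_to_dna(0b11100100))  # → TGCA
```

Choosing the base rule per pixel from a hash value, rather than fixing it globally, is what makes such an encoding key-dependent.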
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Park, Si-Hyeon, Seong-Min You, Dong-Ho Song and Kwangjae Lee. "Image-based Approaches for Identifying Harmful Sites using OCR and Average Hash Methods". TRANSACTION OF THE KOREAN INSTITUTE OF ELECTRICAL ENGINEERS P 72, no. 2 (30.06.2023): 112–19. http://dx.doi.org/10.5370/kieep.2023.72.2.112.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
30

Duan, Lijuan, Chongyang Zhao, Jun Miao, Yuanhua Qiao and Xing Su. "Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval". Applied Computational Intelligence and Soft Computing 2017 (2017): 1–8. http://dx.doi.org/10.1155/2017/9635348.

Full text source
Abstract:
Hashing has been widely deployed to perform Approximate Nearest Neighbor (ANN) search for large-scale image retrieval, solving the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned by one deep hashing network may not provide a full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code with stronger expressive ability and discriminative capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks to represent images. Experiments on two real datasets show that our method outperforms state-of-the-art image retrieval applications.
Styles: APA, Harvard, Vancouver, ISO, etc.
31

Chen, Ye. "Research of Data Storage and Querying Methods Based on Ring Distributed Hash". Open Automation and Control Systems Journal 7, no. 1 (14.09.2015): 1203–9. http://dx.doi.org/10.2174/1874444301507011203.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
32

Colbourn, Charles J., Erin Lanus and Kaushik Sarkar. "Asymptotic and constructive methods for covering perfect hash families and covering arrays". Designs, Codes and Cryptography 86, no. 4 (26.05.2017): 907–37. http://dx.doi.org/10.1007/s10623-017-0369-x.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Wei, Hongjian, and Yingping Huang. "Online Multiple Object Tracking Using Spatial Pyramid Pooling Hashing and Image Retrieval for Autonomous Driving". Machines 10, no. 8 (9.08.2022): 668. http://dx.doi.org/10.3390/machines10080668.

Full text source
Abstract:
Multiple object tracking (MOT) is a fundamental issue and has attracted considerable attention in the autonomous driving community. This paper presents a novel MOT framework for autonomous driving. The framework consists of two stages: object representation and data association. In the object representation stage, we employ appearance, motion, and position features to characterize objects. We design a spatial pyramid pooling hash network (SPPHNet) to generate the appearance features. Multiple-level representative features in the SPPHNet are mapped into a similarity-preserving binary space, called hash features. The hash features retain the visual discriminability of high-dimensional features and are beneficial for computational efficiency. For data association, a two-tier scheme is designed to address the occlusion issue, consisting of an affinity cost model and a hash-based image retrieval model. The affinity cost model accommodates the hash features, disparity, and optical flow as the first tier of data association. The hash-based image retrieval model exploits the hash features and adopts image retrieval technology to handle reappearing objects as the second tier of data association. Experiments on the KITTI public benchmark dataset and our campus scenario sequences show that our method has superior tracking performance to the state-of-the-art vision-based MOT methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Patil, Vedika, Sakshi Jain and Yogita Shah. "Secure Cryptography by Using Hash Application". Journal of Cyber Security in Computer System 1, no. 1 (11.05.2022): 18–24. http://dx.doi.org/10.46610/jcscs.2022.v01i01.002.

Full text source
Abstract:
This project is designed to ensure security for files of all formats, such as pdf, text, images, audio, videos, etc. This is carried out using various keys and hashing algorithms. The project helps the end user protect his/her personal files from unauthorized users and can also restrict access to a limited set of people. The complete process is carried out by two methods: encryption and decryption. The encryption process requires a varchar key while encrypting the data file, which converts the original file into an incomprehensible format. In the decryption process, providing the same key converts the file back into its original format. This helps the user protect personal files on public systems such as schools, colleges, cyber cafes, hospitals, banks, etc. The cryptography is achieved through various algorithms and keys. We use an XOR key function and a hash function for encrypting and decrypting the data. The key given by the user is XORed with the actual bytes of the file for encryption, and vice versa for decryption. This makes it possible to restrict access to personal files on a public system.
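The XOR scheme outlined above is symmetric: applying the same key twice restores the original bytes. A minimal repeating-key sketch (the sample plaintext and key are made up; note that plain repeating-key XOR is weak on its own, which is why the project pairs it with hashing and access restriction):

```python
from itertools import cycle

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each data byte with the key, repeating the key as needed;
    # the very same call both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ciphertext = xor_bytes(b"personal file contents", b"secret-key")
assert xor_bytes(ciphertext, b"secret-key") == b"personal file contents"
```

Reading a file with `open(path, "rb").read()` and writing the XORed bytes back yields the file-level behavior the abstract describes.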
Styles: APA, Harvard, Vancouver, ISO, etc.
35

Ni, Li Shun, and Bing Chen. "Design and Implementation of the Network Electronic Identity Management System". Applied Mechanics and Materials 548-549 (April 2014): 1334–38. http://dx.doi.org/10.4028/www.scientific.net/amm.548-549.1334.

Full text source
Abstract:
This paper presents a method for quickly locating query targets in the face of huge amounts of data. The method analyzes the data relationships, splits the database, and forms a cross-database hash table for fast positioning; by analyzing query costs, it builds a doubly weighted graph and a weight table. Each query is given a fast cross-database query path based on the node weight table, and the target is quickly located from the cross-database hash table by hash calculation, thereby achieving fast positioning.
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Nkouankou, Aboubakar, Fotso Clarice, Wadoufey Abel and René Ndoundam. "Pre-image attack of the MD5 hash function by proportional logic". International Journal of Research and Innovation in Applied Science 07, no. 08 (2022): 20–25. http://dx.doi.org/10.51584/ijrias.2022.7802.

Full text source
Abstract:
Hash functions are important cryptographic primitives that map arbitrary-length messages to fixed-length message digests such that it is easy to compute the digest given a message, while inverting the hash process (for example, finding a message that maps to a specific digest) is difficult. An attack against a hash function is an algorithm that nevertheless manages to invert the hash process. Hash functions are used in authentication, digital signature, and key exchange systems. The most widely used hash function in many applications is the Message Digest-5 (MD5) algorithm. In this paper we study the current state of the art in realizing a preimage attack on MD5 using SAT solvers, and we attempt improvements in the encoding and resolution process. An important part of our work is to use the methods of propositional logic to model the attack problem and to determine which heuristic leads to the best resolution. Our most important result is a new encoding of multi-operand addition, which considerably reduces the time required for SAT solvers to find a solution compared with previously known encodings.
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Tan, Xiaoyan, Yun Zou, Ziyang Guo, Ke Zhou and Qiangqiang Yuan. "Deep Contrastive Self-Supervised Hashing for Remote Sensing Image Retrieval". Remote Sensing 14, no. 15 (29.07.2022): 3643. http://dx.doi.org/10.3390/rs14153643.

Full text source
Abstract:
Hashing has been widely used for large-scale remote sensing image retrieval due to its outstanding advantages in storage and search speed. Recently, deep hashing methods, which produce discriminative hash codes by building end-to-end deep convolutional networks, have shown promising results. However, training these networks requires numerous labeled images, which are scarce and expensive in remote sensing datasets. To solve this problem, we propose a deep unsupervised hashing method, namely deep contrastive self-supervised hashing (DCSH), which uses only unlabeled images to learn accurate hash codes. It eliminates the need for label annotation by maximizing the consistency of different views generated from the same image. More specifically, we assume that the hash codes generated from different views of the same image are similar, and those generated from different images are dissimilar. On the basis of this hypothesis, we develop a novel loss function containing the temperature-scaled cross-entropy loss and the quantization loss to train the developed deep network end-to-end, resulting in hash codes that preserve semantic similarity. Our proposed network contains four parts. First, each image is transformed into two different views using data augmentation. After that, they are fed into an encoder with shared parameters to obtain deep discriminative features. Following this, a hash layer converts the high-dimensional image representations into compact binary codes. Lastly, the novel loss function is used to train the proposed network end-to-end and thus guide the generated hash codes to preserve semantic similarity. Extensive experiments on two popular benchmark datasets, the UC Merced Land Use Database and the Aerial Image Dataset, have demonstrated that our DCSH has significant superiority in remote sensing image retrieval compared with state-of-the-art unsupervised hashing methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Qi, Xiaojun, Xianhua Zeng, Shumin Wang, Yicai Xie and Liming Xu. "Cross-modal variable-length hashing based on hierarchy". Intelligent Data Analysis 25, no. 3 (20.04.2021): 669–85. http://dx.doi.org/10.3233/ida-205162.

Full text source
Abstract:
With the emergence of the era of big data, cross-modal learning has been applied to many research fields. As an efficient retrieval approach, hash learning is frequently used in many cross-modal retrieval scenarios. However, most existing hashing methods use fixed-length hash codes, which increases the computational cost for large datasets. Furthermore, learning hash functions is an NP-hard problem. To address these problems, we propose a novel method named Cross-modal Variable-length Hashing Based on Hierarchy (CVHH), which can learn hash functions more accurately to improve retrieval performance while reducing the computational cost and training time. The main contributions of CVHH are: (1) we propose a variable-length hashing algorithm to improve performance; (2) we apply a hierarchical architecture to effectively reduce the computational cost and training time. To validate the effectiveness of CVHH, our extensive experimental results show superior performance compared with recent state-of-the-art cross-modal methods on three benchmark datasets: WIKI, NUS-WIDE and MIRFlickr.
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Yevseiev, Serhii, Alla Havrylova, Olha Korol, Oleh Dmitriiev, Oleksii Nesmiian, Yevhen Yufa and Asadi Hrebennikov. "Research of collision properties of the modified UMAC algorithm on crypto-code constructions". EUREKA: Physics and Engineering, no. 1 (10.01.2022): 34–43. http://dx.doi.org/10.21303/2461-4262.2022.002213.

Full text source
Abstract:
The transfer of information over telecommunication channels is accompanied by message hashing to control the integrity of the data and confirm its authenticity. When using a reliable hash function, it is computationally difficult to create a fake message with a pre-existing hash code; however, due to the weaknesses of specific hashing algorithms, this threat can become feasible. To increase the level of cryptographic strength of messages transmitted over telecommunication channels, there are ways to create hash codes which, according to practical research, are imperfect in terms of the speed of their formation and their degree of cryptographic strength. The collision properties of hashing functions formed using the modified UMAC algorithm are investigated using a methodology for assessing the universality and strict universality of hash codes. Based on the results of the research, an assessment is presented of the impact of the proposed modifications at the last stage of authentication code generation on the provision of universal hashing properties. The advantages and disadvantages that accompany the formation of the hash code by previously known methods are analyzed. The scheme of cascading generation of data integrity and authenticity control codes using the UMAC algorithm on crypto-code constructions has been improved. Algorithms for checking hash codes were developed to meet the requirements of universality and strict universality. The calculation and analysis of collision search in the set of generated hash codes was carried out according to the requirements of the universal and strictly universal classes for creating hash codes.
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Górniak, Dawid, and Piotr Kopniak. "Comparing the speed of the selected hash and encryption algorithms". Journal of Computer Sciences Institute 4 (30.09.2017): 82–86. http://dx.doi.org/10.35784/jcsi.598.

Full text source
Abstract:
Data is often the most valuable thing we collect on our computers. Without proper data security with encryption, our valuable information may be illegally used by an unauthorised person. The article presents selected encryption methods and hash functions available in the Bouncy Castle library for the Java programming language. The presented analysis measures the speed of signature generation and verification. The signatures are for 240-bit encryption algorithms. In the case of hash functions, the analysis refers to the speed of such functions. The fastest encryption algorithm and hash function in the research group were AES and SHA1.
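The kind of speed measurement described above takes only a few lines to reproduce. The sketch below uses Python's hashlib and timeit as a stand-in for the article's Bouncy Castle/Java setup; the message size and repetition count are arbitrary choices:

```python
import hashlib
import timeit

payload = b"x" * (1 << 20)  # 1 MiB test message
RUNS = 20

for name in ("md5", "sha1", "sha256", "sha512"):
    seconds = timeit.timeit(lambda: hashlib.new(name, payload).digest(),
                            number=RUNS)
    print(f"{name:>7}: {seconds / RUNS * 1000:.3f} ms per digest")
```

Absolute numbers depend on the machine and library, which is why such comparisons are only meaningful as relative rankings on one setup.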
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Song, Gyeong Ju, Min Ho Song and Hwa Jeong Seo. "Comparative analysis of quantum circuit implementation for domestic and international hash functions". Korean Institute of Smart Media 12, no. 2 (30.03.2023): 83–90. http://dx.doi.org/10.30693/smj.2023.12.2.83.

Full text source
Abstract:
The advent of quantum computers threatens the security of existing hash functions. In this paper, we confirmed the quantum circuit implementation results for the domestic and international hash functions LSH, SHA2, SHA3 and SM3, and conducted a comparative analysis. To run an existing hash function on a quantum computer, it must be implemented as a quantum circuit, and its quantum security strength can be confirmed by estimating the necessary quantum resources. We compared quantum circuit implementation methods and quantum resource estimation results in various aspects and discussed ways to meet quantum computer security in the future.
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Zhang, Lifang, Qi Shen, Defang Li, Guocan Feng, Xin Tang and Patrick S. Wang. "Adaptive Hashing with Sparse Modification for Scalable Image Retrieval". International Journal of Pattern Recognition and Artificial Intelligence 31, no. 06 (30.03.2017): 1754011. http://dx.doi.org/10.1142/s0218001417540118.

Full text source
Abstract:
Approximate Nearest Neighbor (ANN) search is a challenging problem given the explosive growth of high-dimensional large-scale data in recent years. Promising techniques for ANN search include hashing methods, which generate compact binary codes by designing effective hash functions. However, the lack of an optimal regularization is the key limitation of most existing hash functions. To this end, a new method called Adaptive Hashing with Sparse Modification (AHSM) is proposed. In AHSM, codes consist of vertices on the hypercube and the projection matrix is divided into two separate matrices. Data is first rotated by an orthogonal matrix and then modified by a sparse matrix. The sparse matrix is learned as a regularization item of the hash function, used to avoid overfitting and reduce quantization distortion. AHSM thus has two advantages: improved accuracy and no increase in time cost. Furthermore, we extend AHSM to a supervised version, called Supervised Adaptive Hashing with Sparse Modification (SAHSM), by introducing Canonical Correlation Analysis (CCA) to the original data. Experiments show that the AHSM method consistently surpasses several state-of-the-art hashing methods on four data sets. At the same time, we compare three unsupervised hashing methods with their corresponding supervised versions (including SAHSM) on three data sets with known labels. Similarly, SAHSM outperforms the other methods on most of the hash bit settings.
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Wang, Dan. "Recognition and Error Correction Techniques for Piano Playing Music Based on Convolutional Cyclic Hashing Method". Wireless Communications and Mobile Computing 2022 (9.04.2022): 1–11. http://dx.doi.org/10.1155/2022/5660961.

Full text source
Abstract:
Music as a sound symbol can express what people think; music is both a form of social behavior that can promote people's emotional communication and a form of entertainment that can enrich people's spiritual life. In this paper, we propose a new convolutional recurrent hashing method, CRNNH, which uses a multilayer RNN to learn discriminative hash codes for piano playing music from convolutional feature map sequences. First, a similarity-preserving hash function is designed over sequences of multilayer convolutional feature maps extracted from multiple convolutional layers of a pretrained CNN; second, a new deep learning framework is proposed to generate hash codes using a multilayer RNN, which directly takes the convolutional feature maps as input to preserve their spatial structure; finally, a new loss function is proposed to preserve the semantic similarity and balance of the hash codes while considering the quantization error generated when the hash layer outputs binary hash codes. The experimental results illustrate that the proposed CRNNH obtains better performance compared to other hashing methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Fitriyanto, Rachmad, Anton Yudhana and Sunardi Sunardi. "Implementation SHA512 Hash Function And Boyer-Moore String Matching Algorithm For Jpeg/exif Message Digest Compilation". Jurnal Online Informatika 4, no. 1 (6.09.2019): 16. http://dx.doi.org/10.15575/join.v4i1.304.

Full text source
Abstract:
Information security methods for JPEG/exif documents generally aim to prevent security attacks by protecting documents with passwords and watermarks. Neither method can be used to determine the condition of data integrity at the detection stage of the information security cycle. A message digest is the essence of a file, functioning as a digital fingerprint that represents data integrity. This study aims to compile digital fingerprints to detect changes in JPEG/exif documents for information security. The research consists of five stages. In the first stage, the JPEG/exif document structure is identified using the Boyer-Moore string matching algorithm to find the locations of JPEG/exif segments. In the second stage, segment contents are acquired based on the obtained segment locations and lengths. In the third stage, a message digest is computed for each segment using the SHA512 hash function. In the fourth stage, JPEG/exif document modification experiments identify the affected segments. The fifth stage is selecting and combining the hash values of the segments into the message digest. The results show that the message digest for JPEG/exif documents is composed of three hash values. The SOI segment hash value is used to detect modifications from JPEG-to-PNG conversion and image editing. The APP1 hash value is used to detect metadata editing. The SOF0 hash value is used to detect image recoloring, cropping and resizing. The combination of the three hash values serves as the JPEG/exif message digest.
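The per-segment hashing of the first three stages can be sketched as follows. The marker values (SOI = FFD8, APP1 = FFE1, SOF0 = FFC0) and the two-byte big-endian segment length are standard JPEG structure; for brevity the sketch locates markers with `bytes.find` instead of the paper's Boyer-Moore matcher, and it ignores the possibility of stray FF bytes inside entropy-coded data:

```python
import hashlib

MARKERS = {"SOI": b"\xff\xd8", "APP1": b"\xff\xe1", "SOF0": b"\xff\xc0"}

def segment_digests(jpeg_bytes: bytes) -> dict:
    """SHA-512 digest per JPEG segment of interest (simplified parsing)."""
    digests = {}
    for name, marker in MARKERS.items():
        pos = jpeg_bytes.find(marker)
        if pos == -1:
            continue  # segment absent from this file
        if name == "SOI":
            segment = marker  # SOI is a bare two-byte marker, no payload
        else:
            # The 2-byte length that follows the marker counts itself
            # plus the payload, but not the marker bytes.
            length = int.from_bytes(jpeg_bytes[pos + 2:pos + 4], "big")
            segment = jpeg_bytes[pos:pos + 2 + length]
        digests[name] = hashlib.sha512(segment).hexdigest()
    return digests
```

Comparing a stored digest triple against a freshly computed one then pinpoints which kind of modification occurred, as the abstract describes.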
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Xu, Yang, Lei Zhu, Zhiyong Cheng, Jingjing Li i Jiande Sun. "Multi-Feature Discrete Collaborative Filtering for Fast Cold-Start Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 34, nr 01 (3.04.2020): 270–78. http://dx.doi.org/10.1609/aaai.v34i01.5360.

Pełny tekst źródła
Streszczenie:
Hashing is an effective technique to address the large-scale recommendation problem, due to its high computation and storage efficiency on calculating the user preferences on items. However, existing hashing-based recommendation methods still suffer from two important problems: 1) Their recommendation process mainly relies on the user-item interactions and single specific content feature. When the interaction history or the content feature is unavailable (the cold-start problem), their performance will be seriously deteriorated. 2) Existing methods learn the hash codes with relaxed optimization or adopt discrete coordinate descent to directly solve binary hash codes, which results in significant quantization loss or consumes considerable computation time. In this paper, we propose a fast cold-start recommendation method, called Multi-Feature Discrete Collaborative Filtering (MFDCF), to solve these problems. Specifically, a low-rank self-weighted multi-feature fusion module is designed to adaptively project the multiple content features into binary yet informative hash codes by fully exploiting their complementarity. Additionally, we develop a fast discrete optimization algorithm to directly compute the binary hash codes with simple operations. Experiments on two public recommendation datasets demonstrate that MFDCF outperforms the state-of-the-arts on various aspects.
46

Huang, Xiaoli, Haibo Chen and Zheng Zhang. "Design and Application of Deep Hash Embedding Algorithm with Fusion Entity Attribute Information". Entropy 25, no. 2 (15.02.2023): 361. http://dx.doi.org/10.3390/e25020361.

Abstract:
Hashing is one of the most widely used methods for achieving computational and storage efficiency. With the development of deep learning, deep hash methods show more advantages than traditional ones. This paper proposes a method, FPHD, that converts entities with attribute information into embedded vectors. The design uses hashing to extract entity features quickly, and a deep neural network to learn the implicit associations between them. This design solves two main problems in large-scale dynamic data addition: (1) the linear growth of the embedded vector table and the vocabulary table leads to huge memory consumption, and (2) it is difficult to handle new entities added after training. Finally, taking movie data as an example, this paper introduces the encoding method and the specific algorithm flow in detail, and demonstrates rapid reuse of the model when data are added dynamically. Compared with three existing embedding algorithms that can fuse entity attribute information, the proposed deep hash embedding algorithm improves significantly in both time and space complexity.
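The underlying hashing trick — mapping arbitrary attribute strings into a fixed-size vector so the table never grows with the vocabulary — can be sketched as follows; the bucket count, hash function, and signing rule are illustrative choices, not the FPHD design:

```python
import hashlib

def hash_embed(attributes: list[str], dim: int = 8) -> list[float]:
    # Hashing trick: bucket arbitrary attribute strings into a fixed-size
    # vector, so memory stays constant as the vocabulary grows.
    vec = [0.0] * dim
    for attr in attributes:
        h = int(hashlib.md5(attr.encode("utf-8")).hexdigest(), 16)
        idx = h % dim
        sign = 1.0 if (h >> 64) % 2 == 0 else -1.0  # signed buckets reduce collision bias
        vec[idx] += sign
    return vec

# The same attributes always map to the same vector, and a brand-new
# attribute value needs no table resize or vocabulary update.
v1 = hash_embed(["genre=comedy", "year=1999"])
v2 = hash_embed(["genre=comedy", "year=1999"])
```

In the deep variant, a vector built this way is fed to a neural network that learns the implicit associations between the hashed features.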
47

Choi, Jong-Hyeok, Fei Hao and Aziz Nasridinov. "HI-Sky: Hash Index-Based Skyline Query Processing". Applied Sciences 10, no. 5 (2.03.2020): 1708. http://dx.doi.org/10.3390/app10051708.

Abstract:
The skyline query has recently attracted a considerable amount of research interest in several fields. The query conducts computations using the domination test, where “domination” means that a data point does not have a worse value than others in any dimension, and has a better value in at least one dimension. Therefore, the skyline query can be used to construct efficient queries based on data from a variety of fields. However, when the number of dimensions or the amount of data increases, naïve skyline queries lead to a degradation in overall performance owing to the higher cost of comparisons among data. Several methods using index structures have been proposed to solve this problem but have not improved the performance of skyline queries because their indices are heavily influenced by the dimensionality and data amount. Therefore, in this study, we propose HI-Sky, a method that can perform quick skyline computations by using the hash index to overcome the above shortcomings. HI-Sky effectively manages data through the hash index and significantly improves performance by effectively eliminating unnecessary data comparisons when computing the skyline. We provide the theoretical background for HI-Sky and verify its improvement in skyline query performance through comparisons with prevalent methods.
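The domination test on which skyline queries are built can be shown with a naive O(n²) sketch; HI-Sky's hash index exists precisely to prune most of these comparisons, so this illustrates only the test itself, assuming smaller-is-better in every dimension:

```python
from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    # a dominates b (minimization): a is no worse in every dimension
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points: list[tuple]) -> list[tuple]:
    # Naive O(n^2) skyline: keep every point that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (3, 3), (5, 1), (4, 4), (2, 6)]
sky = skyline(pts)  # (4, 4) loses to (3, 3); (2, 6) loses to (1, 5)
```

The quadratic comparison cost of this naive form is exactly the degradation the abstract describes for growing dimensionality and data volume.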
48

Sun, Li, and Bing Song. "Feature adaptive multi-view hash for image search". Electronic Research Archive 31, no. 9 (2023): 5845–65. http://dx.doi.org/10.3934/era.2023297.

Abstract:
With the rapid development of network technology and small handheld devices, the amount of data has increased significantly, and various kinds of data can be supplied to us at the same time. Recently, hashing has become popular for executing large-scale similarity search and image matching tasks. However, most prior hashing methods focus mainly on the choice of high-dimensional feature descriptor for learning effective hashing functions. In practice, real-world image data collected from multiple scenes cannot be described adequately by a single type of feature. Recently, several unsupervised multi-view hashing methods have been proposed based on matrix factorization, anchor graphs and metric learning. However, large quantization error is introduced via the sign function, and the robustness of multi-view hashing is ignored. In this paper, we present a novel feature adaptive multi-view hashing (FAMVH) method based on a robust multi-view quantization framework. The proposed method is evaluated on three large-scale benchmarks, CIFAR-10, CIFAR-20 and Caltech-256, for the approximate nearest neighbor search task. The experimental results show that our approach achieves the best accuracy and efficiency on the three large-scale datasets.
49

Suhaili, Shamsiah, and Norhuzaimin Julai. "FPGA-based Implementation of SHA-256 with Improvement of Throughput using Unfolding Transformation". Pertanika Journal of Science and Technology 30, no. 1 (10.01.2022): 581–603. http://dx.doi.org/10.47836/pjst.30.1.32.

Abstract:
Security has grown in importance as a research issue in recent years. Several cryptographic algorithms have been created to increase the performance of these information-protecting methods. One category of cryptography is the hash function. This paper proposes an implementation of the SHA-256 (Secure Hash Algorithm-256) hash function. The unfolding transformation approach is presented to enhance the throughput of the SHA-256 design. The unfolding method is applied to the hash function by producing the hash value output from a modified SHA-256 structure. With this unfolding method, SHA-256 decreases the number of clock cycles required by the traditional architecture from 64 to 34 because of the delay; in other words, one cycle of the SHA-256 design can process up to four parallel inputs for the output. As a result, the throughput of the SHA-256 design can be improved by reducing the number of cycles by 16. ModelSim was used to validate the output simulations created in Verilog code. The factor-four hardware implementation of the SHA-256 hash function was successfully tested on an Altera DE2-115 FPGA board. According to the timing simulation findings, the proposed unfolding hash function with factor four provides the greatest throughput, around 4196.30 Mbps, while the proposed unfolding with factor two surpassed the classic SHA-256 design in maximum frequency. As a result, the throughput of SHA-256 increases by 13.7% compared with unfolding factor two, and by 58.1% over the conventional SHA-256 design.
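The throughput relation that unfolding exploits — bits per block times clock frequency, divided by cycles per block — can be checked with illustrative numbers; these are not the paper's synthesis results:

```python
def throughput_mbps(block_bits: int, fmax_mhz: float, cycles_per_block: float) -> float:
    # Throughput = bits processed per block * clock frequency / cycles per block.
    return block_bits * fmax_mhz / cycles_per_block

# Illustrative figures for a 512-bit SHA-256 message block at 100 MHz:
conventional = throughput_mbps(512, 100.0, 64)  # one round per cycle, 64 rounds
unfolded_x4 = throughput_mbps(512, 100.0, 16)   # four rounds folded into each cycle
speedup = unfolded_x4 / conventional
```

In practice, packing more rounds into one cycle lengthens the critical path and lowers the achievable clock frequency, so the net gain is smaller than the cycle reduction alone, which is consistent with the 58.1% improvement reported rather than a full 4x.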
50

Tang, Xu, Chao Liu, Jingjing Ma, Xiangrong Zhang, Fang Liu and Licheng Jiao. "Large-Scale Remote Sensing Image Retrieval Based on Semi-Supervised Adversarial Hashing". Remote Sensing 11, no. 17 (1.09.2019): 2055. http://dx.doi.org/10.3390/rs11172055.

Abstract:
Remote sensing image retrieval (RSIR), a superior content organization technique, plays an important role in the remote sensing (RS) community. As the number of RS images increases explosively, not only retrieval precision but also retrieval efficiency is emphasized in the large-scale RSIR scenario. Therefore, approximate nearest neighbor (ANN) search increasingly attracts researchers' attention. In this paper, we propose a new hash learning method, named semi-supervised deep adversarial hashing (SDAH), to accomplish ANN search for the large-scale RSIR task. The assumption of our model is that the RS images have been represented by proper visual features. First, a residual auto-encoder (RAE) is developed to generate the class variable and hash code. Second, two multi-layer networks are constructed to regularize the obtained latent vectors using the prior distribution. These two modules are integrated under the generative adversarial framework. Through minimax learning, the class variable becomes a one-hot-like vector while the hash code becomes a binary-like vector. Finally, a specific hashing function is formulated to enhance the quality of the generated hash code. The effectiveness of the hash codes learned by our SDAH model is proved by positive experimental results on three public RS image archives. Compared with existing hash learning methods, the proposed method achieves improved performance.