Journal articles on the topic 'Error-correcting codes (Information theory)'


Consult the top 50 journal articles for your research on the topic 'Error-correcting codes (Information theory).'


1

Conway, J., and N. Sloane. "Lexicographic codes: Error-correcting codes from game theory." IEEE Transactions on Information Theory 32, no. 3 (May 1986): 337–48. http://dx.doi.org/10.1109/tit.1986.1057187.

2

Curto, Carina, Vladimir Itskov, Katherine Morrison, Zachary Roth, and Judy L. Walker. "Combinatorial Neural Codes from a Mathematical Coding Theory Perspective." Neural Computation 25, no. 7 (July 2013): 1891–925. http://dx.doi.org/10.1162/neco_a_00459.

Abstract:
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
3

Wang, Li-Na, Hongxu Wei, Yuchen Zheng, Junyu Dong, and Guoqiang Zhong. "Deep Error-Correcting Output Codes." Algorithms 16, no. 12 (December 4, 2023): 555. http://dx.doi.org/10.3390/a16120555.

Abstract:
Ensemble learning, online learning and deep learning are very effective and versatile in a wide spectrum of problem domains, such as feature extraction, multi-class classification and retrieval. In this paper, combining the ideas of ensemble learning, online learning and deep learning, we propose a novel deep learning method called deep error-correcting output codes (DeepECOCs). DeepECOCs are composed of multiple layers of the ECOC module, which combines several incremental support vector machines (incremental SVMs) as base classifiers. In this novel deep architecture, each ECOC module can be considered as two successive layers of the network, while the incremental SVMs can be viewed as weighted links between two successive layers. In the pre-training procedure, supervisory information, i.e., class labels, can be used during the network initialization. The incremental SVMs make this procedure very efficient, especially for large-scale applications. We have conducted extensive experiments to compare DeepECOCs with traditional ECOC, feature learning and deep learning algorithms. The results demonstrate that DeepECOCs perform not only better than existing ECOC and feature learning algorithms, but also comparably to deep learning ones in most cases.
4

Cazorla García, Pedro-José. "Perfect Codes over Non-Prime Power Alphabets: An Approach Based on Diophantine Equations." Mathematics 12, no. 11 (May 23, 2024): 1642. http://dx.doi.org/10.3390/math12111642.

Abstract:
Perfect error-correcting codes allow for an optimal transmission of information while guaranteeing error correction. For this reason, proving their existence has been a classical problem in both pure mathematics and information theory. Indeed, the classification of the parameters of e-error correcting perfect codes over q-ary alphabets was a very active topic of research in the late 20th century. Consequently, all parameters of perfect e-error-correcting codes were found if e≥3, and it was conjectured that no perfect 2-error-correcting codes exist over any q-ary alphabet, where q>3. In the 1970s, this was proved for q a prime power, for q=2r3s and for only seven other values of q. Almost 50 years later, it is surprising to note that there have been no new results in this regard and the classification of 2-error-correcting codes over non-prime power alphabets remains an open problem. In this paper, we use techniques from the resolution of the generalised Ramanujan–Nagell equation and from modern computational number theory to show that perfect 2-error-correcting codes do not exist for 172 new values of q which are not prime powers, substantially increasing the values of q which are now classified. In addition, we prove that, for any fixed value of q, there can be at most finitely many perfect 2-error-correcting codes over an alphabet of size q.
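The sphere-packing condition behind this abstract can be stated computationally: a q-ary e-error-correcting code is perfect exactly when Hamming balls of radius e tile the whole space, so the ball size must divide q^n. A minimal sketch (the helper names are illustrative, not from the paper) enumerates lengths passing this necessary condition:

```python
from math import comb

def ball_size(q: int, n: int, e: int) -> int:
    """Number of words within Hamming distance e of a fixed word in F_q^n."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(e + 1))

def perfect_candidates(q: int, e: int, n_max: int):
    """Yield lengths n where the sphere-packing divisibility q^n % |B_e| == 0 holds."""
    for n in range(e + 1, n_max + 1):
        if q ** n % ball_size(q, n, e) == 0:
            yield n

# Binary sanity checks: e = 1 recovers the Hamming-code lengths 2^r - 1,
# and e = 3 flags n = 7 (repetition code) and n = 23 (the Golay code).
print(list(perfect_candidates(2, 1, 40)))   # [3, 7, 15, 31]
print(23 in perfect_candidates(2, 3, 40))   # True
```

Passing the divisibility test is necessary but not sufficient; ruling out actual codes at the surviving parameters is where the Diophantine machinery of the paper comes in.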
5

Huang, Pengfei, Yi Liu, Xiaojie Zhang, Paul H. Siegel, and Erich F. Haratsch. "Syndrome-Coupled Rate-Compatible Error-Correcting Codes: Theory and Application." IEEE Transactions on Information Theory 66, no. 4 (April 2020): 2311–30. http://dx.doi.org/10.1109/tit.2020.2966439.

6

Sajjad, Muhammad, Tariq Shah, Qin Xin, and Bander Almutairi. "Eisenstein field BCH codes construction and decoding." AIMS Mathematics 8, no. 12 (2023): 29453–73. http://dx.doi.org/10.3934/math.20231508.

Abstract:
First, we go through the theory behind the Eisenstein field (EF) and its extension field. Second, we provide a detailed framework for building BCH codes over the EF. BCH codes over the EF are decoded using the Berlekamp-Massey algorithm (BMA) in this article. We investigate the error-correcting capabilities of these codes and provide expressions for their minimal distance. We provide researchers and engineers creating and implementing robust error-correcting codes for digital communication systems with detailed information on construction, decoding and performance assessment.
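For context on the decoding step: the Berlekamp-Massey algorithm finds the shortest LFSR (linear recurrence) generating a given syndrome sequence. The sketch below is only the standard binary (GF(2)) skeleton; the paper's decoder runs the BMA over the Eisenstein field instead.

```python
def berlekamp_massey_gf2(s):
    """Shortest LFSR (connection polynomial, length) generating binary sequence s."""
    n = len(s)
    c = [1] + [0] * n          # current connection polynomial C(x)
    b = [1] + [0] * n          # copy of C(x) before the last length change
    L, m = 0, 1                # LFSR length, steps since last length change
    for i in range(n):
        d = s[i]               # discrepancy between s[i] and the LFSR prediction
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:       # length change: C(x) += x^m * B(x), update L
            t = c[:]
            for j in range(n - m + 1):
                c[j + m] ^= b[j]
            L, b, m = i + 1 - L, t, 1
        else:                  # same length: C(x) += x^m * B(x)
            for j in range(n - m + 1):
                c[j + m] ^= b[j]
            m += 1
    return c[:L + 1], L

# 1,1,0,1 satisfies s[i] = s[i-1] XOR s[i-2]: an LFSR of length 2.
print(berlekamp_massey_gf2([1, 1, 0, 1]))   # ([1, 1, 1], 2)
```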
7

Ben-Gal, Irad, and Lev B. Levitin. "An application of information theory and error-correcting codes to fractional factorial experiments." Journal of Statistical Planning and Inference 92, no. 1-2 (January 2001): 267–82. http://dx.doi.org/10.1016/s0378-3758(00)00165-8.

8

Namba, Kazuteru, and Eiji Fujiwara. "Nonbinary single-symbol error correcting, adjacent two-symbol transposition error correcting codes over integer rings." Systems and Computers in Japan 38, no. 8 (2007): 54–60. http://dx.doi.org/10.1002/scj.10516.

9

Çalkavur, Selda. "Public-Key Cryptosystems and Bounded Distance Decoding of Linear Codes." Entropy 24, no. 4 (April 1, 2022): 498. http://dx.doi.org/10.3390/e24040498.

Abstract:
Error-correcting codes form an important topic in information theory. They are used to correct errors that occur during transmission on a noisy channel. An important method for correcting errors is bounded distance decoding. A public-key cryptosystem is a cryptographic protocol that has two different keys: a public key, which can be known by everyone, and a private key, known only to the user of the system. Data encrypted with the public key of a given user can only be decrypted by that user with his or her private key. In this paper, we propose a public-key cryptosystem based on error-correcting codes. Decryption is performed using bounded distance decoding of the code. For a given code length, dimension, and error-correcting capacity, the new system allows dealing with larger plaintexts than other code-based public-key cryptosystems.
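Bounded distance decoding, the decryption primitive named here, returns the unique codeword within distance t of the received word and reports failure otherwise. A toy sketch with the binary Hamming (7,4) code (t = 1) follows; it illustrates the primitive only, not the paper's cryptosystem:

```python
import itertools

# Parity-check matrix of the Hamming (7,4) code: column j is the binary form of j+1.
H = [[(col >> bit) & 1 for col in range(1, 8)] for bit in range(3)]

def syndrome(word):
    return tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)

def bounded_distance_decode(word, t=1):
    """Try flipping at most t positions to reach a codeword; None outside the radius."""
    word = list(word)
    for r in range(t + 1):
        for pos in itertools.combinations(range(7), r):
            trial = word[:]
            for p in pos:
                trial[p] ^= 1
            if syndrome(trial) == (0, 0, 0):
                return trial
    return None   # more than t errors: bounded distance decoding fails

c = [1, 0, 1, 0, 1, 0, 1]                 # a codeword (its syndrome is zero)
r = c[:]; r[3] ^= 1                       # one channel error
print(bounded_distance_decode(r) == c)    # True
```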
10

Khalifa, Othman O., Nur Amirah bt Sharif, Rashid A. Saeed, S. Abdel-Khalek, Abdulaziz N. Alharbi, and Ali A. Alkathiri. "Digital System Design for Quantum Error Correction Codes." Contrast Media & Molecular Imaging 2021 (December 15, 2021): 1–8. http://dx.doi.org/10.1155/2021/1101911.

Abstract:
Quantum computing is a computing technology that uses quantum mechanics to perform operations on data and information. It is an advanced technology, yet the quantum channel used to transmit quantum information is sensitive to interaction with the environment. Quantum error correction is a hybrid between quantum mechanics and the classical theory of error-correcting codes, concerned with the fundamental problem of communication, and/or information storage, in the presence of noise. The disruption caused by this interaction produces transmission errors in the qubits sent through the quantum channel. Hence, a quantum error correction code is needed to protect qubits from errors caused by decoherence and other quantum noise. In this paper, the digital system design of quantum error correction codes is discussed. Designs using three-qubit and nine-qubit codes are explained, and the systems were designed and configured for encoding and decoding the nine-qubit error correction code. For comparison, a modified circuit was also designed by adding Hadamard gates.
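For intuition about the smallest design discussed, the three-qubit bit-flip code can be viewed classically as triple redundancy with syndrome-guided correction. The sketch below mimics the stabilizer measurements Z1Z2 and Z2Z3 with parities; it cannot capture phase errors, which the nine-qubit code additionally corrects.

```python
def encode(bit):
    """Logical bit -> three physical copies (bit-flip code, classical view)."""
    return [bit] * 3

def correct(qubits):
    # syndrome = pairwise parities, standing in for the stabilizers Z1Z2 and Z2Z3
    s1, s2 = qubits[0] ^ qubits[1], qubits[1] ^ qubits[2]
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get((s1, s2))
    if flip is not None:
        qubits[flip] ^= 1     # undo the single bit-flip the syndrome points to
    return qubits

word = encode(1)
word[2] ^= 1                  # one bit-flip error on the third qubit
print(correct(word))          # [1, 1, 1]
```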
11

He, Xianmang. "Constructing new q-ary quantum MDS codes with distances bigger than q/2 from generator matrices." Quantum Information and Computation 18, no. 3&4 (March 2018): 223–30. http://dx.doi.org/10.26421/qic18.3-4-3.

Abstract:
The construction of quantum error-correcting codes has been an active field of quantum information theory since the publication of the pioneering works [Shor 1995; Steane 1998; Laflamme 1996]. It is becoming more and more difficult to construct new quantum MDS codes with large minimum distance. In this paper, based on the approach developed in [He 2016], we construct several new classes of quantum MDS codes. The quantum MDS codes exhibited here have not been constructed before, and their distance parameters are bigger than q/2.
12

Namba, Kazuteru, and Eiji Fujiwara. "A class of systematic m-ary single-symbol error correcting codes." Systems and Computers in Japan 32, no. 6 (2001): 21–28. http://dx.doi.org/10.1002/scj.1030.

13

Shimada, Ryosaku, Ryutaro Murakami, Kazuharu Sono, and Yoshiteru Ohkura. "Arithmetic burst error correcting Fire-type cyclic ST-AN codes." Systems and Computers in Japan 18, no. 7 (1987): 57–68. http://dx.doi.org/10.1002/scj.4690180706.

14

Haselgrove, H. L., and P. P. Rohde. "Trade-off between the tolerance of located and unlocated errors in nondegenerate quantum error-correcting codes." Quantum Information and Computation 8, no. 5 (May 2008): 399–410. http://dx.doi.org/10.26421/qic8.5-3.

Abstract:
In a recent study [Rohde et al., quant-ph/0603130 (2006)] of several quantum error correcting protocols designed for tolerance against qubit loss, it was shown that these protocols have the undesirable effect of magnifying the effects of depolarization noise. This raises the question of which general properties of quantum error-correcting codes might explain such an apparent trade-off between tolerance to located and unlocated error types. We extend the counting argument behind the well-known quantum Hamming bound to derive a bound on the weights of combinations of located and unlocated errors which are correctable by nondegenerate quantum codes. Numerical results show that the bound gives an excellent prediction of which combinations of unlocated and located errors can be corrected with high probability by certain large degenerate codes. The numerical results are explained partly by showing that the generalized bound, like the original, is closely connected to the information-theoretic quantity known as the quantum coherent information. However, we also show that, as a measure of the exact performance of quantum codes, our generalized Hamming bound is provably far from tight.
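For reference, the nondegenerate quantum Hamming bound whose counting argument the paper extends reads, for an [[n,k]] code correcting arbitrary errors on up to t unlocated qubits:

```latex
% Each of the \binom{n}{j} 3^j weight-j Pauli errors must carry the 2^k-dimensional
% code space to a mutually orthogonal subspace of the 2^n-dimensional Hilbert space:
\[
  2^{k} \sum_{j=0}^{t} \binom{n}{j}\, 3^{j} \;\le\; 2^{n}.
\]
```

The paper's generalization replaces the pure unlocated-error count with a mixed count over located and unlocated error patterns.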
15

Kuznetsov, Alexandr, Oleg Oleshko, and Kateryna Kuznetsova. "ENERGY GAIN FROM ERROR-CORRECTING CODING IN CHANNELS WITH GROUPING ERRORS." Acta Polytechnica 60, no. 1 (March 2, 2020): 65–72. http://dx.doi.org/10.14311/ap.2020.60.0065.

Abstract:
This article explores a mathematical model of a data transmission channel with error grouping. We propose a method for estimating the energy gain from coding and the energy efficiency of binary codes in channels with grouped errors. The proposed method uses a simplified Bennet and Froelich model and allows the energy gain from coding to be studied for a wide class of data channels without restricting the length distribution of the error bursts. The reliability of the obtained results is confirmed by comparison with known results from the theory of error-correcting coding in the simplified variant.
16

Imai, H., J. Mueller-Quade, A. C. A. Nascimento, P. Tuyls, and A. Winter. "An information theoretical model for quantum secret sharing schemes." Quantum Information and Computation 5, no. 1 (January 2005): 68–79. http://dx.doi.org/10.26421/qic5.1-7.

Abstract:
Similarly to earlier models for quantum error-correcting codes, we introduce a quantum information-theoretic model for quantum secret sharing schemes. This model provides new insights into the theory of quantum secret sharing. Using our model, among other results, we give a shorter proof of Gottesman's theorem that the size of the shares in a quantum secret sharing scheme must be as large as the secret itself. We also introduce approximate quantum secret sharing schemes and show the robustness of quantum secret sharing schemes by extending Gottesman's theorem to the approximate case.
17

Magdalena de la Fuente, Julio Carlos, Nicolas Tarantino, and Jens Eisert. "Non-Pauli topological stabilizer codes from twisted quantum doubles." Quantum 5 (February 17, 2021): 398. http://dx.doi.org/10.22331/q-2021-02-17-398.

Abstract:
It has long been known that long-ranged entangled topological phases can be exploited to protect quantum information against unwanted local errors. Indeed, conditions for intrinsic topological order are reminiscent of criteria for faithful quantum error correction. At the same time, the promise of using general topological orders for practical error correction remains largely unfulfilled to date. In this work, we significantly contribute to establishing such a connection by showing that Abelian twisted quantum double models can be used for quantum error correction. By exploiting the group cohomological data sitting at the heart of these lattice models, we transmute the terms of these Hamiltonians into full-rank, pairwise commuting operators, defining commuting stabilizers. The resulting codes are defined by non-Pauli commuting stabilizers, with local systems that can either be qubits or higher dimensional quantum systems. Thus, this work establishes a new connection between condensed matter physics and quantum information theory, and constructs tools to systematically devise new topological quantum error correcting codes beyond toric or surface code models.
18

El Ouakili, Hajar, Mohammed El Ghzaoui, and Rachid El Alami. "Optimized decoder for low-density parity check codes based on genetic algorithms." International Journal of Electrical and Computer Engineering (IJECE) 14, no. 3 (June 1, 2024): 2717. http://dx.doi.org/10.11591/ijece.v14i3.pp2717-2724.

Abstract:
Low-density parity-check (LDPC) codes are a family of error-correcting codes whose performance close to the Shannon limit makes them very attractive solutions for digital communication systems. There are several algorithms for decoding LDPC codes, showing great diversity in error-correction performance. Recently, many research papers have applied the genetic algorithm (GA) in coding theory, in particular to decoding linear block codes, which has contributed heavily to reducing the bit error rate (BER). In this paper, an efficient method based on the GA is proposed and used to improve the correction power, in terms of BER and frame error rate (FER), of LDPC codes. The proposed algorithm can independently decide the most suitable moment to stop the decoding process; moreover, it does not require channel state information (CSI), making it adaptable to all types of channels with different noise intensities. The simulations show that the proposed algorithm is more efficient in terms of BER than other LDPC code decoders.
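For contrast with the GA-based decoder proposed here, the classical low-complexity baseline is Gallager's bit-flipping decoding, sketched below. The parity-check matrix is the tiny Hamming (7,4) one purely to keep the demo short; real LDPC matrices are large and sparse, but the flipping rule is the same.

```python
# Toy check matrix: column j is the binary representation of j+1.
H = [[(col >> bit) & 1 for col in range(1, 8)] for bit in range(3)]

def bit_flip_decode(H, r, max_iter=20):
    """Flip the bit involved in the most unsatisfied checks until the syndrome vanishes."""
    r = r[:]
    m, n = len(H), len(r)
    for _ in range(max_iter):
        syndrome = [sum(H[i][j] & r[j] for j in range(n)) % 2 for i in range(m)]
        if not any(syndrome):
            return r                      # all parity checks satisfied
        votes = [sum(syndrome[i] for i in range(m) if H[i][j]) for j in range(n)]
        r[votes.index(max(votes))] ^= 1   # flip the most suspicious bit
    return r

c = [1, 0, 1, 0, 1, 0, 1]                 # a valid codeword
r = c[:]; r[6] ^= 1                       # a single channel error
print(bit_flip_decode(H, r) == c)         # True
```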
19

Mundici, Daniele. "Ulam Games, Łukasiewicz Logic, and AF C*-Algebras." Fundamenta Informaticae 18, no. 2-4 (April 1, 1993): 151–61. http://dx.doi.org/10.3233/fi-1993-182-405.

Abstract:
Ulam asked for the minimum number of yes-no questions necessary to find an unknown number in the search space {1, …, 2^n} if up to l of the answers may be erroneous. The solutions to this problem provide optimal adaptive l-error-correcting codes. Traditional, nonadaptive l-error-correcting codes correspond to the particular case when all questions are formulated before all answers. We show that answers in Ulam’s game obey the (l+2)-valued logic of Łukasiewicz. Since approximately finite-dimensional (AF) C*-algebras can be interpreted in the infinite-valued sentential calculus, we discuss the relationship between game-theoretic notions and their C*-algebraic counterparts. We describe the correspondence between continuous trace AF C*-algebras and Ulam games with separable Boolean search space S, whose questions are the clopen subspaces of S. We also show that these games correspond to finite products of countable Post MV algebras, as well as to countable lattice-ordered Specker groups with strong unit.
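The link to codes can be made concrete through Berlekamp's volume bound: q questions suffice only if the 2^q answer sequences can cover all 2^n candidates together with their up-to-l lie patterns. A small sketch (the function name is illustrative) computes this lower bound, which for one lie is known to be essentially tight:

```python
from math import comb

def min_questions(n: int, lies: int) -> int:
    """Smallest q with 2^q >= 2^n * sum_{i<=lies} C(q, i) (sphere-packing bound
    for Ulam's game over {1, ..., 2^n}; exact achievability depends on n and lies)."""
    q = n
    while 2 ** q < 2 ** n * sum(comb(q, i) for i in range(lies + 1)):
        q += 1
    return q

# Searching among 2^20 numbers with at most one erroneous answer:
print(min_questions(20, 1))   # 25
```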
20

Ovchinnikov, A., A. Veresova, and A. Fominykh. "Usage of LDPC Codes in a Gilbert Channel." Proceedings of Telecommunication Universities 8, no. 4 (January 10, 2023): 55–63. http://dx.doi.org/10.31854/1813-324x-2022-8-4-55-63.

Abstract:
Although low-density parity-check (LDPC) codes in modern communication standards have been extensively studied over memoryless channels, their burst error correction capacity in channels with memory has yet to be thoroughly analyzed. The conventional approach to transmission in channels with memory uses interleaving within a buffer of several codewords. However, such an approach reduces the efficiency of the redundancy embedded by the error-correcting code. It is known from information theory that taking channel memory into account during decoding allows the transmission rate to be increased. An evaluation of the decoding error probability of different types of low-density parity-check codes in channels with memory is presented, along with estimates of minimum distance and burst error correction capability for the considered codes. The decoding error probability is estimated both for conventional decoding with deinterleaving and for decoding that takes channel memory into account, for several parameters of a channel with memory and different buffer lengths. The obtained results reveal that no single decoding approach is best for all parameters of the channel with memory; the best decoding error probability is determined by the degree of channel memory correlation.
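A minimal simulation of the channel class in question may help: the Gilbert model is a two-state Markov chain whose bad state produces clustered bit errors. The transition and error probabilities below are illustrative, not those used in the paper.

```python
import random

def gilbert_channel(bits, p_gb=0.01, p_bg=0.3, h=0.5, seed=1):
    """Flip bits according to a two-state Gilbert model: p_gb is the Good->Bad
    transition probability, p_bg is Bad->Good, and h is the bit-error probability
    in the Bad state (the Good state is error-free)."""
    rng = random.Random(seed)
    bad = False
    out = []
    for b in bits:
        bad = rng.random() < (1 - p_bg if bad else p_gb)   # Markov state update
        out.append(b ^ int(bad and rng.random() < h))      # errors only when Bad
    return out

rx = gilbert_channel([0] * 10_000)
print("error rate:", sum(rx) / len(rx))   # errors arrive in bursts, not uniformly
```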
21

Maltiyar, Kaveri, and Deepti Malviya. "Polar Code: An Advanced Encoding And Decoding Architecture For Next Generation 5G Applications." International Journal on Recent and Innovation Trends in Computing and Communication 7, no. 5 (June 4, 2019): 26–29. http://dx.doi.org/10.17762/ijritcc.v7i5.5307.

Abstract:
Polar codes are a new channel coding scheme expected to be widely applied in next-generation wireless communication systems. Polar codes, introduced by Arikan, achieve the capacity of symmetric channels with low encoding and decoding complexity for a large class of underlying channels. Recently, the polar code has become the most favorable error-correcting code from the viewpoint of information theory due to its channel-capacity-achieving property: polar codes achieve the capacity of the class of symmetric binary memoryless channels. This paper reviews polar codes as an advanced encoding and decoding architecture for next-generation applications.
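The encoder itself is just the recursive Arikan transform x = u F^{⊗log2 N} with kernel F = [[1,0],[1,1]]; a compact sketch follows. The frozen set shown is a typical choice for N = 8 on an erasure channel, not a standard-mandated one.

```python
def polar_transform(u):
    """Apply x = u * F^{kron log2(len(u))} over GF(2), F = [[1,0],[1,1]]."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    # Arikan's channel combining: (u1 XOR u2) on top, u2 on the bottom.
    upper = polar_transform([u[i] ^ u[i + half] for i in range(half)])
    lower = polar_transform([u[i + half] for i in range(half)])
    return upper + lower

# An (8,4) polar code: data on the 4 most reliable inputs, the rest frozen to 0.
frozen = {0, 1, 2, 4}          # typical frozen set for N = 8 over a BEC
data = [1, 0, 1, 1]
u = [0] * 8
for pos, bit in zip(sorted(set(range(8)) - frozen), data):
    u[pos] = bit
print(polar_transform(u))      # the transmitted codeword
```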
22

Siewert, Jens. "On orthogonal bases in the Hilbert-Schmidt space of matrices." Journal of Physics Communications 6, no. 5 (May 1, 2022): 055014. http://dx.doi.org/10.1088/2399-6528/ac6f43.

Abstract:
Decomposition of (finite-dimensional) operators in terms of orthogonal bases of matrices has been a standard method in quantum physics for decades. In recent years, it has become increasingly popular because of various methodologies applied in quantum information, such as the graph state formalism and the theory of quantum error correcting codes, but also due to the intensified research on the Bloch representation of quantum states. In this contribution we collect various interesting facts and identities that hold for finite-dimensional orthogonal matrix bases.
23

Weaver, Nik. "Quantum Graphs as Quantum Relations." Journal of Geometric Analysis 31, no. 9 (January 13, 2021): 9090–112. http://dx.doi.org/10.1007/s12220-020-00578-w.

Abstract:
The “noncommutative graphs” which arise in quantum error correction are a special case of the quantum relations introduced in Weaver (Quantum relations. Mem Am Math Soc 215(v–vi):81–140, 2012). We use this perspective to interpret the Knill–Laflamme error-correction conditions (Knill and Laflamme in Theory of quantum error-correcting codes. Phys Rev A 55:900–911, 1997) in terms of graph-theoretic independence, to give intrinsic characterizations of Stahlke’s noncommutative graph homomorphisms (Stahlke in Quantum zero-error source-channel coding and non-commutative graph theory. IEEE Trans Inf Theory 62:554–577, 2016) and Duan, Severini, and Winter’s noncommutative bipartite graphs (Duan et al. in Zero-error communication via quantum channels, noncommutative graphs, and a quantum Lovász number. IEEE Trans Inf Theory 59:1164–1174, 2013), and to realize the noncommutative confusability graph associated to a quantum channel (Duan et al., op. cit.) as the pullback of a diagonal relation. Our framework includes as special cases not only purely classical and purely quantum information theory, but also the “mixed” setting which arises in quantum systems obeying superselection rules. Thus we are able to define noncommutative confusability graphs, give error correction conditions, and so on, for such systems. This could have practical value, as superselection constraints on information encoding can be physically realistic.
24

Semerenko, Vasyl, and Oleksandr Voinalovich. "The simplification of computationals in error correction coding." Technology audit and production reserves 3, no. 2(59) (June 30, 2021): 24–28. http://dx.doi.org/10.15587/2706-5448.2021.233656.

Abstract:
The object of research is the processes of error-correcting transformation of information in automated systems. The research aims at reducing the complexity of decoding cyclic codes by combining modern mathematical models and practical tools. The main prerequisite for the complication of computations in deterministic linear error-correcting codes is the use of the algebraic representation as the main mathematical apparatus for these types of codes. Despite the universality of the algebraic approach, its main drawback is the impossibility of taking into account the characteristic features of all subclasses of linear codes. In particular, the cyclic property is not taken into account at all for cyclic codes. Taking this property into account, one can pass to a fundamentally different mathematical representation of cyclic codes: the theory of linear automata over Galois fields (linear finite-state machines). For the automaton representation of cyclic codes, it is proved that the problem of syndrome decoding of these codes is, in the general case, NP-complete. However, using the proposed hierarchical approach to problems of complexity, a more accurate analysis of the growth of computational complexity can be carried out. Correction of single errors during one time interval (one iteration) of decoding has complexity linear in the length of the codeword, while error correction during m iterations of permutations of codeword bits has polynomial complexity. Accordingly, three subclasses of cyclic codes are distinguished by decoding complexity: easily decoded (linear complexity), iteratively decoded (polynomial complexity), and hard to decode (exponential complexity). Practical ways to reduce the complexity of computations are considered: alternating the use of probabilistic and deterministic linear codes, simplifying software and hardware implementation at the cost of increased decoding time, and using interleaving. A method of interleaving is proposed that makes it possible to break up burst errors and replace them with single errors. The mathematical apparatus of linear automata allows the indicated problems of error-correcting coding to be solved jointly.
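The interleaving idea mentioned at the end is easy to make concrete: writing symbols into a matrix by rows and transmitting by columns spreads a channel burst across several codewords, leaving each with at most one error. A small demo with illustrative helper names:

```python
def interleave(bits, rows, cols):
    """Write row by row into a rows x cols array, read out column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    return interleave(bits, cols, rows)   # the inverse is the transposed read-out

data = list(range(12))                    # stand-ins for 3 codewords of 4 symbols
sent = interleave(data, rows=3, cols=4)
burst = {4, 5, 6}                         # a burst hits 3 consecutive symbols
received = [(-1 if i in burst else s) for i, s in enumerate(sent)]
restored = deinterleave(received, rows=3, cols=4)
print([i for i, s in enumerate(restored) if s == -1])   # [2, 5, 9]
# The burst lands in three different rows: one damaged symbol per codeword.
```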
25

Klimo, Martin, Peter Lukáč, and Peter Tarábek. "Deep Neural Networks Classification via Binary Error-Detecting Output Codes." Applied Sciences 11, no. 8 (April 15, 2021): 3563. http://dx.doi.org/10.3390/app11083563.

Abstract:
One-hot encoding is the prevalent method used in neural networks to represent multi-class categorical data. Its success stems from its ease of use and interpretability as a probability distribution when accompanied by a softmax activation function. However, one-hot encoding leads to very high dimensional vector representations when the categorical data’s cardinality is high. The Hamming distance in one-hot encoding is equal to two from the coding theory perspective, which does not allow detection or error-correcting capabilities. Binary coding provides more possibilities for encoding categorical data into the output codes, which mitigates the limitations of the one-hot encoding mentioned above. We propose a novel method based on Zadeh fuzzy logic to train binary output codes holistically. We study linear block codes for their possibility of separating class information from the checksum part of the codeword, showing their ability not only to detect recognition errors by calculating non-zero syndrome, but also to evaluate the truth-value of the decision. Experimental results show that the proposed approach achieves similar results as one-hot encoding with a softmax function in terms of accuracy, reliability, and out-of-distribution performance. It suggests a good foundation for future applications, mainly classification tasks with a high number of classes.
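The distance claims are easy to verify directly: one-hot codewords always differ in exactly two positions, while even a short linear block code with checksum bits pushes the minimum distance higher. A quick check (the [6,3] generator matrix is an illustrative example, not the paper's construction):

```python
from itertools import combinations, product

def min_distance(codewords):
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(codewords, 2))

# One-hot codewords for 8 classes: any two differ in exactly 2 positions.
one_hot = [tuple(int(i == k) for i in range(8)) for k in range(8)]
print(min_distance(one_hot))   # 2

# A [6,3] linear code: 3 data bits plus 3 checksum bits (codeword = msg * G mod 2).
G = [(1, 0, 0, 0, 1, 1), (0, 1, 0, 1, 0, 1), (0, 0, 1, 1, 1, 0)]
codes = [tuple(sum(m * g for m, g in zip(msg, column)) % 2 for column in zip(*G))
         for msg in product([0, 1], repeat=3)]
print(min_distance(codes))     # 3: single errors become correctable, not just visible
```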
26

Martyniuk, T. B., and O. Yu Voinalovych. "Classification model of digital coding methods." Optoelectronic Information-Power Technologies 47, no. 1 (June 27, 2024): 42–49. http://dx.doi.org/10.31649/1681-7893-2024-47-1-42-49.

Abstract:
In modern computer applications, efficient processing of digital information requires optimal coding of digital data, especially considering the speed and noise immunity of the processing and transmission of coded information. This work is devoted to a classification analysis of known coding methods with an emphasis on their noise immunity. Noise-immune codes play a key role in ensuring the reliability and efficiency of information transmission, as well as its integrity, in critical object processing, analysis and recognition systems. An extended version of the classification model of coding methods is proposed, which uses the basic characteristics of codes according to the algebraic theory of coding. This classification is supplemented, among unit codes, by a unit even code; among multi-valued codes, by the Fibonacci code; and among cyclic codes, by a modified Elias code. In addition, a list of modern cyclic codes is given. From the point of view of noise immunity, two cyclic codes are considered: the modified Elias code and the unit positional code. A comparison of the corrective properties of the Elias code, the modified Elias code and the Hamming code was performed. As a result, it is shown that the modified Elias code allows correcting triple errors in the rows of the correction matrix and detecting triple errors in the columns of the correction matrix. A comparative analysis of two unit codes, positional and normal, with the Hamming code was made. The unit positional code, like the Hamming code, is capable not only of detecting but also of correcting single errors. This capability of the unit positional code lies in the way its combinations are encoded, in contrast to the Hamming code, in which control symbols are provided.
27

Riznyk, V. V., D. Yu Skrybaylo-Leskiv, V. M. Badz, C. I. Hlod, V. V. Liakh, Y. M. Kulyk, N. B. Romanjuk, K. I. Tkachuk, and V. V. Ukrajinets. "COMPARATIVE ANALYSIS OF MONOLITHIC AND CYCLIC NOISE-PROTECTIVE CODES EFFECTIVENESS." Ukrainian Journal of Information Technology 3, no. 1 (2021): 99–105. http://dx.doi.org/10.23939/ujit2021.03.099.

Abstract:
A comparative analysis of the effectiveness of monolithic and cyclic noise-protective codes built on "Ideal Ring Bundles" (IRBs) is carried out. IRBs serve as a common theoretical basis for the synthesis, study and application of such codes for improving the technical characteristics of coding systems with respect to performance, reliability, transformation speed and security. IRBs are cyclic sequences of positive integers which form perfect partitions of a finite interval of integers: sums of connected IRB elements enumerate the set of natural integers exactly R times. Both monolithic and cyclic IRB codes, formed on these underlying combinatorial constructions, can be used to find optimal configurations of applicable coding systems on a common mathematical platform. The mathematical model of noise-protective data coding systems exhibits remarkable properties of harmoniously developing real space, which allow codes with useful possibilities to be configured. The first of these are self-correcting codes, owing to the monolithic arrangement of the symbols "1" and "0" in each allowed codeword, which makes it possible to automatically detect and correct errors from the monolithic structure of the encoded words. IRB codes of the second type improve the noise protection of the codes by choosing the optimal ratio of information parameters. A comparative analysis of cyclic IRB codes with optimized parameters and monolithic IRB codes shows that optimized cyclic IRB codes have an advantage over monolithic ones with respect to a clearly fixed number of detected and corrected errors, while monolithic codes compare favorably in the speed of message decoding due to their inherent properties of self-correction and encryption. A monolithic code is characterized by the packing of same-name characters into solid blocks; such blocks can encode data on several levels at the same time, which expands the ability to encrypt and protect encoded data from unauthorized access. The effectiveness of the coding optimization methods is evaluated by the speed of formation of coding systems, the power of the method, and error correction. The model is based on the contemporary theory of combinatorial configurations, which opens a wide scientific field for fundamental and applied research in information technologies, including multidimensional models and algorithms for the synthesis of the underlying models.
28

Kesselring, Markus S., Fernando Pastawski, Jens Eisert, and Benjamin J. Brown. "The boundaries and twist defects of the color code and their applications to topological quantum computation." Quantum 2 (October 19, 2018): 101. http://dx.doi.org/10.22331/q-2018-10-19-101.

Abstract:
The color code is both an interesting example of an exactly solved topologically ordered phase of matter and also among the most promising candidate models to realize fault-tolerant quantum computation with minimal resource overhead. The contributions of this work are threefold. First of all, we build upon the abstract theory of boundaries and domain walls of topological phases of matter to comprehensively catalog the objects realizable in color codes. Together with our classification we also provide lattice representations of these objects which include three new types of boundaries as well as a generating set for all 72 color code twist defects. Our work thus provides an explicit toy model that will help to better understand the abstract theory of domain walls. Secondly, we discover a number of interesting new applications of the cataloged objects for quantum information protocols. These include improved methods for performing quantum computations by code deformation, a new four-qubit error-detecting code, as well as families of new quantum error-correcting codes we call stellated color codes, which encode logical qubits at the same distance as the next best color code, but using approximately half the number of physical qubits. To the best of our knowledge, our new topological codes have the highest encoding rate of local stabilizer codes with bounded-weight stabilizers in two dimensions. Finally, we show how the boundaries and twist defects of the color code are represented by multiple copies of other phases. Indeed, in addition to the well studied comparison between the color code and two copies of the surface code, we also compare the color code to two copies of the three-fermion model. In particular, we find that this analogy offers a very clear lens through which we can view the symmetries of the color code which gives rise to its multitude of domain walls.
29

Yu, Mian Shui, Yu Xie, and Xiao Meng Xie. "Age Classification Based on Feature Fusion." Applied Mechanics and Materials 519-520 (February 2014): 644–50. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.644.

Abstract:
Age classification based on facial images is attracting wide attention with its broad application to human-computer interaction (HCI). Since human senescence is a tremendously complex process, age classification is still a highly challenging issue. In our study, Local Directional Pattern (LDP) and Gabor wavelet transform were used to extract global and local facial features, respectively, that were fused based on information fusion theory. The Principal Component Analysis (PCA) method was used for dimensionality reduction of the fused features, to obtain a lower-dimensional age characteristic vector. A Support Vector Machine (SVM) multi-class classifier with Error Correcting Output Codes (ECOC) was proposed in the paper. This was aimed at multi-class classification problems, such as age classification. Experiments on a public FG-NET age database proved the efficiency of our method.
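A generic SVM-plus-ECOC pipeline of the kind described can be assembled with scikit-learn's OutputCodeClassifier. The snippet below uses the bundled digits dataset as a stand-in, since the paper's LDP/Gabor/PCA features and the FG-NET database are not reproduced here.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# code_size sets the ECOC codeword length relative to the number of classes.
clf = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                           code_size=2.0, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```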
30

Baldi, Marco, Alessandro Barenghi, Franco Chiaraluce, Gerardo Pelosi, and Paolo Santini. "A Finite Regime Analysis of Information Set Decoding Algorithms." Algorithms 12, no. 10 (October 1, 2019): 209. http://dx.doi.org/10.3390/a12100209.

Abstract:
Decoding of random linear block codes has been long exploited as a computationally hard problem on which it is possible to build secure asymmetric cryptosystems. In particular, both correcting an error-affected codeword, and deriving the error vector corresponding to a given syndrome were proven to be equally difficult tasks. Since the pioneering work of Eugene Prange in the early 1960s, a significant research effort has been put into finding more efficient methods to solve the random code decoding problem through a family of algorithms known as information set decoding. The obtained improvements effectively reduce the overall complexity, which was shown to decrease asymptotically at each optimization, while remaining substantially exponential in the number of errors to be either found or corrected. In this work, we provide a comprehensive survey of the information set decoding techniques, providing finite regime temporal and spatial complexities for them. We exploit these formulas to assess the effectiveness of the asymptotic speedups obtained by the improved information set decoding techniques when working with code parameters relevant for cryptographic purposes. We also delineate computational complexities taking into account the achievable speedup via quantum computers and similarly assess such speedups in the finite regime. To provide practical grounding to the choice of cryptographically relevant parameters, we employ as our validation suite the ones chosen by cryptosystems admitted to the second round of the ongoing standardization initiative promoted by the US National Institute of Standards and Technology.
31

ISOKAWA, TEIJIRO, FUKUTARO ABO, FERDINAND PEPER, SUSUMU ADACHI, JIA LEE, NOBUYUKI MATSUI, and SHINRO MASHIKO. "FAULT-TOLERANT NANOCOMPUTERS BASED ON ASYNCHRONOUS CELLULAR AUTOMATA." International Journal of Modern Physics C 15, no. 06 (July 2004): 893–915. http://dx.doi.org/10.1142/s0129183104006327.

Abstract:
Cellular Automata (CA) are a promising architecture for computers with nanometer-scale sized components, because their regular structure potentially allows chemical manufacturing techniques based on self-organization. With the increase in integration density, however, comes a decrease in the reliability of the components from which such computers will be built. This paper employs BCH error-correcting codes to construct CA with improved reliability. We construct an asynchronous CA of which a quarter of the (ternary) bits storing a cell's state information may be corrupted without affecting the CA's operations, provided errors are evenly distributed over a cell's bits (no burst errors allowed). Under the same condition, the corruption of half of a cell's bits can be detected.
32

Korchynskyi, Volodymyr, Valerii Hordiichuk, Vitalii Kildishev, Oleksandr Riabukha, Sergii Staikutsa, and Khaled Alfaiomi. "Method of information protection based on the integration of probabilistic encryption and noise immune coding." Radioelectronic and Computer Systems, no. 4 (December 6, 2023): 184–95. http://dx.doi.org/10.32620/reks.2023.4.13.

Abstract:
The subject matter of this article is the process of increasing information security in communication systems for applied problems that are critical in the transmission of confidential data. The goal is to develop a multi-stage method for protecting information from unauthorized access based on the integration of probabilistic encryption, error-correcting coding, and error decorrelation. The tasks to be solved are as follows: to develop a probabilistic encryption algorithm that considers the entropy of the characters' appearance in a message; to implement iterative coding with variable parameters of the information-bit matrix; and to implement error decorrelation based on the generated matrix by interleaving information and check bits according to a given algorithm. Methodology: theory of communication and signal processing, theory of cryptography, theory of noise-resistant codes, probability theory, statistical analysis. The following results were obtained: a multi-stage method for protecting information from unauthorized access was proposed, in which information secrecy is increased at each step of data conversion; probabilistic encryption was implemented that considers the entropy of a discrete information source when forming the space of random combinations; an additional increase in the space of random combinations was implemented on the basis of an iterative code with specified matrix parameters; and error decorrelation reduces the multiplicity of errors in codewords and ensures the mixing of bits according to a predetermined law. Conclusions. The scientific novelty of the results is as follows: 1) an improved method of protecting information from unauthorized access is proposed, based on the integration of probabilistic encryption, noise-resistant coding, and error decorrelation; 2) increased information concealment is achieved by eliminating the main shortcoming of earlier probabilistic encryption, which did not take into account the entropy of the symbols of the open message; 3) it is proposed to form the space of random combinations taking into account the average probability of the appearance of a symbol in open texts, which allows the output of a probabilistic cipher to form a stream of random combinations that, by its statistical properties, is close to the uniform distribution law; 4) information protection methods based on noise-resistant coding and error decorrelation with encryption functions are further developed; 5) a further development of the statistical encryption method is the use of all redundant elements for both probabilistic encryption and noise-resistant coding.
33

Massey, J. "Review of 'Theory and Practice of Error Control Codes' (Blahut, R.E.; 1983)." IEEE Transactions on Information Theory 31, no. 4 (July 1985): 553–54. http://dx.doi.org/10.1109/tit.1985.1057072.

34

qizi, Kosimova Marjona Shakirjon. "APPLICATION OF CODING IN DATA TRANSMISSION AND INFORMATION STORAGE." American Journal of Applied Science and Technology 4, no. 6 (June 1, 2024): 67–70. http://dx.doi.org/10.37547/ajast/volume04issue06-13.

Abstract:
Coding theory plays a crucial role in modern data transmission and information storage systems, ensuring the efficient and reliable handling of digital information. This article explores the fundamental principles of coding theory, focusing on its applications in both data transmission and storage contexts. Beginning with an introduction to encoding and decoding techniques, the article delves into how various coding methods, such as error detection and correction codes, enhance data integrity during transmission. It further examines the implementation of coding in storage devices to safeguard against data loss and corruption.
35

Taubin, Feliks, and Andrey Trofimov. "Concatenated Coding for Multilevel Flash Memory with Low Error Correction Capabilities in Outer Stage." SPIIRAS Proceedings 18, no. 5 (September 19, 2019): 1149–81. http://dx.doi.org/10.15622/sp.2019.18.5.1149-1181.

Abstract:
One of the approaches to organizing error-correcting coding for multilevel flash memory is based on a concatenated construction, in particular on multidimensional lattices for inner coding. A characteristic feature of such structures is the dominance of the outer decoder's complexity in the total decoder complexity. A concatenated construction with a low-complexity outer decoder may therefore be attractive, since in practical applications decoder complexity is the crucial limitation on the use of error-correction coding. We consider a concatenated coding scheme for multilevel flash memory with Barnes-Wall lattice based codes as the inner code and a Reed-Solomon code correcting up to 4…5 errors as the outer one. Performance analysis is carried out for a model characterizing the basic physical features of a flash memory cell with non-uniform target voltage levels and noise variance dependent on the recorded value (input-dependent additive Gaussian noise, ID-AGN). For this model we develop a modification of our approach for evaluating the error probability of the inner code. This modification uses the parallel structure of the inner code trellis, which significantly reduces the computational complexity of the performance estimation. We present numerical examples of achievable recording density for Reed-Solomon codes correcting up to four errors as the outer code for a wide range of retention times and numbers of write/read cycles.
36

Sajjad, Muhammad, Tariq Shah, Robinson-Julian Serna, Zagalo Enrique Suárez Aguilar, and Omaida Sepúlveda Delgado. "Fundamental Results of Cyclic Codes over Octonion Integers and Their Decoding Algorithm." Computation 10, no. 12 (December 14, 2022): 219. http://dx.doi.org/10.3390/computation10120219.

Abstract:
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection, error correction, data transmission, and data storage. Codes are studied by various scientific disciplines, such as information theory, electrical engineering, mathematics, linguistics, and computer science, to design efficient and reliable data transmission methods. Many authors in the previous literature have discussed codes over finite fields, Gaussian integers, quaternion integers, etc. In this article, the author defines octonion integers, fundamental theorems related to octonion integers, encoding, and decoding of cyclic codes over the residue class of octonion integers with respect to the octonion Mannheim weight one. The comparison of primes, lengths, cardinality, dimension, and code rate with respect to Quaternion Integers and Octonion Integers will be discussed.
37

Lipnitski, V. A., and S. I. Semyonov. "Syndrome spectrums of error orbits in RS-codes." «System analysis and applied information science», no. 1 (March 27, 2020): 34–38. http://dx.doi.org/10.21122/2309-4923-2020-1-34-38.

Abstract:
This article is devoted to researching the properties of syndromes of errors in Reed-Solomon codes. RS-codes are built on non-binary alphabets, so, unlike BCH-codes, they contain an extremely large variety of correctable errors. To correct these errors, a systematic application of automorphisms of the codes is proposed. Characteristic automorphisms of RS-codes are cyclic and affine substitutions forming cyclic groups Г and A whose orders coincide with the code length. Cyclic and affine substitutions commute with each other and generate a joint group АГ, the product of the subgroups A and Г. These three groups act on the space of error vectors of RS-codes, breaking this space into three types of error orbits. As a rule, these orbits are complete and contain the maximum possible number of errors. Syndromes are the main indicator of the presence of errors in each message received by an information system and a means of accurately identifying these errors. The specificity of syndromes of double errors in RS-codes is investigated. It is determined that the syndrome spectra of error orbits are also complete in most cases. It is proved that the structure of the syndrome spectra copies the structure of the orbits themselves, which in turn copy the structure of the groups of code automorphisms. The results obtained are a significant contribution to the construction of the theory of syndrome norms for RS-codes.
38

Raussendorf, Robert. "Key ideas in quantum error correction." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 370, no. 1975 (September 28, 2012): 4541–65. http://dx.doi.org/10.1098/rsta.2011.0494.

Abstract:
In this introductory article on the subject of quantum error correction and fault-tolerant quantum computation, we review three important ingredients that enter known constructions for fault-tolerant quantum computation, namely quantum codes, error discretization and transversal quantum gates. Taken together, they provide a ground on which the theory of quantum error correction can be developed and fault-tolerant quantum information protocols can be built.
39

Thiers, Johann-Philipp, and Jürgen Freudenberger. "Generalized Concatenated Codes over Gaussian and Eisenstein Integers for Code-Based Cryptography." Cryptography 5, no. 4 (November 29, 2021): 33. http://dx.doi.org/10.3390/cryptography5040033.

Abstract:
The code-based McEliece and Niederreiter cryptosystems are promising candidates for post-quantum public-key encryption. Recently, q-ary concatenated codes over Gaussian integers were proposed for the McEliece cryptosystem, together with the one-Mannheim error channel, where the error values are limited to the Mannheim weight one. Due to the limited error values, the codes over Gaussian integers achieve a higher error correction capability than maximum distance separable (MDS) codes with bounded minimum distance decoding. This higher error correction capability improves the work factor regarding decoding attacks based on information-set decoding. The codes also enable a low complexity decoding algorithm for decoding beyond the guaranteed error correction capability. In this work, we extend this coding scheme to codes over Eisenstein integers. These codes have advantages for the Niederreiter system. Additionally, we propose an improved code construction based on generalized concatenated codes. These codes extend to the rate region, where the work factor is beneficial compared to MDS codes. Moreover, generalized concatenated codes are more robust against structural attacks than ordinary concatenated codes.
40

Paler, Alexandru, Austin G. Fowler, and Robert Wille. "Online scheduled execution of quantum circuits protected by surface codes." Quantum Information and Computation 17, no. 15&16 (December 2017): 1335–48. http://dx.doi.org/10.26421/qic17.15-16-5.

Abstract:
Quantum circuits are the preferred formalism for expressing quantum information processing tasks. Quantum circuit design automation methods mostly use a waterfall approach and consider high-level circuit descriptions to be hardware agnostic. This assumption has led to a static circuit perspective: the number of quantum bits and quantum gates is determined before circuit execution, and everything is considered reliable with zero probability of failure. Many different schemes for achieving reliable fault-tolerant quantum computation exist, with different schemes suitable for different architectures. A number of large experimental groups are developing architectures well suited to being protected by surface quantum error correcting codes. Such circuits could include unreliable logical elements, such as state distillation, whose failure can be determined only after their actual execution. Therefore, practical logical circuits, as envisaged by many groups, are likely to have a dynamic structure. This requires an online scheduling of their execution: one knows for sure what needs to be executed only after previous elements have finished executing. This work shows that scheduling shares similarities with place-and-route methods. The work also introduces the first online schedulers of quantum circuits protected by surface codes, and it highlights scheduling efficiency by comparing the new methods with state-of-the-art static scheduling of surface-code-protected fault-tolerant circuits.
41

Günlü, Onur, and Rafael Schaefer. "An Optimality Summary: Secret Key Agreement with Physical Unclonable Functions." Entropy 23, no. 1 (December 24, 2020): 16. http://dx.doi.org/10.3390/e23010016.

Abstract:
We address security and privacy problems for digital devices and biometrics from an information-theoretic optimality perspective to conduct authentication, message encryption/decryption, identification or secure and private computations by using a secret key. A physical unclonable function (PUF) provides local security to digital devices and this review gives the most relevant summary for information theorists, coding theorists, and signal processing community members who are interested in optimal PUF constructions. Low-complexity signal processing methods are applied to simplify information-theoretic analyses. The best trade-offs between the privacy-leakage, secret-key, and storage rates are discussed. Proposed optimal constructions that jointly design the vector quantizer and error-correction code parameters are listed. These constructions include modern and algebraic codes such as polar codes and convolutional codes, both of which can achieve small block-error probabilities at short block lengths, corresponding to a small number of PUF circuits. Open problems in the PUF literature from signal processing, information theory, coding theory, and hardware complexity perspectives and their combinations are listed to stimulate further advancements in the research on local privacy and security.
42

Sarabia, Pablo, Alvaro Araujo, Luis Antonio Sarabia, and María de la Cruz Ortiz. "Electromyography Gesture Model Classifier for Fault-Tolerant-Embedded Devices by Means of Partial Least Square Class Modelling Error Correcting Output Codes (PLS-ECOC)." Algorithms 16, no. 3 (March 7, 2023): 149. http://dx.doi.org/10.3390/a16030149.

Abstract:
Surface electromyography (sEMG) plays a crucial role in several applications, such as prosthetic control, human-machine interfaces (HMI), rehabilitation, and disease diagnosis. These applications usually occur in real time, so the classifier tends to run on a wearable device. This edge-processing paradigm imposes strict requirements on classifier complexity. To date, research on hand gesture recognition (GR) based on sEMG has used discriminant classifiers, such as support vector machines and neural networks. These classifiers can achieve good precision, but they cannot detect when a classification error has happened. This paper proposes a novel hand gesture multiclass model based on partial least squares (PLS) class modelling that uses an encoding matrix called error correcting output codes (ECOC). A dataset of eight different gestures was classified using this method, with all errors detected, proving the feasibility of PLS-ECOC as a fault-tolerant classifier. Considering the PLS-ECOC model as a classifier, its accuracy, precision, and F1 are 87.5, 91.87, and 86.34%, respectively, similar to those obtained by other authors. The strength of our work lies in the extra information provided by the PLS-ECOC, which allows the application to be fault tolerant while keeping a small model size and low complexity, making it suitable for embedded real-time classification.
APA, Harvard, Vancouver, ISO, and other styles
43

Levina, Alla, and Gleb Ryaskin. "Robust Code Constructions Based on Bent Functions and Spline Wavelet Decomposition." Mathematics 10, no. 18 (September 12, 2022): 3305. http://dx.doi.org/10.3390/math10183305.

Full text
Abstract:
The paper investigates new robust code constructions based on bent functions and spline-wavelet transformation. Implementing bent functions in code constructions increases the probability of error detection in data channels and cryptographic devices, while the use of spline-wavelet theory in constructing the codes makes it possible to increase system security against the actions of an attacker. The presented constructions combine spline-wavelet functions with bent functions. Compared to existing robust codes, the developed codes have an improved maximum error-masking probability parameter. The presented codes ensure the security of transmitted information, and some of the constructions were implemented on FPGA.
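For readers unfamiliar with the bent-function ingredient: a Boolean function is bent precisely when its Walsh spectrum is perfectly flat, which is what makes error masking maximally hard for an attacker. A small self-contained check of the textbook example f = x1x2 ⊕ x3x4 (illustrative only; the paper embeds such functions into code check symbols):

```python
# Verify bentness of f(x1..x4) = x1*x2 XOR x3*x4 by checking that its
# Walsh spectrum is flat: |W_f(a)| = 2^(n/2) for every a.

n = 4

def f(x):  # x is an n-bit integer
    bits = [(x >> i) & 1 for i in range(n)]
    return (bits[0] & bits[1]) ^ (bits[2] & bits[3])

def walsh(a):
    # W_f(a) = sum over x of (-1)^(f(x) + a.x), with a.x the inner product.
    return sum((-1) ** (f(x) ^ bin(x & a).count("1") % 2)
               for x in range(2 ** n))

assert all(abs(walsh(a)) == 2 ** (n // 2) for a in range(2 ** n))
print("f is bent: Walsh spectrum is flat at +/-", 2 ** (n // 2))
```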
APA, Harvard, Vancouver, ISO, and other styles
44

Ovchinnikov, Andrey. "The variant of post-quantum cryptosystem based on burst-correcting codes and on the complete decoding problem." Information and Control Systems, no. 3 (June 24, 2022): 45–54. http://dx.doi.org/10.31799/1684-8853-2022-3-45-54.

Full text
Abstract:
Introduction: Today, the investigation of post-quantum cryptosystems secure against quantum computation is an area of great interest. An important direction here is code-based cryptography, which relies on mathematical problems from error-correcting coding theory. Existing code-based systems may be improved both practically (by reducing key sizes) and theoretically, by using harder code-based mathematical problems. Purpose: The development of a public-key code-based cryptosystem using low-density parity-check codes with burst correction, and the estimation of the parameters of the obtained system. Results: A variant of a code-based cryptosystem using random block permutation low-density parity-check codes is proposed. The cryptocomplexity of the system is intended to rest on the complete decoding problem, which is believed to be a harder mathematical problem than those used in existing systems. With high probability, analysis of the system by decoding methods is not possible at all, which both increases the long-term cryptocomplexity of the system and allows the key size to be reduced. The selection of the underlying code is evaluated, and approaches to choosing the parameters of the proposed system on the basis of the required level of cryptocomplexity are considered. Practical relevance: The proposed system reduces the public-key size compared to the classical McEliece system while offering comparable cryptocomplexity, with the underlying mathematical problem expected to be more resistant to prospective attacks.
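For orientation, the McEliece-style flow that such systems build on: encryption adds a deliberately decodable error to an encoded message, and only the holder of the structured private code can strip it. A toy sketch over GF(2) with a [7,4] Hamming code; the paper's system uses permuted LDPC codes, burst errors, and a scrambled public generator, all of which are omitted here.

```python
import numpy as np

# Systematic [7,4] Hamming code: G = [I4 | P], H = [P^T | I3], G H^T = 0.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encrypt(m):
    e = np.zeros(7, dtype=int)
    e[np.random.randint(7)] = 1     # weight-1 error: within the code's power
    return (m @ G + e) % 2

def decrypt(c):
    s = (H @ c) % 2                 # syndrome = the column of H at the error
    if s.any():
        pos = next(i for i in range(7) if np.array_equal(H[:, i], s))
        c = c.copy(); c[pos] ^= 1
    return c[:4]                    # systematic G: message = first 4 bits

m = np.array([1, 0, 1, 1])
assert np.array_equal(decrypt(encrypt(m)), m)
```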
APA, Harvard, Vancouver, ISO, and other styles
45

Arora, H. D., and Anjali Dhiman. "Comparative Study of Generalized Quantitative-Qualitative Inaccuracy Fuzzy Measures for Noiseless Coding Theorem and 1:1 Codes." International Journal of Mathematics and Mathematical Sciences 2015 (2015): 1–6. http://dx.doi.org/10.1155/2015/258675.

Full text
Abstract:
In coding theory, we study various properties of codes for application in data compression, cryptography, error correction, and network coding. The study of codes is pursued in information theory, electrical engineering, mathematics, and computer science for the transmission of data by reliable and efficient methods. We must consider how messages can be encoded efficiently so that the maximum number of messages can be sent over a noiseless channel in a given time; thus, the minimum value of the mean codeword length, subject to a given constraint on codeword lengths, has to be found. In this paper, we introduce the mean codeword length of order α and type β for 1:1 codes and analyze the relationship between the average codeword length and fuzzy information measures for binary 1:1 codes. Further, a noiseless coding theorem associated with the fuzzy information measure is established.
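For context, the classical (crisp) results that such parametrized fuzzy measures generalize, stated here only for orientation:

```latex
% Kraft inequality for uniquely decodable codes over a D-ary alphabet,
% and Shannon's noiseless coding theorem bounding the mean length:
\begin{align}
  \sum_{i=1}^{N} D^{-l_i} &\le 1, \\
  H_D(P) \;\le\; \bar{L} = \sum_{i=1}^{N} p_i\, l_i \;&<\; H_D(P) + 1.
\end{align}
```

1:1 codes are not required to be uniquely decodable, so they are exempt from the Kraft constraint; this is why their mean codeword length can fall below the entropy bound, and why bounds of the above form must be re-derived for them.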
APA, Harvard, Vancouver, ISO, and other styles
46

Papadopoulos, Iason, and Jiabo Wang. "Polar Codes for Module-LWE Public Key Encryption: The Case of Kyber." Cryptography 7, no. 1 (January 10, 2023): 2. http://dx.doi.org/10.3390/cryptography7010002.

Full text
Abstract:
In modern society, the Internet is one of the most used means of communication, so secure information transfer is of major importance. Computers today use encryption methods based on arithmetic operations to turn messages into ciphertexts that are practically impossible for an attacker to reverse-engineer using a classical computer. It has lately been shown, however, that such reverse-engineering becomes possible in a post-quantum setting, where quantum computers of considerable size are available to attackers. With advances in quantum computing technology, it is now more necessary than ever to construct encryption schemes that cannot be broken by either a classical or a quantum computer. The National Institute of Standards and Technology (NIST) has orchestrated a competition to which numerous encryption schemes have been proposed, and it has identified one algorithm to be standardized for the post-quantum era. This algorithm, called CRYSTALS-Kyber, is based on module learning with errors (MLWE). This paper investigates how to apply error-correcting codes in order to create some excess decryption failure rate (DFR) and to take advantage of it to re-tune Kyber's parameters in pursuit of higher security. By applying polar codes, the security of Kyber was increased by 54.4% under a new set of parameters, while keeping the decryption failure rate well below the upper acceptable bound set by NIST.
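The DFR lever works because correcting even a few errors per block collapses the failure probability, freeing parameter headroom for security. A back-of-the-envelope calculation under an independence assumption (the per-bit probability `p` and block size below are illustrative, not the paper's parameters):

```python
from math import comb

def block_dfr(n, t, p):
    """P[more than t of n independent bit errors]: the block decryption
    failure rate when the ECC corrects up to t errors."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(t + 1, n + 1))

p = 1e-4                       # assumed per-bit decryption error probability
print(block_dfr(256, 0, p))    # no ECC: ~2.5e-2
print(block_dfr(256, 4, p))    # ECC correcting 4 errors: ~9e-11
```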
APA, Harvard, Vancouver, ISO, and other styles
47

Sapozhnikov, Valeriy, Vladimir Sapozhnikov, Dmitriy Efanov, and Ruslan Abdullaev. "The specificities of organization of concurrent error detection systems for combinational circuits based on polynomial codes." Proceedings of Petersburg Transport University, no. 3 (September 20, 2018): 432–45. http://dx.doi.org/10.20295/1815-588x-2018-3-432-445.

Full text
Abstract:
Objective: To study the specificities of applying polynomial codes in the organization of concurrent error detection systems for combinational logic circuits in automation and computer engineering. Methods: The methods of information and coding theory, the theory of discrete devices, and diagnostic engineering of discrete systems were applied. Results: The possibilities of using polynomial codes in organizing the control of combinational logic circuits were analyzed. Several essential properties of generator polynomials that make it possible to synthesize self-checking circuits for concurrent error detection systems were identified. In particular, one such essential property is the presence of a constant term in the generator polynomial (otherwise, not all the test patterns required for a complete check of the coding device are generated). An example of a concurrent error detection system implemented for a combinational circuit is given, and experimental data on error detection in the LGSynth'89 combinational benchmarks are described. Practical importance: The use of polynomial codes for combinational circuit control makes it possible to synthesize self-checking discrete devices for automation and computer engineering.
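The coding ingredient here is check-bit generation by polynomial division over GF(2). A short sketch using an assumed generator g(x) = x^3 + x + 1, whose constant term (lowest bit set) is exactly the property the abstract singles out; a concurrent checker would recompute these bits from the circuit outputs and flag any mismatch:

```python
def poly_check_bits(data_bits, gen=0b1011, gen_deg=3):
    """Check bits = remainder of data(x) * x^gen_deg modulo gen(x) over GF(2),
    computed by bitwise long division."""
    reg = 0
    for bit in data_bits + [0] * gen_deg:      # append space for check bits
        reg = (reg << 1) | bit
        if reg >> gen_deg:                     # leading term set: reduce
            reg ^= gen
    return [(reg >> i) & 1 for i in reversed(range(gen_deg))]

data = [1, 0, 1, 1, 0, 0, 1]
print(poly_check_bits(data))                   # three check bits for this word
```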
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Ming-Che, Jia-Wei Chang, Tzone I. Wang, and Zi Feng Huang. "Using Variation Theory as a Guiding Principle in an OOP Assisted Syntax Correction Learning System." International Journal of Emerging Technologies in Learning (iJET) 15, no. 14 (July 31, 2020): 35. http://dx.doi.org/10.3991/ijet.v15i14.14191.

Full text
Abstract:
Object-oriented programming skill is important for software professionals, and the subject has become a mandatory course in the information science and computer engineering departments of universities. However, it is hard for novice learners to understand the syntax and semantics of the language while learning object-oriented programming, which leaves them frustrated. The purpose of this study is to build an object-oriented programming assistant system that gives syntax-error feedback based on variation theory. We established the syntax correction module on the basis of the Virtual Teaching Assistant (VTA). While compiling code, the system displays any syntax errors with feedback designed according to variation theory at different levels (the generation, contrast, separation, and fusion levels) to help learners correct the errors. The experimental design of this study split the participants, who were university freshmen, into two groups by the S-type method based on the results of a mid-term test. Learning performance and questionnaires, followed by in-depth interviews, were used to evaluate the feasibility of the proposed assistant system. The findings indicate that the learners in the experimental group achieved better learning outcomes than their counterparts in the control group, which supports the effectiveness of using variation theory to implement feedback for object-oriented programming.
APA, Harvard, Vancouver, ISO, and other styles
49

Haeupler, Bernhard, and Amirbehshad Shahrasbi. "Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound." Journal of the ACM 68, no. 5 (October 31, 2021): 1–39. http://dx.doi.org/10.1145/3468265.

Full text
Abstract:
We introduce synchronization strings, which provide a novel way to efficiently deal with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to cope with than the more commonly considered Hamming-type errors, i.e., symbol substitutions and erasures. For every ε > 0, synchronization strings allow us to index a sequence with an ε^{-O(1)}-size alphabet, such that one can efficiently transform k synchronization errors into (1 + ε)k Hamming-type errors. This powerful new technique has many applications. In this article, we focus on designing insdel codes, i.e., error-correcting block codes (ECCs) for insertion-deletion channels. While ECCs for both Hamming-type errors and synchronization errors have been intensely studied, the latter have largely resisted progress. As Mitzenmacher puts it in his 2009 survey [30]: "Channels with synchronization errors... are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors... our lack of understanding about channels with synchronization errors is truly remarkable." Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed, and only since 2016 are there constructions of constant-rate insdel codes for asymptotically large noise rates. Even in the asymptotically large or small noise regimes, these codes are polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago. A straightforward application of our synchronization-string-based indexing method gives a simple black-box construction that transforms any ECC into an equally efficient insdel code with only a small increase in the alphabet size. This instantly transfers much of the highly developed understanding for regular ECCs into the realm of insdel codes. Most notably, for the complete noise spectrum, we obtain efficient "near-MDS" insdel codes, which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound. In particular, for any δ ∈ (0,1) and ε > 0, we give a family of insdel codes achieving a rate of 1 - δ - ε over a constant-size alphabet that efficiently corrects a δ fraction of insertions or deletions.
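The indexing idea can be made concrete with a toy that uses explicit integer positions instead of a true synchronization string (which achieves the same effect with only a constant-size index alphabet, the paper's actual contribution): after insertions and deletions, re-aligning by index leaves only substitutions and erasures.

```python
def index_encode(msg):
    # Tag each symbol with its position (explicit indices stand in for the
    # constant-alphabet synchronization-string labels of the paper).
    return [(i, c) for i, c in enumerate(msg)]

def realign(received, n):
    """Place each surviving (index, symbol) pair back at its position;
    positions never seen become erasures ('?') for an outer Hamming ECC."""
    out = ["?"] * n
    for i, c in received:
        if 0 <= i < n and out[i] == "?":
            out[i] = c
    return out

sent = index_encode("CODES")                    # [(0,'C'), (1,'O'), ...]
# Channel deletes the pair at position 2 and inserts a spurious pair:
received = [(0, "C"), (1, "O"), (9, "X"), (3, "E"), (4, "S")]
print(realign(received, 5))                     # ['C', 'O', '?', 'E', 'S']
```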
APA, Harvard, Vancouver, ISO, and other styles
50

Sah, Dhaneshwar. "Iterative Decoding of Turbo Codes." Journal of Advanced College of Engineering and Management 3 (January 10, 2018): 15. http://dx.doi.org/10.3126/jacem.v3i0.18810.

Full text
Abstract:
This paper presents a thesis consisting of a study of turbo codes as an error-control code and the software implementation of two different decoders, namely the maximum a posteriori (MAP) and soft-output Viterbi algorithm (SOVA) decoders. Turbo codes were introduced in 1993 by Berrou et al. [2] and are perhaps the most exciting and potentially important development in coding theory in recent years. They achieve near-Shannon-limit error-correction performance with relatively simple component codes and large interleavers. They can be constructed by concatenating at least two component codes in parallel, separated by an interleaver; the convolutional component codes can achieve very good results. In order for a concatenated scheme such as a turbo code to work properly, the decoding algorithm must effect an exchange of soft information between the component decoders. The concept behind turbo decoding is to pass soft information from the output of one decoder to the input of the succeeding one, and to iterate this process several times to produce better decisions. Turbo codes are still in the process of standardization, but future applications will include mobile communication systems, deep-space communications, telemetry, and multimedia. Finally, the two algorithms are compared with respect to complexity and performance. (Journal of Advanced College of Engineering and Management, Vol. 3, 2017, pp. 15-30)
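The iterative exchange described above has a simple message-passing shape, sketched below with the MAP/SOVA internals replaced by a stub. Everything here is an illustrative assumption: `siso_stub` is NOT a real component decoder; only the extrinsic-information loop (decode, interleave, decode, de-interleave, repeat) mirrors the actual scheme.

```python
import numpy as np

def siso_stub(channel_llr, apriori_llr):
    """Placeholder for a MAP/SOVA soft-in soft-out decoder: returns
    extrinsic LLRs (here a crude saturating combination)."""
    return np.tanh(0.5 * (channel_llr + apriori_llr))

def turbo_decode(llr_sys, llr_par1, llr_par2, perm, iters=8):
    n = len(llr_sys)
    ext2 = np.zeros(n)                            # extrinsic info from decoder 2
    for _ in range(iters):
        ext1 = siso_stub(llr_sys + llr_par1, ext2)
        ext2_p = siso_stub(llr_sys[perm] + llr_par2, ext1[perm])
        ext2 = np.empty(n); ext2[perm] = ext2_p   # de-interleave
    return (llr_sys + ext1 + ext2) > 0            # hard decision per bit

rng = np.random.default_rng(0)
perm = rng.permutation(8)                          # the interleaver
bits = turbo_decode(rng.normal(size=8), rng.normal(size=8),
                    rng.normal(size=8), perm)
print(bits.astype(int))
```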
APA, Harvard, Vancouver, ISO, and other styles