Dissertations / Theses on the topic 'Linear block codes'

To see the other types of publications on this topic, follow the link: Linear block codes.



Consult the top 50 dissertations / theses for your research on the topic 'Linear block codes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Spyrou, Spyros. "Linear block codes for block fading channels based on Hadamard matrices." Texas A&M University, 2005. http://hdl.handle.net/1969.1/3136.

Full text
Abstract:
We investigate the construction of linear block codes based on Hadamard matrices for block fading channels. These codes are easy to construct and have a bounded cross-correlation spectrum. Optimality is with respect to the metric spectrum, which gives the codes performance very close to that of optimal codes. The codes can also be transformed according to different channel characteristics and used with selective transmission methods.
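The abstract does not reproduce the construction itself; as a generic illustration of how Hadamard matrix rows yield codebooks with a bounded cross-correlation spectrum, here is a minimal sketch (the Sylvester construction and bi-orthogonal codebook are textbook choices, not necessarily the thesis's):

```python
import numpy as np

def sylvester_hadamard(m):
    """Sylvester-type Hadamard matrix of order 2^m (entries +/-1)."""
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_codebook(m):
    """Bi-orthogonal codebook: rows of H and -H as +/-1 sequences.
    Mapped to {0,1}, this is the first-order Reed-Muller code RM(1, m)."""
    H = sylvester_hadamard(m)
    return np.vstack([H, -H])

C = hadamard_codebook(3)          # 16 codewords of length 8
corr = C @ C.T                    # pairwise correlations
# Off-diagonal correlations take only the values 0 and -8:
# the spectrum is bounded, the property the abstract highlights.
print(sorted(set(np.unique(corr[~np.eye(16, dtype=bool)]).tolist())))
```

The bounded cross-correlation falls out of the orthogonality of Hadamard rows; any pair of distinct codewords is either orthogonal or antipodal.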
APA, Harvard, Vancouver, ISO, and other styles
2

Yildiz, Senay. "Construction Of Substitution Boxes Depending On Linear Block Codes." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605540/index.pdf.

Full text
Abstract:
The construction of a substitution box (S-box) with high nonlinearity and high resiliency is an important research area in cryptography. In this thesis, t-resilient nxm S-box construction methods depending on linear block codes, presented in "A Construction of Resilient Functions with High Nonlinearity" by T. Johansson and E. Pasalic in 2000 and, two years later, in "Linear Codes in Generalized Construction of Resilient Functions with Very High Nonlinearity" by E. Pasalic and S. Maitra, are compared, and the former is observed to be more promising in terms of nonlinearity. The first construction method uses a set of nonintersecting [n-d, m, t+1] linear block codes in deriving t-resilient S-boxes of nonlinearity 2^(n-1) - 2^(n-d-1), where d is a parameter to be maximized for high nonlinearity. For some cases, we have found better results than those of Johansson and Pasalic, using their construction. As a distinguished reference for nxn S-box construction methods, we study the paper "Differentially Uniform Mappings for Cryptography" presented by K. Nyberg at Eurocrypt 1993. One of the two constructions of this paper, namely the inversion mapping described by Nyberg but first noticed in 1957 by L. Carlitz and S. Uchiyama, is used in the S-box of Rijndael, which was chosen as the Advanced Encryption Standard. We complete the details of some theorem and proposition proofs given by Nyberg.
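For context on the inversion mapping mentioned above: in Rijndael it is the map x to x^(-1) in GF(2^8), with 0 mapped to 0, computed modulo the field polynomial x^8 + x^4 + x^3 + x + 1. A minimal sketch, independent of the thesis:

```python
def gf_mul(a, b, poly=0x11B):
    """Multiply in GF(2^8) modulo the Rijndael polynomial x^8+x^4+x^3+x+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    """Nyberg's inversion mapping: a^(-1) = a^254 in GF(2^8), with 0 -> 0."""
    if a == 0:
        return 0
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

print(hex(gf_inv(0x53)))  # 0xca, the worked inversion example in the AES spec
```

The full AES S-box then applies an affine transformation over GF(2) on top of this inversion.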
3

El Rifai, Ahmed Mahmoud. "Applications of linear block codes to the McEliece cryptosystem." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/16604.

Full text
4

Kovacevic, Sanja. "SOVA based on a sectionalized trellis of linear block codes." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=80115.

Full text
Abstract:
The use of block codes is a well known error-control technique for reliable transmission of digital information over noisy communication channels. However, a practically implementable soft-input soft-output (SISO) decoding algorithm for block codes is still a challenging problem.
This thesis examines a new decoding scheme based on the soft-output Viterbi algorithm (SOVA) applied to a sectionalized trellis for linear block codes. The computational complexities of the new SOVA decoder and of the conventional SOVA decoder based on the bit-level trellis are theoretically analyzed and derived. These results are used to obtain the optimum sectionalization of a trellis for SOVA. The optimum sectionalization of a trellis for Maximum A Posteriori (MAP), Maximum Logarithm MAP (Max-Log-MAP), and Viterbi algorithms, and their corresponding computational complexities are included for comparisons. The results confirm that SOVA based on a sectionalized trellis is the most computationally efficient SISO decoder examined in this thesis.
The simulation results of the bit error rate (BER) over additive white Gaussian noise (AWGN) channel demonstrate that the BER performance of the new SOVA decoder is not degraded. The BER performance of SOVA used in a serially concatenated block codes scheme reveals that the soft outputs of the proposed decoder are the same as those of the conventional SOVA decoder. Iterative decoding of serially concatenated block codes reveals that the quality of reliability estimates of the proposed SOVA decoder is the same as that of the conventional SOVA decoder.
5

Chaudhari, Pragat. "Analytical Methods for the Performance Evaluation of Binary Linear Block Codes." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/904.

Full text
Abstract:
The modeling of the soft-output decoding of a binary linear block code using a Binary Phase Shift Keying (BPSK) modulation system (with reduced noise power) is the main focus of this work. With this model, it is possible to provide bit error performance approximations to help in the evaluation of the performance of binary linear block codes. As well, the model can be used in the design of communications systems which require knowledge of the characteristics of the channel, such as combined source-channel coding. Assuming an Additive White Gaussian Noise channel model, soft-output Log Likelihood Ratio (LLR) values are modeled to be Gaussian distributed. The bit error performance for a binary linear code over an AWGN channel can then be approximated using the Q-function that is used for BPSK systems. Simulation results are presented which show that the actual bit error performance of the code is very well approximated by the LLR approximation, especially for low signal-to-noise ratios (SNR). A new measure of the coding gain achievable through the use of a code is introduced by comparing the LLR variance to that of an equivalently scaled BPSK system. Furthermore, arguments are presented which show that the approximation requires fewer samples than conventional simulation methods to obtain the same confidence in the bit error probability value. This translates into fewer computations and therefore, less time is needed to obtain performance results. Other work was completed that uses a discrete Fourier Transform technique to calculate the weight distribution of a linear code. The weight distribution of a code is defined by the number of codewords which have a certain number of ones in the codewords. For codeword lengths of small to moderate size, this method is faster and provides an easily implementable and methodical approach over other methods. 
This technique has the added advantage over other techniques of being able to methodically calculate the number of codewords of a particular Hamming weight instead of calculating the entire weight distribution of the code.
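The Q-function approximation at the heart of this model is easy to state concretely. A hedged sketch follows; `mu` and `sigma` are placeholders for the LLR mean and standard deviation one would estimate from decoder output, not values taken from the thesis:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber(ebno_db):
    """Uncoded BPSK over AWGN: Pb = Q(sqrt(2 Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return Q(math.sqrt(2.0 * ebno))

def llr_ber(mu, sigma):
    """Under the Gaussian-LLR model, the coded bit error probability is
    approximated as P(LLR < 0 | bit sent) = Q(mu / sigma), i.e. the same
    Q-function form as a BPSK system with reduced noise power."""
    return Q(mu / sigma)

print(bpsk_ber(0.0))   # about 0.0786 at 0 dB
```

The modelling step is exactly the abstract's claim: once LLRs are treated as Gaussian, the coded system looks like BPSK with an effective SNR of (mu/sigma)^2.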
6

Al-Lawati, Haider. "Performance analysis of linear block codes over the queue-based channel." Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/652.

Full text
7

Griffiths, Wayne Bradley. "On a posteriori probability decoding of linear block codes over discrete channels." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0156.

Full text
Abstract:
One of the facets of the mobile or wireless environment is that errors quite often occur in bursts. Thus, strong codes are required to provide protection against such errors. This in turn motivates the employment of decoding algorithms which are simple to implement, yet are still able to attempt to take the dependence or memory of the channel model into account in order to give optimal decoding estimates. Furthermore, such algorithms should be able to be applied for a variety of channel models and signalling alphabets. The research presented within this thesis describes a number of algorithms which can be used with linear block codes. Given the received word, these algorithms determine the symbol which was most likely transmitted, on a symbol-by-symbol basis. Due to their relative simplicity, a collection of algorithms for memoryless channels is reported first. This is done to establish the general style and principles of the overall collection. The concept of matrix diagonalisation may or may not be applied, resulting in two different types of procedure. Ultimately, it is shown that the choice between them should be motivated by whether storage space or computational complexity has the higher priority. As with all other procedures explained herein, the derivation is first performed for a binary signalling alphabet and then extended to fields of prime order. These procedures form the paradigm for algorithms used in conjunction with finite state channel models, where errors generally occur in bursts. In such cases, the necessary information is stored in matrices rather than as scalars. Finally, by analogy with the weight polynomials of a code and its dual as characterised by the MacWilliams identities, new procedures are developed for particular types of Gilbert-Elliott channel models. Here, the calculations are derived from three parameters which profile the occurrence of errors in those models. 
The decoding is then carried out using polynomial evaluation rather than matrix multiplication. Complementing this theory are several examples detailing the steps required to perform the decoding, as well as a collection of simulation results demonstrating the practical value of these algorithms.
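For the memoryless-channel baseline these algorithms start from, symbol-by-symbol a posteriori probability decoding can be sketched by brute force over a small code. The (7,4) Hamming code and the crossover probability below are illustrative choices, not the thesis's setup:

```python
import itertools

# Generator matrix of the (7,4) Hamming code (systematic form).
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,0,1,1],
     [0,0,1,0,1,1,1],
     [0,0,0,1,1,0,1]]

CODEWORDS = [tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7))
             for m in itertools.product([0, 1], repeat=4)]

def app_decode(r, p):
    """Symbol-by-symbol APP decoding over a memoryless BSC with crossover
    probability p: for each position, weigh every codeword by its
    likelihood and choose the more probable bit value."""
    est = []
    for j in range(7):
        prob = [0.0, 0.0]
        for c in CODEWORDS:
            d = sum(ri != ci for ri, ci in zip(r, c))   # Hamming distance
            prob[c[j]] += (p ** d) * ((1 - p) ** (7 - d))
        est.append(0 if prob[0] >= prob[1] else 1)
    return est

# A single error is corrected:
c = list(CODEWORDS[5]); r = c[:]; r[2] ^= 1
print(app_decode(r, 0.05) == c)   # True
```

On channels with memory, as the abstract notes, the scalar likelihood weights above become matrix-valued quantities tracking the channel state.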
8

Fogarty, Neville Lyons. "On Skew-Constacyclic Codes." UKnowledge, 2016. http://uknowledge.uky.edu/math_etds/36.

Full text
Abstract:
Cyclic codes are a well-known class of linear block codes with efficient decoding algorithms. In recent years they have been generalized to skew-constacyclic codes; such a generalization has previously been shown to be useful. We begin with a study of skew-polynomial rings so that we may examine these codes algebraically as quotient modules of non-commutative skew-polynomial rings. We introduce a skew-generalized circulant matrix to aid in examining skew-constacyclic codes, and we use it to recover a well-known result on the duals of skew-constacyclic codes from Boucher/Ulmer in 2011. We also motivate and develop a notion of idempotent elements in these quotient modules. We are particularly concerned with the existence and uniqueness of idempotents that generate a given submodule; we generalize relevant results from previous work on skew-constacyclic codes by Gao/Shen/Fu in 2013 and well-known results from the classical case.
9

Collison, Sean Michael. "Extending the Dorsch decoder for efficient soft decision decoding of linear block codes." Pullman, Wash. : Washington State University, 2009. http://www.dissertations.wsu.edu/Thesis/Spring2009/s_collison_042309.pdf.

Full text
Abstract:
Thesis (M.S. in computer engineering)--Washington State University, May 2009.
Title from PDF title page (viewed on May 21, 2009). "School of Electrical Engineering and Computer Science." Includes bibliographical references (p. 64-65).
10

Weaver, Elizabeth A. "MINIMALITY AND DUALITY OF TAIL-BITING TRELLISES FOR LINEAR CODES." UKnowledge, 2012. http://uknowledge.uky.edu/math_etds/1.

Full text
Abstract:
Codes can be represented by edge-labeled directed graphs called trellises, which are used in decoding with the Viterbi algorithm. We will first examine the well-known product construction for trellises and present an algorithm for recovering the factors of a given trellis. To maximize efficiency, trellises that are minimal in a certain sense are desired. It was shown by Koetter and Vardy that one can produce all minimal tail-biting trellises for a code by looking at a special set of generators for a code. These generators along with a set of spans comprise what is called a characteristic pair, and we will discuss how to determine the number of these pairs for a given code. Finally, we will look at trellis dualization, in which a trellis for a code is used to produce a trellis representing the dual code. The first method we discuss comes naturally with the known BCJR construction. The second, introduced by Forney, is a very general procedure that works for many different types of graphs and is based on dualizing the edge set in a natural way. We call this construction the local dual, and we show the necessary conditions needed for these two different procedures to result in the same dual trellis.
11

Alabbadi, Mohssen. "Intergration of error correction, encryption, and signature based on linear error-correcting block codes." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/14959.

Full text
12

Wong, Brenden. "On the computation of the probability of undetected error for linear block codes on the Gilbert channel." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/30119.

Full text
Abstract:
An important measure of the performance of error detecting codes is the probability of undetected error. Extensive study of the subject has yielded results which allow the probability of undetected error to be computed for many codes on the binary symmetric channel (BSC). However, little is known about code performance in more complicated channel models. The Gilbert channel is a two-state, three-parameter model with memory which simulates the effects of burst noise. In this thesis, we investigate methods to compute the probability of undetected error of binary linear block codes on this channel. We examine an approach to approximating code performance based on the P(m,n) distribution, which is the probability of m errors in a block of n bits, and the weight distribution of the code. For the Gilbert channel, P(m,n) can in principle be calculated from the channel parameters. In practice, however, existing methodologies suffer from rather excessive computational requirements, particularly when n is larger than one thousand or so. We have developed an efficient method to calculate P(m,n) for reasonable channel parameters. This allows the probability of undetected error for many codes to be readily estimated. For certain channel and code parameters, the approximation method described above may not be sufficiently accurate. Exact analytical results are difficult to obtain, however, because unlike on the BSC, the probability of a particular error pattern on the Gilbert channel depends on more than just the number of 1's in the pattern. Nevertheless, by appropriately exploiting certain symmetries present on the Gilbert channel, we can obtain some useful results. We have derived the probability of undetected error for the single parity check code. We have also obtained a formula for summing over a cyclic class of vectors and shown that reciprocal generator polynomials generate cyclic codes which have the same probability of undetected error on the Gilbert channel.
The Monte Carlo simulation technique is often used when exact analysis is difficult. In a simulation study of CRC codes, we are able to observe several interesting qualitative results with just a reasonable amount of computational effort. We find that as on the BSC, on the Gilbert channel the probability of undetected error does not always increase with worsening channel conditions. Also, the CRC-CCITT code appears to maintain its superiority in terms of error detection performance over the CRC-ANSI code on the Gilbert channel, and perhaps most significantly, for some ranges of channel parameters, the probability of undetected error estimated using BSC results with the effective bit error rate can be quite inaccurate.
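For reference, the BSC computation that the abstract builds on is straightforward once the weight distribution is known; the Gilbert-channel case is precisely what the thesis addresses and is not attempted here. A sketch using the (7,4) Hamming code's weight distribution:

```python
def p_undetected(weight_dist, p):
    """Probability of undetected error for a linear code on the BSC:
    an error pattern goes undetected iff it equals a nonzero codeword,
    so P_ud = sum over w > 0 of A_w * p^w * (1-p)^(n-w)."""
    n = len(weight_dist) - 1
    return sum(A * (p ** w) * ((1 - p) ** (n - w))
               for w, A in enumerate(weight_dist) if w > 0 and A > 0)

# Weight distribution of the (7,4) Hamming code: A_0=1, A_3=7, A_4=7, A_7=1.
A = [1, 0, 0, 7, 7, 0, 0, 1]
print(p_undetected(A, 0.01))   # about 6.8e-6, dominated by the A_3 term
```

At p = 1/2 every error pattern is equally likely, so P_ud reduces to (number of nonzero codewords) / 2^n, a handy sanity check.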
Faculty of Applied Science, Department of Electrical and Computer Engineering
13

DHAKAL, PAWAN. "Algorithms for 5G physical layer." Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2670627.

Full text
Abstract:
There is a great deal of activity in the research community towards investigating the various aspects of 5G at different protocol layers and parts of the network. Among all of these, physical layer design plays a very important role in satisfying high demands in terms of data rates, latency, reliability and number of connected devices for 5G deployment. This thesis addresses the latest developments in physical layer algorithms regarding channel coding, signal detection, frame synchronization and multiple access techniques in the light of 5G use cases. These developments are governed by the requirements of the different use case scenarios that are envisioned to be the driving force in 5G. Chapters 2 to 5 are developed around the need for physical layer algorithms dedicated to 5G use cases. In brief, this thesis focuses on the design, analysis, simulation and advancement of physical layer aspects such as: 1. Reliability-based decoding of short-length Linear Block Codes (LBCs) with very good minimum Hamming distance properties for applications requiring very low latency; in this context, we enlarge the grid of possible candidates by considering, in particular, short-length LBCs (especially extended BCH codes) with soft-decision decoding; 2. Efficient synchronization of the preamble/postamble in a short bursty frame using a modified Massey correlator; 3. Detection of primary user activity using semiblind spectrum sensing algorithms, and analysis of such algorithms under practical imperfections; 4. Design of an optimal spreading matrix for a Low Density Spreading (LDS) technique in the context of non-orthogonal multiple access. In such a spreading matrix, only a small number of elements in each spreading sequence are nonzero, allowing each user to spread its data over a small number of chips (tones), thus simplifying the decoding procedure using the Message Passing Algorithm (MPA).
14

Najahi, Mohamed amine. "Synthesis of certified programs in fixed-point arithmetic, and its application to linear algebra basic blocks." Thesis, Perpignan, 2014. http://www.theses.fr/2014PERP1212.

Full text
Abstract:
To be cost effective, embedded systems are shipped with low-end microprocessors. These processors are dedicated to one or a few tasks that are highly demanding on computational resources. Examples of widely deployed tasks include the fast Fourier transform, convolutions, and digital filters. For these tasks to run efficiently, embedded systems programmers favor fixed-point arithmetic over the standardized and costly floating-point arithmetic. However, they are faced with two difficulties: First, writing fixed-point code is tedious and requires that the programmer be in charge of every arithmetical detail. Second, because of the low dynamic range of fixed-point numbers compared to floating-point numbers, there is a persistent belief that fixed-point computations are inherently inaccurate. The first part of this thesis addresses these two limitations as follows: It shows how to design and implement tools to automatically synthesize fixed-point programs. Next, to strengthen the user's confidence in the synthesized code, analytic methods are suggested to generate certificates. These certificates can be checked using a formal verification tool, and assert that the rounding errors of the generated code are indeed below a given threshold. The second part of the thesis is a study of the trade-offs involved when generating fixed-point code for linear algebra basic blocks. It gives experimental data on fixed-point synthesis for matrix multiplication and matrix inversion through Cholesky decomposition.
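To give a flavour of the rounding-error reasoning behind such certificates, here is a toy fixed-point multiply in a Q-format; the format choice and the error bound are illustrative assumptions, not the thesis's tool:

```python
def to_fx(x, frac_bits):
    """Quantize a real number to a signed fixed-point integer with
    frac_bits fractional bits (round to nearest)."""
    return round(x * (1 << frac_bits))

def fx_mul(a, b, frac_bits):
    """Fixed-point multiply: the 2*frac_bits product is rounded back to
    frac_bits, introducing at most 2^-(frac_bits+1) of rounding error."""
    prod = a * b
    half = 1 << (frac_bits - 1)
    return (prod + half) >> frac_bits

FRAC = 15                         # a Q1.15-style format (hypothetical choice)
x, y = 0.7071, -0.3333
fx = fx_mul(to_fx(x, FRAC), to_fx(y, FRAC), FRAC)
err = abs(fx / (1 << FRAC) - x * y)
print(err < 2.0 ** -(FRAC - 1))   # True: within a coarse certificate-style bound
```

A synthesis tool of the kind described would propagate such per-operation bounds through an entire basic block and emit the accumulated bound as a checkable certificate.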
15

Susar, Aylin. "Ofdm Papr Reduction With Linear Coding And Codeword Modification." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606428/index.pdf.

Full text
Abstract:
In this thesis, reduction of the Peak-to-Average Power Ratio (PAPR) of Orthogonal Frequency Division Multiplexing (OFDM) is studied. A new PAPR reduction method is proposed that is based on block coding the input data and modifying the codeword until the PAPR is reduced below a certain threshold. The method makes use of the error correction capability of the block code employed. The performance of the algorithm has been investigated through theoretical models and computer simulations. For performance evaluation, a new gain parameter is defined. The gain parameter considers the SNR loss caused by modification of the codeword together with the PAPR reduction achieved. The gain parameter is used to compare a plain OFDM system with the system employing the PAPR reduction algorithm. The algorithm performance is examined through computer simulations and it is found that power reductions around 2-3 dB are obtained especially for low to moderate number of channels and relatively strong codes.
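The quantity being reduced is easy to compute. A minimal sketch follows; the QPSK mapping and N = 64 subcarriers are illustrative assumptions, not parameters from the thesis:

```python
import numpy as np

def papr_db(freq_symbols):
    """PAPR of one OFDM symbol: peak instantaneous power of the
    time-domain IFFT output over its average power, in dB."""
    x = np.fft.ifft(freq_symbols)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N = 64
qpsk = (1 - 2 * rng.integers(0, 2, N)) + 1j * (1 - 2 * rng.integers(0, 2, N))
print(papr_db(qpsk))              # typically several dB for random data

# Worst case: identical symbols on all subcarriers concentrate all the
# energy in one time sample, giving PAPR = N.
print(round(papr_db(np.ones(N)), 2))   # 18.06 (= 10*log10(64))
```

Codeword-modification schemes like the one in this thesis exploit the fact that flipping a few coded bits changes `freq_symbols`, and hence the peak, while the code's error correction absorbs the modification.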
16

Yue, Chentao. "Decoding Techniques based on Ordered Statistics." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25059.

Full text
Abstract:
Short code design and related decoding algorithms have recently gained a great deal of interest in industry and academia, triggered by the stringent requirements of the new ultra-reliable and low-latency communications (URLLC) service for mission-critical Internet of Things (IoT) services. URLLC services mandate the use of short block-length codes to achieve time-to-transmit latencies of hundreds of microseconds and ultra-low block error rates. As a theoretical milestone, Polyanskiy et al. have given new capacity bounds, tighter than Shannon's asymptotic results, for the finite block length regime. However, with most conventional channel codes such as LDPC, Polar, Turbo, and convolutional codes suffering from performance degradation when the code length is short, it is still an open research problem to seek potential coding schemes for URLLC. As a decoding algorithm that approaches maximum-likelihood performance, ordered statistics decoding (OSD) can be applied to classical strong channel codes, e.g. BCH codes and Reed-Solomon codes, to potentially meet the requirements of URLLC. In this thesis, I take a step towards practical decoders for URLLC by revisiting OSD and significantly reducing its decoding complexity. I first provide a comprehensive analysis of the OSD algorithm by characterizing the statistical properties, evolution, and distribution of the Hamming distance and the weighted Hamming distance (WHD) from codeword estimates to the received sequence in the OSD algorithm. I prove that the distance distributions in OSD can be characterized as mixture models capturing the decoding error probability and code weight distribution, reflecting the inherent relations between error rate performance, distance, and channel conditions. Based on the statistical properties of distances, and with the aim of reducing the decoding complexity, several decoding techniques are proposed, and their decoding error performance and complexity are analyzed accordingly.
Simulation results for decoding various eBCH codes demonstrate that the proposed techniques can be conveniently combined with the OSD algorithm and its variants to significantly reduce the decoding complexity with a negligible loss in decoding error performance. Finally, I propose two complete decoding designs, namely segmentation-discarding decoding and probability-based ordered statistics decoding, as potential solutions for URLLC scenarios. Simulation results for different codes show that the proposed decoding algorithms can significantly reduce the decoding complexity compared to the existing OSD algorithms in the literature.
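To make the core OSD idea concrete, here is a hedged order-0 sketch on the (7,4) Hamming code: sort positions by reliability, Gaussian-eliminate the generator matrix onto the most reliable independent positions, and re-encode their hard decisions. The code choice and decision details are illustrative; the thesis's decoders add higher-order reprocessing, segmentation, and discarding rules on top of this:

```python
import numpy as np

G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,0,1,1],
              [0,0,1,0,1,1,1],
              [0,0,0,1,1,0,1]], dtype=np.uint8)   # (7,4) Hamming generator

def osd0(llr, G):
    """Order-0 OSD: permute positions by reliability |LLR|, make G
    systematic on the k most reliable independent positions, and
    re-encode the hard decisions taken on those positions."""
    k, n = G.shape
    order = np.argsort(-np.abs(llr))              # most reliable first
    Gp = G[:, order].copy()
    basis, row = [], 0
    for col in range(n):                          # GF(2) Gaussian elimination
        piv = next((r for r in range(row, k) if Gp[r, col]), None)
        if piv is None:
            continue
        Gp[[row, piv]] = Gp[[piv, row]]
        for r in range(k):
            if r != row and Gp[r, col]:
                Gp[r] ^= Gp[row]
        basis.append(col)
        row += 1
        if row == k:
            break
    hard = (llr[order] < 0).astype(np.uint8)      # hard decisions, permuted
    cw = (hard[basis] @ Gp) % 2                   # re-encode the reliable bits
    out = np.empty(n, dtype=np.uint8)
    out[order] = cw                               # undo the permutation
    return out

m = np.array([1, 0, 1, 1], dtype=np.uint8)
c = (m @ G) % 2
llr = (1.0 - 2.0 * c) * 4.0                       # noiseless BPSK-style LLRs
llr[5] = -(1.0 - 2.0 * c[5]) * 0.2                # one low-confidence flipped bit
print((osd0(llr, G) == c).all())                  # True: the weak bit is re-encoded away
```

Higher-order OSD additionally flips small subsets of the reliable bits and keeps the candidate with the best weighted Hamming distance, which is exactly where the complexity the thesis attacks comes from.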
17

Susanto, Misfa. "Network Coding for Multihop Wireless Networks: Joint Random Linear Network Coding and Forward Error Correction with Interleaving for Multihop Wireless Networks." Thesis, University of Bradford, 2015. http://hdl.handle.net/10454/14864.

Full text
Abstract:
Optimising throughput is one of the challenging tasks of communication engineering for wireless networks, since wireless channels are prone to errors due to path loss, random noise, and fading phenomena. Transmission errors become worse in a multihop scenario due to their accumulative effect. Network Coding (NC) is an elegant technique for improving the throughput performance of a communication network. The bit error rates over one modulation symbol of 16- and higher-order Quadrature Amplitude Modulation (QAM) schemes follow a certain pattern. The Scattered Random Network Coding (SRNC) system was proposed in the literature to exploit the error pattern of 16-QAM by using bit-scattering to improve the throughput of a multihop network to which Random Linear Network Coding (RLNC) is applied. This thesis aims to further improve the SRNC system by using Forward Error Correction (FEC) codes; the proposed system is called Joint RLNC and FEC with interleaving. The first proposed system (System-I) uses a Convolutional Code (CC) FEC. The performance of System-I with CC rates of 1/2, 1/3, 1/4, 1/6, and 1/8 was analysed using simulation tools developed in MATLAB and compared to two benchmark systems: the SRNC system (System-II) and the RLNC system (System-III). The second proposed system (System-IV) uses a Reed-Solomon (RS) FEC code. Performance evaluation of System-IV was carried out and compared against three systems: System-I with a 1/2 CC rate, System-II, and System-III. All simulations were carried out over three possible channel environments: 1) an AWGN channel, 2) a Rayleigh fading channel, and 3) a Rician fading channel, where both fading channels are in series with the AWGN channel. The simulation results show that the proposed systems improve on the SRNC system; how much gain can be achieved depends on the FEC type used and the channel environment.
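A minimal RLNC sketch over GF(2) may help place the abstract's building blocks; the packet sizes, counts, and pure coding/decoding setting are illustrative assumptions, and the systems above add FEC, interleaving, and 16-QAM bit-scattering on top:

```python
import numpy as np

rng = np.random.default_rng(1)

def rlnc_encode(packets, n_coded):
    """Random linear network coding over GF(2): each coded packet is a
    random XOR combination of the k source packets; the coefficient
    vector travels with the packet."""
    k = len(packets)
    coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
    coded = (coeffs @ packets) % 2
    return coeffs, coded

def rlnc_decode(coeffs, coded, k):
    """Gaussian elimination on [coeffs | coded]; succeeds once k
    linearly independent combinations have been received."""
    A = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, len(A)) if A[r, col]), None)
        if piv is None:
            return None                       # not yet full rank
        A[[row, piv]] = A[[piv, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
        row += 1
    return A[:k, k:]

k, plen = 4, 8
packets = rng.integers(0, 2, size=(k, plen), dtype=np.uint8)
coeffs, coded = rlnc_encode(packets, n_coded=8)   # redundancy tolerates losses
decoded = rlnc_decode(coeffs, coded, k)
print(decoded is not None and (decoded == packets).all())
```

Sending more than k combinations buys robustness: any k independent ones suffice, which is what makes RLNC attractive across lossy multihop paths.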
18

Tempel, Dirk J. "Sequential decoding of linear block codes." 1993. http://hdl.handle.net/1993/9697.

Full text
19

Esmaeili, Mostafa. "Application of linear block codes in cryptography." Thesis, 2019. http://hdl.handle.net/1828/10655.

Full text
Abstract:
Recently, there has been a renewed interest in code based cryptosystems. Amongst the reasons for this interest is that they have been shown to be resistant to quantum attacks, making them candidates for post-quantum cryptosystems. In fact, the National Institute of Standards and Technology is currently considering candidates for secure communication in the post-quantum era. Three of the proposals are code based cryptosystems. Other reasons for this renewed interest include efficient encryption and decryption. In this dissertation, new code based cryptosystems (symmetric key and public key) are presented that use high rate codes and have small key sizes. Hence they overcome the drawbacks of code based cryptosystems (low information rate and very large key size). The techniques used in designing these cryptosystems include random bit/block deletions, random bit insertions, random interleaving, and random bit flipping. An advantage of the proposed cryptosystems over other code based cryptosystems is that the code can be/is not secret. These cryptosystems are among the first with this advantage. Having a public code eliminates the need for permutation and scrambling matrices. The absence of permutation and scrambling matrices results in a significant reduction in the key size. In fact, it is shown that with simple random bit flipping and interleaving the key size is comparable to well known symmetric key cryptosystems in use today such as the Advanced Encryption Standard (AES). The security of the new cryptosystems is analysed. It is shown that they are immune against previously proposed attacks for code based cryptosystems. This is because scrambling or permutation matrices are not used and the random bit flipping is beyond the error correcting capability of the code. It is also shown that having a public code still provides a good level of security.
This is proved in two ways: by finding the probability of an adversary being able to break the cryptosystem and showing that this probability is extremely small, and by showing that the cryptosystem has indistinguishability against a chosen plaintext attack (i.e., is IND-CPA secure). IND-CPA security is among the primary necessities for a cryptosystem to be practical. This means that a ciphertext reveals no information about the corresponding plaintext other than its length. It is also shown that having a public code results in smaller key sizes.
20

Lin, Da-Zhong (林大中). "Modified Decoding Algorithm for Linear Block Codes." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/86ga57.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Electrical Engineering
Academic year 98 (2009/10)
In this thesis, we propose a modified decoding method for linear block codes. For a linear block code, maximum likelihood decoding provides the best error performance and can be obtained by applying the Viterbi algorithm to the decoding trellis. However, the decoding complexity of the trellis grows exponentially with the dimension of the code, which incurs a huge amount of computation. Therefore, a modified decoding algorithm is proposed to reduce the decoding complexity. Simulation results show that the error performance of the new method is close to that of maximum likelihood decoding.
APA, Harvard, Vancouver, ISO, and other styles
21

Cheng, Jay, and 鄭傑. "On Generalized Hamming Weights of Linear Block Codes." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/58867308564051097365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Shiu, Jiun, and 徐鈞. "Real-Number Linear Block Codes and Their Applications." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/12846101570802598436.

Full text
Abstract:
博士
國立臺灣大學
資訊工程研究所
82
The research of error-control coding concerns protecting data against errors introduced during transmission through channels. Conventionally, a digital channel is assumed. This assumption is reasonable since, in a digital communication system, we are processing digital data. However, in the case of protecting data through a computation, the above assumption is no longer valid. In this case, the data to be protected are real or complex numbers, and the computations are essentially additions and multiplications defined over a real or complex field. In this thesis, real-number error-control codes for such a channel are studied. We define two classes of real-number codes which are based on the discrete Hartley transform and the discrete cosine transform, respectively. General decoding algorithms for such real-number codes are also studied. These algorithms can be viewed as new techniques for discrete-time signal processing. By utilizing techniques of nonlinear filtering, real-number codes can also be used to protect discrete-time signals from some nonlinear effects. Moreover, we have studied some majority-decodable real-number codes, which make real-number codes more applicable when extremely simple encoding and decoding schemes are required.
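The DHT- and DCT-based codes and their decoding algorithms are developed in the thesis itself; as a hedged illustration of the underlying idea only (redundancy over the real field lets erasures be recovered by solving a linear system), a toy real-number (n, k) code built from polynomial evaluation, not the thesis's transforms, can be sketched as:

```python
def poly_encode(msg, xs):
    """Codeword = evaluations of the degree-(k-1) polynomial with real
    coefficients `msg` at the n real points `xs`.  An illustrative
    real-number analogue of transform-domain codes, chosen for brevity."""
    return [sum(c * x ** i for i, c in enumerate(msg)) for x in xs]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def recover(received, xs, k):
    """Recover the k message coefficients from any k unerased samples
    (erasures marked as None) by solving a Vandermonde system."""
    pts = [(x, y) for x, y in zip(xs, received) if y is not None][:k]
    A = [[x ** i for i in range(k)] for x, _ in pts]
    return solve(A, [y for _, y in pts])
```

Any k of the n real samples determine the message, so up to n - k erasures are correctable; finite-precision conditioning, which the thesis's transform-based constructions address, is ignored here.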
APA, Harvard, Vancouver, ISO, and other styles
23

Amirhossein, Shokouh Aghaei. "Widely-linear MMSE Receivers for Linear Dispersion Space-time Block-codes." Thesis, 2008. http://hdl.handle.net/1807/17224.

Full text
Abstract:
Space-time coding techniques are widely used in multiple-input multiple-output communication systems to mitigate the effect of multipath fading in wireless channels. An important subset of space-time codes are linear dispersion (LD) codes, which are used for quasi-static Rayleigh flat fading channels when the channel state information (CSI) is only available at the receiver side. In this thesis, we propose a new receiver structure for LD codes. We suggest to use widely-linear minimum-mean-squared-error (WL-MMSE) estimates of the transmitted symbols in lieu of the sufficient statistics for maximum likelihood (ML) detection of these symbols. This structure offers both optimal and suboptimal operation modes. The structures of the proposed receivers in both modes are derived for general LD codes. As special cases, we study two important subsets of LD codes, namely orthogonal and quasi-orthogonal codes, and examine the performance of the proposed receivers for these codes.
APA, Harvard, Vancouver, ISO, and other styles
24

Fossorier, Marc P. C. "Decoding of linear block codes based on ordered statistics." Thesis, 1994. http://hdl.handle.net/10125/9751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Genga, Yuval Odhiambo. "Techniques to improve iterative decoding of linear block codes." Thesis, 2019. https://hdl.handle.net/10539/29047.

Full text
Abstract:
A Thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy in the Centre for Telecommunications Access and Services, School of Electrical and Information Engineering, October 2019
In the field of forward error correction, the development of decoding algorithms with a high error correction performance and tolerable complexity has been of great interest for the reliable transmission of data through a noisy channel. The focus of the work done in this thesis is to exploit techniques used in forward error correction in the development of an iterative soft-decision decoding approach that yields a high performance in terms of error correction and a tolerable computational complexity cost when compared to existing decoding algorithms. The decoding technique developed in this research takes advantage of the systematic structure exhibited by linear block codes to implement an information set decoding approach to correct errors in the received vector output from the channel. The proposed decoding approach improves the iterative performance of the algorithm, as the decoder is only required to detect and correct a subset of the symbols from the received vector. These symbols are referred to as the information set. The information set, which matches the length of the message, is then used to decode the entire codeword. The decoding approach presented in the thesis is tested on both Reed Solomon and Low Density Parity Check codes. The implementation of the decoder varies for the two linear block codes due to their different structural properties. Reed Solomon codes have the advantage of a row rank inverse property which enables the construction of a partial systematic structure using any set of columns in the parity check matrix. This property provides a more direct implementation for finding the information set required by the decoder based on the soft reliability information. However, the dense structure of the parity check matrix of Reed Solomon codes presents challenges in terms of error detection and correction for the proposed decoding approach.
To counter this problem, a bit-level implementation of the decoding technique for Reed Solomon codes is presented in the thesis. A parity check matrix extension technique is also proposed, in which low weight codewords from the dual code that match the minimum distance of the code are added to the parity check matrix during the decoding process. This helps add sparsity to the symbol-level implementation of the proposed decoder, which in turn helps with the efficient exchange of the soft information during the message passing stage. Most high performance Low Density Parity Check codes proposed in the literature lack a systematic structure, which presents a challenge for the proposed decoding approach in obtaining the information set. A systematic construction for a Quasi-Cyclic Low Density Parity Check code is therefore also presented in this thesis so as to allow for information set decoding. The proposed construction is able to match the error correction performance of a high performance Quasi-Cyclic Low Density Parity Check matrix design, while having the benefit of a low complexity construction for the encoder. In addition, this thesis proposes a stopping condition for iterative decoding algorithms based on the information set decoding technique. This stopping condition is applied to other high performance iterative decoding algorithms for both Reed Solomon codes and Low Density Parity Check codes so as to improve their iterative performance, improving the overall efficiency of the decoding algorithms.
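The order-0 core of such an information set approach (hard-decide the k most reliable positions, invert the corresponding columns, re-encode) can be sketched for a toy binary code; this is an illustrative stand-in, not the thesis's algorithm, and the (7,4) Hamming code and BPSK mapping are assumptions:

```python
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]  # (7,4) Hamming code, an illustrative stand-in

def gf2_solve(A, b):
    """Solve A x = b over GF(2); return None if A is singular."""
    k = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(k):
        piv = next((r for r in range(col, k) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(k):
            if r != col and M[r][col]:
                M[r] = [a ^ c for a, c in zip(M[r], M[col])]
    return [M[i][k] for i in range(k)]

def info_set_decode(received):
    """Order-0 sketch: hard-decide the k most reliable BPSK samples
    (bit b was sent as 1 - 2b), solve for the message through the
    corresponding columns of G, then re-encode the full codeword.
    The thesis iterates and adds reliability processing beyond this."""
    k, n = len(G), len(G[0])
    idx = sorted(range(n), key=lambda j: -abs(received[j]))[:k]
    hard = [1 if received[j] < 0 else 0 for j in idx]
    A = [[G[i][j] for i in range(k)] for j in idx]  # k x k submatrix
    msg = gf2_solve(A, hard)
    if msg is None:
        return None  # chosen positions do not form an information set
    return [sum(msg[i] & G[i][j] for i in range(k)) % 2 for j in range(n)]
```

A low-magnitude (unreliable) sample is simply excluded from the information set, so an error confined to it never enters the decision.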
PH2020
APA, Harvard, Vancouver, ISO, and other styles
26

Nori, Aditya Vithal. "Unifying Views Of Tail-Biting Trellises For Linear Block Codes." Thesis, 2005. https://etd.iisc.ac.in/handle/2005/1441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Nori, Aditya Vithal. "Unifying Views Of Tail-Biting Trellises For Linear Block Codes." Thesis, 2005. http://etd.iisc.ernet.in/handle/2005/1441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Tai, Chih-Han, and 戴志翰. "Applications of Linear Codes:Joint JPEG-Block Codes and Product-based Data Hiding Codes." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/13873039712661821884.

Full text
Abstract:
碩士
國立中興大學
電機工程學系所
98
This paper proposes a joint JPEG-block coding system (JJBC) based on BCJR decoding that maintains an acceptable image quality over AWGN and wireless channels. The joint design of JPEG compression and block codes in the transmitter has the advantages of avoiding error propagation and of low decoding complexity, compared to the conventional tandem system with separate optimization of source coding and channel coding. The PSNR performance of the JJBC system is much better than that of the tandem system when the signal-to-noise ratio is low. Experimental results show that the proposed JJBC system achieves a 6 dB (3 dB) reduction at PSNR = 20 dB, compared to the conventional tandem system, under wireless IEEE 802.11b (AWGN) channels. This paper also uses product codes to propose a fast and effective data hiding method. Experimental results show that the proposed method performs well compared with previously published methods.
APA, Harvard, Vancouver, ISO, and other styles
29

Buttner, Werner Heinrich. "Trellis decoding of Reed Solomon and related linear block codes." Diss., 1999. http://hdl.handle.net/2263/30447.

Full text
Abstract:
Please read the abstract in the section 00front of this document
Dissertation (M Eng (Electronic Engineering))--University of Pretoria, 2006.
Electrical, Electronic and Computer Engineering
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Wei, and 陳瑋. "A Further Study on Modified-t Algorithm for Linear Block Codes." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/3th524.

Full text
Abstract:
碩士
國立臺北科技大學
電機工程系研究所
99
For an (n, k) linear block code, maximum likelihood decoding attains the best error performance. However, the decoding complexity and the hardware requirements grow with the dimension of the code, incurring a huge amount of computation time. Therefore, we may reduce the complexity by deleting unnecessary paths using the t-algorithm, whose error performance is close to that of maximum likelihood decoding. Nevertheless, there is a limited range for selecting the parameter t: when t passes a threshold, the performance degrades. In this thesis, based on the t-algorithm, a modified algorithm is proposed to broaden the range of t. The resulting parameter t can be adjusted according to the channel condition.
APA, Harvard, Vancouver, ISO, and other styles
31

Swarts, Francis. "Undetected error behaviour of linear block codes on channels with memory." Thesis, 2014. http://hdl.handle.net/10210/9599.

Full text
Abstract:
D.Ing. (Electrical and Electronic)
In this thesis, the problem of undetected errors in digital communication systems employing error detection as the only means of error control, is investigated. In the past, the undetected error probability of linear block codes, was mainly investigated on the binary symmetric channel, which is memoryless. With this thesis, the main aim was to investigate the undetected error probability for linear block codes, on channels with memory. The channel models investigated are the Gilbert-Elliott and Fritchman channel models. Three techniques for calculating the undetected error probability on channels with memory are investigated. These techniques are: (i) Exhaustive codeword generation, (ii) Simulation and (iii) A trellis based technique. The trellis description of a block code based on the states of a syndrome calculating linear feedback shift register, formed the basis of the latter technique. The calculation of the weight spectrum of binary linear block codes, is still largely an unsolved problem. Using the trellis description of a binary linear block code, referred to earlier, we propose a description of binary linear block codes based on graphs, and from this we develop a technique for calculating the weight spectrum of binary linear block codes. The weight spectra of block codes determine the undetected error probability of these codes when used on the binary symmetric channel. Very favourable results were obtained through the application of the techniques developed for undetected error probability calculation as well as weight spectrum calculation.
APA, Harvard, Vancouver, ISO, and other styles
32

SHAW, HUNG-HSI, and 蕭煥錫. "The Multi-level Squaring Construction for the Trellises of Linear Block Codes." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/86976520780121033470.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Yan, Wei-Min, and 嚴偉&#x73C9. "Combination of Non-Disjoint Sub-Block Partition with Linear Block Codes for PAPR Reduction in OFDM System." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/48407208352703449691.

Full text
Abstract:
碩士
國立中興大學
通訊工程研究所
105
In this thesis, we propose a modified PTS algorithm that partitions an OFDM block into non-disjoint sub-blocks and uses linear block codes as phase factors. Through the concept of Gray codes, the linear block codes are grown in parallel and in serial. Compared with conventional PTS, our algorithms have low computational complexity for searching phase factors, and simulation results show that the modified PTS with a non-disjoint partition achieves an obvious improvement in PAPR reduction. In addition, the side information provides an extra error-correction property.
APA, Harvard, Vancouver, ISO, and other styles
34

Wu, Ke-liang, and 吳克良. "The Study of Linear Block Codes of Maximum-Likelihood Decoding based on Improved." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/94151116974603155757.

Full text
Abstract:
碩士
國立中央大學
通訊工程學系在職專班
102
Channel coding has been applied extensively to various communication systems to overcome channel noise and ensure that transmitted information can be received correctly. The information symbols are coded by adding redundancy before transmission so that channel errors can be recovered. The receiver uses a specific decoding algorithm to perform error detection and correction. The A* decoding algorithm is a tree search algorithm used to find the shortest path within a graph, with a branch-and-bound approach determining the metric of each vertex in the tree. The algorithm estimates the metric between the target and each vertex so that it can prune irrelevant vertices early in the search, find the most likely answer in a short period of time, and thereby increase efficiency. The A* decoding algorithm has been used to implement maximum-likelihood decoding of linear block codes, and it reduces the number of searched edges. In this thesis, we take the shortest-path search of the commonly used A* decoding algorithm as a reference and improve it, retaining the advantages of the original algorithm, and apply it to the maximum-likelihood decoding of binary linear block codes. The performance of the improved A* decoding algorithm is compared through computer simulation.
APA, Harvard, Vancouver, ISO, and other styles
35

Huang, Yi-Chun, and 黃逸淳. "Application of the space time block codes on the uniform linear array antennas." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/63929120294953047902.

Full text
Abstract:
碩士
中原大學
電子工程研究所
96
The Alamouti code is an orthogonal code with transmission rate 1, but it cannot be used with more than two transmit antennas. The uniform linear array (ULA) can be applied in the transmitter together with the Alamouti code, and the simulation results show that this combination achieves better performance.
APA, Harvard, Vancouver, ISO, and other styles
36

Li, Oei-Siang, and 李維祥. "Combined Linear Block Codes with SLM and PTS to reduce PAPR in OFDM." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/30594370335678463795.

Full text
Abstract:
碩士
國立中興大學
電機工程學系所
94
Multicarrier transmission, also known as orthogonal frequency division multiplexing (OFDM), is a technique with a long history that has recently seen rising popularity in wireless and wireline applications. One of the major drawbacks of multicarrier transmission is the high peak-to-average power ratio (PAPR) of the transmit signal. We propose a concept to reduce the PAPR in OFDM systems by using the standard arrays of linear block codes. Our scheme may be regarded as a modified version of selective mapping (SLM) and partial transmit sequence (PTS), which reduces the PAPR by reducing the code rate and selecting the candidate with minimum PAPR from several coset leaders as the transmit signal. Because the coset leaders of a linear code are used for scrambling in our scheme, no side information is required to be transmitted, and the received signal can be easily decoded by syndrome decoding. Finally, our simulation results show that our scheme has good performance in PAPR reduction.
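A minimal sketch of the selection step, under assumptions not in the abstract (BPSK subcarriers, a naive unoversampled DFT, and made-up coset leaders):

```python
import cmath
import math

def papr_db(bits):
    """PAPR (in dB) of a BPSK-modulated OFDM symbol, computed with a naive
    inverse DFT and no oversampling -- for illustration only."""
    n = len(bits)
    sym = [1 - 2 * b for b in bits]
    power = []
    for t in range(n):
        s = sum(sym[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
        power.append(abs(s) ** 2)
    return 10 * math.log10(max(power) / (sum(power) / n))

# Hypothetical coset leaders of a small linear code; in the scheme described
# above they come from the standard array, and the all-zero leader leaves
# the data untouched.
LEADERS = [[0] * 8,
           [1, 0, 1, 0, 1, 0, 1, 0],
           [1, 1, 0, 0, 1, 1, 0, 0],
           [0, 1, 1, 0, 1, 0, 0, 1]]

def slm_select(data):
    """Scramble the data with every coset leader and keep the candidate with
    minimum PAPR; syndrome decoding at the receiver identifies which leader
    was used, so no side information needs to be sent."""
    return min(([d ^ l for d, l in zip(data, lead)] for lead in LEADERS),
               key=papr_db)
```

The all-zero data block is the worst case here (a single time-domain impulse, PAPR of n), and scrambling with a less structured coset leader spreads that energy out.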
APA, Harvard, Vancouver, ISO, and other styles
37

Chandran, Aneesh. "Investigation of the use of infinite impulse response filters to construct linear block codes." Thesis, 2016. http://hdl.handle.net/10539/22669.

Full text
Abstract:
A dissertation submitted in fulfilment of the requirements for the degree of Master of Science in Information Engineering, School of Electrical and Information Engineering, August 2016
The work presented extends and contributes to research in error-control coding and information theory. The work focuses on the construction of block codes using an IIR filter structure. Although previous work in this area used FIR filter structures for error detection, they were inherently used in conjunction with other error-control codes; there has been no investigation into using IIR filter structures to create codewords, let alone a justification of their validity. In the research presented, linear block codes are created using IIR filters, and their error-correcting capabilities are investigated. The construction of short codes that achieve the Griesmer bound is shown. The potential to construct long codes is discussed, and it is shown how the construction is constrained by high computational complexity. The G-matrices for these codes are also obtained from a computer search, which shows that they do not have a Quasi-Cyclic structure, and these codewords have been tested to show that they are not cyclic. Further analysis has shown that IIR filter structures implement truncated cyclic codes, which are shown to be implementable using an FIR filter. The research also shows that the codewords created from IIR filter structures are valid, by decoding them with an existing iterative soft-decision decoder. This represents a unique and valuable contribution to the field of error-control coding and information theory.
MT2017
APA, Harvard, Vancouver, ISO, and other styles
38

Lin, Tien-Yu, and 林典育. "Study of Hybrid Automatic Request Schemes and Tree-Search Decoding of Linear Block Codes." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/71231770591464490295.

Full text
Abstract:
博士
國立臺灣大學
電信工程學研究所
101
In this dissertation, we study two topics in communications: hybrid automatic repeat request (ARQ) and tree-search decoding of linear block codes. For the first topic, we propose a hybrid ARQ (HARQ) scheme with subpacket transmission and subpacket scheduling for systems with constant packet lengths. In this scheme, each transmission packet comprises two subpackets of equal length. Compared to the conventional type II HARQ scheme, the proposed HARQ scheme can more effectively control the error-correcting capability and thus attain better throughput efficiency in the moderate to high signal-to-noise ratio (SNR) regime. Based on the proposed HARQ scheme, we then present two modified versions. The first version, with higher complexity, can provide further throughput improvement at low SNR. The second version, with the highest complexity, can obtain additional throughput over the first version at moderate to high SNR. For the second topic, we investigate a tree-search decoding algorithm, named A*, and aim to effectively reduce the decoding latency (complexity). We propose two complexity-reduction techniques for the A* algorithm. In the first technique, the search is embedded with depth constraints in which the numbers of bit differences from the most reliable positions at different depths are limited. In the second technique, apart from the tree search, the algorithm generates candidate codewords by processing the newly updated candidate codeword. For both proposed techniques, the searching complexity can be significantly reduced at the cost of slight performance loss. In addition, the two proposed techniques can be effectively combined to obtain a more efficient modified A* algorithm. We also investigate A* decoding for block coded schemes with interblock memory.
With interblock memory, both the error performance and the average decoding latency of the A* algorithm can be significantly improved in the high SNR regime. Finally, we discuss a HARQ scheme which employs packet division together with A* decoding. This scheme is easy to implement and can achieve throughput as good as the conventional type II HARQ scheme.
APA, Harvard, Vancouver, ISO, and other styles
39

Staphorst, Leonard. "Viterbi Decoded Linear Block Codes for Narrowband and Wideband Wireless Communication Over Mobile Fading Channels." Diss., 2005. http://hdl.handle.net/2263/27090.

Full text
Abstract:
Since the frantic race towards the Shannon bound [1] commenced in the early 1950’s, linear block codes have become integral components of most digital communication systems. Both binary and non-binary linear block codes have proven themselves as formidable adversaries against the impediments presented by wireless communication channels. However, prior to the landmark 1974 paper [2] by Bahl et al. on the optimal Maximum a-Posteriori Probability (MAP) trellis decoding of linear block codes, practical linear block code decoding schemes were not only based on suboptimal hard decision algorithms, but also code-specific in most instances. In 1978 Wolf expedited the work of Bahl et al. by demonstrating the applicability of a block-wise Viterbi Algorithm (VA) to Bahl-Cocke-Jelinek-Raviv (BCJR) trellis structures as a generic optimal soft decision Maximum-Likelihood (ML) trellis decoding solution for linear block codes [3]. This study, largely motivated by code implementers’ ongoing search for generic linear block code decoding algorithms, builds on the foundations established by Bahl, Wolf and other contributing researchers by thoroughly evaluating the VA decoding of popular binary and non-binary linear block codes on realistic narrowband and wideband digital communication platforms in lifelike mobile environments. Ideally, generic linear block code decoding algorithms must not only be modest in terms of computational complexity, but they must also be channel aware. Such universal algorithms will undoubtedly be integrated into most channel coding subsystems that adapt to changing mobile channel conditions, such as the adaptive channel coding schemes of current Enhanced Data Rates for GSM Evolution (EDGE), 3rd Generation (3G) and Beyond 3G (B3G) systems, as well as future 4th Generation (4G) systems. In this study classic BCJR linear block code trellis construction is annotated and applied to contemporary binary and non-binary linear block codes. 
Since BCJR trellis structures are inherently sizable and intricate, rudimentary trellis complexity calculation and reduction algorithms are also presented and demonstrated. The block-wise VA for BCJR trellis structures, initially introduced by Wolf in [3], is revisited and improved to incorporate Channel State Information (CSI) during its ML decoding efforts. In order to accurately appraise the Bit-Error-Rate (BER) performances of VA decoded linear block codes in authentic wireless communication environments, Additive White Gaussian Noise (AWGN), flat fading and multi-user multipath fading simulation platforms were constructed. Included in this task was the development of baseband complex flat and multipath fading channel simulator models, capable of reproducing the physical attributes of realistic mobile fading channels. Furthermore, a complex Quadrature Phase Shift Keying (QPSK) system was employed as the narrowband communication link of choice for the AWGN and flat fading channel performance evaluation platforms. The versatile B3G multi-user multipath fading simulation platform, however, was constructed using a wideband RAKE receiver-based complex Direct Sequence Spread Spectrum Multiple Access (DS/SSMA) communication system that supports unfiltered and filtered Complex Spreading Sequences (CSS). This wideband platform is not only capable of analysing the influence of frequency selective fading on the BER performances of VA decoded linear block codes, but also the influence of the Multi-User Interference (MUI) created by other users active in the Code Division Multiple Access (CDMA) system. CSS families considered during this study include Zadoff-Chu (ZC) [4, 5], Quadriphase (QPH) [6], Double Sideband (DSB) Constant Envelope Linearly Interpolated Root-of-Unity (CE-LI-RU) filtered Generalised Chirp-like (GCL) [4, 7-9] and Analytical Bandlimited Complex (ABC) [7, 10] sequences.
Numerous simulated BER performance curves, obtained using the AWGN, flat fading and multi-user multipath fading channel performance evaluation platforms, are presented in this study for various important binary and non-binary linear block code classes, all decoded using the VA. Binary linear block codes examined include Hamming and Bose-Chaudhuri-Hocquenghem (BCH) codes, whereas popular burst error correcting non-binary Reed-Solomon (RS) codes receive special attention. Furthermore, a simple cyclic binary linear block code is used to validate the viability of employing the reduced trellis structures produced by the proposed trellis complexity reduction algorithm. The simulated BER performance results shed light on the error correction capabilities of these VA decoded linear block codes when influenced by detrimental channel effects, including AWGN, Doppler spreading, diminished Line-of-Sight (LOS) signal strength, multipath propagation and MUI. It also investigates the impact of other pertinent communication system configuration alternatives, including channel interleaving, code puncturing, the quality of the CSI available during VA decoding, RAKE diversity combining approaches and CSS correlation characteristics. From these simulated results it can not only be gathered that the VA is an effective generic optimal soft input ML decoder for both binary and non-binary linear block codes, but also that the inclusion of CSI during VA metric calculations can fortify the BER performances of such codes beyond that attainable by classic ML decoding algorithms.
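The CSI-aware metric that the study incorporates into the VA can be illustrated outside the trellis; a hedged sketch for BPSK on a flat fading channel with known amplitudes, using a toy repetition-code codebook (an assumption, not one of the codes evaluated above):

```python
def csi_ml_decode(received, fading, codebook):
    """Soft-decision ML decoding with Channel State Information: choose the
    codeword minimising sum_i (r_i - a_i * s_i)^2, where a_i is the known
    fading amplitude on symbol i and s_i = 1 - 2*b_i is its BPSK image.
    A generic sketch of a CSI-aware metric, not the study's trellis
    implementation."""
    def metric(cw):
        return sum((r - a * (1 - 2 * b)) ** 2
                   for r, a, b in zip(received, fading, cw))
    return min(codebook, key=metric)

# Toy codebook: the (3,1) repetition code.
REP3 = [(0, 0, 0), (1, 1, 1)]
```

Weighting each symbol by its fading amplitude automatically discounts deeply faded samples, which is why CSI in the metric fortifies BER performance beyond unweighted ML decoding.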
Dissertation (MEng(Electronic))--University of Pretoria, 2006.
Electrical, Electronic and Computer Engineering
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Yeng-Ting, and 李彥霆. "Combination of Partial Transmit Sequence with Linear Block Codes for PAPR Reduction in OFDM System." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5441109%22.&searchmode=basic.

Full text
Abstract:
碩士
國立中興大學
電機工程學系所
107
Orthogonal Frequency Division Multiplexing (OFDM) is a multi-carrier system capable of high-rate data transmission and has many advantages. One of its important shortcomings is the high peak-to-average power ratio (PAPR). Many methods have been proposed to solve this problem, such as SLM and PTS; this thesis is mainly based on PTS. Compared with the traditional partial transmit sequence (PTS), this thesis uses tree and trellis structures of error-correcting codes (ECC), which reduce both the peak-to-average power ratio and the search complexity, and on this basis applies the method to data of larger length.
APA, Harvard, Vancouver, ISO, and other styles
41

Krishnan, K. Murali. "Computational Problems In Codes On Graphs." Thesis, 2007. https://etd.iisc.ac.in/handle/2005/487.

Full text
Abstract:
Two standard graph representations for linear codes are the Tanner graph and the tail-biting trellis. Such graph representations allow the decoding problem for a code to be phrased as a computational problem on the corresponding graph and yield graph-theoretic criteria for good codes. When a Tanner graph for a code is used for communication across a binary erasure channel (BEC) and decoding is performed using the standard iterative decoding algorithm, the maximum number of correctable erasures is determined by the stopping distance of the Tanner graph. Hence the computational problem of determining the stopping distance of a Tanner graph is of interest. In this thesis it is shown that computing the stopping distance of a Tanner graph is NP-hard. It is also shown that there can be no (1 + ε)-approximation algorithm for the problem for any ε > 0 unless P = NP, and that an approximation ratio of 2^((log n)^(1-ε)) for any ε > 0 is impossible unless NP ⊆ DTIME(n^poly(log n)). One way to construct Tanner graphs of large stopping distance is to ensure that the graph has large girth. It is known that stopping distance increases exponentially with the girth of the Tanner graph. A new elementary combinatorial construction algorithm for an almost regular LDPC code family with provable Ω(log n) girth and O(n²) construction complexity is presented. The bound on the girth is close within a factor of two to the best known upper bound on girth. The problem of linear-time exact maximum likelihood decoding of tail-biting trellises has remained open for several years. An O(n)-complexity approximate maximum likelihood decoding algorithm for tail-biting trellises is presented and analyzed. Experiments indicate that the algorithm performs close to the ideal maximum likelihood decoder.
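Since computing the stopping distance is NP-hard, only toy instances admit exhaustive search; a minimal sketch, where the parity check matrix in the test is an illustrative (7,4) Hamming matrix rather than one from the thesis:

```python
from itertools import combinations

def stopping_distance(H):
    """Smallest nonempty set S of variable nodes (columns of H) such that
    every check node (row) adjacent to S has at least two neighbours in S,
    i.e. no row meets S in exactly one position.  Exhaustive search,
    feasible only at toy scale, consistent with the NP-hardness result
    summarised above."""
    m, n = len(H), len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if all(sum(H[r][c] for c in S) != 1 for r in range(m)):
                return size
    return None
```

The support of any minimum-weight codeword is a stopping set, so the stopping distance never exceeds the minimum distance; iterative BEC decoding gets stuck exactly when the erased positions contain a stopping set.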
APA, Harvard, Vancouver, ISO, and other styles
42

Krishnan, K. Murali. "Computational Problems In Codes On Graphs." Thesis, 2007. http://hdl.handle.net/2005/487.

Full text
Abstract:
Two standard graph representations for linear codes are the Tanner graph and the tail-biting trellis. Such graph representations allow the decoding problem for a code to be phrased as a computational problem on the corresponding graph and yield graph-theoretic criteria for good codes. When a Tanner graph for a code is used for communication across a binary erasure channel (BEC) and decoding is performed using the standard iterative decoding algorithm, the maximum number of correctable erasures is determined by the stopping distance of the Tanner graph. Hence the computational problem of determining the stopping distance of a Tanner graph is of interest. In this thesis it is shown that computing the stopping distance of a Tanner graph is NP-hard. It is also shown that there can be no (1 + ε)-approximation algorithm for the problem for any ε > 0 unless P = NP, and that an approximation ratio of 2^((log n)^(1-ε)) for any ε > 0 is impossible unless NP ⊆ DTIME(n^poly(log n)). One way to construct Tanner graphs of large stopping distance is to ensure that the graph has large girth. It is known that stopping distance increases exponentially with the girth of the Tanner graph. A new elementary combinatorial construction algorithm for an almost regular LDPC code family with provable Ω(log n) girth and O(n²) construction complexity is presented. The bound on the girth is close within a factor of two to the best known upper bound on girth. The problem of linear-time exact maximum likelihood decoding of tail-biting trellises has remained open for several years. An O(n)-complexity approximate maximum likelihood decoding algorithm for tail-biting trellises is presented and analyzed. Experiments indicate that the algorithm performs close to the ideal maximum likelihood decoder.
APA, Harvard, Vancouver, ISO, and other styles
43

Paul, Prabal. "On The Peak-To-Average-Power-Ratio Of Affine Linear Codes." Thesis, 2006. https://etd.iisc.ac.in/handle/2005/350.

Full text
Abstract:
Employing an error control code is one of the techniques to reduce the Peak-to-Average Power Ratio (PAPR) in an Orthogonal Frequency Division Multiplexing system; a well-known class of such codes is the cosets of Reed-Muller codes. In this thesis, classes of such coset-codes of arbitrary linear codes are considered. It has been proved that the size of such a code can be doubled with marginal or no increase in the PAPR. Conditions for employing this method iteratively have been enunciated; in fact, this method has enabled obtaining the optimal coset-codes. The PAPR of the coset-codes of the extended codes is obtained from the PAPR of the corresponding coset-codes of the parent code. The utility of a special type of lengthening in PAPR studies is established.
APA, Harvard, Vancouver, ISO, and other styles
45

Meng, Jin. "Linear Interactive Encoding and Decoding Schemes for Lossless Source Coding with Decoder Only Side Information." Thesis, 2008. http://hdl.handle.net/10012/3928.

Full text
Abstract:
Near-lossless source coding with side information only at the decoder was first considered by Slepian and Wolf in the 1970s and rediscovered recently due to applications such as sensor networks and distributed video coding. Suppose X is a source and Y is the side information. The coding scheme proposed by Slepian and Wolf, called SW coding, in which information flows only from the encoder to the decoder, was shown to achieve the rate H(X|Y) asymptotically for stationary ergodic source pairs, but not in the non-ergodic case, as shown by Yang and He. Recently, a new source coding paradigm called interactive encoding and decoding (IED) was proposed for near-lossless coding with side information only at the decoder, where information flows both ways, from the encoder to the decoder and vice versa. The results by Yang and He show that IED schemes are much more appealing than SW coding schemes for applications where interaction between the encoder and the decoder is possible. However, the IED schemes proposed by Yang and He do not have an intrinsic structure that is amenable to practical design and implementation. Towards practical design, we restrict the encoding method to linear block codes, resulting in linear IED schemes. It is then shown that this restriction does not undermine the asymptotic performance of IED. Another step in the practical design of IED schemes is to make the computational complexity incurred by encoding and decoding feasible. In the framework of linear IED, a scheme can be conveniently described by parity-check matrices, which leads to an interesting trade-off between the density of the associated parity-check matrices and the resulting symbol error probability. To implement the idea of linear IED and follow the intuition provided by the result above, Low-Density Parity-Check (LDPC) codes and Belief Propagation (BP) decoding are utilized.
A successive LDPC code is proposed, along with a new BP decoding algorithm that applies to the case where the correlation between Y and X can be modeled as a finite state channel. Finally, simulation results show that linear IED schemes are indeed superior to SW coding schemes.
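The role of the parity-check matrix in such side-information schemes can be illustrated with the classical non-interactive syndrome approach: the encoder transmits only the syndrome of the source block, and the decoder combines it with the side information Y. A minimal sketch using the (7,4) Hamming code as the linear block code (an illustrative choice; the thesis works with LDPC codes and interactive protocols):

```python
# Parity-check matrix of the (7,4) Hamming code: column j (1-indexed)
# is the binary representation of j, least significant bit first.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(H, v):
    return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

def sw_encode(x):
    """Encoder sends only the 3-bit syndrome instead of the 7-bit block."""
    return syndrome(H, x)

def sw_decode(s, y):
    """H(x XOR y) = Hx XOR Hy = s XOR Hy; for this H the resulting syndrome
    is the binary index of the single position where x and y differ."""
    diff = tuple(a ^ b for a, b in zip(s, syndrome(H, y)))
    if diff == (0, 0, 0):
        return list(y)
    pos = diff[0] + 2 * diff[1] + 4 * diff[2] - 1
    x_hat = list(y)
    x_hat[pos] ^= 1
    return x_hat
```

The rate saving (3 bits instead of 7) relies on the decoder's side information differing from the source in at most one position; an interactive scheme can adapt the rate when that correlation is unknown.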
APA, Harvard, Vancouver, ISO, and other styles
46

Machireddy, Amrutha. "Learning Non-linear Mappings from Data with Applications to Priority-based Clustering, Prediction, and Detection." Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5670.

Full text
Abstract:
With the volume of data generated in today's internet of things, learning algorithms that extract and understand the underlying relations between the various attributes of data have gained momentum. This thesis is focused on learning algorithms that extract meaningful relations from data using both unsupervised and supervised learning. Vector quantization techniques are popularly used for applications in contextual data clustering, data visualization and high-dimensional data exploration. Existing vector quantization techniques, such as K-means and its variants and those derived from self-organizing maps, consider the input data vector as a whole without prioritizing individual coordinates. Motivated by applications requiring priorities over data coordinates, we develop a theory for clustering data with different priorities over the coordinates, called data-dependent priority-based soft vector quantization. Based on the input data distribution, the priorities over the data coordinates are learnt by estimating the marginal distributions over each coordinate. The number of neurons approximating each coordinate based on its priority is determined through a reinforcement learning algorithm. An analysis of the convergence of the proposed algorithm and of the probability of misclassification is presented, along with simulation results on various data sets. Self-organizing maps (SOM) are popularly used for applications in learning features, vector quantization, and recalling spatial input patterns. The adaptation rule in SOMs is based on the Euclidean distance between the input vector and the neuronal weight vector, along with a neighborhood function that brings in a topological arrangement of the neurons in the output space. It is capable of learning the spatial correlations among the data but fails to capture temporal correlations present in a sequence of inputs.
We formulate a potential function based on a spatio-temporal metric and create hierarchical vector quantization feature maps by embedding memory structures similar to long short-term memories across the feature maps to learn the spatio-temporal correlations in the data across clusters. Error correction codes such as low-density parity-check codes are popularly used to enhance the performance of digital communication systems. The current decoding framework relies on exchanging beliefs over a Tanner graph, which the encoder and decoder are aware of. However, this information may not be readily available, for example, in covert communication. The main idea is to build a neural network to learn the encoder mappings in the absence of knowledge of the Tanner graph. We propose a scheme to learn the mappings using the back-propagation algorithm, and investigate the choice of different cost functions and the number of hidden neurons for learning the encoding function. The proposed scheme is capable of learning the parity-check equations over a binary field towards identifying the validity of a codeword. Simulation results over synthetic data show that our algorithm is indeed capable of learning the encoder mappings. We also propose an approach to identify noisy codes using uncertainty estimation and to decode them using autoencoders. Next, we consider convolutional neural networks, which are widely used in natural language processing, video analysis, and image recognition. However, the popularly used max-pooling layer discards most of the data, which is a drawback in applications such as prediction of video frames. We propose an adaptive prediction and classification network based on a data-dependent pooling architecture and formulate a combined cost function for minimizing the prediction and classification errors. We also detect the presence of an unseen class during testing for digit prediction in videos.
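The SOM adaptation rule described in this abstract (winner selection by Euclidean distance, plus a neighborhood function over the output grid) can be sketched as follows for a one-dimensional grid; the Gaussian neighborhood and fixed learning rate are generic textbook choices, not the thesis's spatio-temporal extension.

```python
import math

def som_step(weights, x, lr, sigma):
    """One SOM adaptation step: find the best-matching unit (BMU) by
    Euclidean distance, then pull every neuron toward x, weighted by a
    Gaussian neighborhood over the 1-D output grid index."""
    bmu = min(range(len(weights)),
              key=lambda i: sum((w - xi) ** 2 for w, xi in zip(weights[i], x)))
    for i, w in enumerate(weights):
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
        for d in range(len(w)):
            w[d] += lr * h * (x[d] - w[d])
    return bmu
```

Because the neighborhood function also moves the BMU's grid neighbors, nearby neurons come to represent nearby inputs, which is the topological ordering the abstract refers to.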
APA, Harvard, Vancouver, ISO, and other styles
47

Μέρμιγκας, Παναγιώτης. "Αποκωδικοποιητής μέγιστης πιθανοφάνειας για κώδικες LDPC και υλοποίηση σε FPGA." Thesis, 2013. http://hdl.handle.net/10889/6012.

Full text
Abstract:
In the first part of this thesis, the basic principles of Coding Theory and Communication Systems are introduced. In order to correct errors in the case of transmission through a noisy channel, channel coding with Linear Block Codes is applied, more specifically Low-Density Parity-Check (LDPC) codes. The mathematical description of such codes is defined and useful definitions and theorems are specified. In addition, the Maximum Likelihood (ML) criterion, on which the development of the relevant decoder is based, is specified. The second part consists of the simulation of the ML decoder in software and its hardware implementation on an FPGA, in the cases where either soft or hard information is used as the decoder's input. Furthermore, the decoder's architecture and the design methodology used are presented. Improvements to the design of the decoder are introduced which lead to a reduction in the required area on chip. The experimental results of the two implementations are compared to the case of an iterative decoder, the bit error rate plots are produced, and the appropriate conclusions are drawn.
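An exact ML decoder of the kind this thesis implements can, for a small code, be written directly by enumerating all codewords and minimizing the relevant metric: squared Euclidean distance on BPSK symbols for soft inputs, Hamming distance for hard inputs. A sketch with an illustrative (5,2) code (not a code from the thesis):

```python
from itertools import product

G = [
    [1, 0, 0, 1, 1],   # generator matrix of an illustrative (5, 2) code
    [0, 1, 1, 0, 1],
]

def codewords(G):
    k = len(G)
    for msg in product([0, 1], repeat=k):
        yield [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def ml_decode_soft(r):
    """Soft-input ML: r holds real received values, BPSK map 0 -> +1, 1 -> -1."""
    return min(codewords(G),
               key=lambda c: sum((ri - (1 - 2 * ci)) ** 2 for ri, ci in zip(r, c)))

def ml_decode_hard(bits):
    """Hard-input ML: minimum Hamming distance over all codewords."""
    return min(codewords(G),
               key=lambda c: sum(b != ci for b, ci in zip(bits, c)))
```

The enumeration costs 2^k metric evaluations, which is exactly why hardware ML decoders are restricted to short codes and iterative decoders are used beyond that.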
APA, Harvard, Vancouver, ISO, and other styles
48

Shih, Ian, and 施奕安. "Improving the Robustness of AudioWatermarking via Linear Block Code." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/bqp8us.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Electrical Engineering
Academic year 94 (2005-06)
Digital audio watermarking is a technique to embed copyright or other information into audio data. The embedded data is perceptually inaudible, so as to maintain the quality of the host signal. The method proposed in this thesis embeds the watermark into the audio signal using spread-spectrum modulation. To avoid the watermark being destroyed by transmission failure or signal processing, we add an error-correction coding technique based on a linear block code, so that the pixel error rate of the extracted watermark can be reduced; thus, the robustness of the watermark is enhanced. In the experiments, we observe how the results are influenced by the embedding strength α and by the error-correction coding. After MP3 compression, the results show how the error-correction coding reduces the pixel error rate of the watermark pattern.
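The pipeline this abstract describes (spread-spectrum embedding at strength α, protected by an error-correction layer) can be sketched as below. The repetition code and the informed, host-aware correlation detector are deliberate simplifications standing in for the thesis's linear block code and extraction scheme.

```python
def ecc_encode(bits, r=3):
    """Illustrative error-correction layer: r-fold repetition of each bit."""
    return [b for b in bits for _ in range(r)]

def ecc_decode(bits, r=3):
    """Majority vote over each group of r received copies."""
    return [int(2 * sum(bits[i:i + r]) > r) for i in range(0, len(bits), r)]

def embed(host, bits, pn, alpha):
    """Spread-spectrum embedding: each bit b modulates one pseudo-noise
    block (+pn for 0, -pn for 1), added to the host at strength alpha."""
    L = len(pn)
    out = list(host)
    for i, b in enumerate(bits):
        s = 1 - 2 * b
        for j in range(L):
            out[i * L + j] += alpha * s * pn[j]
    return out

def extract(signal, host, pn, nbits):
    """Informed detection: correlate the residual with the PN block."""
    L = len(pn)
    bits = []
    for i in range(nbits):
        corr = sum((signal[i * L + j] - host[i * L + j]) * pn[j] for j in range(L))
        bits.append(0 if corr > 0 else 1)
    return bits
```

A corrupted sample can flip one extracted copy of a bit, yet the majority vote still recovers the watermark, which is the robustness gain the abstract attributes to the error-correction layer.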
APA, Harvard, Vancouver, ISO, and other styles
49

Liao, Wei-Chieh, and 廖偉傑. "Study on Decoding of The Linear Block Code Using Free Software." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/40720520368737836146.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
Academic year 97 (2008-09)
Error control coding is a commonly used technique in a communication system to protect transmitted data, which may be corrupted by noise or other causes. Free software is software that can be used, copied, studied, modified, and redistributed without restriction. In this thesis, we use free software to develop a simulation system for decoding of linear block codes. We work out and simplify several decoding methods for linear block codes, and use a graphical interface to obtain and compare the error performance and decoding complexity of the different methods. The program can save the simulated data and show the data graph simultaneously. We hope this research can serve as the basis of a more mature platform that can be used as a tool for teaching and research.
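One of the standard decoding methods such a simulation platform would include is syndrome (coset-leader) table decoding. A minimal sketch with an illustrative (6,3) code of minimum distance 3, which corrects any single bit error:

```python
from itertools import product

H = [
    [1, 1, 0, 1, 0, 0],   # parity-check matrix of an illustrative (6, 3) code
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(v):
    return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

def build_table():
    """Map each syndrome to its minimum-weight coset leader: enumerate
    error patterns in order of increasing weight, keep the first per syndrome."""
    n = len(H[0])
    table = {}
    for err in sorted(product([0, 1], repeat=n), key=sum):
        table.setdefault(syndrome(err), err)
    return table

TABLE = build_table()

def decode(r):
    """Subtract (XOR) the coset leader matching the received syndrome."""
    e = TABLE[syndrome(r)]
    return [a ^ b for a, b in zip(r, e)]
```

The table has 2^(n−k) entries, so this is practical only for short codes; that trade-off is one of the complexity comparisons a teaching platform can make visible.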
APA, Harvard, Vancouver, ISO, and other styles
50

Kot, Alan D. "On the construction, dimensionality, and decoding of linear block code trellises." Thesis, 1992. http://hdl.handle.net/2429/1589.

Full text
Abstract:
In principle, the coding gain of a digital communications system can be improved through the use of longer error control codes and soft-decision maximum-likelihood (ML) decoding. Unfortunately, the implementation of such techniques is limited by the computational requirements of the decoder. In this thesis, we consider several aspects of trellis-based ML, and 'near-ML', soft-decision decoding of general linear block codes. ML decoding is feasible for reasonably compact trellises using the Viterbi algorithm. For general linear block codes we present simple methods for trellis construction that are based on the methods of Wolf and Massey. It is confirmed that the trellises so constructed are minimal, and an improvement of Muder's lower bound on the maximum trellis dimension is found. It is shown that when equivalent codes are constructed by permutations of the symbol positions, the resulting trellis dimensions are fixed near either end, while in the central portion of the trellis the dimensions vary between Wolf's upper bound on the maximum dimension and a new lower bound on the minimum dimension. This lower bound and the improved bound on the maximum trellis dimension are exact for maximum distance separable codes. These bounds indicate that only codes (and their duals) whose smaller minimum distance min(dmin, d⊥min) is significantly less than the corresponding Singleton bound can possibly have a compact trellis. For trellises that are impractically large for decoding by the Viterbi algorithm, we consider near-ML decoding using partial searches of a code trellis or tree. Despite the fact that this approach is suboptimal in terms of the probability of error, it has the potential for an improved trade-off between coding gain and computational effort. The decoding problem faced by a partial trellis search is described in an alternative manner to Massey's variable-length code model.
As part of this approach, we specialize Massey's use of 'random tails' in order to exploit a priori knowledge of the code and the information-symbol distribution used. As a result there are some minor changes to the usual metric, but only in atypical situations, such as the use of unequal a priori information-symbol probabilities, non-linear codes, and non-symmetric, non-binary discrete memoryless channels. We introduce a simple and efficient (linear-time) method for the selection of the best metrics from among a number of contenders in a breadth-first search. This method can provide a sorted set of survivor metrics during M-algorithm decoding of binary linear block codes using only M comparisons in the worst case. The application of the method to the decoding of convolutional codes is also discussed. We present a method for soft-decision decoding of general linear block codes that we refer to as Reconfigurable Trellis (RT) decoding. RT decoding carries out a partial search on a different and more easily searched trellis (or tree) for the code. The trellis used for decoding depends upon the reliability of the received data, so it is determined 'on the fly'. Only a portion of the reconfigured trellis needs to be constructed, as guided by the partial search. While RT decoding requires some computational overhead before beginning each search, the search effort itself can be reduced very significantly, so that overall an improvement can be achieved in the trade-off between coding gain and computational effort. With respect to the performance analysis of suboptimal soft-decision decoding schemes, we extend the uniform error property of ML decoding of linear codes on a binary-input symmetric-output memoryless channel to the more general case of partial trellis searches.
For RT-M decoding of general linear block codes the uniform error property is used together with a novel channel model and a very simple search technique to estimate the increase in the probability of error over that of ML decoding. This approach can yield accurate results while avoiding the complexity of considering numerous soft-decision output values, and without requiring detailed knowledge of the code structure.
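The M-algorithm mentioned in this abstract is a breadth-first partial tree search that keeps only the M best partial paths at each depth; the selection of those M survivors is exactly where the thesis's linear-time method applies. The sketch below uses an illustrative systematic (5,2) code and heapq.nsmallest (O(n log M), not the thesis's M-comparison method) for the selection step.

```python
import heapq

def m_algorithm(r, G, M):
    """Breadth-first M-algorithm search of the code tree of a systematic
    binary code: extend each survivor by both values of the next
    information bit, keep the M best partial metrics, then complete
    survivors with their parity bits and pick the best full metric."""
    k = len(G)
    survivors = [(0.0, [])]          # (metric over systematic part, message prefix)
    for depth in range(k):
        ext = []
        for metric, msg in survivors:
            for b in (0, 1):
                # Squared Euclidean metric against BPSK symbol (0 -> +1, 1 -> -1).
                ext.append((metric + (r[depth] - (1 - 2 * b)) ** 2, msg + [b]))
        survivors = heapq.nsmallest(M, ext, key=lambda t: t[0])
    best = None
    for _, msg in survivors:
        c = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        full = sum((ri - (1 - 2 * ci)) ** 2 for ri, ci in zip(r, c))
        if best is None or full < best[0]:
            best = (full, c)
    return best[1]
```

With M = 2^k the search is exhaustive and coincides with ML decoding; smaller M trades a small loss in error probability for far fewer metric computations, the trade-off the thesis analyzes.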
APA, Harvard, Vancouver, ISO, and other styles