Dissertations / Theses on the topic 'Parity-check codes'

Consult the top 50 dissertations / theses for your research on the topic 'Parity-check codes.'

1

Adhikari, Dikshya. "The Role of Eigenvalues of Parity Check Matrix in Low-Density Parity Check Codes." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1707297/.

Abstract:
The new developments in coding theory research have revolutionized the application of coding to practical systems. Low-Density Parity-Check (LDPC) codes form a class of Shannon-limit-approaching codes suited to digital communication systems that require high reliability. This thesis investigates the underlying relationship between the spectral properties of the parity-check matrix and LDPC decoding convergence. The bit error rate of an LDPC code is plotted for parity-check matrices whose corresponding Laplacian matrices have different values of the Second Smallest Eigenvalue Modulus (SSEM). It is found that for a given (n,k) LDPC code, a large SSEM gives better error-floor performance than a small SSEM. The value of SSEM decreases as the sparseness of the parity-check matrix increases. The simulations also show that long LDPC codes have better error-floor performance than short codes. This thesis outlines an approach to analyzing LDPC decoding based on eigenvalue analysis of the corresponding parity-check matrix.
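The SSEM quantity at the centre of this thesis is easy to reproduce. Below is a minimal sketch (not the author's code) that builds the Tanner-graph Laplacian from a parity-check matrix and returns the second smallest eigenvalue modulus; the function name `ssem` and the Hamming-code example are my own illustration.

```python
import numpy as np

def ssem(H):
    """Second Smallest Eigenvalue Modulus (SSEM) of the Laplacian of the
    Tanner graph defined by a binary parity-check matrix H (m x n)."""
    m, n = H.shape
    # Bipartite adjacency: check nodes first, then variable nodes.
    A = np.zeros((m + n, m + n))
    A[:m, m:] = H
    A[m:, :m] = H.T
    D = np.diag(A.sum(axis=1))        # degree matrix
    L = D - A                         # graph Laplacian
    eig = np.sort(np.abs(np.linalg.eigvalsh(L)))
    return eig[1]                     # second smallest modulus

# Toy example: the (7,4) Hamming code's parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
print(ssem(H))
```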
2

Hayes, Bob. "LOW DENSITY PARITY CHECK CODES FOR TELEMETRY APPLICATIONS." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604497.

Abstract:
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Next generation satellite communication systems require efficient coding schemes that enable high data rates, require low overhead, and have excellent bit error rate performance. A newly rediscovered class of block codes called Low Density Parity Check (LDPC) codes has the potential to revolutionize forward error correction (FEC) because of the very high coding rates. This paper presents a brief overview of LDPC coding and decoding. An LDPC algorithm developed by Goddard Space Flight Center is discussed, and an overview of an accompanying VHDL development by L-3 Communications Cincinnati Electronics is presented.
3

Tee, James Seng Khien. "On concatenated single parity check codes and bit interleaved coded modulation." Thesis, University of Canterbury. Electrical and Electronic Engineering, 2001. http://hdl.handle.net/10092/5463.

Abstract:
In recent years, the invention of Turbo codes has spurred much interest in the coding community. Turbo codes are capable of approaching channel capacity closely at a decoding complexity much lower than previously thought possible. Although decoding complexity is relatively low, Turbo codes are still too complex to implement for many practical systems. This work is focused on low complexity channel coding schemes with Turbo-like performance. The issue of complexity is tackled by using single parity check (SPC) codes, arguably the simplest codes known. The SPC codes are used as component codes in multiple parallel and multiple serial concatenated structures to achieve high performance. An elegant technique for improving error performance by increasing the dimensionality of the code without changing the block length and code rate is presented. For high bandwidth efficiency applications, concatenated SPC codes are combined with 16-QAM Bit Interleaved Coded Modulation (BICM) to achieve excellent performance. Analytical and simulation results show that concatenated SPC codes are capable of achieving Turbo-like performances at a complexity which is approximately 10 times less than that of a 16-state Turbo code. A simple yet accurate generalised bounding method is derived for BICM systems employing large signal constellations. This bound works well over a wide range of SNRs for common signal constellations in the independent Rayleigh fading channel. Moreover, the bounding method is independent of the type and code rate of channel coding scheme. In addition to the primary aim of the research, an improved decoder structure for serially concatenated codes has been designed, and a sub-optimal, soft-in-soft-out iterative technique for decoding systematic binary algebraic block codes has been developed.
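As a concrete picture of the BICM half of this work, here is a minimal sketch of bit-interleaved 16-QAM mapping. The random interleaver and the particular Gray labelling are assumptions for illustration, not the thesis's exact choices.

```python
import numpy as np

GRAY_4PAM = np.array([-3, -1, 3, 1])   # level at index 2*b0 + b1, Gray labelled

def bicm_16qam(coded_bits, rng):
    """Interleave coded bits, then map each group of 4 bits to one
    Gray-labelled 16-QAM symbol (2 bits -> I level, 2 bits -> Q level)."""
    perm = rng.permutation(len(coded_bits))        # random bit interleaver
    b = coded_bits[perm].reshape(-1, 4)            # 4 bits per symbol
    i = GRAY_4PAM[2 * b[:, 0] + b[:, 1]]
    q = GRAY_4PAM[2 * b[:, 2] + b[:, 3]]
    return (i + 1j * q) / np.sqrt(10)              # unit average energy

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 32)
print(bicm_16qam(bits, rng)[:4])
```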
4

Sharifi Tehrani, Saeed. "Stochastic decoding of low-density parity-check codes." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97010.

Abstract:
Low-Density Parity-Check (LDPC) codes are one of the most powerful classes of error-control codes known to date. These codes have been considered for many recent digital communication applications. In this dissertation, we propose stochastic decoding of state-of-the-art LDPC codes and demonstrate it as a competitive approach to practical LDPC decoding algorithms. In stochastic decoding, probabilities are represented as streams of random bits using Bernoulli sequences in which the information is contained in the statistics of the bit stream. This representation results in low hardware-complexity processing nodes that perform computationally-intensive operations. However, stochastic decoding is prone to the acute problem of latching. This problem is caused by correlated bit streams within cycles in the code's factor graph, and significantly deteriorates the performance of stochastic LDPC decoders. We propose edge memories, tracking forecast memories, and majority-based tracking forecast memories to address the latching problem. These units efficiently extract the evolving statistics of stochastic bit streams and rerandomize them to disrupt latching. To the best of our knowledge, these methods are the first successful methods for stochastic decoding of state-of-the-art LDPC codes. We present novel decoder architectures and report on several hardware implementations. The most advanced reported implementation is a stochastic decoder that decodes the (2048,1723) LDPC code from the IEEE 802.3an standard. To the best of our knowledge, this decoder is the most silicon area-efficient and, with a maximum core throughput of 61.3 Gb/s, is one of the fastest fully parallel soft-decision LDPC decoders reported in the literature. We demonstrate the performance of this decoder in low bit-error-rate regimes. In addition to stochastic LDPC decoding, we propose the novel application of the stochastic approach for joint decoding of LDPC codes and partial-response channels that are considered in practical magnetic recording applications. Finally, we investigate the application of the stochastic approach for decoding linear block codes with high-density parity-check matrices on factor graphs. We consider Reed-Solomon, Bose-Chaudhuri-Hocquenghem, and block turbo codes.
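The core idea — probabilities carried as the statistics of Bernoulli bit streams, so that a probability product collapses to a single AND gate — can be demonstrated in a few lines. A toy sketch under that premise (stream length and probabilities are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def bernoulli_stream(p, length):
    """Represent probability p as a random bit stream with mean p."""
    return (rng.random(length) < p).astype(np.uint8)

# In stochastic computation, multiplying probabilities reduces to a
# bitwise AND of independent streams (a single gate in hardware).
p1, p2, N = 0.8, 0.6, 100_000
s1, s2 = bernoulli_stream(p1, N), bernoulli_stream(p2, N)
product = s1 & s2
print(product.mean(), p1 * p2)   # both ~0.48
```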
5

Meidan, Amir. "Linear-time encodable low-density parity-check codes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0006/MQ40942.pdf.

6

Davey, M. C. "Error-correction using low-density parity-check codes." Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598305.

Abstract:
Gallager's low-density parity-check codes are defined by sparse parity-check matrices, usually with a random construction. Such codes have near Shannon limit performance when decoded using an iterative probabilistic decoding algorithm. We report two advances that improve the error-correction performance of these codes. First, by defining the codes over non-binary fields we can obtain a 0.6 dB improvement in signal to noise ratio for a given bit error rate. Second, using irregular parity-check matrices with non-uniform row and column weights we obtain gains of up to 0.5 dB. The empirical error-correction performance of irregular low-density parity-check codes is unbeaten for the additive white Gaussian noise channel. Low-density parity-check codes are also shown to be useful for communicating over channels which make insertions and deletions as well as additive (substitution) errors. Error-correction for such channels has not been widely studied, but is of importance whenever synchronisation of sender and receiver is imperfect. We introduce concatenated codes using novel non-linear inner codes which we call 'watermark' codes, and low-density parity-check codes over non-binary fields as outer codes. The inner code allows resynchronisation using a probabilistic decoder, providing soft outputs for the outer low-density parity-check decoder. Error-correction performance using watermark codes is several orders of magnitude better than any comparable results in the literature.
7

Rankin, David Michael. "Single parity check product codes and iterative decoding." Thesis, University of Canterbury. Electrical and Computer Engineering, 2001. http://hdl.handle.net/10092/1084.

Abstract:
The aim of coding theory is to design codes which can achieve the fundamental limits of communication [52] and yet are simple to implement. On average randomly constructed codes can achieve this goal, but with a decoding complexity that is impractical. Consequently, highly structured codes with practical decoding algorithms have been extensively studied. Unfortunately the vast majority of these codes do not approach capacity. Recent advances involving simple 'random like' codes with practical iterative decoding algorithms have closely approached capacity as the blocklength increases. This thesis investigates single parity check (SPC) product codes and introduces the class of randomly interleaved (RI) SPC product codes. It will be shown that RI SPC product codes can asymptotically force the probability of error to zero, at code rates up to capacity, for almost all codewords. Furthermore the structure of these codes allows a very simple, sub-optimal, iterative decoding algorithm to be used. This thesis also derives an asymptotic analysis on SPC product codes from the decoding point of view. It is shown that the probability of error can be driven to zero, as the blocklength increases, for signal to noise ratios within 2dB of capacity on the additive white Gaussian noise (AWGN) channel. Simulation results for both SPC and RI SPC product codes in an AWGN channel are presented. These results indicate that RI SPC product codes perform very well, typically within 1.5dB of capacity over a wide range of blocklengths and code rates. Further analysis on the weight enumerator of finite length RI SPC product codes is used to confirm the error floor of these codes. Extensions to parallel and serially concatenated SPC product codes are also investigated. Simulation results show an advantageous trade-off between code rate, blocklength and performance for three dimensional parallel concatenated SPC product codes. The design of irregular SPC product codes is also considered, and some simulation results are presented.
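For reference, the basic two-dimensional SPC product code construction underlying this thesis (and entry 3) can be sketched as follows; the layout with a final check-on-checks bit is the standard one, and the function name is my own.

```python
import numpy as np

def spc_product_encode(info_bits, k1, k2):
    """Two-dimensional single parity check (SPC) product code:
    arrange k1*k2 information bits in a grid, then append an even-parity
    bit to every row and every column (plus the corner check-on-checks)."""
    grid = np.asarray(info_bits, dtype=np.uint8).reshape(k1, k2)
    rows = grid.sum(axis=1) % 2                  # row parities
    cols = grid.sum(axis=0) % 2                  # column parities
    corner = grid.sum() % 2                      # parity of the parities
    top = np.hstack([grid, rows[:, None]])
    bottom = np.hstack([cols, [corner]])
    return np.vstack([top, bottom])              # (k1+1) x (k2+1) codeword

codeword = spc_product_encode([1, 0, 1, 1, 0, 0], k1=2, k2=3)
print(codeword)
```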
8

Anitei, Irina. "Circular Trellis based Low Density Parity Check Codes." Ohio University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1226513009.

9

Moon, Todd K., and Jacob H. Gunther. "AN INTRODUCTION TO LOW-DENSITY PARITY-CHECK CODES." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/607470.

Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Low-Density Parity-Check (LDPC) codes are powerful codes capable of nearly achieving the Shannon channel capacity. This paper presents a tutorial introduction to LDPC codes, with a detailed description of the decoding algorithm. The algorithm propagates information about bit and check probabilities through a tree obtained from the Tanner graph for the code. This paper may be useful as a supplement in a course on error-control coding or digital communication.
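To make the message-passing idea concrete, here is a compact software sketch of an iterative LDPC decoder. It uses the min-sum approximation rather than the exact probability-domain algorithm the paper walks through, and the small H and LLR values are toy inputs of my own.

```python
import numpy as np

def minsum_decode(H, llr, max_iter=50):
    """Min-sum message passing on the Tanner graph of H.
    llr: channel LLRs, positive favours bit 0. Returns hard decisions."""
    m, n = H.shape
    rows, cols = np.nonzero(H)                 # one entry per graph edge
    chk = [np.where(rows == i)[0] for i in range(m)]   # edges at check i
    var = [np.where(cols == j)[0] for j in range(n)]   # edges at var j
    v2c = llr[cols].astype(float)              # variable-to-check messages
    c2v = np.zeros_like(v2c)
    for _ in range(max_iter):
        for i in range(m):                     # check-node update
            e = chk[i]
            s, a = np.sign(v2c[e]), np.abs(v2c[e])
            for t, k in enumerate(e):          # extrinsic: exclude edge k
                mask = np.ones(len(e), bool)
                mask[t] = False
                c2v[k] = np.prod(s[mask]) * a[mask].min()
        total = llr.astype(float).copy()       # posterior LLRs
        for j in range(n):
            total[j] += c2v[var[j]].sum()
        hard = (total < 0).astype(int)
        if not np.any((H @ hard) % 2):         # all parity checks satisfied
            return hard
        for j in range(n):                     # variable-to-check update
            for k in var[j]:
                v2c[k] = total[j] - c2v[k]
    return hard

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
llr = np.array([2.1, 1.7, -0.4, 1.9, 2.3, 1.5])   # one unreliable bit
print(minsum_decode(H, llr))                      # -> [0 0 0 0 0 0]
```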
10

Ha, Jeongseok. "Low-Density Parity-Check Codes with Erasures and Puncturing." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5296.

Abstract:
In this thesis, we extend applications of Low-Density Parity-Check (LDPC) codes to a combination of constituent sub-channels, which is a mixture of Gaussian channels with erasures. This model, for example, represents a common channel in magnetic recordings where thermal asperities in the system are detected and represented at the decoder as erasures. Although this channel is practically useful, we cannot find any previous work that evaluates the performance of LDPC codes over this channel. We are also interested in practical issues such as designing robust LDPC codes for the mixture channel and predicting performance variations due to erasure patterns (random and burst) and finite block lengths. On time-varying channels, a common error control strategy is to adapt the coding rate according to available channel state information (CSI). An effective way to realize this coding strategy is to use a single code and puncture it in a rate-compatible fashion, a so-called rate-compatible punctured code (RCPC). We are interested in the existence of good puncturing patterns for rate-changes that minimize performance loss. We show the existence of good puncturing patterns with analysis and verify the results with simulations. Universality of a channel code across a broad range of coding rates is a theoretically interesting topic. We are interested in the possibility of using the puncturing technique proposed in this thesis for designing universal LDPC codes. We also consider how to design high-rate LDPC codes by puncturing low-rate LDPC codes. The new design method can take advantage of the longer effective block lengths, sparser parity-check matrices, and larger minimum distances of low-rate LDPC codes.
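Rate-compatible puncturing itself is simple bookkeeping: the transmitter skips the punctured positions, and the receiver re-inserts them as zero LLRs (erasures) so the mother-code decoder runs unchanged. A minimal sketch, with a made-up pattern rather than one of the optimized patterns studied in the thesis:

```python
import numpy as np

def puncture(codeword, pattern):
    """Transmit only the positions where pattern == 1."""
    return codeword[pattern == 1]

def depuncture_llr(received_llr, pattern):
    """Re-insert punctured positions as zero LLRs (erasures) so a
    standard decoder for the mother code can be used unchanged."""
    llr = np.zeros(len(pattern))
    llr[pattern == 1] = received_llr
    return llr

# Rate-1/2 mother code with n = 6, k = 3; keeping 4 of 6 bits gives rate 3/4.
pattern = np.array([1, 1, 1, 0, 1, 0])
cw = np.array([0, 1, 1, 0, 1, 0])
tx = puncture(cw, pattern)
rx_llr = np.where(tx == 0, 2.0, -2.0)   # noiseless BPSK LLRs for the demo
print(depuncture_llr(rx_llr, pattern))
```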
11

Blad, Anton. "Efficient Decoding Algorithms for Low-Density Parity-Check Codes." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2794.

Abstract:

Low-density parity-check codes have recently received much attention because of their excellent performance and the availability of a simple iterative decoder. The decoder, however, requires large amounts of memory, which is a problem for practical implementations.

We investigate a new decoding scheme for low-density parity-check codes to address this problem. The basic idea is to define a reliability measure and a threshold, and to stop updating the messages for a bit whenever its reliability exceeds the threshold. We also consider some modifications to this scheme, including a dynamic threshold better suited to codes with cycles, and a scheme with soft thresholds which allows a decision that has proved wrong to be removed.

By exploiting the bits' different rates of convergence we are able to achieve an efficiency of up to 50% at a bit error rate of less than 10^-5. The efficiency should roughly correspond to the power consumption of a hardware implementation of the algorithm.
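The stopping rule itself is tiny; a sketch of the fixed-threshold variant (the function name and threshold value are mine, and the dissertation's dynamic and soft-threshold refinements are not shown):

```python
import numpy as np

def still_active(total_llr, threshold=8.0):
    """Bits whose posterior reliability |LLR| exceeds the threshold are
    frozen: their messages are no longer updated, saving memory accesses
    and power in an iterative LDPC decoder."""
    return np.abs(total_llr) < threshold    # True = keep updating

llr = np.array([0.3, -9.2, 4.1, 12.0, -1.7])
active = still_active(llr)
print(active)          # [ True False  True False  True]
print(active.mean())   # fraction of bits still being processed
```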

12

Ismail, Mohamed Rafiq. "High throughput decoding of low density parity check codes." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.556712.

13

Vijayakumar, Suresh. "FPGA implementation of low density parity check codes decoder." [Denton, Tex.]: University of North Texas, 2009. http://digital.library.unt.edu/permalink/meta-dc-11003.

14

Zhang, Kai. "High-Performance Decoder Architectures For Low-Density Parity-Check Codes." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/17.

Abstract:
Low-Density Parity-Check (LDPC) codes, invented by Gallager in the 1960s, have attracted considerable attention recently. Compared with other error correction codes, LDPC codes are well suited for wireless, optical, and magnetic recording systems due to their near-Shannon-limit error-correcting capacity, high intrinsic parallelism and high-throughput potential. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMax), 802.15.3c (WPAN), DVB-S2 and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors in sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput and rate flexibility. With this work we investigate tradeoffs between the four performance aspects and develop several decoder architectures that improve one or more aspects while maintaining acceptable values for the others. Firstly, we present a high-throughput decoder design for Quasi-Cyclic (QC) LDPC codes. Two new techniques are proposed for the first time: the parallel layered decoding architecture (PLDA) and critical-path splitting. The parallel layered decoding architecture enables parallel processing of all layers by establishing dedicated message-passing paths among them, so the decoder avoids a large crossbar-based interconnect network. Critical-path splitting is based on careful adjustment of the starting point of each layer to maximize the time intervals between adjacent layers, so that the critical-path delay can be split into pipeline stages. Furthermore, min-sum and loosely coupled algorithms are employed for area efficiency. As a case study, a rate-1/2 2304-bit irregular LDPC decoder is implemented in a 90 nm CMOS ASIC process. The decoder achieves an input throughput of 1.1 Gbps, a 3-4x improvement over state-of-the-art LDPC decoders, while maintaining a comparable chip size of 2.9 mm^2. Secondly, we present a high-throughput decoder architecture for rate-compatible (RC) LDPC codes that supports arbitrary code rates between the rate of the mother code and 1. While the original PLDA lacks rate flexibility, the problem is solved gracefully by incorporating a puncturing scheme. Simulation results show that the selected puncturing scheme introduces a BER performance degradation of less than 0.2 dB compared with the dedicated codes for different rates specified in the IEEE 802.16e (WiMax) standard. PLDA is then employed for the high-throughput decoder design. As a case study, an RC-LDPC decoder based on the rate-1/2 WiMax LDPC code is implemented in a 90 nm CMOS process. The decoder achieves an input throughput of 975 Mbps and supports any rate between 1/2 and 1. Thirdly, we develop a low-complexity VLSI architecture and implementation for the LDPC decoder used in China Multimedia Mobile Broadcasting (CMMB) systems. An area-efficient layered decoding architecture based on the min-sum algorithm is incorporated in the design. A novel split-memory architecture is developed to efficiently handle the weight-2 submatrices that are rarely seen in conventional LDPC decoders. In addition, the check-node processing unit is highly optimized to minimize complexity and computing latency while facilitating a reconfigurable decoding core.
Finally, we propose an LDPC-decoder-like channel detector for sparse ISI channels using belief propagation (BP). The complexity of BP-based detection depends only on the number of nonzero interferers, making it well suited to sparse ISI channels, which are characterized by long delay but a small fraction of nonzero interferers. The layered decoding algorithm, popular in LDPC decoding, is also adopted. Simulation results show that layered decoding doubles the convergence speed of the iterative belief propagation process. Exploiting the special structure of the connections between the check nodes and the variable nodes on the factor graph, we propose an effective detector architecture for generic sparse ISI channels to facilitate the practical application of the proposed detection algorithm. The proposed architecture is also reconfigurable, in order to switch the flexible connections on the factor graph in time-varying ISI channels.
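The layered schedule that PLDA parallelizes can be illustrated in plain software form. This is a generic row-by-row layered min-sum sketch, not the dissertation's pipelined hardware architecture:

```python
import numpy as np

def layered_minsum(H, llr, max_iter=20):
    """Layered (row-by-row) min-sum: each check row is a 'layer' whose
    freshly updated messages are used immediately by the layers that
    follow within the same iteration, which tends to roughly halve the
    number of iterations needed compared with a flooding schedule."""
    m, _ = H.shape
    total = llr.astype(float).copy()               # running posterior LLRs
    c2v = {(i, j): 0.0 for i, j in zip(*np.nonzero(H))}
    hard = (total < 0).astype(int)
    for _ in range(max_iter):
        for i in range(m):                         # one layer per check node
            cols = np.nonzero(H[i])[0]
            vin = np.array([total[j] - c2v[i, j] for j in cols])
            s, a = np.sign(vin), np.abs(vin)
            for t, j in enumerate(cols):
                mask = np.ones(len(cols), bool)
                mask[t] = False                    # extrinsic only
                new = np.prod(s[mask]) * a[mask].min()
                total[j] += new - c2v[i, j]        # immediate posterior update
                c2v[i, j] = new
        hard = (total < 0).astype(int)
        if not np.any((H @ hard) % 2):             # syndrome check
            break
    return hard

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
llr = np.array([2.1, 1.7, -0.4, 1.9, 2.3, 1.5])    # one unreliable bit
print(layered_minsum(H, llr))                      # -> [0 0 0 0 0 0]
```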
15

Chen, Jinghu. "Reduced complexity decoding algorithms for low-density parity check codes and turbo codes." Thesis, University of Hawaii at Manoa, 2003. http://hdl.handle.net/10125/6885.

Abstract:
Iterative decoding techniques have been receiving more and more attention since the invention of turbo codes and the rediscovery of low-density parity-check (LDPC) codes. An important aspect in the study of iterative decoding is the tradeoff between decoding performance and complexity. For both LDPC codes and turbo codes, optimum decoding algorithms can provide very good performance. However, complicated operations are involved in optimum decoding, and they prohibit the wide application of LDPC codes and turbo codes in next-generation digital communication and storage systems. Although sub-optimum decoding algorithms exist for both LDPC codes and turbo codes, their decoding performance is degraded, and under some circumstances the gap is very large. This research investigates reduced-complexity decoding algorithms for LDPC codes and turbo codes. For decoding LDPC codes, new algorithms, namely the normalized BP-based algorithm and the offset BP-based algorithm, are proposed. For these two reduced-complexity algorithms, density evolution algorithms are derived and used to determine the best decoder parameters associated with each algorithm. Numerical results show that the new algorithms can achieve near-optimum decoding performance for infinite code lengths, and simulation results reveal the same conclusion for short to medium code lengths. In addition to their low computational complexity, the two new algorithms are less subject to quantization errors and correlation effects than the optimum BP algorithm, and consequently are more suitable for hardware implementation. For a special kind of LDPC codes - the geometric LDPC codes - we propose the normalized APP-based algorithm, which is even more simplified yet still achieves near-optimum performance. For decoding turbo codes, two new sub-optimum decoding algorithms are proposed. The first is the bi-directional soft-output Viterbi algorithm (bi-SOVA), which is based on utilizing a backward SOVA decoding in addition to the conventional forward one, and can achieve better performance than the uni-directional SOVA. The second is the normalized Max-Log-MAP algorithm, which improves the performance of Max-Log-MAP decoding by scaling the soft outputs with predetermined factors.
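The two proposed check-node corrections are one-liners next to the exact tanh rule. A sketch with illustrative correction factors (the dissertation selects the factors via density evolution; the values below are placeholders):

```python
import numpy as np

def check_update_exact(msgs):
    """Exact sum-product check-node output magnitude (tanh rule)."""
    return 2 * np.arctanh(np.prod(np.tanh(np.abs(msgs) / 2)))

def check_update_normalized(msgs, alpha=0.8):
    """Normalized BP-based (min-sum) update: scale the minimum magnitude
    by alpha < 1 to compensate min-sum's systematic overestimate."""
    return alpha * np.abs(msgs).min()

def check_update_offset(msgs, beta=0.15):
    """Offset BP-based update: subtract beta, clamped at zero."""
    return max(np.abs(msgs).min() - beta, 0.0)

incoming = np.array([1.2, -0.9, 2.5])        # extrinsic inputs to a check
print(check_update_exact(incoming))          # ~0.39 (plain min-sum gives 0.9)
print(check_update_normalized(incoming))     # 0.72
print(check_update_offset(incoming))         # 0.75
```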
16

Kim, Jaehong. "Design of rate-compatible structured low-density parity-check codes." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/19723.

Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2007.
Committee Chair: McLaughlin, Steven; Committee Member: Barry, John; Committee Member: Boldyreva, Alexandra; Committee Member: Clements, Mark; Committee Member: Li, Ye.
17

Kolayli, Mert. "Comparison Of Decoding Algorithms For Low-density Parity-check Codes." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607731/index.pdf.

Abstract:
Low-density parity-check (LDPC) codes are a subclass of linear block codes. These codes have parity-check matrices in which the ratio of non-zero elements to all elements is low. This property is exploited in defining low-complexity decoding algorithms. Low-density parity-check codes have good distance properties and error correction capability near the Shannon limit. In this thesis, the sum-product and the bit-flip decoding algorithms for low-density parity-check codes are implemented in MATLAB on an Intel Pentium M 1.86 GHz processor. Simulations for the two decoding algorithms are made over the additive white Gaussian noise (AWGN) channel, changing code parameters such as the information rate, the blocklength of the code and the column weight of the parity-check matrix. The two decoding algorithms are compared according to these simulation results. As expected, the sum-product algorithm, which is based on soft-decision decoding, outperforms the bit-flip algorithm, which depends on hard-decision decoding. Our simulations show that the performance of LDPC codes improves with increasing blocklength and number of iterations for both decoding algorithms. Since the sum-product algorithm has a lower error floor, increasing the number of iterations is more effective for the sum-product decoder than for the bit-flip decoder. With its better BER performance at lower information rates, the bit-flip algorithm performs according to expectations; however, the performance of the sum-product decoder deteriorates for information rates below 0.5 instead of improving. By irregular construction of LDPC codes, a performance improvement is observed, especially at low SNR values.
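For comparison with the sum-product side of this study, the hard-decision bit-flip decoder fits in a few lines. A sketch of the classic Gallager-style variant (the thesis's MATLAB implementation details may differ):

```python
import numpy as np

def bit_flip_decode(H, hard_bits, max_iter=30):
    """Hard-decision bit flipping: each iteration flips the bit(s)
    involved in the largest number of unsatisfied parity checks."""
    c = hard_bits.copy()
    for _ in range(max_iter):
        syndrome = (H @ c) % 2
        if not syndrome.any():
            return c                          # valid codeword reached
        fails = H.T @ syndrome                # unsatisfied checks per bit
        c[fails == fails.max()] ^= 1          # flip the worst offender(s)
    return c

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
rx = np.array([0, 1, 0, 0, 0, 0])             # all-zero codeword, one error
print(bit_flip_decode(H, rx))                 # -> [0 0 0 0 0 0]
```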
18

Richter, Gerd. "Puncturing, mapping, and design of low-density parity-check codes." Düsseldorf VDI-Verl, 2008. http://d-nb.info/99372230X/04.

19

Mantha, Ramesh. "Hybrid automatic repeat request schemes using turbo codes and low-density parity check codes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0019/MQ58728.pdf.

20

Pirou, Florent. "Low-density Parity-Check decoding Algorithms." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2160.

Abstract:

Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, the effective VLSI implementation of an LDPC decoder remains a big challenge and is a crucial issue in determining how well the benefits of LDPC codes can be exploited in real applications. In this master's thesis report, following a background on error-control coding, we describe low-density parity-check codes and their decoding algorithm, as well as the requirements and architectures of LDPC decoder implementations.

21

Kopparthi, Sunitha. "Flexible encoder and decoder designs for low-density parity-check codes." Diss., Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/4190.

22

Liu, Yue. "Design of structured nonbinary quasi-cyclic low-density parity-check codes." University of New South Wales, Electrical Engineering & Telecommunications, 2009. http://handle.unsw.edu.au/1959.4/43616.

Abstract:
Since their rediscovery, LDPC codes have attracted a large amount of research effort. In 1998, nonbinary LDPC codes were first investigated, and the results showed that they outperform their binary counterparts. Recently, there has been steady industry demand for applied nonbinary LDPC code designs. In this dissertation, we first propose a novel class of quasi-cyclic (QC) LDPC codes. This class of QC-LDPC codes combines linear encoding complexity with excellent compatibility with various degree distributions and nonbinary expansions. We show by simulation that the proposed QC-LDPC codes perform as well as their comparable counterparts, while the proposed code structure is more flexible to design. This feature may show its power when the code length and rate are changed adaptively. Furthermore, we present two algorithms, both based on the progressive edge growth (PEG) algorithm and designed specifically for the quasi-cyclic structure, to generate codes with large girth and better girth distribution. The simulation results show the improvement they achieve. In this thesis, we also investigate belief-propagation-based iterative algorithms for decoding nonbinary LDPC codes, including the sum-product (SP) algorithm, the SP algorithm using the fast Fourier transform, the min-sum (MS) algorithm and the complexity-reduced extended min-sum (EMS) algorithm. In particular, we present a modified min-sum algorithm with threshold filtering which further reduces the computational complexity.
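The quasi-cyclic structure that gives these codes their linear encoding complexity is defined by a small base matrix of circulant shifts. A minimal expansion sketch; the base matrix and lift size below are made up, not taken from the dissertation:

```python
import numpy as np

def qc_expand(base, z):
    """Expand a QC-LDPC base matrix into a full parity-check matrix.
    base[i][j] = -1 means an all-zero z x z block; a shift s >= 0 means
    the z x z identity matrix cyclically right-shifted by s columns."""
    I = np.eye(z, dtype=np.uint8)
    blocks = [[np.zeros((z, z), np.uint8) if s < 0 else np.roll(I, s, axis=1)
               for s in row] for row in base]
    return np.block(blocks)

# Illustrative 2 x 4 base matrix with lift size z = 4 (shift values invented).
base = [[0, 1, -1, 2],
        [3, -1, 0, 1]]
H = qc_expand(base, 4)
print(H.shape)   # (8, 16)
```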
23

Planjery, Shiva Kumar. "Iterative Decoding Beyond Belief Propagation of Low-Density Parity-Check Codes." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/305883.

Abstract:
The recent renaissance of one particular class of error-correcting codes called low-density parity-check (LDPC) codes has revolutionized the area of communications leading to the so-called field of modern coding theory. At the heart of this theory lies the fact that LDPC codes can be efficiently decoded by an iterative inference algorithm known as belief propagation (BP) which operates on a graphical model of a code. With BP decoding, LDPC codes are able to achieve an exceptionally good error-rate performance as they can asymptotically approach Shannon's capacity. However, LDPC codes under BP decoding suffer from the error floor phenomenon, an abrupt degradation in the error-rate performance of the code in the high signal-to-noise ratio region, which prevents the decoder from achieving very low error-rates. It arises mainly due to the sub-optimality of BP decoding on finite-length loopy graphs. Moreover, the effects of finite precision that stem from hardware realizations of BP decoding can further worsen the error floor phenomenon. Over the past few years, the error floor problem has emerged as one of the most important problems in coding theory with applications now requiring very low error rates and faster processing speeds. Further, addressing the error floor problem while taking finite precision into account in the decoder design has remained a challenge. In this dissertation, we introduce a new paradigm for finite precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs), are capable of surpassing the BP in the error floor region at a much lower complexity and memory usage than BP without any compromise in decoding latency. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder. Rather, the update functions are simple maps designed to ensure a higher guaranteed error correction capability which improves the error floor performance. We provide a methodology for the design of FAIDs on column-weight-three codes. Using this methodology, we design 3-bit precision FAIDs that can surpass the BP (floating-point) in the error floor region on several column-weight-three codes of practical interest. While the proposed FAIDs are able to outperform the BP decoder with low precision, the analysis of FAIDs still proves to be a difficult issue. Furthermore, their achievable guaranteed error correction capability is still far from what is achievable by optimal maximum-likelihood (ML) decoding. In order to address these two issues, we propose another novel class of decoders called decimation-enhanced FAIDs for LDPC codes. For this class of decoders, the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during decoding, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes and provide insights into the analysis of such decoders. We also show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. The new adaptive decimation scheme proposed has marginally added complexity but can significantly increase the slope of the error floor in the error-rate performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability that is close to the theoretical limit achieved by ML decoding.
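To give a flavour of finite-alphabet message passing, here is a toy 3-bit update. Note these are *not* the FAID maps designed in the dissertation, which are derived from harmful trapping-set topologies rather than simple saturating arithmetic; this is only a generic stand-in:

```python
import numpy as np

# Messages live in the 3-bit alphabet {-3, ..., +3}.
def variable_update(channel_value, incoming):
    """Combine the channel value with extrinsic check messages, then
    saturate back onto the finite alphabet. (Illustrative only -- real
    FAID variable-node maps are designed lookup tables, not adders.)"""
    return int(np.clip(channel_value + sum(incoming), -3, 3))

def check_update(incoming):
    """Sign-min rule; already closed over the finite alphabet."""
    s = np.prod(np.sign(incoming))
    return int(s * min(abs(m) for m in incoming))

print(variable_update(-1, [2, 3]))   # 4 saturates to +3
print(check_update([-2, 3, 1]))      # -> -1
```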
24

Hussein, Ahmed Refaey Ahmed. "Universal Decoder for Low Density Parity Check, Turbo and Convolutional Codes." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28154/28154.pdf.

25

Hou, Jilei. "Capacity-approaching coding schemes based on low-density parity-check codes." Diss., University of California, San Diego, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3076341.

26

Ho, Ki-hiu. "Study of quantum low density parity check and quantum degenerate codes." The University of Hong Kong, 2009. http://sunzi.lib.hku.hk/hkuto/record/B41897109.

27

Mei, Zhen. "Analysis of low-density parity-check codes on impulsive noise channels." Thesis, University of Newcastle upon Tyne, 2017. http://hdl.handle.net/10443/3758.

Abstract:
Communication channels can severely degrade a signal, not only due to fading effects but also interference in the form of impulsive noise. In conventional communication systems, the additive noise at the receiver is usually assumed to be Gaussian distributed. However, this assumption is not always valid, and examples of non-Gaussian distributed noise include power line channels, underwater acoustic channels and man-made interference. When designing a communication system it is useful to know the theoretical performance in terms of bit-error probability (BEP) on these types of channels. However, the effect of impulses on the BEP performance has not been well studied, particularly when error correcting codes are employed. Today, advanced error-correcting codes with very long block lengths and iterative decoding algorithms, such as Low-Density Parity-Check (LDPC) codes and turbo codes, are popular due to their capacity-approaching performance. However, very long codes are not always desirable, particularly in communications systems where latency is a serious issue, such as in voice and video communication between multiple users. This thesis focuses on the analysis of short LDPC codes. Finite-length analyses of LDPC codes have already been presented for the additive white Gaussian noise channel in the literature, but the analysis of short LDPC codes for channels that exhibit impulsive noise has not been investigated. The novel contributions in this thesis are presented in three sections. First, uncoded and LDPC-coded BEP performance on channels exhibiting impulsive noise modelled by symmetric α-stable (SαS) distributions is examined. Different sub-optimal receivers are compared and a new low-complexity receiver is proposed that achieves near-optimal performance. Density evolution is then used to derive the threshold signal-to-noise ratio (SNR) of LDPC codes that employ these receivers. In order to accurately predict the waterfall performance of short LDPC codes, a finite-length analysis is proposed with the aid of the threshold SNRs of LDPC codes and the derived uncoded BEPs for impulsive noise channels. Second, to investigate the effect of impulsive noise on wireless channels, the analytic BEP on generalized fading channels with SαS noise is derived. However, obtaining the analytic BEP requires the evaluation of a double integral, so to reduce the computational cost, the Cauchy-Gaussian mixture model and the asymptotic property of the SαS process are used to derive upper bounds on the exact BEP. Two closed-form expressions are derived to approximate the exact BEP on a Rayleigh fading channel with SαS noise. Density evolution of different receivers is then derived for these channels to find the asymptotic performance of LDPC codes. Finally, the waterfall performance of LDPC codes is again estimated for generalized fading channels with SαS noise by utilizing the derived uncoded BEP and threshold SNRs. Finally, the addition of spatial diversity at the receiver is investigated. Spatial diversity is an effective method to mitigate the effects of fading and, when used in conjunction with LDPC codes, can achieve excellent error-correcting performance. Hence, the performance of conventional linear diversity combining techniques is derived. The SNRs of these linear combiners are then compared and the relationship of the noise power between different linear combiners is obtained. Nonlinear detectors have been shown to achieve better performance than linear combiners; hence, optimal and sub-optimal detectors are also presented and compared. A non-linear detector based on the bi-parameter Cauchy-Gaussian mixture model is used and shows near-optimal performance with a significant reduction in complexity compared with the optimal detector. Furthermore, we show how to apply density evolution of LDPC codes for different combining techniques on these channels, and an estimation of the waterfall performance of LDPC codes is derived that reduces the gap between simulated and asymptotic performance. In conclusion, the work presented in this thesis provides a framework to evaluate the performance of communication systems in the presence of additive impulsive noise, with and without spatial diversity at the receiver. For the first time, bounds on the BEP performance of LDPC codes on channels with impulsive noise have been derived for optimal and sub-optimal receivers, allowing other researchers to predict the performance of LDPC codes in these types of environments without needing to run lengthy computer simulations.
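The SαS noise model used throughout can be sampled with the standard Chambers-Mallows-Stuck construction. A sketch for the symmetric (beta = 0) case; the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def sas_noise(alpha, size, scale=1.0):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck
    method (beta = 0 case). alpha = 2 recovers Gaussian-like noise,
    alpha = 1 gives Cauchy; smaller alpha means heavier impulses."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    X = (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
         * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))
    return scale * X

noise = sas_noise(alpha=1.5, size=5)
print(noise)
```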
28

Planjery, Shiva Kumar. "Iterative decoding beyond belief propagation for low-density parity-check codes." Thesis, Cergy-Pontoise, 2012. http://www.theses.fr/2012CERG0618.

Abstract:
At the heart of modern coding theory lies the fact that low-density parity-check (LDPC) codes can be efficiently decoded by message-passing algorithms which are traditionally based on the belief propagation (BP) algorithm. The BP algorithm operates on a graphical model of a code known as the Tanner graph, and computes marginals of functions on the graph. While inference using BP is exact only on loop-free graphs (trees), the BP still provides surprisingly close approximations to exact marginals on loopy graphs, and LDPC codes can asymptotically approach Shannon's capacity under BP decoding. However, on finite-length codes whose corresponding graphs are loopy, BP is sub-optimal and therefore gives rise to the error floor phenomenon. The error floor is an abrupt degradation in the slope of the error-rate performance of the code in the high signal-to-noise regime, where certain harmful structures generically termed as trapping sets present in the Tanner graph of the code cause the decoder to fail. Moreover, the effects of finite precision that are introduced during hardware realizations of BP can further contribute to the error floor problem. In this dissertation, we introduce a new paradigm for finite precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs) to signify that the message values belong to a finite alphabet, are capable of surpassing the BP in the error floor region. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder, which is in contrast to traditional quantized BP decoders. Rather, the update functions are simple maps designed to ensure a higher guaranteed error correction capability by using the knowledge of potentially harmful topologies that could be present in a given code. We show that on several column-weight-three codes of practical interest, there exist 3-bit precision FAIDs that can surpass the BP (floating-point) in the error floor without any compromise in decoding latency. Hence, they are able to achieve a superior performance compared to BP with only a fraction of its complexity. Additionally in this dissertation, we propose decimation-enhanced FAIDs for LDPC codes, where the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during the decoding process, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes. We also show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. The new adaptive decimation scheme proposed has marginally added complexity but can significantly improve the slope of the error floor performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability that is close to the theoretical limit achieved by maximum-likelihood decoding.
29

Healy, Cornelius Thomas. "Short-length low-density parity-check codes : construction and decoding algorithms." Thesis, University of York, 2014. http://etheses.whiterose.ac.uk/7875/.

Abstract:
Error control coding is an essential part of modern communications systems. LDPC codes have been demonstrated to offer performance near the fundamental limits of channels corrupted by random noise. Optimal maximum likelihood decoding of LDPC codes is too complex to be practically useful even at short block lengths and so a graph-based message passing decoder known as the belief propagation algorithm is used instead. In fact, on graphs without closed paths known as cycles the iterative message passing decoding is known to be optimal and may converge in a single iteration, although identifying the message update schedule which allows single-iteration convergence is not trivial. At finite block lengths graphs without cycles have poor minimum distance properties and perform poorly even under optimal decoding. LDPC codes with large block length have been demonstrated to offer performance close to that predicted for codes of infinite length, as the cycles present in the graph are quite long. In this thesis, LDPC codes of shorter length are considered as they offer advantages in terms of latency and complexity, at the cost of performance degradation from the increased number of short cycles in the graph. For these shorter LDPC codes, the problems considered are: First, improved construction of structured and unstructured LDPC code graphs of short length with a view to reducing the harmful effects of the cycles on error rate performance, based on knowledge of the decoding process. Structured code graphs are particularly interesting as they allow benefits in encoding and decoding complexity and speed. Secondly, the design and construction of LDPC codes for the block fading channel, a particularly challenging scenario from the point of view of error control code design. Both established and novel classes of codes for the channel are considered. Finally the decoding of LDPC codes by the belief propagation algorithm is considered, in particular the scheduling of messages passed in the iterative decoder. A knowledge-aided approach is developed based on message reliabilities and residuals to allow fast convergence and significant improvements in error rate performance.
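A bare-bones version of the PEG-style construction referred to here: each new edge of a variable node is steered to the check node currently farthest from it, which tends to keep cycles long. This is a simplified sketch of the idea, not any of the thesis's refined variants:

```python
import numpy as np
from collections import deque

def bfs_check_distances(H, v):
    """Distance from variable v to every check node in the current graph."""
    n_chk, n_var = H.shape
    INF = float('inf')
    dv, dc = [INF] * n_var, [INF] * n_chk
    dv[v] = 0
    q = deque([('v', v)])
    while q:
        kind, x = q.popleft()
        if kind == 'v':
            for c in np.nonzero(H[:, x])[0]:
                if dc[c] == INF:
                    dc[c] = dv[x] + 1
                    q.append(('c', c))
        else:
            for u in np.nonzero(H[x])[0]:
                if dv[u] == INF:
                    dv[u] = dc[x] + 1
                    q.append(('v', u))
    return dc

def peg_construct(n_var, n_chk, degree):
    """Simplified progressive edge growth: connect each new edge to the
    check node at maximum distance (unreached checks are preferred),
    breaking ties by lowest current check degree."""
    H = np.zeros((n_chk, n_var), dtype=np.uint8)
    for v in range(n_var):
        for _ in range(degree):
            dist = bfs_check_distances(H, v)
            cand = min(range(n_chk),
                       key=lambda c: (-dist[c], H[c].sum()))
            H[cand, v] = 1
    return H

H = peg_construct(n_var=8, n_chk=4, degree=2)
print(H)
```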
30

Planjery, Shiva Kumar. "Low-Complexity Finite Precision Decoders for Low-Density Parity-Check Codes." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605947.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
We present a new class of finite-precision decoders for low-density parity-check (LDPC) codes. These decoders are much lower in complexity compared to conventional floating-point decoders such as the belief propagation (BP) decoder, but they have the potential to outperform BP. The messages utilized by the decoders assume values (or levels) from a finite discrete set. We discuss the implementation aspects as well as describe the underlying philosophy in designing these decoders. We also provide results to show that in some cases, only 3 bits are required in the proposed decoders to outperform floating-point BP.
31

Ho, Ki-hiu (何其曉). "Study of quantum low density parity check and quantum degenerate codes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B41897109.

32

Miller, John. "High code rate, low-density parity-check codes with guaranteed minimum distance and stopping weight." Diss., University of California, San Diego, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3090443.

33

Hur, Woonhaing. "Incremental Redundancy Low-Density Parity-Check Codes for Hybrid FEC/ARQ Schemes." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14491.

Abstract:
The objective of this dissertation is to investigate incremental redundancy low-density parity-check (IR-LDPC) codes for hybrid forward error correction / automatic repeat request (HybridARQ) schemes. Powerful capacity-approaching IR-LDPC codes are one of the key functional elements in high-throughput HybridARQ schemes and provide the flexible rate-compatible structure necessary for low-complexity HybridARQ schemes. This dissertation first studies the design and performance evaluation of IR-LDPC codes with good error-rate performance at short block lengths. The subset codes of the IR-LDPC codes are compared to conventional random punctured codes and multiple dedicated codes. As a system model for this work, an adaptive LDPC coded system is presented. This adaptive system can confront the nature of time-varying channels and approach the capacity of the system with the aid of LDPC codes. The system shows remarkable throughput improvement over a conventional punctured system and, for systems that use multiple dedicated codes, provides comparable performance with low complexity at every target error rate. This dissertation also focuses on IR-LDPC codes with a wider operating code range, because the previous IR-LDPC codes exhibited a performance limitation related to the maximum achievable code rate. For this reason, this research proposes a new way to increase the maximum code rate of IR-LDPC codes, which provides throughput improvement in high-throughput regions over conventional random punctured codes. Also presented is an adaptive code selection algorithm using threshold parameters. This algorithm reduces the number of unnecessary traffic channels in HybridARQ schemes. This dissertation also examines how to improve throughput performance in HybridARQ schemes with low complexity by exploiting irregular repeat accumulate (IRA) codes. The proposed adaptive transmission method with adaptive puncturing patterns of IRA codes shows higher throughput performance over the whole operating code range than any single mode in HybridARQ schemes.
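The incremental-redundancy mechanics reduce to index bookkeeping: send the systematic bits, then release further parity on each NACK so the effective rate falls toward the mother-code rate. A toy schedule (sizes and step are arbitrary):

```python
def ir_harq_schedule(n_systematic, n_parity, step):
    """Incremental-redundancy schedule for a rate-compatible punctured
    code: systematic bits first, then parity released in chunks after
    each NACK, lowering the effective code rate."""
    sent = list(range(n_systematic))               # first transmission
    yield sent, n_systematic / len(sent)
    parity = list(range(n_systematic, n_systematic + n_parity))
    while parity:
        sent += parity[:step]                      # retransmission chunk
        parity = parity[step:]
        yield sent, n_systematic / len(sent)

# Mother code: k = 4 information bits plus 4 parity bits (rate 1/2).
for tx, rate in ir_harq_schedule(4, 4, step=2):
    print(f"bits sent: {tx}, effective rate {rate:.2f}")
```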
34

Kazanci, Onur Husnu. "Performance Of Pseudo-random And Quasi-cyclic Low Density Parity Check Codes." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12609036/index.pdf.

Abstract:
Low Density Parity Check (LDPC) codes are parity check codes of long block length whose parity-check matrices have relatively few non-zero entries. To improve performance at relatively short block lengths, LDPC codes are constructed by either pseudo-random or quasi-cyclic methods instead of purely random construction methods. In this thesis, pseudo-random code construction methods and the effects of closed loops and graph connectivity on the performance of pseudo-random LDPC codes are investigated. Moreover, quasi-cyclic LDPC codes, which have encoding and storage advantages over pseudo-random LDPC codes, are reviewed together with their construction methods and performance. Finally, a performance comparison between pseudo-random and quasi-cyclic LDPC codes is given for both regular and irregular cases.
APA, Harvard, Vancouver, ISO, and other styles
35

Bardak, Erinc Deniz. "Design And Performance Of Capacity Approaching Irregular Low-density Parity-check Codes." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611084/index.pdf.

Full text
Abstract:
In this thesis, design details of binary irregular Low-Density Parity-Check (LDPC) codes are investigated. We especially focus on the trade-off between the average variable node degree, w_a, and the number of length-6 cycles of an irregular code. We observe that the performance of the irregular code improves with increasing w_a up to a critical value, but deteriorates for larger w_a because of the exponential increase in the number of length-6 cycles. We have designed an irregular code of length 16,000 bits with average variable node degree w_a = 3.8, which we call '2/3/13' since it has some variable nodes of degree 2 and 13 in addition to the majority of degree-3 nodes. The observed performance is found to be very close to that of the capacity-approaching commercial codes. Time spent for decoding 50,000 codewords of length 1800 at Eb/N0 = 1.6 dB for an irregular 2/3/13 code is measured to be 19% less than that of the regular (3, 6) code, mainly because of the smaller number of decoding failures.
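
As a brute-force companion to the length-6 cycle counting discussed in this abstract (a toy sketch, not Bardak's method), the following Python fragment counts 6-cycles in the Tanner graph of a small parity-check matrix H; it enumerates triples of variable nodes, since every 6-cycle uses three variable nodes with one distinct check node between each pair:

import numpy as np
from itertools import combinations

def count_6_cycles(H):
    # nbrs[v] is the set of check nodes (rows) adjacent to variable node v.
    n = H.shape[1]
    nbrs = [set(np.flatnonzero(H[:, v])) for v in range(n)]
    total = 0
    for i, j, k in combinations(range(n), 3):
        a = len(nbrs[i] & nbrs[j])
        b = len(nbrs[j] & nbrs[k])
        c = len(nbrs[k] & nbrs[i])
        t = len(nbrs[i] & nbrs[j] & nbrs[k])   # checks seeing all three
        # Inclusion-exclusion: check triples with all three checks distinct.
        total += a * b * c - t * (a + b + c) + 2 * t
    return total

print(count_6_cycles(np.ones((3, 3), dtype=int)))  # K_{3,3} has 6 six-cycles

The triple loop makes this practical only for short toy codes, but it is enough to see how quickly the length-6 cycle count grows as the matrix becomes denser.
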
APA, Harvard, Vancouver, ISO, and other styles
36

Selvarathinam, Anand Manivannan. "High throughput low power decoder architectures for low density parity check codes." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2529.

Full text
Abstract:
A high throughput scalable decoder architecture, a tiling approach to reduce the complexity of the scalable architecture, and two low power decoding schemes have been proposed in this research. The proposed scalable design is generated from a serial architecture by scaling the combinational logic, partitioning memory, and constructing a novel H matrix to make parallelization possible. The scalable architecture achieves a high throughput for higher values of the parallelization factor M. The switch logic used to route the bit nodes to the appropriate checks is an important constituent of the scalable architecture, and its complexity grows with M. The proposed tiling approach is applied to the scalable architecture to simplify the switch logic and reduce gate complexity. The tiling approach generates patterns that are used to construct the H matrix by repeating a fixed number of those generated patterns. The advantages of the proposed approach are two-fold. First, the information stored about the H matrix is reduced by one-third. Second, the switch logic of the scalable architecture is simplified. The H matrix information is also embedded in the switch, and no external memory is needed to store the H matrix. The scalable architecture and the tiling approach are proposed at the architectural level of the LDPC decoder. We propose two low power decoding schemes that take advantage of the distribution of errors in the received packets. Both schemes use a hard iteration after a fixed number of soft iterations. The dynamic scheme performs X soft iterations, then evaluates the parity check cH^T to compute the number of parity checks in error. Based on the cH^T value, the decoder decides on performing either soft iterations or a hard iteration. The advantage of the hard iteration is so significant that the second low power scheme performs a fixed number of iterations followed by a hard iteration. To preserve the bit error rate performance, the number of soft iterations in this case is higher than the number performed before the cH^T evaluation in the first scheme.
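
The parity check cH^T at the heart of the dynamic scheme is inexpensive to state in code. Below is a minimal Python sketch (illustrative names and a toy matrix, not the dissertation's hardware) that counts the parity checks in error for a hard-decision word:

import numpy as np

def checks_in_error(H, c_hat):
    # Weight of the syndrome s = H c_hat^T (mod 2): the number of parity
    # checks the current hard-decision word fails.
    return int(((H @ c_hat) % 2).sum())

# Toy parity-check matrix and a word with a single bit error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)          # the all-zero word is a codeword
received = codeword.copy(); received[2] ^= 1
print(checks_in_error(H, codeword))        # 0 -> decoding can stop
print(checks_in_error(H, received))        # 2 -> keep iterating

A decoder along the lines described above would compare this count against a threshold to choose between further soft iterations and a cheap hard iteration.
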
APA, Harvard, Vancouver, ISO, and other styles
37

Chen, Lei. "Construction of structured low-density parity-check codes : combinatorial and algebraic approaches /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2005. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Yang, Sizhen Michael. "Construction of low-density parity-check codes for data storage and transmission." Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/280665.

Full text
Abstract:
This dissertation presents a new class of irregular low-density parity-check (LDPC) codes of moderate length and high rate. The codes in this class admit low-complexity encoding and have lower error rate floors than other irregular LDPC code design approaches. It is also shown that this class of LDPC codes is equivalent to a class of systematic serial turbo codes and is an extension of irregular repeat-accumulate codes. A code design algorithm based on the combination of density evolution and differential evolution optimization with a modified cost function is presented. Moderate-length, high-rate codes with no error-rate floors down to a bit error-rate of $10^{-9}$ are presented. Although our focus is on moderate-length, high-rate codes, the proposed coding scheme is applicable to irregular LDPC codes with other lengths and rates. Applications of these codes to magnetic data storage and wireless transmission channels are then studied. In the case of data storage, we assume an EPR4 partial response model with noise bursts that model media defects and thermal asperities. We show the utility of sending burst noise channel state information to both the partial response detector and the decoder. Doing so eliminates the error rate curve flattening seen by other researchers. The simulation results presented demonstrate that LDPC codes are very effective against noise bursts and, in fact, are superior to Reed-Solomon codes in the regime simulated. We also present an algorithm for finding the maximum resolvable erasure-burst length, $L_{max}$, for a given LDPC code. The simulation results make the possibility of an error control system based solely on an LDPC code very promising. For the wireless communication channel, we assume two types of Gilbert-Elliott channels and design LDPC codes for such channels. Under certain assumptions, this model leads us to what we call the burst-erasure channel with AWGN (BuEC-G), in which bits are received in Gaussian noise or as part of an erasure burst. To design codes for this channel, we take a "shortcut" and instead design codes for the burst-erasure channel (BuEC), in which a bit is received correctly or it is received as an erasure, with erasures occurring in bursts. We show that optimal BuEC code ensembles are equal to optimal binary erasure channel (BEC) code ensembles, and we design optimal codes for these channels. The burst-erasure efficacy can also be measured by the maximum resolvable erasure-burst length $L_{max}$. Finally, we present error-rate results which demonstrate the superiority of the designed codes on the BuEC-G over other codes that appear in the literature.
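
The maximum resolvable erasure-burst length can be illustrated with a small peeling decoder. The sketch below (a brute-force toy under the stated burst-erasure model, not the dissertation's algorithm) declares a burst resolvable when checks with exactly one erased bit can iteratively recover every erasure, and scans burst lengths for the largest always-resolvable one:

import numpy as np

def peel_erasures(H, erased):
    # A parity check involving exactly one erased bit recovers that bit.
    erased = set(erased)
    progress = True
    while erased and progress:
        progress = False
        for row in H:
            hit = [v for v in np.flatnonzero(row) if v in erased]
            if len(hit) == 1:
                erased.discard(hit[0])
                progress = True
    return not erased          # True when every erasure was resolved

def max_resolvable_burst(H):
    # Largest L such that every contiguous erasure burst of length L peels.
    n = H.shape[1]
    L = 0
    while L < n and all(peel_erasures(H, range(s, s + L + 1))
                        for s in range(n - L)):
        L += 1
    return L

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(max_resolvable_burst(H))   # 2 for this toy matrix
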
APA, Harvard, Vancouver, ISO, and other styles
39

Ländner, Stefan [Verfasser]. "Improving the Error-Floor Behavior of Low-Density Parity-Check Codes / Stefan Ländner." Aachen : Shaker, 2011. http://d-nb.info/1070151254/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Papaharalabos, Stylianos. "Efficient iterative decoding algorithms for turbo and low-density parity-check (LDPC) codes." Thesis, University of Surrey, 2005. http://epubs.surrey.ac.uk/804383/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Wu, Xiaoxiao. "Efficient design and decoding of the rate-compatible low-density parity-check codes /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECED%202009%20WUXX.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Sandberg, Sara. "Low-density parity-check codes : unequal error protection and reduction of clipping effects /." Luleå, 2009. http://pure.ltu.se/ws/fbspretrieve/2546109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Huang, Jen-Fa. "On finding generator polynomials and parity-check sums of binary projective geometry codes." Thesis, University of Ottawa (Canada), 1985. http://hdl.handle.net/10393/4800.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Krishnan, Anantha Raman, and Shashi Kiran Chilappagari. "Low-Density Parity-Check Codes Which Can Correct Three Errors Under Iterative Decoding." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606118.

Full text
Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
In this paper, we give necessary and sufficient conditions for low-density parity-check (LDPC) codes with column-weight four to correct three errors when decoded using hard-decision message-passing decoding. We then give a construction technique which results in codes satisfying these conditions. We also provide numerical assessment of code performance via simulation results.
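
For contrast with the hard-decision message-passing decoders analyzed in this paper, here is a generic hard-decision bit-flipping sketch in Python (Gallager-style and illustrative only; it is not the message-passing decoder for which the paper's necessary and sufficient conditions are derived):

import numpy as np

def bit_flip_decode(H, r, max_iter=30):
    # Each iteration flips the bits involved in the largest number of
    # unsatisfied checks; a simple stand-in for hard-decision decoding.
    c = r.copy()
    for _ in range(max_iter):
        syndrome = (H @ c) % 2
        if not syndrome.any():
            return c, True               # all parity checks satisfied
        unsat = syndrome @ H             # per-bit count of failed checks
        c[unsat == unsat.max()] ^= 1     # flip the worst offenders
    return c, False

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
r = np.array([0, 0, 1, 0, 0, 0, 0])      # all-zero codeword, bit 2 flipped
print(bit_flip_decode(H, r))             # recovers the all-zero word
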
APA, Harvard, Vancouver, ISO, and other styles
45

Ma, Xudong. "On Constructing Low-Density Parity-Check Codes." Thesis, 2007. http://hdl.handle.net/10012/3440.

Full text
Abstract:
This thesis focuses on designing Low-Density Parity-Check (LDPC) codes for forward error correction. The target application is real-time multimedia communications over packet networks. We investigate two code design issues that are important in the target application scenarios: designing LDPC codes with low decoding latency, and constructing capacity-approaching LDPC codes with very low error probabilities. On designing LDPC codes with low decoding latency, we present a framework for optimizing the code parameters so that decoding can be completed after only a small number of decoding iterations. The brute force approach to such optimization is numerically intractable, because it involves a difficult discrete optimization problem. In this thesis, we show an asymptotic approximation to the number of decoding iterations. Based on this asymptotic approximation, we propose an approximate optimization framework for finding near-optimal code parameters so that the number of decoding iterations is minimized. The approximate optimization approach is numerically tractable. Numerical results confirm that the proposed optimization approach has excellent numerical properties, and codes with excellent performance in terms of the number of decoding iterations can be obtained. Our results show that the numbers of decoding iterations of the codes produced by the proposed design approach can be as small as one-fifth of the numbers of decoding iterations of some previously well-known codes. The numerical results also show that the proposed asymptotic approximation is generally tight, even for cases far from the asymptotic limit. On constructing capacity-approaching LDPC codes with very low error probabilities, we propose a new LDPC code construction scheme based on $2$-lifts. Based on stopping set distribution analysis, we propose design criteria for the resulting codes to have very low error floors. High error floors are the main problem of previously constructed capacity-approaching codes, preventing them from achieving very low error probabilities. Numerical results confirm that codes with very low error floors can be obtained by the proposed code construction scheme and design criteria. Compared with the codes produced by previous standard construction schemes, which have error floors at the levels of $10^{-3}$ to $10^{-4}$, the codes produced by the proposed approach do not have observable error floors at levels higher than $10^{-7}$. The error floors of the codes produced by the proposed approach are also significantly lower than those of codes produced by previous approaches to constructing low-error-floor codes.
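
The 2-lift operation itself is compact. The following Python sketch (an illustration of the lifting step only; the thesis's stopping-set-based design criteria for choosing the signs are not reproduced here) replaces each 1 of a base matrix by either a 2 x 2 identity block or a 2 x 2 swap block:

import numpy as np

def two_lift(H, signs=None, rng=None):
    # 2-lift of a base parity-check matrix: each 1 becomes a 2 x 2 block,
    # either identity or the swap permutation, chosen by a sign bit.
    rng = rng if rng is not None else np.random.default_rng(1)
    I2 = np.eye(2, dtype=int)
    X2 = 1 - I2                          # the swap permutation [[0,1],[1,0]]
    if signs is None:
        signs = rng.integers(0, 2, size=H.shape)
    m, n = H.shape
    out = np.zeros((2 * m, 2 * n), dtype=int)
    for i in range(m):
        for j in range(n):
            if H[i, j]:
                out[2*i:2*i+2, 2*j:2*j+2] = X2 if signs[i, j] else I2
    return out

H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
print(two_lift(H).shape)   # (4, 8): twice the length, same degree profile

Depending on the signs, each cycle of the base Tanner graph either doubles in length or splits into two copies, which is the lever such constructions use to shape the stopping-set distribution.
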
APA, Harvard, Vancouver, ISO, and other styles
46

Ghayoor, Farzad. "Non-binary compound codes based on single parity-check codes." Thesis, 2013. http://hdl.handle.net/10413/10422.

Full text
Abstract:
Shannon showed that codes with a random-like codeword weight distribution are capable of approaching the channel capacity. However, the random-like property can be achieved only in codes with long codewords. On the other hand, the decoding complexity of a random-like code increases exponentially with its length. Therefore, code designers combine shorter and simpler codes in a pseudorandom manner to form longer and more powerful codes. In this research, a method for designing non-binary compound codes with moderate to high coding rate is proposed. Based on this method, non-binary single parity-check (SPC) codes are considered as component codes, and different iterative decoding algorithms for decoding the constructed compound codes are proposed. The soft-input soft-output component decoders, which are employed for the iterative decoding algorithms, are constructed from optimal and sub-optimal a posteriori probability (APP) decoders. However, for non-binary codes, implementing an optimal APP decoder requires a large amount of memory. In order to reduce the memory requirement of the APP decoding algorithm, in the first part of this research, a modified form of the APP decoding algorithm is presented. The memory requirement of this proposed algorithm is significantly less than that of the standard APP decoder. Therefore, the proposed algorithm is more practical for decoding non-binary block codes. The compound codes proposed in this research are constructed from combinations of non-binary SPC codes. Therefore, as part of this research, the construction and decoding of non-binary SPC codes defined over a finite ring of order q are presented. The concept of finite rings is more general and thus includes non-binary SPC codes defined over finite fields. Thereafter, based on products of non-binary SPC codes, a class of non-binary compound codes is proposed that is efficient for controlling both random-error and burst-error patterns and can be used for applications where high coding rate schemes are required. Simulation results show that the performance of the proposed codes is good. Furthermore, the performance of the compound codes improves over larger rings. The analytical performance bounds and the minimum distance properties of these product codes are studied.
Thesis (Ph.D.)-University of KwaZulu-Natal, Durban, 2013.
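
A non-binary single parity-check code over a ring is simple enough to state directly. The sketch below (illustrative, using the ring Z_q of integers modulo q rather than any specific ring from the thesis) appends one parity symbol so that the codeword symbols sum to zero modulo q:

import numpy as np

def spc_encode(msg, q):
    # SPC code over Z_q: the parity symbol makes the symbol sum 0 (mod q).
    parity = (-np.sum(msg)) % q
    return np.append(msg, parity)

def spc_check(word, q):
    return np.sum(word) % q == 0

q = 8                                    # order need not be prime for a ring
msg = np.array([3, 5, 7, 2])
cw = spc_encode(msg, q)                  # parity = (-17) mod 8 = 7
print(cw, spc_check(cw, q))              # [3 5 7 2 7] True

Compound or product constructions then arrange many such short SPC codewords so that every symbol is protected by several parity constraints at once.
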
APA, Harvard, Vancouver, ISO, and other styles
47

Malema, Gabofetswe Alafang. "Low-density parity-check codes : construction and implementation." 2007. http://hdl.handle.net/2440/45525.

Full text
Abstract:
Low-density parity-check (LDPC) codes have been shown to have good error correcting performance approaching Shannon's limit. Good error correcting performance enables efficient and reliable communication. However, an LDPC code decoding algorithm needs to be executed efficiently to meet the cost, time, power and bandwidth requirements of target applications. The constructed codes should also meet the error rate performance requirements of those applications. Since their rediscovery, there has been much research work on LDPC code construction and implementation. LDPC codes can be designed over a wide space with parameters such as girth, rate and length. There is no unique method of constructing LDPC codes. Existing construction methods are limited in some way in producing codes with good error correcting performance that are easy to implement for a given rate and length. There is a need to develop methods of constructing codes over a wide range of rates and lengths with good performance and ease of hardware implementability. LDPC code hardware design and implementation depend on the structure of the target LDPC code and are as varied as LDPC matrix designs and constructions. There are several factors to be considered, including decoding algorithm computations, the processing-node interconnection network, the number of processing nodes, the amount of memory, the number of quantization bits and the decoding delay. All of these issues can be handled in several different ways. This thesis is about the construction of LDPC codes and their hardware implementation. The LDPC code construction and implementation issues mentioned above are too many to be addressed in one thesis. The main contribution of this thesis is the development of LDPC code construction methods for some classes of structured LDPC codes and techniques for reducing decoding time. We introduce two main methods for constructing structured codes. In the first method, column-weight-two LDPC codes are derived from distance graphs. A wide range of girths, rates and lengths is obtained compared to existing methods. The performance and implementation complexity of the obtained codes depend on the structure of their corresponding distance graphs. In the second method, a search algorithm based on the bit-filling and progressive-edge-growth algorithms is introduced for constructing quasi-cyclic LDPC codes. The algorithm can be used to form a distance or Tanner graph of a code. This method can also obtain codes over a wide range of parameters. Cycles of length four are avoided by observing the row-column constraint (a sketch of this constraint follows the abstract). Row-column connections observing this condition are searched sequentially or randomly. Although the girth conditions are not sufficient beyond six, larger-girth codes were easily obtained, especially at low rates. The advantage of this algorithm compared to other methods is its flexibility. It can be used to construct codes for a given rate and length with girths of at least six for any sub-matrix configuration or rearrangement. The code size is also easily varied by increasing or decreasing the sub-matrix size. Codes obtained using a sequential search criterion show poor performance at low girths (6 and 8), while random searches result in good performing codes. Quasi-cyclic codes can be implemented in a variety of decoder architectures. One of the many options is the choice of processing-node interconnect. We show how quasi-cyclic code processing can be scheduled through a multistage network. Although these networks have more delay than other modes of communication, they offer more flexibility at a reasonable cost. Banyan and Benes networks are suggested as the most suitable networks. Decoding delay is also one of several issues considered in decoder design and implementation. In this thesis, we overlap check and variable node computations to reduce decoding time. Three techniques are discussed, two of which are introduced in this thesis. The techniques are code matrix permutation, matrix space restriction and sub-matrix row-column scheduling. Matrix permutation rearranges the parity-check matrix such that rows and columns that do not have connections in common are separated. This technique can be applied to any matrix. Its effectiveness largely depends on the structure of the code. We show that its success also depends on the size of the row and column weights. Matrix space restriction is another technique that can be applied to any code and has a fixed reduction in time or amount of overlap. Its success depends on the amount of restriction and may be traded against performance loss. The third technique, already suggested in the literature, relies on the internal cyclic structure of sub-matrices to achieve overlapping. That technique is limited to LDPC code matrices in which the number of sub-matrices is equal to the row and column weights. We show that it can be applied to other codes with a larger number of sub-matrices than code weights. However, in this case maximum overlap is not guaranteed. We calculate the lower bound on the amount of overlapping. Overlapping can be applied to any sub-matrix configuration of quasi-cyclic codes by arbitrarily choosing the starting rows for processing. Overlapping decoding time depends on inter-iteration waiting times. We show that there are upper bounds on waiting times which depend on the code weights. Waiting times can be further reduced by restricting shifts in identity sub-matrices or by using smaller sub-matrices. This overlapping technique can reduce the decoding time by up to 50% compared to conventional message and computation scheduling. The techniques of matrix permutation and space restriction result in decoder architectures that are flexible in LDPC code design in terms of code weights and size. This is due to the fact that with these techniques, rows and columns are processed in sequential order to achieve overlapping. However, in the existing technique, all sub-matrices have to be processed in parallel to achieve overlapping. Parallel processing of all code sub-matrices requires the architecture to have a number of processing units at least equal to the number of sub-matrices. Processing units and memory space should therefore be distributed among the sub-matrices according to the sub-matrix arrangement. This leads to high complexity or inflexibility in the decoder architecture. We propose a simple, programmable and high throughput decoder architecture based on the matrix permutation and space restriction techniques.
Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2007
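
The row-column constraint invoked in this abstract for avoiding length-4 cycles can be checked mechanically; the following sketch (an illustration of the constraint itself, not Malema's search algorithm) tests whether any two rows of H share ones in more than one column:

import numpy as np

def satisfies_rc_constraint(H):
    # Row-column constraint: no two rows share ones in more than one
    # column, equivalently the Tanner graph has no 4-cycles (girth >= 6).
    overlaps = H @ H.T                   # pairwise row overlap counts
    np.fill_diagonal(overlaps, 0)
    return overlaps.max() <= 1

H_bad = np.array([[1, 1, 0],
                  [1, 1, 0]])           # rows overlap in two columns
print(satisfies_rc_constraint(H_bad))   # False: this pair forms a 4-cycle
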
APA, Harvard, Vancouver, ISO, and other styles
48

Jhih-Siang, Wang, and 王致翔. "Density Evolution for Low-Density Parity-Check Codes." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/83172527534429207322.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Communications Engineering
99
In this thesis, we study low-density parity-check (LDPC) codes and density evolution (DE). LDPC codes are a class of linear block codes that provide near-capacity performance for data transmission when decoded with the iterative belief propagation (BP) algorithm. With a well-designed degree distribution, a rate-1/2 LDPC code can approach the Shannon limit to within 0.0045 dB on binary-input AWGN channels. Density evolution is a useful and straightforward method for analyzing LDPC codes. Under three assumptions, we can simulate and find the threshold for binary-input memoryless channels at different rates. Using density evolution, we can determine whether a designed LDPC code performs well, and then search for good degree distributions so that the code can approach the Shannon limit asymptotically as the block length tends to infinity.
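
Density evolution is especially transparent on the binary erasure channel, where it reduces to a one-line recursion; the AWGN analysis treated in this thesis is the harder, density-valued analogue. The sketch below (a standard textbook computation, not taken from the thesis) bisects for the erasure threshold of the regular (3,6) ensemble and lands near the well-known value 0.4294:

def bec_de_converges(eps, dv=3, dc=6, iters=10000, tol=1e-12):
    # BEC density evolution for a regular (dv, dc) LDPC ensemble:
    # x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1)
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

lo, hi = 0.0, 1.0
for _ in range(40):                      # bisect for the threshold eps*
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if bec_de_converges(mid) else (lo, mid)
print(f"BEC threshold of the (3,6) ensemble: {lo:.4f}")
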
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Huang-Chang, and 李晃昌. "Decoding Scheduling for Low-Density Parity-Check Codes." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/9q2ctb.

Full text
Abstract:
Doctoral dissertation
National Tsing Hua University
Department of Electrical Engineering
103
When the iterative belief propagation (BP) decoding algorithm is applied to low-density parity-check (LDPC) codes, the convergence speed in the waterfall region and the error floor in the high-SNR region are two of the most important performance metrics. Both can be significantly improved using the scheduling techniques proposed in this thesis. Fast convergence can be achieved using informed dynamic scheduling (IDS), since the important decoding messages have more opportunities of being updated. However, greedy groups and silent variable nodes can be observed in many IDS decoders, and these prevent the decoders from providing satisfactory convergence and error-rate performance. In this thesis, Q-RBP (Quota-based Residual BP) and SVNF-RBP (Silent-Variable-Node-Free Residual BP) are proposed in order to suppress greedy groups and silent variable nodes, respectively. Since the number of updates for each message is limited by the proposed Q-RBP schedule, the message updates that would potentially form a greedy group are forced to release the occupied computation resources. On the other hand, following the SVNF-RBP schedule, the messages associated with all variable nodes are arranged to have an equal chance of contributing their intrinsic messages, and hence silent variable nodes are totally avoided. Both the Q-RBP and SVNF-RBP schedules provide a significant improvement in decoding performance when compared to other IDS decoders presented in the previous literature. Additional pre-computations are required in most IDS decoders, including the Q-RBP and SVNF-RBP schedules, in order to arrange customized decoding sequences for individually received codewords. However, rather than arranging the decoding schedule based on each received codeword, the proposed maximum mutual information increase (M^2I^2)-based algorithm determines the schedule by maximizing the increase in mutual information. A pre-determined, fixed decoding schedule can be applied to all codewords, and the decoding convergence can be accelerated without increasing the decoding complexity. Moreover, when multiple distinct schedules are applied to a single codeword to create schedule diversity, the error floor can be significantly lowered without requiring any knowledge of trapping sets. When the proposed decoding schedules are applied to punctured LDPC codes, the benefit in convergence speed can be even more significant than for dedicated codes. If rate-compatible (RC) LDPC codes constructed by puncturing are considered, the M^2I^2-based algorithm can be used to arrange fixed schedules for incremental decoding and further reduce the required number of iterations. With the assistance of the proposed decoding schedules, puncture-based RC-LDPC codes are a potential solution for delay-sensitive HARQ (hybrid automatic repeat request) applications.
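
The greedy core shared by residual-BP schedules is a priority queue keyed on message residuals. The fragment below is a heavily simplified Python illustration of that ordering step only; the quota rule of Q-RBP and the node-fairness rule of SVNF-RBP, which are the thesis's contributions, are deliberately omitted:

import heapq

def greedy_residual_order(residuals):
    # Residual BP core: always process the message whose pending change
    # (residual) is largest; a real decoder recomputes the residuals of
    # affected neighbours and re-inserts them after every update.
    heap = [(-r, edge) for edge, r in residuals.items()]
    heapq.heapify(heap)
    while heap:
        neg_r, edge = heapq.heappop(heap)
        yield edge, -neg_r

demo = {("c0", "v3"): 0.9, ("c1", "v0"): 2.4, ("c2", "v1"): 0.1}
for edge, r in greedy_residual_order(demo):
    print(edge, r)   # ('c1', 'v0') first: the largest residual wins
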
APA, Harvard, Vancouver, ISO, and other styles
50

Smith, Benjamin. "Low-density parity-check codes with reduced decoding complexity." 2007. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=452849&T=F.

Full text
APA, Harvard, Vancouver, ISO, and other styles