Dissertations / Theses on the topic 'Parity check decoder'

Consult the top 50 dissertations / theses for your research on the topic 'Parity check decoder.'

1

Vijayakumar, Suresh. "FPGA implementation of low density parity check codes decoder." [Denton, Tex.] : University of North Texas, 2009. http://digital.library.unt.edu/permalink/meta-dc-11003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Kai. "High-Performance Decoder Architectures For Low-Density Parity-Check Codes." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/17.

Full text
Abstract:
Low-Density Parity-Check (LDPC) codes, which were invented by Gallager back in the 1960s, have attracted considerable attention recently. Compared with other error correction codes, LDPC codes are well suited for wireless, optical, and magnetic recording systems due to their near-Shannon-limit error-correcting capacity, high intrinsic parallelism and high-throughput potential. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMax), 802.15.3c (WPAN), DVB-S2 and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors in sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput and rate flexibility. With this work we investigate tradeoffs among the four performance aspects and develop several decoder architectures that improve one or several aspects while maintaining acceptable values for the others. Firstly, we present a high-throughput decoder design for Quasi-Cyclic (QC) LDPC codes. Two new techniques are proposed: a parallel layered decoding architecture (PLDA) and critical path splitting. The parallel layered decoding architecture enables parallel processing for all layers by establishing dedicated message-passing paths among them, allowing the decoder to avoid a large crossbar-based interconnect network. The critical path splitting technique carefully adjusts the starting point of each layer to maximize the time intervals between adjacent layers, so that the critical path delay can be split into pipeline stages. Furthermore, min-sum and loosely coupled algorithms are employed for area efficiency. As a case study, a rate-1/2 2304-bit irregular LDPC decoder is implemented as an ASIC in a 90 nm CMOS process.
The decoder achieves an input throughput of 1.1 Gbps, a 3- to 4-fold improvement over state-of-the-art LDPC decoders, while maintaining a comparable chip size of 2.9 mm^2. Secondly, we present a high-throughput decoder architecture for rate-compatible (RC) LDPC codes which supports arbitrary code rates between the rate of the mother code and 1. While the original PLDA lacks rate flexibility, the problem is solved gracefully by incorporating a puncturing scheme. Simulation results show that the selected puncturing scheme introduces a BER degradation of less than 0.2 dB compared with the dedicated codes for different rates specified in the IEEE 802.16e (WiMax) standard. Subsequently, PLDA is employed for the high-throughput decoder design. As a case study, an RC-LDPC decoder based on the rate-1/2 WiMax LDPC code is implemented in a 90 nm CMOS process. The decoder achieves an input throughput of 975 Mbps and supports any rate between 1/2 and 1. Thirdly, we develop a low-complexity VLSI architecture and implementation for the LDPC decoder used in China Multimedia Mobile Broadcasting (CMMB) systems. An area-efficient layered decoding architecture based on the min-sum algorithm is incorporated in the design. A novel split-memory architecture is developed to efficiently handle the weight-2 submatrices that are rarely seen in conventional LDPC decoders. In addition, the check-node processing unit is highly optimized to minimize complexity and computing latency while facilitating a reconfigurable decoding core. Finally, we propose an LDPC-decoder-like channel detector for sparse ISI channels using belief propagation (BP). The complexity of BP-based detection depends only on the number of nonzero interferers, so it is well suited for sparse ISI channels, which are characterized by long delay but a small fraction of nonzero interferers. The layered decoding algorithm, which is popular in LDPC decoding, is also adopted in this work.
Simulation results show that layered decoding doubles the convergence speed of the iterative belief propagation process. By exploiting the special structure of the connections between the check nodes and the variable nodes on the factor graph, we propose an effective detector architecture for generic sparse ISI channels to facilitate the practical application of the proposed detection algorithm. The proposed architecture is also reconfigurable, so that the connections on the factor graph can be switched flexibly in time-varying ISI channels.
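The min-sum check-node rule mentioned in the abstract above can be sketched in a few lines; this is a generic illustration of the algorithm, not code from the dissertation:

```python
def min_sum_check_update(msgs):
    """Min-sum check-node update: each outgoing message is the product of
    the signs of all *other* inputs times their minimum magnitude."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        neg = sum(1 for m in others if m < 0)
        sign = -1 if neg % 2 else 1
        out.append(sign * min(abs(m) for m in others))
    return out
```

Layered decoding applies this update one block row (layer) of the parity-check matrix at a time, so updated messages are reused within the same iteration.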
APA, Harvard, Vancouver, ISO, and other styles
3

Kopparthi, Sunitha. "Flexible encoder and decoder designs for low-density parity-check codes." Diss., Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/4190.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hussein, Ahmed Refaey Ahmed. "Universal Decoder for Low Density Parity Check, Turbo and Convolutional Codes." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28154/28154.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Selvarathinam, Anand Manivannan. "High throughput low power decoder architectures for low density parity check codes." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2529.

Full text
Abstract:
A high-throughput scalable decoder architecture, a tiling approach to reduce the complexity of the scalable architecture, and two low-power decoding schemes are proposed in this research. The proposed scalable design is generated from a serial architecture by scaling the combinational logic, partitioning the memory, and constructing a novel H matrix to make parallelization possible. The scalable architecture achieves high throughput for higher values of the parallelization factor M. The switch logic used to route the bit nodes to the appropriate checks is an important constituent of the scalable architecture, and its complexity grows with M. The proposed tiling approach is applied to the scalable architecture to simplify the switch logic and reduce gate complexity. The tiling approach generates patterns that are used to construct the H matrix by repeating a fixed number of those generated patterns. The advantages of the proposed approach are two-fold. First, the information stored about the H matrix is reduced by one-third. Second, the switch logic of the scalable architecture is simplified. The H matrix information is also embedded in the switch, so no external memory is needed to store the H matrix. The scalable architecture and tiling approach are proposed at the architectural level of the LDPC decoder. We also propose two low-power decoding schemes that take advantage of the distribution of errors in the received packets. Both schemes use a hard iteration after a fixed number of soft iterations. The dynamic scheme performs X soft iterations, then evaluates the parity check cHT, which gives the number of parity checks in error. Based on the cHT value, the decoder decides whether to perform further soft iterations or a hard iteration. The advantage of the hard iteration is so significant that the second low-power scheme performs a fixed number of soft iterations followed by a hard iteration.
To maintain the bit error rate performance, the number of soft iterations in this case is higher than the number performed before the cHT check in the first scheme.
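The cHT quantity described above is the syndrome weight, i.e. the number of unsatisfied parity checks for the current hard decisions. A minimal generic sketch (not the thesis implementation), with the parity-check matrix given as a list of rows, each row listing the bit positions it covers:

```python
def unsatisfied_checks(check_rows, hard_bits):
    """Syndrome weight: each row lists the bit positions in one parity
    check; the check fails when the mod-2 sum (XOR) of those bits is 1."""
    return sum(sum(hard_bits[j] for j in row) % 2 for row in check_rows)
```

A low-power controller would compare this count against a threshold to choose between further soft iterations and a cheap hard iteration.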
APA, Harvard, Vancouver, ISO, and other styles
6

Cai, Fang. "Efficient VLSI Architectures for Non-binary Low Density Parity Check Decoding." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1300821245.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shadab, Rakin Muhammad. "Statistical Analysis of a Channel Emulator for Noisy Gradient Descent Low Density Parity Check Decoder." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7582.

Full text
Abstract:
The purpose of a channel emulator is to emulate a communication channel in real-life use-case scenarios. These emulators are often used in digital and wireless communication research. One such area is error correction coding, where data bits transmitted over a channel are decoded and corrected to prevent data loss. A channel emulator that does not follow the properties of the channel it is intended to replicate can lead to mistakes when analyzing the performance of an error-correcting decoder. Hence, it is crucial to validate an emulator for a particular communication channel. This work delves into the statistics of a channel emulator and analyzes its effects on a particular decoder.
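As a rough illustration of the statistics such an emulator must reproduce, the sketch below models a BPSK-modulated AWGN channel. The function name and the unit-energy, rate-1 assumptions are illustrative and not taken from the thesis:

```python
import random

def awgn_emulate(bits, ebn0_db, seed=0):
    """BPSK-modulate bits (0 -> +1, 1 -> -1) and add white Gaussian noise
    for a given Eb/N0 in dB, assuming unit symbol energy and code rate 1."""
    rng = random.Random(seed)
    sigma = (10 ** (-ebn0_db / 10) / 2) ** 0.5  # noise standard deviation
    return [(1.0 if b == 0 else -1.0) + rng.gauss(0.0, sigma) for b in bits]
```

Validating an emulator amounts to checking that its output samples match the target distribution, e.g. that the empirical mean and variance agree with the modulated symbol and the configured sigma.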
APA, Harvard, Vancouver, ISO, and other styles
8

Yang, Lei. "VLSI implementation of low-error-floor multi-rate capacity-approaching low-density parity-check code decoder /." Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/5966.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Von, Leipzig Mirko. "Code generation and simulation of an automatic, flexible QC-LDPC hardware decoder." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96835.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: Iterative error correcting codes such as LDPC codes have become prominent in modern forward error correction systems. A particular subclass of LDPC codes known as quasi-cyclic LDPC codes has been incorporated in numerous high-speed wireless communication and video broadcasting standards. These standards feature multiple codes with varying codeword lengths and code rates and require a high throughput. Flexible hardware that is capable of decoding multiple quasi-cyclic LDPC codes is therefore desirable. This thesis investigates binary quasi-cyclic LDPC codes and designs a generic, flexible VHDL decoder. The decoder is further enhanced to automatically select the most likely decoder based on the initial a posteriori probability of the parity-check equation syndromes. A software system is developed that generates hardware code for such a decoder based on a small user specification. The system is extended to provide performance simulations for this generated decoder.
APA, Harvard, Vancouver, ISO, and other styles
10

Yang, Lan. "An Area-Efficient Architecture for the Implementation of LDPC Decoder." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1300337576.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Frenzel, Janik. "Decoder Improvements for Fast Decoding of Low-Density Parity-Check Codes." Supervisor: Johannes Huber; reviewers: Johannes Huber, Berthold Lankl. Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2020. http://d-nb.info/1217065237/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Gunnam, Kiran Kumar. "Area and energy efficient VLSI architectures for low-density parity-check decoders using an on-the-fly computation." College Station, Tex. : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Matcha, Chaitanya Kumar, Mohsen Bahrami, Shounak Roy, Shayan Garani Srinivasa, and Bane Vasic. "Generalized belief propagation based TDMR detector and decoder." IEEE, 2016. http://hdl.handle.net/10150/622831.

Full text
Abstract:
Two-dimensional magnetic recording (TDMR) achieves high areal densities by reducing the size of a bit to be comparable to the size of the magnetic grains, resulting in two-dimensional (2D) inter-symbol interference (ISI) and very high media noise. Therefore, it is critical to handle the media noise along with the 2D ISI detection. In this paper, we tune the generalized belief propagation (GBP) algorithm to handle the media noise seen in TDMR. We also provide an intuition into the nature of hard decisions provided by the GBP algorithm. The performance of the GBP algorithm is evaluated over a Voronoi-based TDMR channel model, where the soft outputs from the GBP algorithm are used by a belief propagation (BP) algorithm to decode low-density parity-check (LDPC) codes.
APA, Harvard, Vancouver, ISO, and other styles
14

Vijayakumar, Suresh. "FPGA Implementation of Low Density Party Check Codes Decoder." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc11003/.

Full text
Abstract:
Reliable communication over noisy channels has become one of the major concerns in the field of digital wireless communications. Low-density parity-check (LDPC) codes have gained a lot of attention recently because of their excellent error-correcting capacity. They were first proposed by Robert G. Gallager in 1960. LDPC codes belong to the class of linear block codes, and near-capacity performance is achievable on a large collection of data transmission and storage channels. In my thesis I have focused on the hardware implementation of (3, 6)-regular LDPC codes. A fully parallel decoder would require too high a hardware realization complexity; a partly parallel decoder offers an effective compromise between decoding throughput and hardware complexity. The decoding of the codeword follows the belief propagation (also known as probability propagation) algorithm in the log domain. A 9216-bit, (3, 6)-regular LDPC code with code rate 1/2 was implemented on an FPGA targeting the Xilinx Virtex 4 XC4VLX80 device with package FF1148. This decoder achieves a maximum throughput of 82 Mbps. The entire model was designed in VHDL in the Xilinx ISE 9.2 environment.
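The log-domain belief propagation mentioned above combines log-likelihood ratios (LLRs) additively at the variable nodes. A generic sketch of that update (not the thesis code):

```python
def variable_node_update(channel_llr, check_msgs):
    """Log-domain variable-node update: the message to each check node is
    the channel LLR plus all *other* incoming check-to-variable LLRs."""
    total = channel_llr + sum(check_msgs)
    return [total - m for m in check_msgs]
```

The hard decision for the bit is simply the sign of `total`, which is what a partly parallel decoder evaluates when testing the parity checks.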
APA, Harvard, Vancouver, ISO, and other styles
15

Planjery, Shiva Kumar. "Low-Complexity Finite Precision Decoders for Low-Density Parity-Check Codes." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605947.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
We present a new class of finite-precision decoders for low-density parity-check (LDPC) codes. These decoders are much lower in complexity compared to conventional floating-point decoders such as the belief propagation (BP) decoder, but they have the potential to outperform BP. The messages utilized by the decoders assume values (or levels) from a finite discrete set. We discuss the implementation aspects as well as describe the underlying philosophy in designing these decoders. We also provide results to show that in some cases, only 3 bits are required in the proposed decoders to outperform floating-point BP.
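The finite discrete message alphabet described above can be illustrated with a simple nearest-level quantizer; this 3-bit level set is hypothetical and not the actual mapping used by the proposed decoders:

```python
def quantize_llr(llr, levels=(-3, -2, -1, 0, 1, 2, 3)):
    """Map a real-valued LLR to the nearest level of a small discrete
    alphabet, as in finite-precision decoders (levels are illustrative)."""
    return min(levels, key=lambda v: abs(v - llr))
```

With only a handful of levels, all node updates reduce to small lookup tables, which is the source of the complexity savings claimed above.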
APA, Harvard, Vancouver, ISO, and other styles
16

Blad, Anton. "Low Complexity Techniques for Low Density Parity Check Code Decoders and Parallel Sigma-Delta ADC Structures." Doctoral thesis, Linköpings universitet, Elektroniksystem, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69432.

Full text
Abstract:
Since their rediscovery in 1995, low-density parity-check (LDPC) codes have received widespread attention as practical capacity-approaching code candidates. It has been shown that the class of codes can perform arbitrarily close to the channel capacity, and LDPC codes are also used or suggested for a number of important current and future communication standards. However, the problem of implementing an energy-efficient decoder has not yet been solved. Whereas the decoding algorithm is computationally simple, with uncomplicated arithmetic operations and low accuracy requirements, the random structure and irregularity of a theoretically well-defined code does not easily allow efficient VLSI implementations. Thus the LDPC decoding algorithm can be said to be communication-bound rather than computation-bound. In this thesis, a modification to the sum-product decoding algorithm called early-decision decoding is suggested. The modification is based on the idea that the values of the bits in a block can be decided individually during decoding. As the sum-product decoding algorithm is a soft-decision decoder, a reliability can be defined for each bit. When the reliability of a bit is above a certain threshold, the bit can be removed from the rest of the decoding process, and thus the internal communication associated with the bit can be removed in subsequent iterations. However, the early-decision modification introduces an increased error probability. Thus, bounds on the achievable performance, as well as methods to detect graph inconsistencies resulting from erroneous decisions, are presented. Also, a hybrid decoder achieving a negligible performance penalty compared to the sum-product decoder is presented. With the hybrid decoder, the internal communication is reduced by up to 40% for a rate-1/2 code with a length of 1152 bits, whereas increasing the rate allows significantly higher gains.
The algorithms have been implemented in a Xilinx Virtex 5 FPGA, and the resulting slice utilization and energy dissipation have been estimated. Due to the increased logic overhead of the early-decision decoder, the slice utilization increases from 14.5% to 21.0%, and the logic energy dissipation reduction from 499 pJ to 291 pJ per iteration and bit is partially offset by the clock distribution power, which increases from 141 pJ to 191 pJ per iteration and bit. Still, the early-decision decoder shows a net 16% estimated decrease in energy dissipation.
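The early-decision idea above, freezing bits whose reliability exceeds a threshold, can be sketched generically (threshold value and data layout are illustrative, not the thesis design):

```python
def early_decide(llrs, threshold):
    """Split bits into hard-decided (|LLR| >= threshold) and still-active
    sets; decided bits drop out of message passing in later iterations."""
    decided, active = {}, []
    for i, llr in enumerate(llrs):
        if abs(llr) >= threshold:
            decided[i] = 0 if llr >= 0 else 1
        else:
            active.append(i)
    return decided, active
```

Only the `active` positions continue to exchange messages, which is where the reported reduction in internal communication comes from.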
APA, Harvard, Vancouver, ISO, and other styles
17

Guilloud, Frédéric. "Architecture generique de decodeurs de codes LDPC." Paris, ENST, 2004. http://www.theses.fr/2004ENST0004.

Full text
Abstract:
Low-Density Parity-Check codes are among the most powerful error correcting codes, since they enable performance within a fraction of a dB of the Shannon limit. They are now beginning to appear in industrial standards such as the second-generation digital satellite broadcasting standard (DVB-S2). In this thesis, we focused our research on iterative decoding algorithms and their hardware implementations. We first proposed the lambda-min algorithm, which significantly reduces the complexity of the decoder without any significant performance loss compared to the belief propagation (BP) algorithm. We then designed a generic architecture for an LDPC decoder, which has been implemented on an FPGA-based platform. This hardware decoder accelerates simulations by a factor of more than 500 compared to software simulations. Moreover, thanks to its fully tunable design, our decoder offers many features: it can be configured for a very wide family of codes, so that the search for good codes proceeds faster; the genericity of the processing components makes it possible to optimize the internal number format, and even to compare various decoding algorithms and various processing schedules. Finally, our experience in the area of LDPC decoders led us to propose a formal framework for analyzing the architectures of LDPC decoders. This framework encompasses both the datapath (parallelism, node processor architectures) and the control mode associated with the several decoding schedules. Within this framework, a classification of the different state-of-the-art LDPC decoders is proposed, and efficient, previously unpublished architectures are presented.
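The lambda-min idea described above restricts the check-node magnitudes to the lambda smallest inputs, so only those few values need to be stored and compared. A generic sketch of the rule (not the thesis implementation):

```python
def lambda_min_update(msgs, lam=2):
    """Lambda-min check-node update: signs follow the usual min-sum rule,
    but magnitudes are taken from only the lam smallest input magnitudes."""
    kept = sorted(range(len(msgs)), key=lambda i: abs(msgs[i]))[:lam]
    out = []
    for i in range(len(msgs)):
        neg = sum(1 for j, m in enumerate(msgs) if j != i and m < 0)
        sign = -1 if neg % 2 else 1
        cand = [abs(msgs[j]) for j in kept if j != i]
        if not cand:  # only possible when lam == 1; fall back to all others
            cand = [abs(m) for j, m in enumerate(msgs) if j != i]
        out.append(sign * min(cand))
    return out
```

Setting `lam` equal to the check-node degree recovers plain min-sum; small `lam` trades a little accuracy for much less storage.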
APA, Harvard, Vancouver, ISO, and other styles
18

Lin, Kai-Li, and 林凱立. "High-Throughput Low-Density Parity-Check Code Decoder Designs." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/04211570571391710106.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Electronics Engineering, ROC academic year 93 (2004/05).
In this thesis, two high-throughput low-density parity-check (LDPC) code decoders are presented. The first one is a (600, 450) LDPC code decoder for MB-OFDM UWB applications. The architecture adopts a re-scheduled data flow for channel values and a pipeline structure to reduce routing congestion and critical path delay. Fabricated in a 0.18 µm 1P6M process, the proposed partially parallel decoder can support a 480 Mb/s data rate under 8 decoding iterations. The second decoder is implemented based on a (1200, 720) irregular parity check matrix. To achieve higher chip density and less interconnection delay, the proposed architecture features a new data reordering technique that simplifies the data bus between message memories and computational units; as a result, the chip size can be greatly reduced due to the increased chip density. Moreover, the LDPC decoder can also process two different codewords concurrently to increase throughput and datapath efficiency. After chip implementation in a 0.18 µm 1P6M process, a 3.33 Gb/s data rate with 8 decoding iterations is achieved in a 21.23 mm2 silicon area. Another experiment using 0.13 µm 1P8M technology further reaches a 5.92 Gb/s data rate within a 10.24 mm2 area, with a chip density of 75.4%.
APA, Harvard, Vancouver, ISO, and other styles
19

Lee, Feng-En, and 李逢恩. "Configurable Parity-Check Polar Code Decoder for 5G Applications." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/musy3c.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Electronics, ROC academic year 107 (2018/19).
Polar codes were proposed in 2009 by Arikan and selected in 2016 as the coding scheme for the control channel of the 5G communication system. They were proved to achieve channel capacity theoretically. Since then, researchers have explored them further, proposing successive cancellation list (SCL) decoding as a step beyond SC decoding, and Cyclic-Redundancy-Check-aided (CA-)SCL decoding to improve performance further. However, there is still room for joint optimization between the CRC and SCL decoding. To further improve performance, we present a Parity-Check Polar (PC-Polar) code decoder. In addition, to satisfy the requirements of 5G applications, we propose configurable hardware supporting multiple code lengths and code rates. In this thesis, we implement the PC-Polar decoder in the TSMC 90 nm CMOS process at the post-layout stage. We achieve a throughput of 256 Mbps with 352K gates at a frequency of 645 MHz. The power consumption is 292 mW.
APA, Harvard, Vancouver, ISO, and other styles
20

Liao, Wei-Chung, and 廖惟中. "Design of Low-Cost Low-Density Parity-Check Code Decoder." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/35125395598491479322.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Department of Computer Science and Engineering, ROC academic year 93 (2004/05).
With the enormous growth of mobile communication applications, reducing the power dissipation of wireless communication has become an important issue that attracts much attention. One of the key techniques to achieve low-power transmission is to develop a powerful channel coding scheme that provides good error-correcting capability even at low signal-to-noise ratio. In recent years, the trend of error control code development has been based on iterative decoding algorithms, which can lead to higher coding gain. In particular, the rediscovered low-density parity-check (LDPC) code has become the most famous code after the introduction of the Turbo code, since it is the code closest to the well-known Shannon limit. However, since the block size used in LDPC is usually very large, and the parity matrix used in LDPC is quite random, the hardware implementation of LDPC is very difficult: it may require a significant number of arithmetic units as well as a very complex routing topology. Therefore, this thesis addresses several design issues of LDPC decoders. First, under the condition of no SNR estimation, simulation results for several LDPC architectures are provided, showing that some architectures can achieve performance close to those with SNR estimation. Second, a novel message quantization method is proposed and applied in the LDPC design to reduce the memory and table sizes as well as the routing complexity. Finally, several early termination schemes for LDPC are considered, and it is found that up to 42% of bit-node operations can be saved.
APA, Harvard, Vancouver, ISO, and other styles
21

Tsai, Chen-Yuan, and 蔡鎮遠. "Design of Low-Parity Check Code Decoder for 802.16e Standard." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/15495709189221411177.

Full text
Abstract:
Master's thesis, National Yunlin University of Science and Technology, Department of Electrical Engineering, ROC academic year 100 (2011/12).
Low-density parity-check (LDPC) codes have been shown to offer error-correcting ability very close to the Shannon limit at low complexity, giving very good anti-interference capability over the channel. This thesis mainly studies the adaptive-normalized min-sum algorithm (AN-MS) for the LDPC code decoder and compares its performance with the commonly used min-sum algorithm (MSA); the main goal is to obtain the best system bit error rate. Using the IEEE 802.16e standard, a proper iteration number of 50 is chosen for both the AN-MS and MSA algorithms. To facilitate the hardware design, the input data width is set to 5 bits, and a normalization factor of 0.75 is applied in AN-MS for LDPC code rates 1/2, 2/3, 3/4, and 5/6. With these parameter settings, simulation results show that AN-MS outperforms MSA by about 0.2–0.3 dB; the compensation method improves the performance degradation caused by the MSA simplification and yields a better bit error rate.
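The normalized min-sum update with the 0.75 factor mentioned above can be sketched as follows; this is a generic illustration of the normalization step, not the thesis code:

```python
def normalized_min_sum_update(msgs, alpha=0.75):
    """Normalized min-sum: the plain min-sum check-node output is scaled
    by a normalization factor alpha (0.75 in the thesis above)."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        neg = sum(1 for m in others if m < 0)
        sign = -1 if neg % 2 else 1
        out.append(alpha * sign * min(abs(m) for m in others))
    return out
```

Scaling by alpha < 1 compensates for min-sum's systematic overestimate of message magnitudes relative to sum-product, which is where the reported 0.2–0.3 dB gain comes from.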
APA, Harvard, Vancouver, ISO, and other styles
22

Rabbani, Abolfazli Ali Reza. "An Analog Decoder for Turbo-Structured Low-Density Parity-Check Codes." Thesis, 2012. http://spectrum.library.concordia.ca/975021/1/RabbaniAbolfazli%2DPhD%2DS2013.pdf.

Full text
Abstract:
In this work, we consider a class of structured regular LDPC codes called Turbo-Structured LDPC (TS-LDPC). TS-LDPC codes outperform random LDPC codes and have a much lower error floor at high Signal-to-Noise Ratio (SNR). In this thesis, Min-Sum (MS) algorithms are adopted for the decoding of TS-LDPC codes due to their low implementation complexity. We show that the error performance of the MS-based TS-LDPC decoder is comparable with that of the Sum-Product (SP) based decoder and that the error floor property of TS-LDPC codes is preserved. The TS-LDPC decoding algorithms can be performed by analog or digital circuitry. Analog decoders are preferred in many communication systems due to their potential for higher speed, lower power dissipation and smaller chip area compared to their digital counterparts. In this work, implementation of the (120, 75) MS-based TS-LDPC analog decoder is considered. The decoder chip consists of an analog decoder core, a digital input block and a digital output block. The digital blocks are required to deliver the received signal to the analog core and transfer the estimated codewords to the off-chip module. The analog core is an analog processor performing decoding on the Tanner graph of the code. Variable and check nodes, the main building blocks of the analog decoder, are designed and evaluated. The check node is the most complicated unit in MS-based decoders. The minimizer circuit, the fundamental block of a check node, is designed to strike a good trade-off between speed and accuracy. In addition, the structure of a high-degree minimizer is proposed considering the accuracy, speed, power consumption and robustness against mismatch of the check node unit. The measurement results demonstrate that the error performance of the chip is comparable with theory. The SNR loss at a Bit-Error-Rate of 10^-5 is only 0.2 dB compared to theory, while the information throughput is 750 Mb/s and the energy efficiency of the decoder chip is 17 pJ/b.
It is shown that the proposed decoder outperforms the analog decoders fabricated to date in terms of error performance, throughput and energy efficiency. This decoder is the first analog decoder ever implemented in a sub-100-nm technology, and it improves the throughput of analog decoders by a factor of 56. This decoder sets a new state of the art in analog decoding.
APA, Harvard, Vancouver, ISO, and other styles
23

Chien, Yi-Hsing, and 簡義興. "Design and Implementation of Configurable Low-Density Parity-Check Codes Decoder." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/64850195340982427366.

Full text
Abstract:
Ph.D.
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
101
Low-density parity-check (LDPC) codes have been shown to provide near-Shannon-limit performance for communication systems. Implementation cost, throughput, and power consumption are the most important issues for all communication designs. In this dissertation, three techniques are proposed for efficient LDPC decoders: 1) a scheduling algorithm for high hardware utilization efficiency and throughput, 2) an iteration control algorithm for power efficiency and throughput, and 3) an enhanced min-sum decoding algorithm for better coding gains. First, we present the design of a high-throughput LDPC decoder using an overlapped message-passing scheduling algorithm. A regular hierarchical quasi-cyclic (H-QC) LDPC code is used in this design to provide good coding performance at a long code length. The two-level regular H-QC LDPC code matrix structure is exploited to parallelize the row and column decoding operations. Our scheduling algorithm re-arranges these operations across iteration boundaries to avoid memory access conflicts. The memory requirements are reduced by half compared to pipelined decoders without scheduling. A (12288, 6144) LDPC decoder implemented in FPGA achieves a throughput of 781.63 Mbps. Second, concurrent partially-parallel syndrome computation reduces complexity but incurs increased error rates due to the hard-decision flipping (HDF) problem. We propose a low-complexity iteration control algorithm that eliminates this BER degradation. The HDF rate is also used to terminate undecodable blocks early to further save iterations. The algorithm works over multiple code rates. The implementation results show that a six-rate iteration control logic requires only 23.32% of the hardware of a single-rate fully parallel syndrome circuit. Next, the min-sum algorithm has low complexity and has been shown to significantly reduce memory requirements through compression. 
We propose an enhanced min-sum decoding algorithm that yields better coding gains while maintaining a low implementation cost. The difference between the minimum values in the min-sum algorithm is used as a decision metric to boost unreliable bit nodes in the Tanner graph. The proposed enhanced min-sum algorithm offers a 0.35 dB gain over the scaling min-sum algorithm for the rate-1/2 LDPC code in IEEE 802.16e. The implementation results show that the proposed enhanced min-sum decoding increases the cost of a serial decoding processing unit by less than 6%. Finally, we propose a dynamically configurable layered decoder that supports multiple quasi-cyclic parity-check matrices and different code rates. Parity-check matrices are described in an efficient way to reduce both the memory size and the implementation cost of the address generator. Both the iteration control and enhanced min-sum algorithms are integrated with the decoder for complete verification and performance evaluation.
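The "difference of minimum values" used as the boosting decision above can be illustrated as follows (a sketch under the assumption that the gap between the two smallest input magnitudes, compared against a threshold, flags an unreliable check node; the threshold value here is hypothetical):

```python
def check_node_min1_min2(mags, threshold=0.5):
    """Find the two smallest magnitudes among variable-to-check messages.

    Returns (min1, min2, index_of_min1, unreliable), where 'unreliable'
    flags a small gap between the two minima -- the quantity the enhanced
    algorithm uses to decide which bit nodes to boost. The threshold is an
    illustrative assumption, not a value from the thesis.
    """
    idx1 = min(range(len(mags)), key=lambda i: mags[i])
    min1 = mags[idx1]
    min2 = min(mags[i] for i in range(len(mags)) if i != idx1)
    return min1, min2, idx1, (min2 - min1) < threshold
```

In standard min-sum hardware, min1 and min2 are already computed per check node, so using their difference adds little extra cost.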
APA, Harvard, Vancouver, ISO, and other styles
24

Brandon, Tyler. "Parallel-Node Low-Density Parity-Check Convolutional Code Encoder and Decoder Architectures." Phd thesis, 2010. http://hdl.handle.net/10048/981.

Full text
Abstract:
We present novel architectures for parallel-node low-density parity-check convolutional code (PN-LDPC-CC) encoders and decoders. Based on a recently introduced implementation-aware class of LDPC-CCs, these encoders and decoders take advantage of increased node-parallelization to simultaneously decrease the energy-per-bit and increase the decoded information throughput. A series of progressively improved encoder and decoder designs are presented and characterized using synthesis results with respect to power, area and throughput. The best of the encoder and decoder designs significantly advance the state-of-the-art in terms of both the energy-per-bit and throughput/area metrics. One of the presented decoders, at an Eb/N0 of 2.5 dB, has a bit-error-rate of 10−6, occupies 4.5 mm2 in a CMOS 90-nm process, and achieves an energy-per-decoded-information-bit of 65 pJ and a decoded information throughput of 4.8 Gbit/s. We implement an earlier non-parallel-node LDPC-CC encoder, decoder and a channel emulator in silicon. We provide readers, via two sets of tables, the ability to look up our decoder hardware metrics, across four different process technologies, for over 1000 variations of our PN-LDPC-CC decoders. By imposing practical decoder implementation constraints on power or area, which in turn drive trade-offs in code size versus the number of decoder processors, we compare the code BER performance. An extensive comparison to known LDPC-BC/CC decoder implementations is provided.
APA, Harvard, Vancouver, ISO, and other styles
25

Chuang, Ho-Sheng, and 莊和昇. "Chip Design of a Low Density Parity Check Decoder for IEEE 802.11n." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/63678887591316601874.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Electronics Engineering
101
Gallager introduced the Low Density Parity Check (LDPC) code in 1963. Because computing power was very limited at that time, LDPC codes received little attention. However, LDPC has since become an important technique, as advances in semiconductor technology have gradually increased the available computation power. LDPC decoder architectures can be broadly divided into two kinds: fully-parallel and partial-parallel schemes, with several variations on the partial-parallel schemes depending on their parallelization methods. In this thesis, we use a block-serial architecture to implement an LDPC decoder, with three improvements over the previous work proposed by Xiang. The first is to shorten the critical path by reordering the processing step and the storing step, which yields an 11% improvement in clock rate. The second is that the decoder updates the first check node message while it is storing the Log-Likelihood Ratio (LLR) data; as a result, the proposed design uses 3% fewer clock cycles than the previous work. The third is an improved convergence-detection algorithm, which avoids saving the result of the last iteration; therefore, the storage space required for the LLRs can be reduced to 11% of the original size. Finally, a performance evaluation against previous works validates that the proposed design has a smaller chip area and lower power dissipation.
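Convergence detection in LDPC decoders is ultimately a syndrome test: the hard-decision word is a codeword iff every parity check is satisfied. A minimal software model (not the thesis's specific partially-parallel circuit) is:

```python
def syndrome_ok(H, x):
    """Early-termination test: is H @ x = 0 (mod 2)?

    H: parity-check matrix as a list of rows, each a list of 0/1.
    x: hard-decision bits as a list of 0/1.
    Returns True iff every parity check is satisfied.
    """
    for row in H:
        parity = 0
        for h, b in zip(row, x):
            parity ^= h & b  # accumulate the row's parity over GF(2)
        if parity:
            return False
    return True
```

A decoder can run this after each iteration and stop as soon as it returns True, saving iterations on easily decodable blocks.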
APA, Harvard, Vancouver, ISO, and other styles
26

Chan, Ya-Ting, and 詹雅婷. "A Low Density Parity Check Decoder Using New Independent Column Operation Skill." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/48443290580986860899.

Full text
Abstract:
Master's
National Yunlin University of Science and Technology
Graduate School of Electronic and Information Engineering
97
In this thesis, we adopt the Log-Domain Sum-Product Algorithm (Log-SPA) to implement a modified low-density parity-check code decoder, focusing on its check node part. From prior studies, we found that the look-up table is the circuit most commonly implemented in traditional check-node hardware. Since the main goal of this thesis is a low-hardware-cost LDPC decoder, we start from the traditional look-up table and propose a new independent column-wise operation technique to reduce the adder operations. It simply uses the concept of a priority encoder to establish a new binary-weighting look-up table, so that a simplified adder can be used for the independent column-wise operation. The exact variable-node value is then recovered through a recovery look-up table. The proposed decoder is realized in a fully parallel architecture under the IEEE 802.16e standard, with a matrix size of (1248, 624) and a code rate of 1/2. Compared with traditional LDPC decoder hardware, the total chip area is reduced by about 28%, and the operating speed reaches 111 MHz with only a 0.1 dB BER difference, in TSMC 0.18 μm cell-based technology. Under the same conditions, the proposed decoder saves about 22% of the hardware in UMC 90 nm cell-based technology. The implemented test chip is equipped with real-time on-chip self-test circuits, including an Additive White Gaussian Noise generator and associated mode-control circuits. The implemented chip, in TSMC 0.18 μm cell-based technology, shows that our new LDPC decoder architecture has low-cost and high-performance characteristics.
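The priority-encoder concept the design builds on can be modeled in a few lines (an illustrative sketch; the thesis's binary-weighting look-up table itself is not reproduced here):

```python
def priority_encode(bits):
    """Simple priority encoder: index of the highest-order set bit.

    bits: list of 0/1, index 0 = least significant.
    Returns -1 when no bit is set. This is the building block used to
    form a binary-weighting look-up table from a quantized input.
    """
    for i in range(len(bits) - 1, -1, -1):
        if bits[i]:
            return i
    return -1
```

In hardware, such an encoder maps a quantized magnitude to a compact binary weight in a single combinational step, which is what allows the simplified adder in the column-wise operation.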
APA, Harvard, Vancouver, ISO, and other styles
27

Malema, Gabofetswe Alafang. "Low-density parity-check codes : construction and implementation." 2007. http://hdl.handle.net/2440/45525.

Full text
Abstract:
Low-density parity-check (LDPC) codes have been shown to have good error-correcting performance approaching Shannon's limit. Good error-correcting performance enables efficient and reliable communication. However, an LDPC decoding algorithm needs to be executed efficiently to meet the cost, time, power and bandwidth requirements of target applications. The constructed codes should also meet the error-rate performance requirements of those applications. Since their rediscovery, there has been much research on LDPC code construction and implementation. LDPC codes can be designed over a wide space with parameters such as girth, rate and length. There is no unique method of constructing LDPC codes. Existing construction methods are limited in some way in producing easily implementable codes with good error-correcting performance for a given rate and length. There is a need to develop methods of constructing codes over a wide range of rates and lengths with good performance and ease of hardware implementation. LDPC code hardware design and implementation depend on the structure of the target LDPC code and are as varied as LDPC matrix designs and constructions. There are several factors to be considered, including decoding algorithm computations, the processing-node interconnection network, the number of processing nodes, the amount of memory, the number of quantization bits and the decoding delay. All of these issues can be handled in several different ways. This thesis is about the construction of LDPC codes and their hardware implementation. The LDPC code construction and implementation issues mentioned above are too many to be addressed in one thesis. The main contribution of this thesis is the development of LDPC code construction methods for some classes of structured LDPC codes, together with techniques for reducing decoding time. We introduce two main methods for constructing structured codes. In the first method, column-weight-two LDPC codes are derived from distance graphs. 
A wide range of girths, rates and lengths is obtained compared to existing methods. The performance and implementation complexity of the obtained codes depend on the structure of their corresponding distance graphs. In the second method, a search algorithm based on bit-filling and progressive-edge-growth algorithms is introduced for constructing quasi-cyclic LDPC codes. The algorithm can be used to form a distance or Tanner graph of a code. This method can also obtain codes over a wide range of parameters. Cycles of length four are avoided by observing the row-column constraint. Row-column connections observing this condition are searched sequentially or randomly. Although the girth conditions are not sufficient beyond six, larger-girth codes were easily obtained, especially at low rates. The advantage of this algorithm compared to other methods is its flexibility: it can be used to construct codes for a given rate and length with girths of at least six for any sub-matrix configuration or rearrangement. The code size is also easily varied by increasing or decreasing the sub-matrix size. Codes obtained using a sequential search criterion show poor performance at low girths (6 and 8), while random searches result in well-performing codes. Quasi-cyclic codes can be implemented in a variety of decoder architectures. One of the many options is the choice of processing-node interconnect. We show how quasi-cyclic code processing can be scheduled through a multistage network. Although these networks have more delay than other modes of communication, they offer more flexibility at a reasonable cost. Banyan and Benes networks are suggested as the most suitable networks. Decoding delay is also one of several issues considered in decoder design and implementation. In this thesis, we overlap check and variable node computations to reduce decoding time. Three techniques are discussed, two of which are introduced in this thesis. 
The techniques are code matrix permutation, matrix space restriction and sub-matrix row-column scheduling. Matrix permutation rearranges the parity-check matrix such that rows and columns that do not have connections in common are separated. This technique can be applied to any matrix. Its effectiveness largely depends on the structure of the code; we show that its success also depends on the size of the row and column weights. Matrix space restriction is another technique that can be applied to any code and gives a fixed reduction in time or amount of overlap. Its success depends on the amount of restriction and may be traded against performance loss. The third technique, already suggested in the literature, relies on the internal cyclic structure of sub-matrices to achieve overlapping. That technique is limited to LDPC code matrices in which the number of sub-matrices is equal to the row and column weights. We show that it can be applied to other codes with a larger number of sub-matrices than code weights; however, in this case maximum overlap is not guaranteed, and we calculate the lower bound on the amount of overlapping. Overlapping can be applied to any sub-matrix configuration of quasi-cyclic codes by arbitrarily choosing the starting rows for processing. Overlapped decoding time depends on inter-iteration waiting times. We show that there are upper bounds on waiting times which depend on the code weights. Waiting times can be further reduced by restricting shifts in identity sub-matrices or by using smaller sub-matrices. This overlapping technique can reduce the decoding time by up to 50% compared to conventional message and computation scheduling. The matrix permutation and space restriction techniques result in decoder architectures that are flexible in LDPC code design in terms of code weights and size, because with these techniques rows and columns are processed in sequential order to achieve overlapping. 
In the existing technique, however, all sub-matrices have to be processed in parallel to achieve overlapping. Parallel processing of all code sub-matrices requires the architecture to have a number of processing units at least equal to the number of sub-matrices; processing units and memory space must therefore be distributed among the sub-matrices according to their arrangement, which leads to high complexity or inflexibility in the decoder architecture. We propose a simple, programmable and high-throughput decoder architecture based on the matrix permutation and space restriction techniques.
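The row-column constraint used above to avoid length-4 cycles can be stated directly in code (a straightforward check, independent of the thesis's search algorithm): no two columns of the parity-check matrix may share more than one row position.

```python
def satisfies_row_column_constraint(H):
    """True iff no two columns of H share more than one row position.

    H: parity-check matrix as a list of rows of 0/1. Two columns sharing
    two (or more) rows would form a length-4 cycle in the Tanner graph,
    which is exactly what the constraint rules out.
    """
    n_rows, n_cols = len(H), len(H[0])
    # For each column, collect the set of rows where it has a 1.
    cols = [{r for r in range(n_rows) if H[r][c]} for c in range(n_cols)]
    for a in range(n_cols):
        for b in range(a + 1, n_cols):
            if len(cols[a] & cols[b]) > 1:
                return False
    return True
```

A construction algorithm can call this test (or an incremental version of it) each time a candidate edge is added to the Tanner graph.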
Thesis(Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2007
APA, Harvard, Vancouver, ISO, and other styles
29

Elhami-Khorasani, Nazanin. "A multistage scheduled decoder for short block length low-density parity-check codes." Thesis, 2007. http://spectrum.library.concordia.ca/975270/1/MR28914.pdf.

Full text
Abstract:
Recent advances in coding theory have uncovered the previously forgotten power of Low-Density Parity-Check (LDPC) codes. Their popularity can be attributed to their relatively simple iterative decoders and their potential to achieve performance close to the Shannon limit. These properties make them an attractive candidate for error-correcting applications in communication systems. In this thesis, we focus our research on iterative decoding algorithms for Low-Density Parity-Check codes and present an improved decoding algorithm. First, the graph structure of LDPC codes is studied and a graph-based search algorithm is proposed to find the shortest closed walk and shortest cycle for each node of the graph. Then, a deterministic schedule is applied to the nodes of the graph with the objective of preserving the optimality of the algorithms. Finally, a hybrid switch-type technique is applied to the improved algorithms to provide a desirable complexity/performance trade-off. The hybrid technique and deterministic schedule are combined for decoding regular and irregular LDPC codes. The performance and complexity of the decoder are studied for the Sum-Product and Gallager A algorithms. The result is a flexible decoder for any available LDPC code and any combination of decoding algorithms, based on the needs of the communication system. In this technique, we benefit from the high performance of soft-decision algorithms and the low complexity of hard-decision algorithms by changing the decoding rule after a few iterations. Hence, a desirable performance can be obtained with a lower average number of soft iterations. Moreover, not all nodes update their messages in each iteration; as a result, the total number of computations is reduced considerably.
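The hard-decision Gallager A rule mentioned above can be sketched as one message-passing iteration (a textbook model, not the thesis's scheduled variant): each check sends a variable the XOR of the other bits in its row, and a variable abandons its channel bit only when every neighboring check votes against it.

```python
def gallager_a_iteration(H, channel_bits, current):
    """One Gallager A iteration (software sketch).

    H: parity-check matrix (list of rows of 0/1).
    channel_bits: hard decisions from the channel.
    current: current bit estimates.
    Returns the updated bit estimates.
    """
    n_rows, n_cols = len(H), len(H[0])
    new = list(current)
    for v in range(n_cols):
        votes = []
        for r in range(n_rows):
            if not H[r][v]:
                continue
            # Check r's opinion of bit v: XOR of the other bits in row r.
            parity = 0
            for u in range(n_cols):
                if H[r][u] and u != v:
                    parity ^= current[u]
            votes.append(parity)
        # Gallager A: flip only if ALL checks disagree with the channel bit.
        if votes and all(b != channel_bits[v] for b in votes):
            new[v] = votes[0]
        else:
            new[v] = channel_bits[v]
    return new
```

A hybrid decoder can run a few soft (sum-product) iterations and then switch to this cheap rule, which is the complexity/performance trade-off described above.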
APA, Harvard, Vancouver, ISO, and other styles
30

Wu, Sheng-Dong, and 吳聲東. "Architecture design of the Modified Block-Type Low-Density Parity-Check Codes Decoder." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/51601930924832747187.

Full text
Abstract:
Master's
National Chung Hsing University
Department of Electrical Engineering
95
The target of the next-generation wireless communication system is to transmit at higher data rates and cover a larger area. However, radio transmission power needs to be kept to a minimum due to regulation and system power consumption, so this target cannot be achieved simply by increasing transmission power. A forward-error-correction (FEC) system can be employed instead; advanced FEC is the key technique in next-generation wireless communication systems. Many up-to-date standards take LDPC codes into consideration. For example, the next-generation satellite communication standard DVB-S2 uses 64800-bit LDPC codes, and by improving FEC the data transmission throughput can be increased by 30%. The code rates of the wireless network 802.11n standard range from 1/2 to 5/6, supporting different levels of data protection. In this thesis, a combined Low-Density Parity-Check (LDPC) code decoding design method, called modified Block-Type LDPC (B-LDPC), for realistic LDPC coding system architectures is presented. The B-LDPC code, a special class of quasi-cyclic LDPC (QC-LDPC) codes, has an efficient encoding algorithm owing to the simple structure of its parity-check matrices. A distribution of the irregular parity-check matrix for the modified B-LDPC is developed so that we can obtain an area-efficient decoder design, good error-correction performance, and an achievable architecture implementation. The modified B-LDPC decoding utilizes the iterative min-sum algorithm (MSA), and its decoding architecture employs bit node unit (BNU) and check node unit (CNU) operations. Different block matrix sizes for the parity-check matrix can be adopted, so the modified B-LDPC decoding improves throughput without obvious performance degradation.
APA, Harvard, Vancouver, ISO, and other styles
31

Wang, Wei-Yi, and 王瑋逸. "Deep Learning Assisted Low Density Parity Check Decoder with Hybrid Hidden Layer Architecture." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/us596n.

Full text
Abstract:
Master's
National Tsing Hua University
Department of Electrical Engineering
107
In this thesis, we propose a novel belief propagation decoder for low-density parity-check (LDPC) codes with the assistance of deep learning. With a long enough girth, belief propagation (BP) has been shown to have a powerful ability to reduce the complexity of LDPC decoding, and it yields error-correction performance close to that of the maximum-likelihood (ML) method. However, equal weighting on the Tanner graph suffers from the "double counting effect". The messages passed on the edges have different reliability due to the structure of the parity-check matrix design, e.g. the girth and the number of small cycles, and the channel condition each bit faces. The performance of BP relies on the independence of messages from different nodes, but small cycles in the Tanner graph lead to correlated messages. This dependency violates the independence requirement of BP decoding and degrades the performance of belief propagation. Methods such as uniformly reweighted belief propagation (URW-BP) and variable factor appearance probability belief propagation (VFAP-BP) use unequal weights to deal with the message dependency in BP. However, their compensation is done with a single constant weight, which is not general enough. Moreover, the reliability conditions change in every decoding iteration and also vary from check node to check node, so it is very difficult to develop a formula for the reweighting factor. Hence, we design a hybrid-hidden-layer neural network to assist the BP algorithm in learning unequal weights on the Tanner graph. The weights compensate for the negative effect of unreliability in the parity-check matrix structure. With the aid of the learned weights, the error-correction performance in the high-SNR region is enhanced. We also design an online-training communication system to improve the modern system.
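The reweighted update that the network learns weights for can be sketched at a single variable node (uniform weights recover standard BP; the function shape is an illustrative assumption, not the thesis's exact network):

```python
def weighted_variable_update(llr_channel, check_msgs, weights):
    """Variable-node update with per-edge multiplicative weights.

    llr_channel: channel LLR for this bit.
    check_msgs: incoming check-to-variable messages.
    weights: one weight per incoming edge; all-ones gives standard BP,
    while learned (unequal) weights compensate for message correlation.
    Returns one extrinsic variable-to-check message per edge.
    """
    out = []
    for i in range(len(check_msgs)):
        total = llr_channel
        for j, m in enumerate(check_msgs):
            if j != i:  # extrinsic: exclude the edge we are sending on
                total += weights[j] * m
        out.append(total)
    return out
```

Unrolling a fixed number of such iterations into a network and training the weights on known codewords is the usual "neural BP" recipe this thesis's hybrid architecture builds on.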
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Huei-Sheng, and 王惠生. "Chip Design of Novel Analog Min-Sum Iterative Decoder for a (8,4) Low-Density Parity-Check (LDPC) Decoder." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/ep898g.

Full text
Abstract:
Master's
National Taipei University of Technology
Graduate Institute of Computer and Communication
96
In this thesis, we have designed an analog decoder that directly receives analog signals, using a min-sum low-density parity-check (LDPC) code. To verify our method, we completed a novel analog min-sum iterative decoder chip for an (8,4) low-density parity-check code in TSMC 0.35-μm Mixed-Signal 2P4M Polycide CMOS technology. In the LDPC decoder chip, Wilson current mirrors replace the cascode current mirrors in the variable node. The chip contains 2352 transistors, uses a single 3.3-V power supply and consumes 22.9 mW. The chip area including pads is about 3.22 mm2, and the core area is about 1.36 mm2. This chip has the advantages of low power, small area and low cost, and it can provide an efficient design for future SoC communications.
APA, Harvard, Vancouver, ISO, and other styles
33

Chan, Ching-Da, and 詹慶達. "Low Cost Low-density Parity-Check Decoder Using AND Gate Based Min-Sum Algorithm." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/pqdrua.

Full text
Abstract:
Master's
National Yunlin University of Science and Technology
Graduate School of Electronic and Information Engineering
96
In this thesis we propose a new fully parallel LDPC decoder that uses simple AND gates in the check-node circuit, with no loss of bit-error-rate performance compared with the original min-sum algorithm. A modified min-sum algorithm is derived first, which tremendously reduces the hardware complexity of the check node unit. Meanwhile, the variable-to-check node messages are mapped into a special irregular quantization so that basic AND gates can replace the complex comparators in the check node unit. For a hardware implementation of the IEEE 802.16e standard with an LDPC matrix of size (576, 288) in 0.18 μm technology, the proposed LDPC decoder saves 22% of the hardware area. Furthermore, an IEEE 802.11n implementation with matrix size (648, 324) in a 0.13 μm process saves 30% of the hardware compared with the traditional min-sum architecture. The operating speed reaches 100 MHz without pipelining. A real MIMO application has also been implemented in this thesis, adopting the IEEE 802.16e matrix size (576, 288) in 0.18 μm technology; the core size of the LDPC decoder is only 2800 μm × 2800 μm.
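One way AND gates can stand in for magnitude comparators is through thermometer (unary) coding, where a bitwise AND of two codes yields their minimum. The sketch below illustrates this property only; the thesis's exact irregular quantization is not reproduced here.

```python
def to_unary(x, width):
    """Thermometer (unary) encoding: x ones followed by zeros."""
    return [1] * x + [0] * (width - x)

def unary_min(a, b):
    """Bitwise AND of two thermometer codes equals their minimum.

    This is the property that lets plain AND gates replace magnitude
    comparators in a min-sum check node: each output bit is 1 only if
    both inputs reach that level.
    """
    return [x & y for x, y in zip(a, b)]
```

For example, min(3, 2) falls out of three parallel AND gates, one per bit position, with no carry chain or comparator tree.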
APA, Harvard, Vancouver, ISO, and other styles
34

Liu, Jia-ping, and 劉嘉萍. "A Compact Variable Node Using Thermalcode Addition Technique for Low Density Parity Check Decoder." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/43822828865079165107.

Full text
Abstract:
Master's
National Yunlin University of Science and Technology
Graduate School of Electronic and Information Engineering
98
Among current error-correction techniques for digital data, the Low-Density Parity-Check (LDPC) code offers the bit-error-rate performance closest to the Shannon limit. The earliest LDPC decoding algorithm is the Sum-Product Algorithm (SPA), which mainly consists of check nodes and variable nodes. Since this decoding algorithm uses a large number of multipliers, causing enormous computational complexity and hardware area, recent studies on LDPC decoders have focused on reducing hardware area. Hardware complexity can be reduced effectively by replacing multiplication with addition in the logarithm-domain algorithm, and further by the widely discussed Minimum Sum Algorithm (MSA), which reduces additive operations using the characteristics of comparators. However, these works focus mostly on the check-node algorithm in LDPC decoders. Far less of the literature addresses the variable nodes, which occupy over half of the LDPC decoder and whose primary operation, simple additions, offers little obvious room for reduction. This thesis proposes to apply a thermalcode addition technique to the variable nodes of the LDPC decoder, replacing the traditional binary adders. The irregular quantization of the thermalcode generates regular values, so the additive operation in the variable nodes can be performed with simple combinational logic circuits. Applying the proposed decoding technique to the IEEE 802.11n digital wireless communication standard with code rate 1/2 and irregular matrix (1296, 648), with a fixed number of iterations, simulation results show that the bit error rate after decoding remains at the same level as the conventional MSA at an SNR of 2.5 dB. Logic synthesis results also show that the hardware area of the entire LDPC decoder is reduced by 34% compared with the MSA, which is itself traditionally considered area-efficient.
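The idea of adding thermometer-coded ("thermalcode") values with combinational logic rather than a binary adder can be illustrated behaviorally as follows (a sketch of the concept, not the thesis's circuit):

```python
def unary_value(code):
    """Decode a thermometer code back to its integer value (count of ones)."""
    return sum(code)

def unary_add(a, b, width):
    """Add two thermometer-coded magnitudes without a binary adder.

    The sum's value is simply the total number of ones across both codes,
    re-encoded as a thermometer code (saturating at 'width'). In hardware
    this reduces to a ones-counting / shifting network with no carry chain.
    """
    s = min(unary_value(a) + unary_value(b), width)
    return [1] * s + [0] * (width - s)
```

The saturation at full scale mirrors the clipping a fixed-width variable node would apply anyway.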
APA, Harvard, Vancouver, ISO, and other styles
35

Yu, Hsin Fu, and 余信甫. "Improved Belief Propagation Decoder for Polar Codes with the assistance of Parity-Check Matrix." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/947ynb.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Zong-Yi, Siao, and 蕭宗毅. "LDPC Minimum Sum Algorithm Decoder for IEEE802.11n Non-regular Parity Check Matrix in SDR System." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/66224092231574590068.

Full text
Abstract:
Master's
Oriental Institute of Technology
Graduate Institute of Information and Communication Engineering
103
Referring to the approximate lower-triangular low-density parity-check code check matrix of IEEE P802.11n™/D1.04 (Part 11: Wireless Local Area Network Medium Access Control and Physical Layer specifications), this study built a decoder in the LabVIEW programming language on a National Instruments software-defined radio, using a single program architecture whose transmitter side can generate diverse codeword patterns, including three subblock sizes (27, 54, and 81 bits) and four code rates (1/2, 2/3, 3/4, and 5/6). A mask-matrix design was adopted to distinguish the "0" entries from the empty entries in the check matrix. Finally, the check matrix was multiplied by the transposed codewords to verify the correctness of the codes. Combined with the min-sum algorithm (MSA), the decoder completed decoding by changing the check-node and variable-node structures according to the selected subblock size and code rate. This treatment of the Tanner graph not only adds improvements and greatly simplifies compilation complexity, but also handles the oversize problem of the irregular check matrix. In addition to introducing the MSA decoding mechanism and completing decoding-program optimization and analysis of bit-error-rate (BER) performance curves, this study used LabVIEW to simulate the BER at each ratio of energy per bit to spectral noise density (Eb/No) from 0 to 10 dB, for subblock sizes (27, 54, and 81 bits) combined with code rates (1/2, 2/3, and 5/6) operating in an additive white Gaussian noise channel. In addition to BPSK with Box-Muller-generated AWGN, we use higher-order modulation schemes, QPSK and 16-QAM, for the software-defined radio system, and carry soft information by designing a bit mapping to I-Q voltages to implement the soft-decision decoder.
The error-rate performance (waterfall) curves of two prior studies (regular weight-(3,6) codes, and the 802.11n irregular code with subblock size 27 bits and code rate 5/6) were used as references and compared with the simulation outcomes of this study. The results showed that the subblock size did not affect the error rate, but the code rate affected it substantially. When the code rate was set to 1/2, the error-correction performance of the irregular check matrix was considerably better than that of the regular check matrix.
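The codeword verification step mentioned above, multiplying the parity-check matrix by the transposed codeword, can be sketched as follows (the toy H matrix is an assumption for illustration, not the 802.11n matrix):

```python
def is_valid_codeword(H, c):
    """Valid iff every parity check (row of H) sums to zero modulo 2,
    i.e. H times the transposed codeword gives the all-zero vector."""
    return all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)

# Toy parity-check matrix (illustrative only).
H = [[1, 1, 0, 1, 0],
     [0, 1, 1, 0, 1]]
```

For example, `[1, 1, 0, 0, 1]` satisfies both checks, while flipping its first bit leaves the first check unsatisfied.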
APA, Harvard, Vancouver, ISO, and other styles
37

Chang, Chen-Pang, and 張鎮邦. "A Compact Comparator Circuit Design for Min-Sum Algorithm of Low Density Parity Check Decoder." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/a7q4t2.

Full text
Abstract:
Master's
National Yunlin University of Science and Technology
Graduate Institute of Electronics and Information Engineering
96
This thesis proposes a new simplified comparator circuit for the check-node unit of a Low-Density Parity-Check (LDPC) decoder. The new comparator, consisting of a logical-judgment circuit, a multiplexer, and a basic AND gate, replaces the traditional comparators in the check-node unit, reducing hardware area while improving operating frequency. For the check-node unit alone, synthesized in a 0.18 μm CMOS cell-based technology, the hardware area is reduced by about 55% to 62% and the operating frequency improves by 2 to 3 times, across different bit lengths. For the LDPC decoder as a whole, adopting the new comparator in the check-node unit saves 11% to 15% of the hardware. The main target of this thesis is an IEEE 802.16e irregular code, with the H matrix set to (576, 288) and the code rate to 1/2. Simulation results under a fixed number of iterations show that the bit error rate is as good as that of the traditional min-sum algorithm; at an SNR of 2.5 dB the performance difference is only 0.06 dB. Finally, an ASIC equipped with a self-test mechanism and an automatic AWGN-generation circuit was implemented in a 0.18 μm CMOS cell-based technology, verifying the success of the new comparator.
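For context, the min-sum check-node update that these comparators implement can be sketched behaviorally; only the two smallest input magnitudes are ever needed, which is exactly where the comparator circuit spends its area (a software sketch, not the proposed circuit):

```python
def min_sum_check_node(llrs):
    """Min-sum check-node update: for each edge, output the product of the
    *other* inputs' signs times the minimum of the *other* inputs' magnitudes."""
    signs = [1 if v >= 0 else -1 for v in llrs]
    total_sign = 1
    for s in signs:
        total_sign *= s
    mags = [abs(v) for v in llrs]
    i1 = min(range(len(mags)), key=mags.__getitem__)      # index of smallest magnitude
    min1 = mags[i1]
    min2 = min(m for i, m in enumerate(mags) if i != i1)  # second smallest
    # total_sign * signs[i] equals the product of all signs excluding input i.
    return [total_sign * signs[i] * (min2 if i == i1 else min1)
            for i in range(len(llrs))]
```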
APA, Harvard, Vancouver, ISO, and other styles
38

Brookstein, Peter C. "Improving the convergence rate of the iterative Parity-check transformation algorithm decoder for Reed-Solomon codes." Thesis, 2018. https://hdl.handle.net/10539/26743.

Full text
Abstract:
This master's by research dissertation contributes to research in the field of telecommunications, with a focus on forward error correction and improving an iterative Reed-Solomon decoder known as the Parity-check Transformation Algorithm (PTA). Previous work in this field has focused on improving the runtime parameters and stopping conditions of the algorithm in order to reduce its computational complexity. In this dissertation, a different approach is taken by modifying the algorithm to more effectively utilise the soft-decision channel information provided by the demodulator. Modifications drawing inspiration from the Belief Propagation (BP) algorithm used to decode Low-Density Parity-Check (LDPC) codes are successfully implemented and tested. In addition to the selection of potential codeword symbols, these changes make use of soft channel information to calculate dynamic weighting values. These dynamic weights are further used to modify the intrinsic reliability of the selected symbols after each iteration. Improvements to both the Symbol Error Rate (SER) performance and the rate of convergence of the decoder are quantified using computer simulations implemented in MATLAB and GNU Octave. A deterministic framework for executing these simulations is created and utilised to ensure that all results are reproducible and can be easily audited. Comparative simulations are performed between the modified algorithm and the PTA in its most effective known configuration (with the parameter set to 0.001). Results of simulations decoding half-rate RS(15,7) codewords over a 16-QAM AWGN channel show a more than 50-fold reduction in the number of operations required by the modified algorithm to converge on a valid codeword. This is achieved while simultaneously observing a coding gain of 1 dB for symbol error rates between 10^-2 and 10^-4.
GR2019
APA, Harvard, Vancouver, ISO, and other styles
39

Lian, Chin-Kuang, and 廖欽寬. "A Partially Parallel Low-Density Parity Check Code Decoder with Reduced Memory for Long Code-Length." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/27023195850205517433.

Full text
Abstract:
Master's
National Central University
Graduate Institute of Electrical Engineering
95
In recent years, low-density parity-check (LDPC) codes have attracted much attention due to their near-Shannon-limit coding gains under iterative decoding. LDPC codes are thus well recognized as an excellent error-correction coding scheme for many digital communication systems, such as next-generation digital video broadcasting (DVB-S2), MIMO-WLAN (802.11n), WMAN (802.16e), and mobile broadband wireless access (MBWA, 802.20) systems. Based on the message-passing algorithm, an LDPC decoder uses memories to store intermediate data exchanged between bit nodes and check nodes. The quantity of stored data is determined by the number of non-zero entries in the H matrix; in other words, the memory size required by a partially parallel LDPC decoder may increase significantly for long-code-length LDPC codes. In this thesis, we present an alternative approach that significantly reduces the required memory size; this memory reduction is particularly significant for DVB-S2 applications.
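The memory-scaling relation described above can be sketched as a back-of-the-envelope calculation (the one-message-per-edge assumption is illustrative; actual decoders may store more per edge):

```python
def message_memory_bits(H, bits_per_message):
    """Estimate message storage: one stored message per nonzero entry of H
    (i.e., per edge of the Tanner graph)."""
    edges = sum(sum(row) for row in H)   # number of 1s in H
    return edges * bits_per_message

# Toy 2x4 parity-check matrix with 6 nonzero entries.
H_toy = [[1, 1, 0, 1],
         [0, 1, 1, 1]]
```

With 6-bit messages, this toy matrix would need 36 bits of message storage; for DVB-S2-scale matrices with tens of thousands of nonzeros, the same formula explains why memory dominates the decoder.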
APA, Harvard, Vancouver, ISO, and other styles
40

Tung, Bo-Wen, and 童柏文. "A Hardware Architecture Design of the Low-Density Parity-Check Code Decoder for IEEE 802.11n Standard." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/41066781939265196076.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Communication Engineering
100
One of the main challenges of implementing an LDPC decoder is that the interconnection complexity grows with the number of parallel processing units, which increases delay, power dissipation, and chip area. In this thesis, we propose a design called the split-core architecture with a reduced-quantization method, which preserves the benefit of the split-row threshold algorithm proposed by Mohsenin while retaining good error performance for a multi-mode LDPC decoder for the 802.11n standard. The implementation results show that the area of a decoder with the proposed architecture is 2.58 mm^2 with a final core utilization of 85%, compared with an area of 3 mm^2 and a core utilization of 70% for the non-splitting architecture, while the areas of both architectures are similar before the routing process.
APA, Harvard, Vancouver, ISO, and other styles
41

Singh, Manpreet. "Power Characterization of a Digit-Online FPGA Implementation of a Low-Density Parity-Check Decoder for WiMAX Applications." Thesis, 2014. http://hdl.handle.net/10012/8529.

Full text
Abstract:
Low-density parity-check (LDPC) codes are a class of easily decodable error-correcting codes. Published parallel LDPC decoders demonstrate high throughput and low energy per bit but require a great deal of silicon area. Decoders based on digit-online arithmetic (processing several bits per fundamental operation) process messages in a digit-serial fashion, reducing the area requirements, and can process multiple frames in frame-interlaced fashion. Implementations on Field-Programmable Gate Arrays (FPGAs) are usually power- and area-hungry but provide flexibility compared with application-specific integrated circuit implementations. With the penetration of mobile devices in the electronics industry, power considerations have become increasingly important. The power consumption of a digit-online decoder depends on various factors, such as input log-likelihood ratio (LLR) bit precision, signal-to-noise ratio (SNR), and the maximum number of iterations. The design is implemented on an Altera Stratix IV GX EP4SGX230 FPGA, which comes on an Altera DE4 Development and Education Board. In this work, both parallel and digit-online block LDPC decoder implementations on FPGAs for WiMAX 576-bit, rate-3/4 codes are studied, and power measurements from the DE4 board are reported. The components of the system include a random-data generator, a WiMAX encoder, a shift-out register, an additive white Gaussian noise (AWGN) generator, a channel LLR buffer, a WiMAX decoder, and a bit-error-rate (BER) calculator. The random-data generator outputs pseudo-random bit patterns through an implemented linear-feedback shift register (LFSR). Digit-online decoders with input LLR precisions ranging from 6 to 13 bits and parallel decoders with input LLR precisions ranging from 3 to 6 bits are synthesized on a Stratix IV FPGA. The digit-online decoders can be clocked at higher frequencies for higher LLR precisions. A digit-online decoder can decode two frames simultaneously in frame-interlaced mode.
For the 6-bit implementation of the digit-online decoder in single-frame mode, the minimum throughput achieved is 740 Mb/s at low SNRs. For the 11-bit LLR digit-online decoder in frame-interlaced mode, the minimum throughput achieved is 1363 Mb/s. Detailed analyses, such as the effect of SNR and LLR precision on decoder power, are presented, along with the effect of changing the LLR precision on the maximum clock frequency and logic utilization of the parallel and digit-online decoders. The power per iteration for a 6-bit-LLR-input digit-online decoder is also reported.
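The LFSR-based random-data generator mentioned above might look like the following behavioral sketch (a standard 16-bit Fibonacci LFSR with an assumed tap polynomial and seed, not necessarily the one used in the thesis):

```python
def lfsr_bits(n, seed=0xACE1):
    """16-bit Fibonacci LFSR (taps for x^16 + x^14 + x^13 + x^11 + 1):
    emit the LSB, then shift right with the XOR feedback in the top bit."""
    state = seed
    out = []
    for _ in range(n):
        out.append(state & 1)
        feedback = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (feedback << 15)
    return out
```

A maximal-length polynomial gives a period of 2^16 - 1 states, which is what makes such a generator useful as a pseudo-random test-pattern source.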
APA, Harvard, Vancouver, ISO, and other styles
42

Hung, Jui-Hui, and 洪瑞徽. "Investigation and Design of Decoding Algorithms, Decoder Architectures and Cooperative Coding Techniques for Low-Density Parity-Check Codes." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/42830170037293371227.

Full text
Abstract:
Doctoral
National Chiao Tung University
Institute of Electronics
101
This dissertation conducts a thorough investigation of various technological aspects of low-density parity-check (LDPC) codes and proposes corresponding efficient techniques for effective decoding and realization of LDPC codes. The investigated issues include decoding algorithms, hardware architectures for LDPC codes, and the application of LDPC codes to cooperative coding and decoding. Furthermore, new coding schemes based on the joint concepts of LDPC and turbo codes are proposed at the end of the dissertation. Bit-flipping (BF) LDPC decoding algorithms have lower complexity than message-passing (MP) algorithms, but suffer from lower decoding performance and higher iteration counts. To significantly enhance BF algorithms, a performance-boosting algorithm, called the low-correlation culprit-bit-test multi-bit flipping (LCCBT-MBF) algorithm, is proposed and integrated with BF algorithms. Besides, this work proposes a genetics-aided message-passing (GA-MP) algorithm, applying a new genetic algorithm to further improve the decoding performance of the MP algorithm. The long delay of the check-node units is usually the major bottleneck in LDPC decoders for high-speed applications; hence, this dissertation proposes several improved comparison algorithms for self-message-excluded CNU (SME-CNU) and two-minimum CNU (TM-CNU) architectures. Next, to eliminate the idle time and hardware complexity of conventional partially parallel decoders, this work proposes a decoder architecture that handles two different codewords simultaneously with 100% hardware utilization. Since LDPC decoding can be conducted very effectively with layered decoding schemes (LDS), a decoder architecture with an optimized execution-reordering scheme for LDS is also proposed. In the final part of the decoder designs, we implement the corresponding LDPC decoders for all the proposed decoding algorithms.
Next, we propose an efficient coding scheme, called the Turbo-LDPC code, which combines the merits of both turbo codes and LDPC codes. The decoding performance is significantly improved by utilizing a turbo decoding process, while the major part of the decoder is essentially the same as a conventional LDPC decoder. Compared with combined block turbo codes and BCH codes, the proposed Turbo-LDPC code also has much better decoding performance as well as lower computational complexity. Owing to the significant improvements of the proposed Turbo-LDPC codes in decoding performance, the 2D coding scheme is extended to a new 3D code, named the Triple-LDPC code. Finally, both proposed coding schemes are applied to cooperative coding in relay networks, and cooperative coding and decoding schemes are devised based on these two codes. Analysis and simulations show that the new cooperative coding/decoding techniques can significantly reduce the hardware complexity in relay stations while obtaining better error-correction capabilities.
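For reference, the baseline hard-decision bit-flipping scheme that such algorithms improve upon can be sketched as follows (a generic Gallager-style sketch, not the proposed LCCBT-MBF algorithm):

```python
def bit_flip_decode(H, y, max_iters=20):
    """Hard-decision bit flipping: each iteration flips the bit(s)
    participating in the largest number of unsatisfied parity checks."""
    c = list(y)
    for _ in range(max_iters):
        syndrome = [sum(h * x for h, x in zip(row, c)) % 2 for row in H]
        if not any(syndrome):
            return c                                  # all checks satisfied
        # Count, for each bit, how many unsatisfied checks it participates in.
        counts = [sum(H[i][j] * syndrome[i] for i in range(len(H)))
                  for j in range(len(c))]
        worst = max(counts)
        c = [x ^ 1 if counts[j] == worst else x for j, x in enumerate(c)]
    return c
```

On the (7,4) Hamming parity-check matrix, a single flipped bit of the all-zero codeword is corrected in one iteration.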
APA, Harvard, Vancouver, ISO, and other styles
43

Su, Huang-Shiang, and 蘇皇祥. "Chips Design of Analog Current-Mode Quadrature Amplitude Modulation Demodulator and Min-Sum Decoder for (8,4) Regular Low-Density Parity-Check Codes." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/796j96.

Full text
Abstract:
Master's
National Taipei University of Technology
Graduate Institute of Computer and Communication
97
Analog demodulators and decoders are commonly believed to offer adequate error-correcting capability together with the advantages of low power consumption and small silicon area. In this thesis, an analog quadrature amplitude modulation (QAM) demodulator and an analog iterative decoder VLSI architecture are proposed. First, we design a novel demodulator with analog circuits whose main building blocks are differential pairs and current-buffer circuits. The proposed demodulator architecture effectively maps the analog input signals from the channel to their constellation points and conveys signals from the demodulator output directly to the decoder, removing the need for an analog-to-digital converter (ADC). The second design is an analog decoder architecture for low-density parity-check (LDPC) codes based on the min-sum iterative algorithm. Current-buffer circuits and check-node accuracy are the focal points of this architecture. We exploit the high output impedance and high accuracy of cascode current buffers to reduce the channel-length modulation effect, and thereby increase the transfer accuracy and decoding performance of the analog decoder. Finally, a novel analog QAM demodulator and a high-accuracy LDPC decoder have been implemented in 0.35 μm 2P4M CMOS technology. The two chips contain 106 and 1944 transistors respectively and operate from a 3.3 V supply. The power consumptions are 402.9 μW and 12.04 mW, and the core areas are 0.14 × 0.13 mm^2 and 0.52 × 0.2 mm^2, respectively. The demodulator and decoder chips achieve low power consumption, low cost, and proper error-correcting capability, providing an efficient design for SoC integration in future communication receivers.
APA, Harvard, Vancouver, ISO, and other styles
44

Kelly, Daniel R. "Arithmetic data value speculation." Thesis, 2011. http://hdl.handle.net/2440/70234.

Full text
Abstract:
Arithmetic approximation decreases the latency of an arithmetic circuit by shortening the critical-path delay or the sampling period, so that the result is not guaranteed to be correct for every input combination. Thus, an acceptable compromise between circuit latency and the average probability of correctness drives the circuit design. Two methods of arithmetic approximation are: temporal incompleteness, where circuits deliver the result before the critical-path delay has elapsed (overclocking); and logical incompleteness, where circuits use simplified logic such that most input cases are calculated correctly but the slowest cases are calculated incorrectly. Arithmetic data value speculation (ADVS) is a speculation scheme based on arithmetic approximation, used to increase the throughput of a general-purpose processor. ADVS is similar to branch prediction: an arithmetic instruction is issued to both an exact arithmetic unit and an approximate arithmetic unit, which provides an approximate result faster than its exact counterpart. The approximate result is forwarded to dependent operations so they may be speculatively issued. When the exact result is eventually known, it is compared to the approximate result, and the pipeline is flushed if they differ. This thesis, "Arithmetic Data Value Speculation", presents work in the fields of digital arithmetic and computer architecture. A summary of current probabilistic arithmetic methods from the literature is provided, and novel designs of approximate integer arithmetic units are presented, including results from logic synthesis. A case study demonstrates approximate arithmetic units used to increase the average throughput of benchmark programs by speculatively issuing dependent operations in a RISC processor. The average correctness of the approximate arithmetic units is shown to be highly data-dependent; results vary depending on the benchmarks being run.
In addition, the average correctness when running benchmarks is consistently higher than for random inputs. Simulations show that many arithmetic operations are often repeated within the same benchmark, leading to high variation in correctness. Speculative gains from one operation can be offset by speculation losses due to repeated incorrect approximation in another approximate unit, so typical throughput gains through speculation in a general-purpose processor pipeline are low. The minimum threshold correctness of an approximate arithmetic unit used for speculation is shown to be approximately 95%. Logic synthesis is used to determine power, area, and timing information for approximate units implemented from the novel algorithms, showing a reduction in arithmetic cycle latency for integer operations at the expense of 50% leakage and area, and 90% dynamic power. Value speculation can be complemented by result caching: repeated pipeline flushes can be avoided if the correct result is known before speculation, the average operation latency can be reduced, and caching can be used for operations that are difficult to approximate.
Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2011
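A logically incomplete adder of the kind described above can be sketched by simply discarding carries at fixed block boundaries (an illustrative scheme; the thesis's actual approximate units differ):

```python
def approximate_add(a, b, block=4, width=16):
    """Segmented adder: carries are dropped at every `block`-bit boundary,
    so the carry chain (and hence the critical path) stays short.  Most
    operand pairs are still added exactly; the rare long-carry cases --
    the slowest ones in an exact adder -- come out wrong."""
    mask = (1 << block) - 1
    result = 0
    for shift in range(0, width, block):
        partial = ((a >> shift) & mask) + ((b >> shift) & mask)
        result |= (partial & mask) << shift   # inter-block carry discarded
    return result
```

For instance, `3 + 4` is exact, while `0x00FF + 0x0001` loses the carry out of the low block and yields `0x00F0` instead of `0x0100`, illustrating the latency-versus-correctness trade-off.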
APA, Harvard, Vancouver, ISO, and other styles
45

Wu, Chia-chun, and 吳佳俊. "A Partially Parallel Low-Density Parity Check Code Decoder." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/88035237346680041672.

Full text
Abstract:
Master's
National Central University
Graduate Institute of Electrical Engineering
97
The LDPC decoding algorithm is based on the message-passing concept. To realize the decoder in hardware, a substantial amount of memory is needed to store the exchanged messages; the required memory size equals the number of 1s in the H matrix. In other words, the larger the H matrix, the more memory is needed. Two partially parallel architectures are commonly used for LDPC decoders: the shared-memory architecture and the individual-memory architecture. Our previous study presented an alternative approach that significantly reduces the memory size requirement. In this study, shift registers are employed to replace memory, simplifying the data-retrieval scheme and further improving throughput. Results show that an LDPC decoder with a code length of 1536 and a code rate of 1/2 achieves a data rate of up to 166 Mbps at a maximum clock frequency of 460 MHz.
APA, Harvard, Vancouver, ISO, and other styles
46

Darabiha, Ahmad. "VLSI architectures for multi-Gbps Low-Density Parity-Check decoders." 2008. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=742560&T=F.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Jinghu, Chen. "Reduced complexity decoding algorithms for low-density parity check codes and turbo codes." 2003. http://proquest.umi.com/pqdweb?index=0&did=765086321&SrchMode=2&sid=11&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1233251616&clientId=23440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Xin-Ru, and 李欣儒. "Research on Stochastic Binary and Nonbinary Low-Density Parity-Check Code Decoders." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/nh3dez.

Full text
Abstract:
Doctoral
National Chiao Tung University
Department of Electronics Engineering and Institute of Electronics
103
To ensure the reliability of transmission and storage, error-control codes are requisite in communication and storage systems. Among error-control codes, low-density parity-check (LDPC) codes, due to their excellent error-correcting performance and throughput, are very attractive to academia and industry. Recently, multi-rate LDPC codes have been widely adopted in a variety of specifications to support different transmission schemes. However, the design challenge of a multi-rate LDPC decoder is wire routing, which affects the area and the operating frequency. Stochastic decoding cleverly provides a routing-efficient and compact solution for LDPC decoders by means of its bit-serial property and simpler computations. On the other hand, nonbinary LDPC (NB-LDPC) codes can provide even better coding gain than binary LDPC codes at the cost of higher decoding complexity and lower throughput. How to achieve a throughput of Gb/s while reducing the storage requirement remains the key design challenge for stochastic decoding. In this dissertation, we propose area/energy-efficient stochastic decoding for binary and nonbinary LDPC codes and demonstrate that our proposals are competitive with conventional LDPC decoding algorithms. For binary LDPC codes, the first silicon-proven stochastic LDPC decoder to support multiple code rates for IEEE 802.15.3c applications is presented. The critical path is improved by a reconfigurable stochastic check-node unit (CNU) and variable-node unit (VNU); therefore, a high-throughput scheme can be realized at a 768 MHz clock frequency. To achieve higher hardware and energy efficiency, the reduced-complexity architecture of the tracking forecast memory is experimentally investigated to implement the variable-node units for IEEE 802.15.3c applications. Based on the properties of the parity-check matrices and stochastic arithmetic, optimized routing networks with re-permutation techniques are adopted to enhance chip utilization.
Considering measurement uncertainties, a delay-locked loop with an isolated power domain and a test environment consisting of an encoder, an AWGN generator, and bypass circuits are also designed for internal clock and data generation. With these features, our proposed fully parallel LDPC decoder chip, fabricated in a 90 nm CMOS process with a 760.3K gate count, achieves a 7.92 Gb/s data rate and a power consumption of 437.2 mW under a 1.2 V supply voltage. Compared to state-of-the-art IEEE 802.15.3c LDPC decoder chips, our proposed chip achieves over 90% reduction in routing wires and 73.8% and 11.5% improvements in hardware and energy efficiency, respectively. For nonbinary LDPC codes, a partially parallel decoder of a (168, 84) regular-(2, 4) NB-LDPC code over GF(16) is implemented. The simpler routing networks, benefiting from stochastic computation with optimized computation units, deliver 96.6% logic utilization, the highest among silicon-proven LDPC or NB-LDPC decoders. A logarithm-domain transformation as well as a message-truncation technique is proposed to further reduce the bit width and storage requirement of messages. A test chip was fabricated in a UMC 90 nm 1P9M CMOS process with a core area of 3.75 mm^2. At the standard performance condition, with room temperature and a 1 V supply, the chip operates at 264 MHz, achieving a throughput of 943.7 Mb/s with 188 computation cycles and consuming 347.1 mW. For better area efficiency, we increase the supply voltage to 1.2 V, where a clock rate of 368 MHz is achieved for a higher throughput of 1.31 Gb/s and an area efficiency of 350.67 Mb/s/mm^2. For energy-aware applications, we scale down the supply voltage to 0.8 V for a lower operating frequency of 194 MHz, leading to the best energy efficiency of 0.29 nJ/b (with a power of 203.5 mW). Compared to the latest NB-LDPC designs, the power consumption, area efficiency, and energy efficiency of this chip are 6x, 2x, and 7.5x better, respectively.
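The core stochastic-computing idea behind such decoders, representing a probability as a Bernoulli bit stream so that arithmetic reduces to simple gates, can be sketched as follows (an illustrative Monte-Carlo sketch, not the chip's architecture):

```python
import random

def stochastic_stream(p, n, rng):
    """Represent probability p as a Bernoulli bit stream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

# A single AND gate multiplies the probabilities carried by two independent
# streams -- the kind of per-bit operation that makes stochastic node
# hardware compact compared with fixed-point arithmetic.
rng = random.Random(42)
a = stochastic_stream(0.8, 100_000, rng)
b = stochastic_stream(0.5, 100_000, rng)
product = sum(x & y for x, y in zip(a, b)) / len(a)   # estimates 0.8 * 0.5
```

The estimate converges to the true product only as the stream lengthens, which is the accuracy-versus-latency trade-off inherent to stochastic decoding.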
APA, Harvard, Vancouver, ISO, and other styles
49

Ghayoor, Farzad. "Non-binary compound codes based on single parity-check codes." Thesis, 2013. http://hdl.handle.net/10413/10422.

Full text
Abstract:
Shannon showed that codes with a random-like codeword weight distribution are capable of approaching the channel capacity. However, the random-like property can be achieved only in codes with long codewords; on the other hand, the decoding complexity for a random-like codeword increases exponentially with its length. Therefore, code designers combine shorter and simpler codes in a pseudorandom manner to form longer and more powerful codewords. In this research, a method for designing non-binary compound codes with moderate to high coding rates is proposed. Based on this method, non-binary single parity-check (SPC) codes are used as component codes, and different iterative decoding algorithms for decoding the constructed compound codes are proposed. The soft-input soft-output component decoders employed in the iterative decoding algorithms are constructed from optimal and sub-optimal a posteriori probability (APP) decoders. However, for non-binary codes, implementing an optimal APP decoder requires a large amount of memory. In order to reduce the memory requirement of the APP decoding algorithm, the first part of this research presents a modified form of the APP decoding algorithm whose memory requirement is significantly less than that of the standard APP decoder; the proposed algorithm is therefore more practical for decoding non-binary block codes. The compound codes proposed in this research are constructed from combinations of non-binary SPC codes. Therefore, as part of this research, the construction and decoding of non-binary SPC codes defined over a finite ring of order q are presented. The concept of finite rings is more general and thus includes non-binary SPC codes defined over finite fields.
Thereafter, based on the product of non-binary SPC codes, a class of non-binary compound codes is proposed that is efficient for controlling both random-error and burst-error patterns and can be used in applications where high-coding-rate schemes are required. Simulation results show that the performance of the proposed codes is good and improves over larger rings. The analytical performance bounds and minimum-distance properties of these product codes are also studied.
Thesis (Ph.D.)-University of KwaZulu-Natal, Durban, 2013.
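The single parity-check construction over a ring of order q that underlies these compound codes can be sketched as follows (assuming the integer ring Z_q for illustration, which the abstract's finite-ring setting includes):

```python
def spc_encode(message, q):
    """Append one parity symbol so the symbol sum is 0 modulo q."""
    return message + [(-sum(message)) % q]

def spc_check(word, q):
    """Single parity check over Z_q: the symbols must sum to zero modulo q."""
    return sum(word) % q == 0
```

For example, over a ring of order 16 the message `[3, 5, 7]` gets parity symbol 1, and any single-symbol corruption breaks the check.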
APA, Harvard, Vancouver, ISO, and other styles
50

Tu, Shu-Wen, and 凃淑文. "Design and Implementation for Non-binary Low-density Parity-check Codes (NB-LDPC) Decoders." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/39120352634119597898.

Full text
Abstract:
Master's
National Chiao Tung University
Institute of Electronics
101
Non-binary LDPC codes, which are extended from binary LDPC codes, have excellent decoding performance and are robust to various channel impairments. Alongside this remarkable decoding ability, the high computational complexity and huge memory usage are the main challenges for implementing non-binary LDPC codes in practice. This thesis presents a highly hardware-efficient architecture for a non-binary LDPC decoder using an improved Extended Min-Sum decoding algorithm with layered scheduling. Based on enhancements in the check-node processing and efficient message storage, the proposed decoder doubles the throughput and halves the storage of the edge messages. Using a 90 nm CMOS process technology, a (2,4)-regular non-binary QC-LDPC decoder over GF(2^6) is implemented. In post-layout simulation, the decoder throughput reaches over 100 Mb/s at 10 iterations. Compared with state-of-the-art designs, this implementation improves hardware efficiency (throughput-to-gate-count ratio) by at least 4.3 times while keeping competitive decoding performance.
APA, Harvard, Vancouver, ISO, and other styles
