
Doctoral dissertations on the topic "Low-density parity-check Decoders"


Consult the 45 best doctoral dissertations on the topic "Low-density parity-check Decoders".

An "Add to bibliography" button appears next to every work in the list. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever the corresponding metadata are available.

Browse doctoral dissertations from a wide variety of disciplines and assemble the corresponding bibliography.

1

Planjery, Shiva Kumar. "Low-Complexity Finite Precision Decoders for Low-Density Parity-Check Codes". International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605947.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
We present a new class of finite-precision decoders for low-density parity-check (LDPC) codes. These decoders are much lower in complexity compared to conventional floating-point decoders such as the belief propagation (BP) decoder, but they have the potential to outperform BP. The messages utilized by the decoders assume values (or levels) from a finite discrete set. We discuss the implementation aspects as well as describe the underlying philosophy in designing these decoders. We also provide results to show that in some cases, only 3 bits are required in the proposed decoders to outperform floating-point BP.
2

Vijayakumar, Suresh (advisor: Armin Mikler). "FPGA implementation of low density parity check codes decoder". [Denton, Tex.] : University of North Texas, 2009. http://digital.library.unt.edu/permalink/meta-dc-11003.

3

Yang, Lei. "VLSI implementation of low-error-floor multi-rate capacity-approaching low-density parity-check code decoder /". Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/5966.

4

Blad, Anton. "Low Complexity Techniques for Low Density Parity Check Code Decoders and Parallel Sigma-Delta ADC Structures". Doctoral thesis, Linköpings universitet, Elektroniksystem, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69432.

Abstract:
Since their rediscovery in 1995, low-density parity-check (LDPC) codes have received widespread attention as practical capacity-approaching code candidates. It has been shown that this class of codes can perform arbitrarily close to the channel capacity, and LDPC codes are used or proposed in a number of important current and future communication standards. However, the problem of implementing an energy-efficient decoder has not yet been solved. Whereas the decoding algorithm is computationally simple, with uncomplicated arithmetic operations and low accuracy requirements, the random structure and irregularity of a theoretically well-defined code do not easily allow efficient VLSI implementations. Thus the LDPC decoding algorithm can be said to be communication-bound rather than computation-bound. In this thesis, a modification to the sum-product decoding algorithm called early-decision decoding is suggested. The modification is based on the idea that the values of the bits in a block can be decided individually during decoding. As the sum-product decoding algorithm is a soft-decision decoder, a reliability can be defined for each bit. When the reliability of a bit is above a certain threshold, the bit can be removed from the rest of the decoding process, and thus the internal communication associated with the bit can be removed in subsequent iterations. However, the early-decision modification introduces an increased error probability. Thus, bounds on the achievable performance, as well as methods to detect graph inconsistencies resulting from erroneous decisions, are presented. Also, a hybrid decoder achieving a negligible performance penalty compared to the sum-product decoder is presented. With the hybrid decoder, the internal communication is reduced by up to 40% for a rate-1/2 code with a length of 1152 bits, whereas increasing the rate allows significantly higher gains.
The algorithms have been implemented in a Xilinx Virtex 5 FPGA, and the resulting slice utilization and energy dissipation have been estimated. Due to the increased logic overhead of the early-decision decoder, the slice utilization increases from 14.5% to 21.0%, and the reduction in logic energy dissipation from 499 pJ to 291 pJ per iteration and bit is partly offset by the clock distribution power, which increases from 141 pJ to 191 pJ per iteration and bit. Still, the early-decision decoder shows an estimated net 16% decrease in energy dissipation.
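The early-decision rule described in this abstract can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the LLR representation, the threshold value, and the function signature are assumptions for the example.

```python
# Sketch of the early-decision modification to sum-product decoding.
# Assumption: bit reliabilities are log-likelihood ratios (LLRs); a bit whose
# |LLR| exceeds a threshold is decided early and removed from message passing.

def early_decision_pass(llrs, threshold, active):
    """Decide bits whose reliability exceeds `threshold`.

    llrs      -- list of current LLRs, one per variable node
    threshold -- reliability magnitude above which a bit is frozen
    active    -- set of variable-node indices still being decoded

    Returns (decisions, active), where decisions maps a frozen bit index
    to its hard value 0/1.
    """
    decisions = {}
    for v in list(active):
        if abs(llrs[v]) >= threshold:
            decisions[v] = 0 if llrs[v] > 0 else 1  # hard decision
            active.discard(v)  # its messages drop out of later iterations
    return decisions, active
```

In a full decoder this pass would run between iterations, which is exactly where the internal-communication savings quoted above come from: frozen bits no longer exchange messages.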
5

Gunnam, Kiran Kumar. "Area and energy efficient VLSI architectures for low-density parity-check decoders using an on-the-fly computation". [College Station, Tex.] : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1049.

6

Selvarathinam, Anand Manivannan. "High throughput low power decoder architectures for low density parity check codes". Texas A&M University, 2005. http://hdl.handle.net/1969.1/2529.

Abstract:
A high-throughput scalable decoder architecture, a tiling approach to reduce the complexity of the scalable architecture, and two low-power decoding schemes have been proposed in this research. The proposed scalable design is generated from a serial architecture by scaling the combinational logic, partitioning the memory, and constructing a novel H matrix to make parallelization possible. The scalable architecture achieves a high throughput for higher values of the parallelization factor M. The switch logic used to route the bit nodes to the appropriate checks is an important constituent of the scalable architecture, and its complexity grows with M. The proposed tiling approach is applied to the scalable architecture to simplify the switch logic and reduce gate complexity. The tiling approach generates patterns that are used to construct the H matrix by repeating a fixed number of the generated patterns. The advantages of the proposed approach are two-fold. First, the information stored about the H matrix is reduced by one-third. Second, the switch logic of the scalable architecture is simplified. The H matrix information is also embedded in the switch, and no external memory is needed to store the H matrix. The scalable architecture and the tiling approach are proposed at the architectural level of the LDPC decoder. We also propose two low-power decoding schemes that take advantage of the distribution of errors in the received packets. Both schemes use a hard iteration after a fixed number of soft iterations. The dynamic scheme performs X soft iterations and then computes cHT, the number of parity checks in error. Based on the cHT value, the decoder decides whether to perform further soft iterations or a hard iteration. The advantage of the hard iteration is so significant that the second low-power scheme simply performs a fixed number of soft iterations followed by a hard iteration.
To preserve the bit-error-rate performance, the number of soft iterations in this second scheme is higher than the number performed before computing cHT in the first scheme.
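The decision step of the dynamic scheme above can be sketched as follows. The matrix layout, threshold, and function names are illustrative assumptions; only the idea (count unsatisfied parity checks, then pick the iteration type) comes from the abstract.

```python
# Sketch of the dynamic low-power scheme: after X soft iterations, count the
# parity checks in error (cHT) and choose the next iteration type.

def unsatisfied_checks(H, bits):
    """cHT: number of rows of the binary matrix H violated by `bits`."""
    return sum(1 for row in H
               if sum(h * b for h, b in zip(row, bits)) % 2 == 1)

def next_iteration_type(H, bits, cht_threshold):
    """Switch to a cheap 'hard' iteration when few checks fail."""
    return 'hard' if unsatisfied_checks(H, bits) <= cht_threshold else 'soft'
```

The threshold trades power against error-rate performance: a larger value switches to hard iterations sooner.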
7

Zhang, Kai. "High-Performance Decoder Architectures For Low-Density Parity-Check Codes". Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/17.

Abstract:
Low-Density Parity-Check (LDPC) codes, invented by Gallager in the 1960s, have attracted considerable attention recently. Compared with other error-correction codes, LDPC codes are well suited for wireless, optical, and magnetic recording systems due to their near-Shannon-limit error-correcting capacity, high intrinsic parallelism, and high throughput potential. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMax), 802.15.3c (WPAN), DVB-S2, and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors in sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput, and rate flexibility. In this work we investigate tradeoffs between these four performance aspects and develop several decoder architectures that improve one or more aspects while maintaining acceptable values for the others. First, we present a high-throughput decoder design for Quasi-Cyclic (QC) LDPC codes. Two new techniques are proposed: a parallel layered decoding architecture (PLDA) and critical-path splitting. PLDA enables parallel processing of all layers by establishing dedicated message-passing paths among them, so the decoder avoids a large crossbar-based interconnect network. The critical-path splitting technique carefully adjusts the starting point of each layer to maximize the time intervals between adjacent layers, so that the critical-path delay can be split into pipeline stages. Furthermore, min-sum and loosely coupled algorithms are employed for area efficiency. As a case study, a rate-1/2 2304-bit irregular LDPC decoder is implemented as an ASIC in a 90 nm CMOS process.
The decoder achieves an input throughput of 1.1 Gbps, a 3- to 4-fold improvement over state-of-the-art LDPC decoders, while maintaining a comparable chip size of 2.9 mm^2. Second, we present a high-throughput decoder architecture for rate-compatible (RC) LDPC codes that supports arbitrary code rates between the rate of the mother code and 1. While the original PLDA lacks rate flexibility, the problem is solved gracefully by incorporating a puncturing scheme. Simulation results show that the selected puncturing scheme introduces a BER degradation of less than 0.2 dB compared with the dedicated codes for different rates specified in the IEEE 802.16e (WiMax) standard. PLDA is then employed for the high-throughput decoder design. As a case study, an RC-LDPC decoder based on the rate-1/2 WiMax LDPC code is implemented in a 90 nm CMOS process. The decoder achieves an input throughput of 975 Mbps and supports any rate between 1/2 and 1. Third, we develop a low-complexity VLSI architecture and implementation for the LDPC decoder used in China Multimedia Mobile Broadcasting (CMMB) systems. An area-efficient layered decoding architecture based on the min-sum algorithm is incorporated in the design. A novel split-memory architecture is developed to efficiently handle the weight-2 submatrices that are rarely seen in conventional LDPC decoders. In addition, the check-node processing unit is highly optimized to minimize complexity and computing latency while facilitating a reconfigurable decoding core. Finally, we propose an LDPC-decoder-like channel detector for sparse ISI channels using belief propagation (BP). BP-based detection depends computationally only on the number of nonzero interferers and is thus well suited for sparse ISI channels, which are characterized by long delay but a small fraction of nonzero interferers. The layered decoding algorithm, popular in LDPC decoding, is also adopted.
Simulation results show that layered decoding doubles the convergence speed of the iterative belief propagation process. Exploiting the special structure of the connections between the check nodes and the variable nodes on the factor graph, we propose an effective detector architecture for generic sparse ISI channels to facilitate the practical application of the proposed detection algorithm. The proposed architecture is also reconfigurable, in order to switch connections flexibly on the factor graph in time-varying ISI channels.
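Several dissertations in this list rely on the min-sum approximation mentioned above. As a reference point, here is the textbook check-node update in its unscaled form; this is the generic algorithm, not any particular decoder architecture from these theses.

```python
# Textbook min-sum check-node update: the outgoing LLR toward each variable
# node has magnitude equal to the minimum of the OTHER incoming magnitudes,
# and sign equal to the product of the other incoming signs.

def min_sum_check_node(incoming):
    """incoming: list of LLR messages entering one check node (degree >= 2)."""
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(sign * min(abs(m) for m in others))
    return out
```

Hardware implementations typically track only the two smallest magnitudes per check node instead of recomputing the minimum per edge, which is what makes min-sum so area-efficient.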
8

Kopparthi, Sunitha. "Flexible encoder and decoder designs for low-density parity-check codes". Diss., Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/4190.

9

Hussein, Ahmed Refaey Ahmed. "Universal Decoder for Low Density Parity Check, Turbo and Convolutional Codes". Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28154/28154.pdf.

10

Cai, Fang. "Efficient VLSI Architectures for Non-binary Low Density Parity Check Decoding". Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1300821245.

11

Shadab, Rakin Muhammad. "Statistical Analysis of a Channel Emulator for Noisy Gradient Descent Low Density Parity Check Decoder". DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7582.

Abstract:
The purpose of a channel emulator is to emulate a communication channel in real-life use-case scenarios. Such emulators are often used in digital and wireless communication research. One such area is error-correction coding, where data bits transmitted over a channel are decoded and corrected to prevent data loss. A channel emulator that does not follow the properties of the channel it is intended to replicate can lead to mistakes when analyzing the performance of an error-correcting decoder. Hence, it is crucial to validate an emulator for a particular communication channel. This work delves into the statistics of a channel emulator and analyzes its effects on a particular decoder.
12

Von, Leipzig Mirko. "Code generation and simulation of an automatic, flexible QC-LDPC hardware decoder". Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96835.

Abstract:
Thesis (MEng)--Stellenbosch University, 2015.
Iterative error-correcting codes such as LDPC codes have become prominent in modern forward error correction systems. A particular subclass of LDPC codes known as quasi-cyclic LDPC codes has been incorporated in numerous high-speed wireless communication and video broadcasting standards. These standards feature multiple codes with varying codeword lengths and code rates, and require a high throughput. Flexible hardware capable of decoding multiple quasi-cyclic LDPC codes is therefore desirable. This thesis investigates binary quasi-cyclic LDPC codes and designs a generic, flexible VHDL decoder. The decoder is further enhanced to automatically select the most likely decoder based on the initial a posteriori probability of the parity-check equation syndromes. A software system is developed that generates hardware code for such a decoder from a small user specification. The system is extended to provide performance simulations for the generated decoder.
13

Yang, Lan. "An Area-Efficient Architecture for the Implementation of LDPC Decoder". Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1300337576.

14

Frenzel, Janik. "Decoder Improvements for Fast Decoding of Low-Density Parity-Check Codes" (supervisor: Johannes Huber; reviewers: Johannes Huber, Berthold Lankl). Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2020. http://d-nb.info/1217065237/34.

15

Guilloud, Frédéric. "Architecture generique de decodeurs de codes LDPC". Paris, ENST, 2004. http://www.theses.fr/2004ENST0004.

Abstract:
Low-density parity-check codes are among the most powerful error-correcting codes, since they make it possible to get within a fraction of a dB of the Shannon limit. They are now beginning to appear in industrial standards such as the second-generation digital satellite broadcasting standard (DVB-S2). In this thesis, we focused our research on iterative decoding algorithms and their hardware implementations. We first proposed the lambda-min algorithm, which significantly reduces the complexity of the decoder without any significant performance loss compared to the belief propagation (BP) algorithm. We then designed a generic architecture for an LDPC decoder, which has been implemented on an FPGA-based platform. This hardware decoder accelerates simulations by a factor of more than 500 compared to software simulations. Moreover, thanks to its fully tunable design, our decoder offers many facilities: it can be configured for a very wide family of codes, so that the search for good codes proceeds faster; the genericity of the processing components makes it possible to optimize the internal number format with a view to an ASIC design; and, thanks to its modularity, different schedules and computation algorithms can be tested. Finally, our experience in the area of LDPC decoders led us to propose a formal framework for analyzing and synthesizing LDPC decoder architectures. This framework encompasses both the datapath (parallelism, node-processor architectures) and the control mode associated with the various decoding schedules. Within this framework, a classification of state-of-the-art LDPC decoders is proposed, along with efficient, previously unpublished architectures.
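The lambda-min idea from this thesis can be sketched briefly: the exact check-node update is computed over only the λ smallest-magnitude incoming messages. The box-plus operator below is the standard BP formulation; the selection rule and fixed-point details of the actual decoder are simplified here.

```python
import math

def boxplus(a, b):
    """Standard check-node combining: 2*atanh(tanh(a/2)*tanh(b/2))."""
    t = math.tanh(a / 2.0) * math.tanh(b / 2.0)
    t = max(min(t, 0.999999), -0.999999)  # clip for numerical safety
    return 2.0 * math.atanh(t)

def lambda_min_check_node(incoming, lam):
    """Outgoing messages computed from only the `lam` least-reliable inputs."""
    # indices of the lam smallest-magnitude incoming messages
    idx = sorted(range(len(incoming)), key=lambda i: abs(incoming[i]))[:lam]
    out = []
    for e in range(len(incoming)):
        use = [i for i in idx if i != e] or idx  # drop self; guard lam == 1
        acc = incoming[use[0]]
        for i in use[1:]:
            acc = boxplus(acc, incoming[i])
        out.append(acc)
    return out
```

Since only λ values feed each update instead of the full check-node degree, the number of box-plus operations per check node drops substantially, which is the complexity reduction the abstract refers to.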
16

Matcha, Chaitanya Kumar, Mohsen Bahrami, Shounak Roy, Shayan Garani Srinivasa and Bane Vasic. "Generalized belief propagation based TDMR detector and decoder". IEEE, 2016. http://hdl.handle.net/10150/622831.

Abstract:
Two-dimensional magnetic recording (TDMR) achieves high areal densities by reducing the size of a bit to a size comparable to that of the magnetic grains, resulting in two-dimensional (2D) inter-symbol interference (ISI) and very high media noise. Therefore, it is critical to handle the media noise along with the 2D ISI detection. In this paper, we tune the generalized belief propagation (GBP) algorithm to handle the media noise seen in TDMR. We also provide intuition into the nature of the hard decisions produced by the GBP algorithm. The performance of the GBP algorithm is evaluated over a Voronoi-based TDMR channel model, where the soft outputs from the GBP algorithm are used by a belief propagation (BP) algorithm to decode low-density parity-check (LDPC) codes.
17

XIAO, GUOPING. "VLSI architectures design for encoders of High Efficiency Video Coding (HEVC) standard". Doctoral thesis, Politecnico di Torino, 2016. http://hdl.handle.net/11583/2644058.

Abstract:
The growing popularity of high-resolution video and the continuously increasing demand for high-quality video on mobile devices are producing a stronger need for more efficient video encoders. To address these demands, HEVC, the newest video coding standard, has been developed by a joint team formed by ISO/IEC MPEG and ITU-T VCEG. Its design goal is to achieve a 50% compression gain over its predecessor H.264 with equal or even higher perceptual video quality. Motion estimation (ME), one of the most critical modules in video coding, contributes almost 50%-70% of the computational complexity of the video encoder. This high consumption of computational resources limits the performance of encoders, especially for full-HD or ultra-HD video, in terms of coding speed, bit rate, and video quality. Thus the major part of this work concentrates on reducing the computational complexity and improving the timing performance of motion estimation algorithms for the HEVC standard. First, a new strategy to calculate the SAD (sum of absolute differences) for motion estimation is designed, based on statistics of the pixel data of video sequences. These statistics demonstrate that the size relationship between the sums of two sets of pixels has a determined connection with the distribution of the size relationships between individual pixels from the two sets. Taking advantage of this observation, only a small proportion of pixels needs to be involved in the SAD calculation. Simulations show that the amount of computation required by the full search algorithm is reduced by about 58% on average and up to 70% in the best case. Second, from the perspective of parallelization, an enhanced TZ search for HEVC is proposed using novel schemes of multiple MVPs (motion vector predictors) and a shared MVP.
Specifically, using multiple MVPs, the initial search process is performed in parallel at multiple search centers, and the ME processing engines for the PUs within one CU (coding unit) are parallelized based on the MVP-sharing scheme at the CU level. Moreover, the SAD module for the ME engine is also implemented in parallel for the PU size of 32×32. Experiments indicate that this achieves an appreciable improvement in the throughput and coding efficiency of the HEVC video encoder. In addition, the other part of this thesis is devoted to VLSI architecture design for finding the first W maximum/minimum values, targeting high speed and low hardware cost. The architecture based on a novel bit-wise AND scheme has only half the area of the best reference solution, and its critical-path delay is comparable with other implementations. The FPCG (full parallel comparison grid) architecture, which utilizes an optimized comparator-based structure, is on average 3.6 times faster, and at best 5.2 times faster, than the reference architectures. Finally, the architecture using a partial-sorting strategy reaches a good balance between timing performance and area, with a speed slightly lower than or comparable to the FPCG architecture and an acceptable hardware cost.
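The SAD-reduction idea above, computing the metric from only a subset of pixels, can be illustrated with a toy sketch. The stride-based subsampling rule here is an illustrative assumption, not the thesis's statistical pixel-selection criterion.

```python
# Sketch of subset-based SAD (sum of absolute differences) for motion
# estimation: only a fraction of pixel positions contributes to the metric.

def sad_full(block_a, block_b):
    """Exact SAD between two equally sized pixel blocks (flattened lists)."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def sad_subset(block_a, block_b, stride=2):
    """Approximate SAD from every `stride`-th pixel, rescaled so the
    magnitude stays comparable with the full metric."""
    partial = sum(abs(a - b)
                  for a, b in zip(block_a[::stride], block_b[::stride]))
    return partial * stride
```

A real encoder would use such an approximation to rank candidate motion vectors cheaply, reserving the exact SAD for the few best candidates.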
18

Darabiha, Ahmad. "VLSI architectures for multi-Gbps Low-Density Parity-Check decoders". 2008. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=742560&T=F.

19

Lee, Xin-Ru (李欣儒). "Research on Stochastic Binary and Nonbinary Low-Density Parity-Check Code Decoders". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/nh3dez.

Abstract:
PhD dissertation. National Chiao Tung University, Department of Electronics Engineering, Institute of Electronics, academic year 103.
To ensure reliable transmission and storage, error-control codes are requisite in communication and storage systems. Among error-control codes, low-density parity-check (LDPC) codes, owing to their excellent error-correcting performance and throughput, are very attractive to academia and industry. Recently, multi-rate LDPC codes have been widely adopted in a variety of specifications to support different transmission schemes. However, the design challenge of a multi-rate LDPC decoder is wire routing, which affects the area and the operating frequency. Stochastic decoding cleverly provides a routing-efficient and compact solution for LDPC decoders by means of its bit-serial property and simpler computations. On the other hand, nonbinary LDPC (NB-LDPC) codes can provide even better coding gain than binary LDPC codes, at the cost of higher decoding complexity and lower throughput. How to achieve a throughput in the Gbps range while reducing the storage requirement remains the key design challenge for stochastic decoding. In this dissertation, we propose area- and energy-efficient stochastic decoders for binary and nonbinary LDPC codes and demonstrate that our proposals are competitive with conventional LDPC decoding algorithms. For binary LDPC codes, the first silicon-proven stochastic LDPC decoder to support multiple code rates for IEEE 802.15.3c applications is presented. The critical path is improved by a reconfigurable stochastic check-node unit (CNU) and variable-node unit (VNU); therefore, a high-throughput scheme can be realized with a 768 MHz clock frequency. To achieve higher hardware and energy efficiency, a reduced-complexity tracking-forecast-memory architecture is experimentally investigated to implement the variable-node units for IEEE 802.15.3c applications. Based on the properties of the parity-check matrices and stochastic arithmetic, optimized routing networks with re-permutation techniques are adopted to enhance chip utilization.
Considering measurement uncertainties, a delay-locked loop with an isolated power domain and a test environment consisting of an encoder, an AWGN generator, and bypass circuits are also designed for internal clock and information generation. With these features, our proposed fully parallel LDPC decoder chip, fabricated in a 90-nm CMOS process with a 760.3K gate count, achieves a 7.92 Gb/s data rate with a power consumption of 437.2 mW under a 1.2 V supply voltage. Compared to state-of-the-art IEEE 802.15.3c LDPC decoder chips, our proposed chip achieves over 90% reduction in routing wires, and 73.8% and 11.5% enhancements in hardware and energy efficiency, respectively. For nonbinary LDPC codes, a partially parallel decoder for a (168, 84) regular-(2, 4) NB-LDPC code over GF(16) is implemented. The simpler routing networks afforded by stochastic computation, together with optimized computation units, deliver 96.6% logic utilization, the highest among silicon-proven LDPC or NB-LDPC decoders. A logarithm-domain transformation and a message truncation technique are proposed to further reduce the bit width and storage requirements of messages. A test chip was fabricated in a UMC 90-nm 1P9M CMOS process with a core area of 3.75 mm2. At the standard performance condition, with room temperature and a 1 V supply, the chip operates at 264 MHz, achieving a throughput of 943.7 Mb/s with 188 computation cycles and consuming 347.1 mW. For better area efficiency, we increase the supply voltage to 1.2 V, where a clock rate of 368 MHz is achieved for a higher throughput of 1.31 Gb/s and an area efficiency of 350.67 Mb/s/mm2. For energy-aware applications, we scale down the supply voltage to 0.8 V for a lower operating frequency of 194 MHz, leading to the best energy efficiency of 0.29 nJ/b (with a power of 203.5 mW). Compared to the latest NB-LDPC designs, the power consumption, area efficiency, and energy efficiency of this chip are 6x, 2x, and 7.5x better, respectively.
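The stochastic decoding principle this dissertation builds on can be shown in a toy form: a probability is represented as a Bernoulli bit stream, and a degree-2 parity check reduces to a bitwise XOR of streams. Stream length, seeding, and function names are assumptions for the example; real decoders use hardware bit streams, not Python lists.

```python
import random

# Toy stochastic arithmetic: probability p  <->  bit stream whose fraction
# of ones is p; a degree-2 check node is a bitwise XOR of its input streams.

def to_stream(p, length, rng):
    """Bernoulli(p) bit stream of the given length."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def check_node_stream(s1, s2):
    """Stochastic check node: XOR of the two input streams."""
    return [a ^ b for a, b in zip(s1, s2)]

def estimate(stream):
    """Recover the probability encoded by a stream."""
    return sum(stream) / len(stream)
```

For inputs with probabilities p1 and p2, the XOR stream encodes p1(1-p2) + p2(1-p1), the exact check-node result, which is why stochastic decoders get away with single-gate node hardware and bit-serial wiring.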
20

Chen, Jinghu. "Reduced complexity decoding algorithms for low-density parity check codes and turbo codes". 2003. http://proquest.umi.com/pqdweb?index=0&did=765086321&SrchMode=2&sid=11&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1233251616&clientId=23440.

21

Tu, Shu-Wen (凃淑文). "Design and Implementation for Non-binary Low-density Parity-check Codes (NB-LDPC) Decoders". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/39120352634119597898.

Abstract:
Master's thesis. National Chiao Tung University, Institute of Electronics, academic year 101.
Non-binary LDPC codes, which are extended from binary LDPC codes, have excellent decoding performance and are robust to various channel impairments. Alongside this remarkable decoding ability, high computational complexity and large memory usage are the main challenges to implementing non-binary LDPC codes in practice. This thesis presents a highly hardware-efficient architecture for a non-binary LDPC decoder using an improved Extended Min-Sum decoding algorithm with layered scheduling. Based on enhancements in the check-node processing and efficient memory storage, the proposed decoder doubles the throughput and halves the storage of the edge messages. Using a 90-nm CMOS process technology, a (2,4)-regular non-binary QC-LDPC decoder over GF(2^6) is implemented. In the post-layout simulation results, the decoder throughput exceeds 100 Mbps at 10 iterations. Compared with state-of-the-art designs, this implementation improves hardware efficiency (throughput-to-gate-count ratio) by at least 4.3 times, while the decoding performance remains competitive.
22

Mondal, Arijit. "Efficient Hardware Architectures for Error Correcting Codes Applicable to Data Storage". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5674.

Abstract:
Error correcting codes (ECCs) are essential to transmission and data storage sys-tems to protect the information from errors introduced by noisy communication channels. There are two main classes of ECCs, namely algebraic and iterative ECCs. While iterative ECCs like low-density parity-check (LDPC) codes provide improved performance in the waterfall region albeit exhibiting flooring effect for not so well-designed codes, algebraic ECCs like Bose–Chaudhuri–Hocquenghem (BCH) and Reed Solomon (RS) codes provide guaranteed error correction capability irrespective of the waterfall or error floor regions. Due to recent advancements in higher-dimensional data storage technologies like shingled and 2-D magnetic recording (TDMR), 3-DNAND flash memories, and holographic memories, native 2-Dsignal processing and coding techniques are re-quired to overcome inter-symbol interference (ISI) and noise leading to 2-Dburst and random errors. With high data densities beyond 2 Tb/in2 in practical TDMR channels, reliable information storage and retrieval require highly efficient ECCs. The primary motivation of this dissertation is to design efficient hardware architectures for error correcting codes pertaining to 1-Dand 2-Dstorage channels. The focus topics are as follows: (i) First, we designed a high-throughput 1-DLDPC decoder using layered and non-layered min-sum algorithm based on non-uniform quantization on a field programmable gate array (FPGA) kit. Unlike the standard state-of-the-art uniform quantization used in virtually all decoder circuits, our non-uniform quantization technique achieves a slight performance improvement in the signal-to-noise ratio (SNR) using the same bit budget as the uniform case. Using 1 bit lesser than uniform quantization, it yields area savings for the block RAMs used for storing intermediate check node and variable node messages. 
(ii) We proposed efficient encoding and decoding hardware architectures for (n, k), t-error-correcting BCH product codes in the frequency domain. Using the properties of conjugate classes over a finite field, we reduced the algorithmic complexity of the encoder, leading to a significant reduction in the hardware complexity. A low-latency (2t + 2) decoder for the above encoder is also designed. For a particular case of n = 15 and t = 2, the architectures were implemented on an FPGA kit, giving high throughputs of 22.5 Gbps and 5.6 Gbps at 100 MHz for the encoder and decoder respectively. (iii) We proposed fast and efficient hardware architectures for a 2-D BCH code of size n × n, with a quasi-cyclic burst error correction capability of t × t, in the frequency domain for data storage applications. A fully parallel encoder with the ability to produce an output every clock cycle was designed. Using conjugate class properties of finite fields, the algorithmic complexity of the encoder was significantly reduced, leading to a reduction in the number of gates by about 94% compared to the brute-force implementation per 2-D inverse discrete finite field Fourier transform (IDFFFT) point for a 15 × 15, t = 2, 2-D BCH code. We also designed a pipelined, low-latency decoder for the above encoder. The algorithmic complexities of various pipeline stages of the decoder were reduced significantly using finite field properties, reducing the space complexity of the entire decoder. For a particular case of n = 15 and t = 2, the architectures were implemented targeting a Kintex-7 KC-705 FPGA kit, giving high throughputs of 22.5 Gbps and 5.6 Gbps at 100 MHz for the encoder and decoder, respectively. (iv) We developed an efficient design architecture for finding the roots of a bivariate polynomial over GF(q) by extending the Chien search procedure to two dimensions.
The complexity of the Chien search was reduced to the order of the number of conjugacy classes over GF(q^λ), leading to a significant reduction in the computational complexity. We provided an efficient design architecture for our algorithm towards a circuit realization, useful for decoding of 2-D algebraic ECCs. (v) Native 2-D LDPC codes provide 2-D burst erasure correction capability and have promising applications in TDMR technology. Though carefully constructed rastered 1-D LDPC codes can provide 2-D burst erasure correction, they are not as efficient as native 2-D codes constructed for handling a 2-D span of burst erasures. Our contributions are two-fold: (a) We propose a new 2-D LDPC code with girth greater than 4 by generating a parity-check tensor through stacking permutation tensors of size p × p × p along the i, j, k axes. The permutations are achieved through circular shifts on an identity tensor along different coordinate axes in such a way that the code provides a burst erasure correction capability of at least p × p. (b) We propose a fast, efficient, and scalable hardware architecture for a parallel 2-D LDPC decoder based on the proposed code construction for data storage applications. Through efficient indexing of the received messages in a RAM, we propose novel routing mechanisms for messages between the check nodes and variable nodes through a set of two barrel shifters, producing shifts along two axes. Through simulations, we show that the performance of the proposed 2-D LDPC codes matches that of a 1-D QC-LDPC code, with a sharp waterfall drop of 3-4 orders of magnitude over ∼0.3 dB, for random errors over code sizes of ∼32 Kbits, or equivalently ∼180 × 180 2-D arrays. Further, we prove that the proposed native 2-D LDPC codes outperform their 1-D counterparts in terms of 2-D cluster erasure correction ability.
For p = 16 and code arrays of size 48 × 48, we implemented the proposed design architecture on a Kintex-7 KC-705 FPGA kit, achieving a significantly high worst-case throughput of 12.52 Gbps at a clock frequency of 163 MHz.
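The permutation-tensor construction in contribution (v) above, circular shifts of a 3-D identity tensor along the i, j, k axes, can be sketched in a few lines. This is an illustrative reconstruction of the circulant-shift idea only, not the thesis's code; representing the tensor by the coordinate set of its ones is an assumption made for brevity.

```python
def permutation_tensor(p, si, sj, sk):
    """Coordinates of the ones of a p x p x p identity tensor after
    circular shifts (si, sj, sk) along the i, j, k axes.  Tensors of
    this kind are stacked to form the 2-D LDPC parity-check tensor."""
    return {((d + si) % p, (d + sj) % p, (d + sk) % p) for d in range(p)}

ones = permutation_tensor(4, 1, 2, 3)
assert len(ones) == 4
# No two ones share an axis-aligned line: the projection onto any pair
# of axes is injective -- the structural property that lets a pair of
# barrel shifters route check/variable-node messages along two axes.
for a, b in [(0, 1), (0, 2), (1, 2)]:
    assert len({(c[a], c[b]) for c in ones}) == 4
```

Each choice of shift triple gives a different permutation tensor, mirroring how circulant shift values parameterize the blocks of a QC parity-check matrix in the 1-D case.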
APA, Harvard, Vancouver, ISO, and other citation styles
23

"Parallel VLSI Architectures for Multi-Gbps MIMO Communication Systems". Thesis, 2011. http://hdl.handle.net/1911/70461.

Full text source
Abstract:
In wireless communications, the use of multiple antennas at both the transmitter and the receiver is a key technology to enable high data rate transmission without additional bandwidth or transmit power. Multiple-input multiple-output (MIMO) schemes are widely used in many wireless standards, allowing higher throughput using spatial multiplexing techniques. MIMO soft detection poses significant challenges to the MIMO receiver design as the detection complexity increases exponentially with the number of antennas. As next-generation wireless systems push for multi-Gbps data rates, there is a great need for high-throughput, low-complexity soft-output MIMO detectors. The brute-force implementation of the optimal MIMO detection algorithm would consume enormous power and is not feasible with current technology. We propose a reduced-complexity soft-output MIMO detector architecture based on a trellis-search method. We convert the MIMO detection problem into a shortest path problem. We introduce a path reduction and a path extension algorithm to reduce the search complexity while still maintaining sufficient soft information values for the detection. We avoid the missing counter-hypothesis problem by keeping multiple paths during the trellis search process. The proposed trellis-search algorithm is a data-parallel algorithm and is very suitable for high-speed VLSI implementation. Compared with conventional tree-search based detectors, the proposed trellis-based detector achieves a significant improvement in both detection throughput and area efficiency. The proposed MIMO detector has great potential to be applied in next-generation Gbps wireless systems, achieving very high throughput and good error performance. The soft information generated by the MIMO detector will be processed by a channel decoder, e.g. a low-density parity-check (LDPC) decoder or a Turbo decoder, to recover the original information bits.
The channel decoder is another very computation-intensive block in a MIMO receiver SoC (system-on-chip). We present high-performance LDPC decoder and Turbo decoder architectures that achieve 1+ Gbps data rates. Further, a configurable decoder architecture that can be dynamically reconfigured to support both LDPC codes and Turbo codes is developed to support multiple 3G/4G wireless standards. We present ASIC and FPGA implementation results of various MIMO detectors, LDPC decoders, and Turbo decoders, and discuss in detail the computational complexity and the throughput performance of these detectors and decoders.
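The abstract's reformulation of MIMO detection as a shortest-path problem can be illustrated with a generic trellis dynamic program. The cost matrices below are placeholders for the per-antenna partial Euclidean distances of a real detector, and the thesis's path-reduction/path-extension steps and soft-output generation are deliberately omitted; this is only a sketch of the underlying shortest-path view.

```python
def shortest_path(branch, trans):
    """Viterbi-style shortest path through a trellis.

    branch[t][s]: cost of being in state s at stage t (stand-in for a
    partial Euclidean distance); trans[u][v]: cost of edge u -> v
    between consecutive stages.  Returns (total cost, state sequence).
    """
    n = len(branch[0])
    cost = list(branch[0])
    paths = [[s] for s in range(n)]
    for t in range(1, len(branch)):
        new_cost, new_paths = [], []
        for v in range(n):
            # Best predecessor for state v at this stage.
            u = min(range(n), key=lambda u: cost[u] + trans[u][v])
            new_cost.append(cost[u] + trans[u][v] + branch[t][v])
            new_paths.append(paths[u] + [v])
        cost, paths = new_cost, new_paths
    best = min(range(n), key=cost.__getitem__)
    return cost[best], paths[best]

# Two stages, two states; large cross-transition cost keeps the path
# on one rail of the trellis.
result = shortest_path([[0, 1], [1, 0]], [[0, 10], [10, 0]])
assert result == (1, [0, 0])
```

A hard-output detector would map the winning state sequence back to transmitted symbols; a soft-output one, as in the abstract, additionally keeps enough competing paths to form reliable log-likelihood ratios.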
APA, Harvard, Vancouver, ISO, and other citation styles
24

Liao, Wei-Chung, and 廖惟中. "Design of Low-Cost Low-Density Parity-Check Code Decoder". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/35125395598491479322.

Full text source
Abstract:
Master's thesis
National Sun Yat-sen University
Graduate Institute of Computer Science and Engineering
Academic year 93 (ROC calendar)
With the enormously growing applications of mobile communications, how to reduce the power dissipation of wireless communication has become an important issue that attracts much attention. One of the key techniques to achieve low-power transmission is to develop a powerful channel coding scheme which can deliver good error correcting capability even at low signal-to-noise ratio. In recent years, the trend of error control code development has been based on iterative decoding algorithms, which can lead to higher coding gain. In particular, the rediscovered low-density parity-check (LDPC) code has become the most famous code after the introduction of the Turbo code, since it is the code closest to the well-known Shannon limit. However, since the block size used in LDPC is usually very large, and the parity matrix used in LDPC is quite random, the hardware implementation of LDPC has become very difficult. It may require a significant number of arithmetic units as well as a very complex routing topology. Therefore, this thesis addresses several design issues of LDPC decoders. First, under the condition of no SNR estimation, simulation results of several LDPC architectures are provided and show that some architectures can achieve performance close to those with SNR estimation. Secondly, a novel message quantization method is proposed and applied in the LDPC design to reduce the memory and table sizes as well as the routing complexity. Finally, several early termination schemes for LDPC are considered, and it is found that up to 42% of bit node operations can be saved.
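One early-termination scheme of the kind evaluated in the abstract above is the standard syndrome check: stop iterating as soon as the hard decisions satisfy every parity check, saving the remaining bit-node operations. A minimal sketch follows; the `update` callback stands in for one message-passing iteration, and the thesis's specific schemes may differ.

```python
def syndrome_ok(H, bits):
    """True when every parity check is satisfied (H @ bits = 0 mod 2).
    H is given as a list of rows, each row a list of column indices."""
    return all(sum(bits[j] for j in row) % 2 == 0 for row in H)

def decode(H, llrs, update, max_iter=20):
    """Iterative decoding loop with syndrome-based early termination.

    `update` performs one message-passing iteration and returns new
    LLRs; negative LLR means a hard decision of 1.
    """
    for it in range(max_iter):
        hard = [1 if l < 0 else 0 for l in llrs]
        if syndrome_ok(H, hard):
            return hard, it          # stop early: valid codeword found
        llrs = update(llrs)
    return [1 if l < 0 else 0 for l in llrs], max_iter

# Toy example: all-positive LLRs decode to the all-zero codeword and
# terminate immediately, at iteration 0.
H = [[0, 1], [1, 2]]
bits, iters = decode(H, [1.0, 1.0, 1.0], lambda l: l)
```

In hardware, the same check is typically computed partially in parallel alongside the node updates so that termination costs no extra cycles.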
APA, Harvard, Vancouver, ISO, and other citation styles
25

Lin, Kai-Li, and 林凱立. "High-Throughput Low-Density Parity-Check Code Decoder Designs". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/04211570571391710106.

Full text source
Abstract:
Master's thesis
National Chiao Tung University
Department of Electronics Engineering
Academic year 93 (ROC calendar)
In this thesis, two high-throughput low-density parity-check (LDPC) code decoders are presented. The first one is a (600, 450) LDPC code decoder for MB-OFDM UWB applications. The architecture adopts a re-scheduling data flow for channel values and a pipeline structure to reduce routing congestion and critical path delay. Fabricated in a 0.18 µm 1P6M process, the proposed partially parallel decoder can support a 480 Mb/s data rate under 8 decoding iterations. The second decoder is implemented based on a (1200, 720) irregular parity check matrix. To achieve higher chip density and less interconnection delay, the proposed architecture features a new data reordering technique to simplify the data bus between message memories and computational units; as a result, the chip size can be greatly reduced due to the increased chip density. Moreover, the LDPC decoder can also process two different codewords concurrently to increase throughput and datapath efficiency. After 0.18 µm 1P6M chip implementation, a 3.33 Gb/s data rate with 8 decoding iterations is achieved in a 21.23 mm² silicon area. Another experiment using 0.13 µm 1P8M technology further reaches a 5.92 Gb/s data rate within a 10.24 mm² area, while the chip density is 75.4%.
APA, Harvard, Vancouver, ISO, and other citation styles
26

Malema, Gabofetswe Alafang. "Low-density parity-check codes : construction and implementation". 2007. http://hdl.handle.net/2440/45525.

Full text source
Abstract:
Low-density parity-check (LDPC) codes have been shown to have good error correcting performance approaching Shannon's limit. Good error correcting performance enables efficient and reliable communication. However, an LDPC code decoding algorithm needs to be executed efficiently to meet the cost, time, power and bandwidth requirements of target applications. The constructed codes should also meet the error rate performance requirements of those applications. Since their rediscovery, there has been much research work on LDPC code construction and implementation. LDPC codes can be designed over a wide space with parameters such as girth, rate and length. There is no unique method of constructing LDPC codes. Existing construction methods are limited in some way in producing codes with good error correcting performance that are easy to implement for a given rate and length. There is a need to develop methods of constructing codes over a wide range of rates and lengths with good performance and ease of hardware implementability. LDPC code hardware design and implementation depend on the structure of the target LDPC code and are as varied as LDPC matrix designs and constructions. There are several factors to be considered, including decoding algorithm computations, processing node interconnection network, number of processing nodes, amount of memory, number of quantization bits and decoding delay. All of these issues can be handled in several different ways. This thesis is about the construction of LDPC codes and their hardware implementation. The LDPC code construction and implementation issues mentioned above are too many to be addressed in one thesis. The main contribution of this thesis is the development of LDPC code construction methods for some classes of structured LDPC codes, along with techniques for reducing decoding time. We introduce two main methods for constructing structured codes. In the first method, column-weight-two LDPC codes are derived from distance graphs.
A wide range of girths, rates and lengths is obtained compared to existing methods. The performance and implementation complexity of the obtained codes depend on the structure of their corresponding distance graphs. In the second method, a search algorithm based on the bit-filling and progressive-edge-growth algorithms is introduced for constructing quasi-cyclic LDPC codes. The algorithm can be used to form a distance or Tanner graph of a code. This method can also obtain codes over a wide range of parameters. Cycles of length four are avoided by observing the row-column constraint. Row-column connections observing this condition are searched sequentially or randomly. Although the girth conditions are not sufficient beyond six, larger-girth codes were easily obtained, especially at low rates. The advantage of this algorithm compared to other methods is its flexibility. It can be used to construct codes for a given rate and length with girths of at least six for any sub-matrix configuration or rearrangement. The code size is also easily varied by increasing or decreasing the sub-matrix size. Codes obtained using a sequential search criterion show poor performance at low girths (6 and 8), while random searches result in well-performing codes. Quasi-cyclic codes can be implemented in a variety of decoder architectures. One of the many options is the choice of processing node interconnect. We show how quasi-cyclic code processing can be scheduled through a multistage network. Although these networks have more delay than other modes of communication, they offer more flexibility at a reasonable cost. Banyan and Benes networks are suggested as the most suitable networks. Decoding delay is also one of several issues considered in decoder design and implementation. In this thesis, we overlap check and variable node computations to reduce decoding time. Three techniques are discussed, two of which are introduced in this thesis.
The techniques are code matrix permutation, matrix space restriction and sub-matrix row-column scheduling. Matrix permutation rearranges the parity-check matrix such that rows and columns that do not have connections in common are separated. This technique can be applied to any matrix. Its effectiveness largely depends on the structure of the code. We show that its success also depends on the size of the row and column weights. Matrix space restriction is another technique that can be applied to any code and has a fixed reduction in time or amount of overlap. Its success depends on the amount of restriction and may be traded off against performance loss. The third technique, already suggested in the literature, relies on the internal cyclic structure of sub-matrices to achieve overlapping. That technique is limited to LDPC code matrices in which the number of sub-matrices is equal to the row and column weights. We show that it can be applied to other codes with a larger number of sub-matrices than code weights. However, in this case maximum overlap is not guaranteed. We calculate the lower bound on the amount of overlapping. Overlapping can be applied to any sub-matrix configuration of quasi-cyclic codes by arbitrarily choosing the starting rows for processing. Overlapping decoding time depends on inter-iteration waiting times. We show that there are upper bounds on waiting times which depend on the code weights. Waiting times can be further reduced by restricting shifts in identity sub-matrices or using smaller sub-matrices. This overlapping technique can reduce the decoding time by up to 50% compared to conventional message and computation scheduling. The matrix permutation and space restriction techniques result in decoder architectures that are flexible in LDPC code design in terms of code weights and size. This is because, with these techniques, rows and columns are processed in sequential order to achieve overlapping.
However, in the existing technique, all sub-matrices have to be processed in parallel to achieve overlapping. Parallel processing of all code sub-matrices requires the architecture to have a number of processing units at least equal to the number of sub-matrices. Processing units and memory space should therefore be distributed among the sub-matrices according to the sub-matrix arrangement. This leads to high complexity or inflexibility in the decoder architecture. We propose a simple, programmable and high-throughput decoder architecture based on the matrix permutation and space restriction techniques.
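The matrix-permutation criterion described above, that a check-node (row) update and a variable-node (column) update may overlap only when the row and column have no connection in common, can be sketched as a toy conflict test with a greedy pairing. This illustrates the criterion only, not the thesis's actual scheduling algorithm; the greedy strategy is an assumption made for the sketch.

```python
def can_overlap(rows, r, c):
    """Check-node r and variable-node c updates may run concurrently
    only if row r has no edge to column c (no message in flight is
    both read and written in the same cycle)."""
    return c not in rows[r]

def overlap_schedule(rows, n_cols):
    """Greedily pair each check-node update with a conflict-free
    variable-node update -- a toy stand-in for the row/column
    separation produced by matrix permutation."""
    used, pairs = set(), []
    for r, row in enumerate(rows):
        for c in range(n_cols):
            if c not in row and c not in used:
                used.add(c)
                pairs.append((r, c))
                break
    return pairs

# Rows given as sets of connected column indices.
rows = [{0, 1}, {1, 2}]
assert can_overlap(rows, 0, 2) and not can_overlap(rows, 0, 1)
assert overlap_schedule(rows, 3) == [(0, 2), (1, 0)]
```

A real scheduler would additionally account for sub-matrix structure and inter-iteration waiting times, as the abstract discusses.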
Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2007
APA, Harvard, Vancouver, ISO, and other citation styles
28

Wu, Chia-chun, and 吳佳俊. "A Partially Parallel Low-Density Parity Check Code Decoder". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/88035237346680041672.

Full text source
Abstract:
Master's thesis
National Central University
Graduate Institute of Electrical Engineering
Academic year 97 (ROC calendar)
The LDPC decoding algorithm uses the message-passing concept to achieve efficient decoding. When realizing the decoder in hardware, plenty of memory is needed to store the exchanged messages; the required memory size equals the number of 1s contained in the parity-check matrix H. In other words, the larger the H matrix, the more memory is needed. Two partially parallel architectures have been commonly implemented for LDPC decoders: the shared-memory architecture and the individual-memory architecture. Our previous study presented an alternative approach that significantly reduces the memory size requirement. In this study, shift registers are employed to replace memory, to simplify the data retrieval scheme and to further improve the throughput. Results show that an LDPC decoder, with a code length of 1536 and a code rate of 1/2, can achieve a data rate of up to 166 Mbps at the maximum clock frequency of 460 MHz.
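The memory relation stated above, one stored message per 1 in H, can be made concrete with a tiny sketch (an illustration of the sizing rule, not the thesis's design):

```python
def message_memory_words(H):
    """Number of message words a message-passing decoder must store:
    one word per edge of the Tanner graph, i.e. per 1 in the
    parity-check matrix H (given here as a dense 0/1 row list)."""
    return sum(sum(row) for row in H)

H = [[1, 1, 0, 1],
     [0, 1, 1, 1]]
assert message_memory_words(H) == 6
# With q-bit quantized messages, the RAM requirement is q * 6 bits,
# which is why larger or denser H matrices need more memory.
```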
APA, Harvard, Vancouver, ISO, and other citation styles
29

Rabbani, Abolfazli Ali Reza. "An Analog Decoder for Turbo-Structured Low-Density Parity-Check Codes". Thesis, 2012. http://spectrum.library.concordia.ca/975021/1/RabbaniAbolfazli%2DPhD%2DS2013.pdf.

Full text source
Abstract:
In this work, we consider a class of structured regular LDPC codes, called Turbo-Structured LDPC (TS-LDPC). TS-LDPC codes outperform random LDPC codes and have a much lower error floor at high Signal-to-Noise Ratio (SNR). In this thesis, Min-Sum (MS) algorithms are adopted for the decoding of TS-LDPC codes due to their low implementation complexity. We show that the error performance of the MS-based TS-LDPC decoder is comparable with that of the Sum-Product (SP) based decoder and that the error floor property of TS-LDPC codes is preserved. The TS-LDPC decoding algorithms can be performed by analog or digital circuitry. Analog decoders are preferred in many communication systems due to their potential for higher speed, lower power dissipation and smaller chip area compared to their digital counterparts. In this work, the implementation of the (120, 75) MS-based TS-LDPC analog decoder is considered. The decoder chip consists of an analog decoder core, a digital input block and a digital output block. These digital blocks are required to deliver the received signal to the analog decoder core and transfer the estimated codewords to the off-chip module. The analog decoder core is an analog processor performing decoding on the Tanner graph of the code. Variable and check nodes, the main building blocks of the analog decoder, are designed and evaluated. The check node is the most complicated unit in MS-based decoders. The minimizer circuit, the fundamental block of a check node, is designed to achieve a good trade-off between speed and accuracy. In addition, the structure of a high-degree minimizer is proposed considering the accuracy, speed, power consumption and robustness against mismatch of the check node unit. The measurement results demonstrate that the error performance of the chip is comparable with theory. The SNR loss at a Bit-Error-Rate of 10−5 is only 0.2 dB compared to theory, while the information throughput is 750 Mb/s and the energy efficiency of the decoder chip is 17 pJ/b.
It is shown that the proposed decoder outperforms the analog decoders that have been fabricated to date in terms of error performance, throughput and energy efficiency. This decoder is the first analog decoder ever implemented in a sub-100-nm technology, and it improves the throughput of analog decoders by a factor of 56. This decoder sets a new state-of-the-art in analog decoding.
APA, Harvard, Vancouver, ISO, and other citation styles
30

Chien, Yi-Hsing, and 簡義興. "Design and Implementation of Configurable Low-Density Parity-Check Codes Decoder". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/64850195340982427366.

Full text source
Abstract:
Doctoral dissertation
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 101 (ROC calendar)
Low-density parity-check (LDPC) codes have been shown to provide near-Shannon-limit performance for communication systems. Implementation cost, throughput, and power consumption are the most important issues for all communication designs. In this dissertation, three techniques are proposed for efficient LDPC decoders: 1) a scheduling algorithm for high hardware utilization efficiency and throughput, 2) an iteration control algorithm for power efficiency and throughput, and 3) an enhanced min-sum decoding algorithm for better coding gains. First, we present the design of a high-throughput LDPC decoder using an overlapped message-passing scheduling algorithm. A regular hierarchical quasi-cyclic (H-QC) LDPC code is used in this design to provide good coding performance at a long code length. The two-level regular H-QC LDPC code matrix structure is exploited to parallelize the row and column decoding operations. Our scheduling algorithm re-arranges these operations across iteration boundaries to avoid memory access conflicts. The memory requirements are reduced by half compared to pipelined decoders without scheduling. A (12288, 6144) LDPC decoder implemented on an FPGA achieves 781.63 Mbps throughput. Second, concurrent partially-parallel syndrome computation reduces complexity but incurs increased error rates due to the hard decision flipping (HDF) problem. We propose a low-complexity iteration control algorithm that eliminates the BER degradation. The HDF rate is also used to terminate undecodable blocks to further save iterations. The algorithm works over multiple code rates. The implementation results show that a six-rate iteration control logic requires only 23.32% of the hardware of a single-rate fully parallel syndrome computation. Next, the min-sum algorithm has low complexity and has been shown to significantly reduce memory requirements through compression.
We propose an enhanced min-sum decoding algorithm that yields better coding gains while maintaining a low implementation cost. The difference of the minimum values in the min-sum algorithm is used as a decision to boost unreliable bit nodes in the Tanner graph. The proposed enhanced min-sum algorithm offers a 0.35 dB gain over the scaling min-sum algorithm for the rate-1/2 LDPC code in IEEE 802.16e. The implementation results show that the proposed enhanced min-sum decoding increases the cost of a serial decoding processing unit by less than 6%. Finally, we propose a dynamically configurable layered decoder to support multiple quasi-cyclic parity-check matrices and different code rates. Parity-check matrices are described in an efficient way to reduce both the memory size and the implementation cost of the address generator. Both the iteration control and enhanced min-sum algorithms are integrated with the decoder for complete verification and performance evaluation.
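The "difference of minimum values" used above can be sketched behaviorally: a scaled min-sum check node already computes the two smallest magnitudes, so the gap min2 − min1 comes for free as a reliability measure. The scaling factor `alpha` and the interpretation of a small gap as marking an unreliable bit node are illustrative assumptions here, since the abstract does not give the exact boosting rule.

```python
def enhanced_minsum_gap(msgs, alpha=0.75):
    """Scaled min-sum magnitudes plus the min2 - min1 gap.

    min1/min2 are the two smallest input magnitudes; a small gap means
    the check's minimum is barely distinguished from the runner-up --
    a plausible trigger for boosting the corresponding bit node.
    """
    mags = sorted(abs(m) for m in msgs)
    min1, min2 = mags[0], mags[1]
    gap = min2 - min1
    return alpha * min1, alpha * min2, gap

assert enhanced_minsum_gap([2.0, -1.0, 3.0]) == (0.75, 1.5, 1.0)
```

Because the gap reuses values the minimizer already produces, a decision rule based on it adds very little hardware, consistent with the sub-6% cost increase reported.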
APA, Harvard, Vancouver, ISO, and other citation styles
31

Brandon, Tyler. "Parallel-Node Low-Density Parity-Check Convolutional Code Encoder and Decoder Architectures". PhD thesis, 2010. http://hdl.handle.net/10048/981.

Full text source
Abstract:
We present novel architectures for parallel-node low-density parity-check convolutional code (PN-LDPC-CC) encoders and decoders. Based on a recently introduced implementation-aware class of LDPC-CCs, these encoders and decoders take advantage of increased node-parallelization to simultaneously decrease the energy-per-bit and increase the decoded information throughput. A series of progressively improved encoder and decoder designs are presented and characterized using synthesis results with respect to power, area and throughput. The best of the encoder and decoder designs significantly advance the state-of-the-art in terms of both the energy-per-bit and throughput/area metrics. One of the presented decoders, at an Eb/N0 of 2.5 dB, has a bit-error-rate of 10−6, occupies 4.5 mm² in a CMOS 90-nm process, and achieves an energy-per-decoded-information-bit of 65 pJ and a decoded information throughput of 4.8 Gbits/s. We implemented an earlier non-parallel-node LDPC-CC encoder, decoder and a channel emulator in silicon. We provide readers, via two sets of tables, the ability to look up our decoder hardware metrics, across four different process technologies, for over 1000 variations of our PN-LDPC-CC decoders. By imposing practical decoder implementation constraints on power or area, which in turn drive trade-offs in code size versus the number of decoder processors, we compare the code BER performance. An extensive comparison to known LDPC-BC/CC decoder implementations is provided.
32

Chuang, Ho-Sheng, and 莊和昇. "Chip Design of a Low Density Parity Check Decoder for IEEE 802.11n". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/63678887591316601874.

Full text source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
101
Gallager introduced low-density parity-check (LDPC) codes in 1963, but because computing power was weak at the time, they received little attention. With advances in semiconductor technology steadily increasing the available computing power, LDPC has since become an important technique. LDPC decoder architectures can be broadly divided into two kinds, fully parallel and partially parallel, with several variations of the partially parallel scheme depending on the parallelization method. In this thesis, we implement an LDPC decoder with a block-serial architecture that offers three improvements over the previous work by Xiang. First, the critical path is shortened by reordering the processing and storing steps, which improves the clock rate by 11%. Second, the decoder updates the first check-node message while it is storing the log-likelihood ratio (LLR) data, saving 3% of the clock cycles compared with the previous work. Third, the convergence-detection algorithm is improved to avoid saving the result of the last iteration, so the storage space required for the LLRs can be reduced to 11% of the original size. Finally, a performance comparison with previous works validates that the proposed design has a smaller chip area and lower power dissipation.
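The convergence detection mentioned in the third improvement is, at its core, a syndrome check: decoding can stop as soon as every parity check is satisfied, i.e. H·x^T = 0 over GF(2). A minimal sketch:

```python
def converged(H, bits):
    # A codeword estimate is valid when every parity check is satisfied:
    # each row of H, dotted with the hard-decision bits, is 0 mod 2.
    return all(sum(h * b for h, b in zip(row, bits)) % 2 == 0 for row in H)
```

Checking the syndrome every iteration lets the decoder terminate early instead of running a fixed iteration count.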
33

Chan, Ya-Ting, and 詹雅婷. "A Low Density Parity Check Decoder Using New Independent Column Operation Skill". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/48443290580986860899.

Full text source
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate Institute of Electronic and Information Engineering
97
In this thesis, we adopt the log-domain sum-product algorithm (Log-SPA) to implement the check-node part of a modified low-density parity-check decoder. Prior studies show that the look-up table is the circuit most commonly implemented in a traditional check-node unit, and since the main goal of this thesis is a low-hardware-cost LDPC decoder, we start from the traditional look-up table. We then propose a new independent column-wise operation technique that reduces the adder operations. It uses the concept of a priority encoder to establish a new binary-weighting look-up table, so that a simplified adder can be used for the independent column-wise operation. The exact value passed down to the variable node is obtained through a recovery look-up table. The proposed decoder is realized in a fully parallel architecture for the IEEE 802.16e standard, with matrix size (1248, 624) and code rate 1/2. Compared with a traditional LDPC decoder, the total chip area is reduced by about 28%, and the operating speed reaches 111 MHz with only a 0.1 dB BER difference in TSMC 0.18 μm cell-based technology. Under the same conditions, the proposed decoder saves about 22% of the hardware in UMC 90 nm cell-based technology. The implemented test chip is equipped with real-time on-chip self-test circuits, including an additive white Gaussian noise generator and the associated mode-control circuits. The chip, implemented in TSMC 0.18 μm cell-based technology, shows that the new LDPC decoder architecture has low-cost and high-performance characteristics.
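For context, the check-node look-up table in a log-domain SPA decoder typically tabulates Gallager's φ(x) = −ln tanh(x/2), which is its own inverse; the thesis's binary-weighting and recovery tables play this role. A hypothetical uniformly quantized version (table size and range are assumptions, not the thesis's actual table):

```python
import math

def phi(x):
    # Gallager's phi(x) = -ln(tanh(x/2)); note phi(phi(x)) == x, which is
    # why a forward table plus a recovery table suffices in hardware.
    return -math.log(math.tanh(x / 2.0))

def phi_lut(n_bits=4, x_max=6.0):
    # Hypothetical uniform-quantization table: midpoint phi value per bin.
    step = x_max / (1 << n_bits)
    return [phi(step * (i + 0.5)) for i in range(1 << n_bits)]
```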
34

Chan, Ching-Da, and 詹慶達. "Low Cost Low-density Parity-Check Decoder Using AND Gate Based Min-Sum Algorithm". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/pqdrua.

Full text source
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate Institute of Electronic and Information Engineering
96
In this thesis we propose a new fully parallel LDPC decoder that uses simple AND gates in the check-node circuit, with no bit-error-rate performance loss compared with the original min-sum algorithm. A modified min-sum algorithm is first derived, and the hardware complexity of the check-node unit is then reduced tremendously. Meanwhile, the variable-to-check-node messages are mapped through a special irregular quantization so that basic AND gates can replace the complex comparators in the check-node unit. For a hardware implementation of the IEEE 802.16e standard with LDPC matrix size (576, 288) in 0.18 μm IC technology, the proposed LDPC decoder saves 22% of the hardware area. Furthermore, an IEEE 802.11n implementation with matrix size (648, 324) in a 0.13 μm process saves 30% of the hardware compared with the traditional min-sum architecture. The operating speed reaches 100 MHz without pipelining. A real MIMO application is also implemented in this thesis, adopting the IEEE 802.16e standard matrix size (576, 288) in 0.18 μm technology; the core size of the LDPC decoder is only 2800 μm × 2800 μm.
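The AND-gate substitution relies on a unary (thermometer-style) message representation, under which the bitwise AND of two codes equals the code of their minimum. A sketch of that property (the thesis's exact irregular quantization is not given, so the encoding below is an assumption):

```python
def to_thermo(v):
    # Thermometer (unary) code of integer v: v low-order ones, e.g. 3 -> 0b111.
    return (1 << v) - 1

def thermo_min(a, b):
    # For thermometer codes, bitwise AND implements min(), so plain AND
    # gates can stand in for magnitude comparators in the check-node unit.
    return a & b
```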
35

Elhami-Khorasani, Nazanin. "A multistage scheduled decoder for short block length low-density parity-check codes". Thesis, 2007. http://spectrum.library.concordia.ca/975270/1/MR28914.pdf.

Full text source
Abstract:
Recent advances in coding theory have uncovered the previously forgotten power of low-density parity-check (LDPC) codes. Their popularity can be attributed to their relatively simple iterative decoders and their potential to achieve performance close to the Shannon limit, which makes them an attractive candidate for error correction in communication systems. In this thesis, we focus on iterative decoding algorithms for LDPC codes and present an improved decoding algorithm. First, the graph structure of LDPC codes is studied, and a graph-based search algorithm is proposed to find the shortest closed walk and shortest cycle for each node of the graph. Then, a deterministic schedule is applied to the nodes of the graph with the objective of preserving the optimality of the algorithms. Finally, a hybrid switch-type technique is applied to the improved algorithms to provide a desirable complexity/performance trade-off. The hybrid technique and the deterministic schedule are combined for decoding regular and irregular LDPC codes. The performance and complexity of the decoder are studied for the sum-product and Gallager A algorithms. The result is a flexible decoder for any available LDPC code and any combination of decoding algorithms, based on the needs of the communication system. This technique benefits from the high performance of soft-decision algorithms and the low complexity of hard-decision algorithms by changing the decoding rule after a few iterations. Hence, a desirable performance can be obtained with a smaller average number of soft iterations. Moreover, not all nodes update messages in each iteration, so the total number of computations is reduced considerably.
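The hard-decision side of the hybrid scheme, Gallager's algorithm A, can be stated in a few lines. A sketch of its variable-to-check message rule:

```python
def gallager_a_var_msg(received_bit, other_check_msgs):
    # Gallager algorithm A: a variable node repeats its channel bit unless
    # every other incoming check message votes for the complement.
    if other_check_msgs and all(m != received_bit for m in other_check_msgs):
        return 1 - received_bit
    return received_bit
```

Switching from sum-product to this rule after a few iterations trades a little performance for much cheaper per-iteration hardware, which is the trade-off the abstract describes.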
36

Wu, Sheng-Dong, and 吳聲東. "Architecture design of the Modified Block-Type Low-Density Parity-Check Codes Decoder". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/51601930924832747187.

Full text source
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
95
The targets of next-generation wireless communication systems are higher data rates and larger coverage areas. However, radio transmission power must be kept to a minimum due to regulation and system power consumption, so these targets cannot be reached simply by increasing transmission power. Forward error correction (FEC) can be employed instead, and advanced FEC is a key technique in next-generation wireless communication systems. Many up-to-date standards adopt LDPC codes; for example, the next-generation satellite communication standard DVB-S2 uses 64800-bit LDPC codes, and improved FEC can raise data throughput by 30%. The wireless network standard IEEE 802.11n supports code rates from 1/2 to 5/6 for different levels of data protection. In this thesis, a combined low-density parity-check (LDPC) code decoding design method, called modified block-type LDPC (B-LDPC), is presented for realistic LDPC coding system architectures. The B-LDPC code, a special class of quasi-cyclic LDPC (QC-LDPC), has an efficient encoding algorithm owing to the simple structure of its parity-check matrices. A distribution of the irregular parity-check matrix for the modified B-LDPC is developed to obtain an area-efficient decoder design, good error-correction performance, and an achievable architecture implementation. The modified B-LDPC decoding utilizes the iterative min-sum algorithm (MSA), and its decoding architecture employs bit-node-unit (BNU) and check-node-unit (CNU) operations. Different block-matrix sizes for the parity-check matrix can be adopted, so the modified B-LDPC decoding improves throughput without obvious performance degradation.
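Quasi-cyclic codes such as B-LDPC describe the parity-check matrix compactly as a small base matrix of circulant shifts, which is what makes both encoding and decoder memory addressing simple. A sketch of the standard expansion (using the common, but here assumed, convention that −1 marks an all-zero block):

```python
import numpy as np

def expand_qc(base, z):
    # Expand a QC-LDPC base matrix into the full parity-check matrix H:
    # entry -1 -> z x z zero block, entry s >= 0 -> identity shifted by s.
    I = np.eye(z, dtype=int)
    block_rows = []
    for row in base:
        blocks = [np.roll(I, s, axis=1) if s >= 0
                  else np.zeros((z, z), dtype=int) for s in row]
        block_rows.append(np.hstack(blocks))
    return np.vstack(block_rows)
```

Because each block is a shifted identity, a decoder only needs the shift values to generate memory addresses, rather than storing H explicitly.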
37

Wang, Wei-Yi, and 王瑋逸. "Deep Learning Assisted Low Density Parity Check Decoder with Hybrid Hidden Layer Architecture". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/us596n.

Full text source
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
107
In this thesis, we propose a novel belief-propagation decoder for low-density parity-check (LDPC) codes assisted by deep learning. With a long enough girth, belief propagation (BP) has been shown to reduce the complexity of LDPC decoding while yielding error-correction performance close to that of the maximum-likelihood (ML) method. However, using equal weights on the Tanner graph suffers from the "double counting effect": messages passed on the edges have different reliability due to the structure of the parity-check matrix (e.g., girth and the number of small cycles) and the channel condition each bit faces. The performance of BP relies on the independence of messages from different nodes, but small cycles in the Tanner graph introduce correlation among messages; this dependency violates BP's independence requirement and degrades its performance. Methods such as uniformly reweighted belief propagation (URW-BP) and variable factor appearance probability belief propagation (VFAP-BP) use unequal weights to deal with message dependency in BP, but their compensation relies on a single constant weight, which is not general enough. Moreover, the reliability conditions change at every decoding iteration and vary from check node to check node, making it very difficult to derive a formula for the reweighting factor. Hence, we design a hybrid-hidden-layer neural network to learn unequal weights on the Tanner graph that compensate for the unreliability induced by the parity-check matrix structure. With the aid of the learned weights, error-correction performance in the high-SNR region is enhanced. We also design an online-training communication system to improve the modern system.
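Weighted (neural) BP differs from standard BP only in per-edge multiplicative weights on the passed messages. A minimal sketch of a weighted variable-node update (the weight values here are illustrative assumptions; in the thesis they would be learned):

```python
def weighted_vn_update(channel_llr, incoming_llrs, weights):
    # Variable-node update of weighted/neural BP: each incoming check
    # message is scaled by its own edge weight before summation.
    # Setting all weights to 1.0 recovers plain belief propagation.
    return channel_llr + sum(w * m for w, m in zip(weights, incoming_llrs))
```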
38

Liu, Jia-ping, and 劉嘉萍. "A Compact Variable Node Using Thermalcode Addition Technique for Low Density Parity Check Decoder". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/43822828865079165107.

Full text source
Abstract:
Master's thesis
Yunlin University of Science and Technology
Graduate Institute of Electronic and Information Engineering
98
Among current error-correction technologies for digital data, low-density parity-check (LDPC) codes achieve bit-error-rate performance closest to the Shannon limit. The earliest LDPC decoding algorithm is the sum-product algorithm (SPA), which mainly consists of check nodes and variable nodes. Since this algorithm uses a large number of multipliers, causing enormous operating complexity and hardware area, recent studies on LDPC decoders focus on reducing hardware area. Hardware complexity can be reduced effectively by replacing multiplication with addition in the logarithm-domain algorithm, and further by the widely discussed min-sum algorithm (MSA), which uses comparators to reduce additive operations. However, these works concentrate mostly on the check-node algorithm. Fewer works address the variable nodes, which occupy over half of the LDPC decoder yet perform only simple additive operations, leaving little room for reduction. This thesis proposes applying a thermometer-code ("thermalcode") addition technique to the variable nodes of the LDPC decoder to replace the traditional binary adders. The irregular quantization of the thermometer code generates regular values that can be added with simple combinational logic circuits in the variable nodes. Applying the proposed decoding technique to the IEEE 802.11n digital wireless communication standard, with code rate 1/2, irregular matrix (1296, 648) and a fixed number of iterations, simulation results show that the decoded bit-error rate matches that of conventional MSA at an SNR of 2.5 dB. Logic synthesis results also show that the hardware area of the entire LDPC decoder is reduced by 34% compared with MSA, which is traditionally considered a small design.
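The variable node's job is summing message magnitudes, and for unary/thermometer codes that sum is just a ones-count, which simple combinational logic can produce. A saturating sketch (the code width and saturation behavior are assumptions, since the abstract does not specify them):

```python
def thermo_add(a, b, n=15):
    # Adding thermometer codes = adding their ones-counts (popcounts),
    # saturating at the code width n, then re-encoding the sum in unary.
    s = min(bin(a).count("1") + bin(b).count("1"), n)
    return (1 << s) - 1
```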
39

Wang, Huei-Sheng, and 王惠生. "Chip Design of Novel Analog Min-Sum Iterative Decoder for a (8,4) Low-Density Parity-Check (LDPC) Decoder". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/ep898g.

Full text source
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Computer and Communication
96
In this thesis, we have designed an analog decoder that directly receives analog signals, using a min-sum low-density parity-check (LDPC) code. To verify our method, we completed a novel analog min-sum iterative decoder chip for an (8,4) low-density parity-check code in TSMC 0.35-μm Mixed-Signal 2P4M Polycide CMOS technology. In the decoder chip, we used Wilson current mirrors to replace cascode current mirrors in the variable node. The chip contains 2352 transistors, uses a single 3.3-V power supply and consumes 22.9 mW. The chip area including pads is about 3.22 mm², and the core area is about 1.36 mm². The chip has the advantages of low power, small area and low cost, and it can provide an efficient design for future SOC communications.
40

Chang, Chen-Pang, and 張鎮邦. "A Compact Comparator Circuit Design for Min-Sum Algorithm of Low Density Parity Check Decoder". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/a7q4t2.

Full text source
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate Institute of Electronic and Information Engineering
96
This thesis proposes a new simplified comparator circuit for the check-node unit of a low-density parity-check (LDPC) decoder. The new comparator, consisting of a logical-judgment circuit, a multiplexer and basic AND logic, replaces the traditional comparators used in the check-node unit, reducing the hardware area while improving the operating frequency. For the check-node unit alone, synthesized in a 0.18 μm CMOS cell-based technology, the hardware area is reduced by about 55% to 62% and the operating frequency improves by 2 to 3 times, for different bit lengths. For the whole LDPC decoder, adopting the new comparator in the check-node unit saves 11% to 15% of the hardware. The main target of this thesis is an IEEE 802.16e irregular-code application with H-matrix size (576, 288) and code rate 1/2. Simulation results with a fixed number of iterations show a bit-error rate as good as that of traditional min-sum; at an SNR of 2.5 dB, the BER difference is only 0.06 dB. An ASIC equipped with a self-test mechanism and an automatic AWGN-generation circuit was implemented in a 0.18 μm CMOS cell-based technology, finally verifying the success of the new comparator.
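The comparator structure being simplified here finds the smallest and second-smallest magnitudes among a check node's inputs in one pass. Its reference behavior, for comparison against any simplified circuit:

```python
def two_min(vals):
    # Running two-minimum search: two registers, one comparison chain.
    # This is the function the compact comparator circuit must reproduce.
    m1 = m2 = float("inf")
    for v in vals:
        if v < m1:
            m1, m2 = v, m1
        elif v < m2:
            m2 = v
    return m1, m2
```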
41

Lian, Chin-Kuang, and 廖欽寬. "A Partially Parallel Low-Density Parity Check Code Decoder with Reduced Memory for Long Code-Length". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/27023195850205517433.

Full text source
Abstract:
Master's thesis
National Central University
Graduate Institute of Electrical Engineering
95
In recent years, low-density parity-check (LDPC) codes have attracted a lot of attention due to their near-Shannon-limit coding gains when iteratively decoded. LDPC codes are therefore well recognized as an excellent error-correction coding scheme for many digital communication systems, such as next-generation digital video broadcasting (DVB-S2), MIMO-WLAN (802.11n), WMAN (802.16e) and mobile broadband wireless access (MBWA, 802.20) systems. Based on the message-passing algorithm, an LDPC decoder uses memories to store intermediate data between bit nodes and check nodes; the quantity of stored data is related to the number of non-zero entries in the H matrix. In other words, the memory required by a partially parallel LDPC decoder may increase significantly for long LDPC codes. In this thesis, we present an alternative approach that significantly reduces the required memory size, a feature that becomes particularly significant for DVB-S2 applications.
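The scaling argument above can be made concrete: each nonzero of H is an edge of the Tanner graph, and a partially parallel decoder buffers one message per edge. A back-of-the-envelope sketch (the bit width per message is an assumption):

```python
def message_memory_bits(nonzeros_in_H, bits_per_message):
    # One stored message per nonzero entry of H (one per Tanner-graph
    # edge), so message memory grows linearly with code length and density.
    return nonzeros_in_H * bits_per_message
```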
42

Tung, Bo-Wen, and 童柏文. "A Hardware Architecture Design of the Low-Density Parity-Check Code Decoder for IEEE 802.11n Standard". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/41066781939265196076.

Full text source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
100
One of the main challenges of implementing an LDPC decoder is that the interconnection complexity grows with the number of parallel processing units, which increases delay, power dissipation and chip area. In this thesis, we propose a split-core architecture with a reduced-quantization method that preserves the benefit of the split-row threshold algorithm proposed by Mohsenin while retaining good error performance for a multi-mode LDPC decoder for the 802.11n standard. Implementation results show that a decoder with the proposed architecture occupies 2.58 mm² with a final core utilization of 85%, compared with 3 mm² and 70% core utilization for the non-splitting architecture, while the areas of the two architectures are similar before routing.
43

Singh, Manpreet. "Power Characterization of a Digit-Online FPGA Implementation of a Low-Density Parity-Check Decoder for WiMAX Applications". Thesis, 2014. http://hdl.handle.net/10012/8529.

Full text source
Abstract:
Low-density parity-check (LDPC) codes are a class of easily decodable error-correcting codes. Published parallel LDPC decoders demonstrate high throughput and low energy-per-bit but require a lot of silicon area. Decoders based on digit-online arithmetic (processing several bits per fundamental operation) process messages in a digit-serial fashion, reducing the area requirements, and can process multiple frames in frame-interlaced fashion. Field-programmable gate array (FPGA) implementations are usually power- and area-hungry, but provide flexibility compared with application-specific integrated circuit implementations. With the penetration of mobile devices in the electronics industry, power considerations have become increasingly important. The power consumption of a digit-online decoder depends on various factors, such as input log-likelihood ratio (LLR) bit precision, signal-to-noise ratio (SNR) and the maximum number of iterations. The design is implemented on an Altera Stratix IV GX EP4SGX230 FPGA, which comes on an Altera DE4 Development and Education Board. In this work, both parallel and digit-online block LDPC decoder implementations on FPGAs for WiMAX 576-bit, rate-3/4 codes are studied, and power measurements from the DE4 board are reported. The components of the system include a random-data generator, WiMAX encoder, shift-out register, additive white Gaussian noise (AWGN) generator, channel LLR buffer, WiMAX decoder and bit-error-rate (BER) calculator. The random-data generator outputs pseudo-random bit patterns through an implemented linear-feedback shift register (LFSR). Digit-online decoders with input LLR precisions ranging from 6 to 13 bits and parallel decoders with input LLR precisions ranging from 3 to 6 bits are synthesized on a Stratix IV FPGA. The digit-online decoders can be clocked at higher frequencies for higher LLR precisions, and a digit-online decoder can decode two frames simultaneously in frame-interlaced mode.
For the 6-bit implementation of the digit-online decoder in single-frame mode, the minimum throughput achieved is 740 Mb/s at low SNRs. For the 11-bit-LLR digit-online decoder in frame-interlaced mode, the minimum throughput achieved is 1363 Mb/s. Detailed analyses, such as the effect of SNR and LLR precision on decoder power, are presented. The effect of LLR precision on maximum clock frequency and logic utilization is also studied for both the parallel and digit-online decoders. In addition, power per iteration for a 6-bit-LLR-input digit-online decoder is reported.
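The input LLR precision studied above amounts to a saturating uniform quantizer in front of the decoder; varying its width from 3 to 13 bits changes logic utilization and clock frequency. A sketch (the step size is an assumption, not taken from the thesis):

```python
def quantize_llr(llr, n_bits=6, step=0.25):
    # Saturating uniform quantizer: map a real-valued channel LLR to an
    # n-bit two's-complement integer in [-2^(n-1), 2^(n-1) - 1].
    lo, hi = -(1 << (n_bits - 1)), (1 << (n_bits - 1)) - 1
    return max(lo, min(hi, round(llr / step)))
```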
44

Hung, Jui-Hui, and 洪瑞徽. "Investigation and Design of Decoding Algorithms, Decoder Architectures and Cooperative Coding Techniques for Low-Density Parity-Check Codes". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/42830170037293371227.

Full text source
Abstract:
Doctoral dissertation
National Chiao Tung University
Institute of Electronics
101
This dissertation conducts a thorough investigation of various technological aspects of low-density parity-check (LDPC) codes and proposes corresponding efficient techniques for their effective decoding and realization. The investigated issues include decoding algorithms, hardware architectures of LDPC codes, and the application of LDPC codes to cooperative coding and decoding. Furthermore, new coding schemes based on the joint concepts of LDPC and turbo codes are proposed at the end of the dissertation. Bit-flipping (BF) LDPC decoding algorithms have lower complexity than message-passing (MP) algorithms, but suffer from lower decoding performance and higher iteration counts. To significantly enhance BF algorithms, a performance-boosting algorithm, called the low-correlation culprit-bit-test multi-bit flipping (LCCBT-MBF) algorithm, is proposed and integrated with BF algorithms. In addition, this work proposes a genetics-aided message-passing (GA-MP) algorithm that applies a new genetic algorithm to further improve the decoding performance of the MP algorithm. The long delay of the check-node units is usually the major bottleneck in LDPC decoders for high-speed applications; hence, this dissertation proposes several improved comparison algorithms for self-message-excluded CNU (SME-CNU) and two-minimum CNU (TM-CNU) architectures. Next, to eliminate the idle time and hardware complexity of conventional partially parallel decoders, this work proposes a decoder architecture that handles two different codewords simultaneously with 100% hardware utilization. Since LDPC decoding can be conducted very effectively with layered decoding schemes (LDS), a decoder architecture with an optimized execution-reordering scheme for LDS is also proposed. In the final part of the decoder designs, we implement the corresponding LDPC decoders for all the proposed decoding algorithms.
Next, we propose an efficient coding scheme, called the Turbo-LDPC code, which combines the merits of both turbo and LDPC codes. The decoding performance is significantly improved by utilizing a turbo decoding process, while the major part of the decoder remains basically the same as a conventional LDPC decoder. Compared with combined block turbo codes and BCH codes, the proposed Turbo-LDPC code also has much better decoding performance and lower computational complexity. Owing to the significant improvements of the proposed Turbo-LDPC codes in decoding performance, the 2D coding scheme is extended to a new 3D code, named the Triple-LDPC code. Finally, both proposed coding schemes are applied to cooperative coding in relay networks, and cooperative coding and decoding schemes are devised based on these two codes. Analysis and simulations show that the new cooperative coding/decoding techniques can significantly reduce the hardware complexity in relay stations while obtaining better error-correction capabilities.
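For reference, the baseline that multi-bit-flipping schemes such as LCCBT-MBF improve upon is plain Gallager-style bit flipping: each iteration flips the bit(s) participating in the most unsatisfied parity checks. A minimal sketch (this is the generic algorithm, not the dissertation's):

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    # Hard-decision bit flipping: compute the syndrome, count how many
    # unsatisfied checks each bit touches, flip the worst offender(s).
    x = y.copy()
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            break                          # all checks satisfied
        counts = H.T @ syndrome            # unsatisfied checks per bit
        x[counts == counts.max()] ^= 1     # flip the most-implicated bits
    return x
```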
45

Su, Huang-Shiang, and 蘇皇祥. "Chips Design of Analog Current-Mode Quadrature Amplitude Modulation Demodulator and Min-Sum Decoder for (8,4) Regular Low-Density Parity-Check Codes". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/796j96.

Full text source
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Computer and Communication
97
Analog demodulators and decoders are commonly believed to offer adequate error-correcting capability together with low power consumption and small silicon area. In this thesis, VLSI architecture designs of an analog quadrature amplitude modulation (QAM) demodulator and an analog iterative decoder are proposed. First, we design a novel demodulator with analog circuits whose main building blocks are differential pairs and current-buffer circuits. The proposed demodulator architecture effectively maps the analog input signals from the channel to the corresponding constellation points and conveys signals from the demodulator output directly to the decoder, removing the need for an analog-to-digital converter (ADC). The second design is an analog decoder architecture for low-density parity-check (LDPC) codes based on min-sum iterative algorithms, with the current-buffer circuits and check-node accuracy as its focal points. We exploit the high output impedance and high accuracy of cascode current buffers to decrease the channel-length-modulation effect and thereby increase the transfer accuracy and decoding performance of the analog decoder. Finally, a novel analog QAM demodulator and a high-accuracy LDPC decoder have been implemented in 0.35 μm 2P4M CMOS technology. The two chips contain 106 and 1944 transistors respectively and operate from a 3.3 V power supply. Their power consumptions are 402.9 μW and 12.04 mW, and their core areas are 0.14 × 0.13 mm² and 0.52 × 0.2 mm², respectively. The demodulator and decoder chips achieve low power consumption, low cost and proper error-correcting capability, providing an efficient design for SOC integration in future communication receivers.
