Journal articles on the topic "Low-density parity-check Decoders"

Below are the top 50 scholarly journal articles on the topic "Low-density parity-check Decoders".

1

Mao, Yun, Ying Guo, Jun Peng, Xueqin Jiang, and Moon Ho Lee. "Double-Layer Low-Density Parity-Check Codes over Multiple-Input Multiple-Output Channels". International Journal of Antennas and Propagation 2012 (2012): 1–6. http://dx.doi.org/10.1155/2012/716313.

Abstract:
We introduce a double-layer code based on the combination of a low-density parity-check (LDPC) code with the multiple-input multiple-output (MIMO) system, where the decoding can be done in both inner-iteration and outer-iteration manners. The present code, called low-density MIMO code (LDMC), has a double-layer structure; that is, one layer defines subcodes that are embedded in each transmission vector and another glues these subcodes together. It supports inner iterations inside the LDPC decoder and outer iterations between detectors and decoders simultaneously. It can also achieve the desired design rates due to the full rank of the deployed parity-check matrix. Simulations show that the LDMC performs favorably over MIMO systems.
2

Xia, Tian, Hsiao-Chun Wu, and Hong Jiang. "New Stopping Criterion for Fast Low-Density Parity-Check Decoders". IEEE Communications Letters 18, no. 10 (October 2014): 1679–82. http://dx.doi.org/10.1109/lcomm.2014.2349988.

3

Djordjevic, Ivan B. "Photonic entanglement-assisted quantum low-density parity-check encoders and decoders". Optics Letters 35, no. 9 (April 30, 2010): 1464. http://dx.doi.org/10.1364/ol.35.001464.

4

Perez-Pascual, Asun, Alex Hamilton, Robert G. Maunder, and Lajos Hanzo. "Conceiving Extrinsic Information Transfer Charts for Stochastic Low-Density Parity-Check Decoders". IEEE Access 6 (2018): 55741–53. http://dx.doi.org/10.1109/access.2018.2872113.

5

Mohsenin, Tinoosh, and Bevan M. Baas. "A Split-Decoding Message Passing Algorithm for Low Density Parity Check Decoders". Journal of Signal Processing Systems 61, no. 3 (February 26, 2010): 329–45. http://dx.doi.org/10.1007/s11265-010-0456-y.

6

Psota, Eric, and Lance C. Pérez. "The Manifestation of Stopping Sets and Absorbing Sets as Deviations on the Computation Trees of LDPC Codes". Journal of Electrical and Computer Engineering 2010 (2010): 1–17. http://dx.doi.org/10.1155/2010/432495.

Abstract:
The error mechanisms of iterative message-passing decoders for low-density parity-check codes are studied. A tutorial review is given of the various graphical structures, including trapping sets, stopping sets, and absorbing sets that are frequently used to characterize the errors observed in simulations of iterative decoding of low-density parity-check codes. The connections between trapping sets and deviations on computation trees are explored in depth using the notion of problematic trapping sets in order to bridge the experimental and analytic approaches to these error mechanisms. A new iterative algorithm for finding low-weight problematic trapping sets is presented and shown to be capable of identifying many trapping sets that are frequently observed during iterative decoding of low-density parity-check codes on the additive white Gaussian noise channel. Finally, a new method is given for characterizing the weight of deviations that result from problematic trapping sets.
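Stopping sets have a simple combinatorial definition that can be checked directly: a subset S of variable nodes is a stopping set if every check node connected to S is connected to it at least twice. A minimal sketch of such a checker (the matrix and subsets below are illustrative toys, not taken from the paper):

```python
import numpy as np

def is_stopping_set(H, S):
    """Return True if the variable-node subset S is a stopping set of the
    parity-check matrix H, i.e. every check touching S touches it >= 2 times."""
    cols = np.asarray(H)[:, sorted(S)]   # restrict H to the chosen variable nodes
    deg = cols.sum(axis=1)               # edges from each check into S
    return bool(np.all(deg[deg > 0] >= 2))

H = [[1, 1, 0, 1],
     [0, 1, 1, 1]]
print(is_stopping_set(H, {1, 3}))  # True: both checks see S twice
print(is_stopping_set(H, {0}))     # False: check 0 sees S only once
```

On the binary erasure channel, iterative decoding stalls exactly when the erased positions contain a nonempty stopping set, which is why these sets (and their AWGN-channel relatives, trapping and absorbing sets) characterize decoder failures.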
7

Ismail, Mohamed, Imran Ahmed, and Justin Coon. "Low Power Decoding of LDPC Codes". ISRN Sensor Networks 2013 (January 17, 2013): 1–12. http://dx.doi.org/10.1155/2013/650740.

Abstract:
Wireless sensor networks are used in many diverse application scenarios that require the network designer to trade off different factors. Two such factors of importance in many wireless sensor networks are communication reliability and battery life. This paper describes an efficient, low complexity, high throughput channel decoder suited to decoding low-density parity-check (LDPC) codes. LDPC codes have demonstrated excellent error-correcting ability such that a number of recent wireless standards have opted for their inclusion. Hardware realisation of practical LDPC decoders is a challenging area especially when power efficient solutions are needed. Implementation details are given for an LDPC decoding algorithm, termed adaptive threshold bit flipping (ATBF), designed for low complexity and low power operation. The ATBF decoder was implemented in 90 nm CMOS at 0.9 V using a standard cell design flow and was shown to operate at 250 MHz achieving a throughput of 252 Gb/s/iteration. The decoder area was 0.72 mm² with a power consumption of 33.14 mW and a very small energy/decoded bit figure of 1.3 pJ.
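The bit-flipping family that ATBF belongs to is compact enough to sketch. The version below is the plain Gallager-style rule (flip the bits involved in the most unsatisfied checks), without the adaptive threshold that gives ATBF its low-power behavior; the (7,4) Hamming matrix used in the test is only a toy stand-in for an LDPC code:

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=20):
    """Hard-decision bit flipping: recompute the syndrome each iteration
    and flip every bit that participates in the maximum number of
    unsatisfied parity checks."""
    H = np.asarray(H)
    y = np.array(y) % 2
    for _ in range(max_iter):
        s = H @ y % 2                # syndrome of the current word
        if not s.any():              # all checks satisfied: done
            return y, True
        f = H.T @ s                  # unsatisfied-check count per bit
        y[f == f.max()] ^= 1         # flip the worst offenders
    return y, False
```

With a single bit error this loop converges in a few iterations; ATBF instead compares a per-bit reliability metric against a threshold that adapts across iterations, a choice that suits serial low-power hardware.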
8

Wang, Biao. "Novel Early Termination Method of an ADMM-Penalized Decoder for LDPC Codes in the IoT". Security and Communication Networks 2022 (October 14, 2022): 1–13. http://dx.doi.org/10.1155/2022/4599105.

Abstract:
As a critical communication technology, low-density parity-check (LDPC) codes have attracted wide attention in the Internet of Things (IoT). To increase the convergence rate of the alternating direction method of multipliers (ADMM) penalized decoder for LDPC codes, a novel early termination (ET) method is presented that computes the average sum of the hard decisions (ASHD) during each ADMM iteration. For both flooding-scheduling and layered-scheduling ADMM-penalized decoders, the simulation results show that the proposed ET method can significantly reduce the average number of iterations at low signal-to-noise ratios (SNRs) with negligible decoding performance loss.
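The general shape of such hard-decision-based early termination can be shown without the ADMM machinery: record a statistic of the hard decisions each iteration and stop once it settles. The helper below is a hypothetical illustration of that idea only, not the paper's exact ASHD rule:

```python
def ashd_early_stop(hard_bits_per_iter, tol=0.0):
    """Stop when the running average of the hard-decision sums stops
    changing between iterations (illustrative criterion; assumes at
    least two iterations have been recorded)."""
    sums = [sum(bits) for bits in hard_bits_per_iter]
    if len(sums) < 2:
        return False
    avg_now = sum(sums) / len(sums)
    avg_prev = sum(sums[:-1]) / (len(sums) - 1)
    return abs(avg_now - avg_prev) <= tol
```

The appeal of criteria like this is that they need no extra syndrome computation: the decoder already produces hard decisions every iteration, so stopping early at low SNR costs almost nothing.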
9

Stark, Maximilian, Jan Lewandowsky, and Gerhard Bauch. "Information-Bottleneck Decoding of High-Rate Irregular LDPC Codes for Optical Communication Using Message Alignment". Applied Sciences 8, no. 10 (October 11, 2018): 1884. http://dx.doi.org/10.3390/app8101884.

Abstract:
In high-throughput applications, low-complexity and low-latency channel decoders are indispensable. Hence, for low-density parity-check (LDPC) codes, message passing decoding has to be implemented with coarse quantization—that is, the exchanged beliefs are quantized with a small number of bits. This can result in a significant performance degradation with respect to decoding with high-precision messages. Recently, so-called information-bottleneck decoders were proposed which leverage a machine learning framework (i.e., the information bottleneck method) to design coarse-precision decoders with error-correction performance close to high-precision belief-propagation decoding. In these decoders, all conventional arithmetic operations are replaced by look-up operations. Irregular LDPC codes for next-generation fiber optical communication systems are characterized by high code rates and large maximum node degrees. Consequently, the implementation complexity is mainly influenced by the memory required to store the look-up tables. In this paper, we show that the complexity of information-bottleneck decoders remains manageable for irregular LDPC codes if our proposed construction approach is deployed. Furthermore, we reveal that in order to design information bottleneck decoders for arbitrary degree distributions, an intermediate construction step which we call message alignment has to be included. Exemplary numerical simulations show that incorporating message alignment in the construction yields a 4-bit information bottleneck decoder which performs only 0.15 dB worse than a double-precision belief propagation decoder and outperforms a min-sum decoder.
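The starting point of such coarse-precision decoding is that each exchanged belief is squeezed into a handful of bits. A uniform symmetric 4-bit LLR quantizer makes the precision concrete (the clipping value here is an arbitrary assumption; information-bottleneck decoders go much further and replace the node arithmetic itself with learned lookup tables, which this sketch does not attempt):

```python
import numpy as np

def quantize_llr(llr, bits=4, clip=8.0):
    """Map a real LLR onto a symmetric uniform grid with
    2**(bits-1) - 1 magnitude levels per sign, saturating at +/- clip."""
    levels = 2 ** (bits - 1) - 1          # e.g. 7 magnitude levels for 4 bits
    step = clip / levels                  # e.g. 8/7
    idx = np.clip(np.round(llr / step), -levels, levels)
    return float(idx * step)

print(quantize_llr(0.63))  # snaps the belief onto the nearest grid point
```

Every message a coarse decoder passes lives on a grid like this, which is why careful table design (rather than plain rounding) is needed to stay near belief-propagation performance.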
10

Dinh, The Cuong, Huyen Pham Thi, Hung Dao Tuan, and Nghia Pham Xuan. "ONE-MINIUM-ONLY BASIC-SET TRELLIS MIN-MAX DECODER ARCHITECTURE FOR NONBINARY LDPC CODE". Journal of Computer Science and Cybernetics 37, no. 2 (May 31, 2021): 91–106. http://dx.doi.org/10.15625/1813-9663/37/2/15917.

Abstract:
Nonbinary low-density parity-check (NB-LDPC) codes outperform their binary counterparts in terms of error-correcting performance and error-floor properties when the code length is moderate. However, the drawback of NB-LDPC decoders is their high complexity, which increases considerably with the Galois-field order. In this paper, a One-Minimum-Only basic-set trellis min-max (OMO-BS-TMM) algorithm and the corresponding decoder architecture are proposed for NB-LDPC codes to greatly reduce the complexity of the check node unit (CNU) as well as the whole decoder. In the proposed OMO-BS-TMM algorithm, only the first minimum values are used for generating the check node messages instead of both the first and second minimum values, and the number of messages exchanged between the check node and the variable node is reduced in comparison with previous works. Layered decoder architectures based on the proposed algorithm were implemented for the (837, 726) NB-LDPC code over GF(32) using 90-nm CMOS technology. The implementation results showed that the OMO-BS-TMM algorithm achieves almost the same error-correcting performance while reducing the complexity of the whole decoder by 31.8% and 20.5% compared to previous works. Moreover, the proposed decoder achieves a higher throughput of 1.4 Gbps compared with other state-of-the-art NB-LDPC decoders.
11

Grospellier, Antoine, Lucien Grouès, Anirudh Krishna, and Anthony Leverrier. "Combining hard and soft decoders for hypergraph product codes". Quantum 5 (April 15, 2021): 432. http://dx.doi.org/10.22331/q-2021-04-15-432.

Abstract:
Hypergraph product codes are a class of constant-rate quantum low-density parity-check (LDPC) codes equipped with a linear-time decoder called small-set-flip (SSF). This decoder displays sub-optimal performance in practice and requires very large error correcting codes to be effective. In this work, we present new hybrid decoders that combine the belief propagation (BP) algorithm with the SSF decoder. We present the results of numerical simulations when codes are subject to independent bit-flip and phase-flip errors. We provide evidence that the threshold of these codes is roughly 7.5% assuming an ideal syndrome extraction, and remains close to 3% in the presence of syndrome noise. This result subsumes and significantly improves upon an earlier work by Grospellier and Krishna (arXiv:1810.03681). The low complexity and high performance of these heuristic decoders suggest that decoding should not be a substantial difficulty when moving from zero-rate surface codes to constant-rate LDPC codes, and give a further hint that such codes are well worth investigating in the context of building large universal quantum computers.
12

Awais, Muhammad, and Carlo Condo. "Flexible LDPC Decoder Architectures". VLSI Design 2012 (June 26, 2012): 1–16. http://dx.doi.org/10.1155/2012/730835.

Abstract:
Flexible channel decoding is gaining significance with the increase in the number of wireless standards and modes within a standard. A flexible channel decoder provides interstandard and intrastandard support without changes in hardware. However, the design of efficient implementations of flexible low-density parity-check (LDPC) code decoders satisfying area, speed, and power constraints is a challenging task and still requires considerable research effort. This paper provides an overview of the state of the art in the design of flexible LDPC decoders. The published solutions are evaluated at two levels of architectural design: the processing element (PE) and the interconnection structure. A qualitative and quantitative analysis of different design choices is carried out, and a comparison is provided in terms of achieved flexibility, throughput, decoding efficiency, and area (power) consumption.
13

Li, Rongchun, Yong Dou, Dan Zou, Shi Wang, and Ying Zhang. "Efficient graphics processing unit based layered decoders for quasicyclic low-density parity-check codes". Concurrency and Computation: Practice and Experience 27, no. 1 (December 17, 2013): 29–46. http://dx.doi.org/10.1002/cpe.3193.

14

Tran-Thi, Bich Ngoc, Thien Truong Nguyen-Ly, and Trang Hoang. "An FPGA Design with High Memory Efficiency and Decoding Performance for 5G LDPC Decoder". Electronics 12, no. 17 (August 30, 2023): 3667. http://dx.doi.org/10.3390/electronics12173667.

Abstract:
A hardware-efficient implementation of a Low-Density Parity-Check (LDPC) decoder is presented in this paper. The proposed decoder design is based on the Hybrid Offset Min-Sum (HOMS) algorithm. In the check node processing of this decoder, only the first minimum is computed instead of the first two minimum values among all the variable-to-check message inputs as in the conventional approach. Additionally, taking advantage of the unique structure of 5G LDPC codes, layered scheduling and partially parallel structures are employed to minimize hardware costs. Implementation results on the Xilinx Kintex UltraScale+ FPGA platform show that the proposed decoder can achieve a throughput of 2.82 Gbps for 10 decoding iterations with a 5G LDPC code length of 8832 bits and a code rate of 1/2. Moreover, it yields a check node memory reduction of 10% with respect to the baseline and provides a hardware usage efficiency of 4.96 hardware resources/layer/Mbps, while providing a decoding performance 0.15 dB better than some of the existing decoders.
15

Thi Bao Nguyen, Tram, Tuy Nguyen Tan, and Hanho Lee. "Low-Complexity High-Throughput QC-LDPC Decoder for 5G New Radio Wireless Communication". Electronics 10, no. 4 (February 22, 2021): 516. http://dx.doi.org/10.3390/electronics10040516.

Abstract:
This paper presents a pipelined layered quasi-cyclic low-density parity-check (QC-LDPC) decoder architecture targeting low-complexity, high-throughput, and efficient use of hardware resources compliant with the specifications of the 5G new radio (NR) wireless communication standard. First, a combined min-sum (CMS) decoding algorithm, which is a combination of the offset min-sum and the original min-sum algorithm, is proposed. Then, a low-complexity and high-throughput pipelined layered QC-LDPC decoder architecture for enhanced mobile broadband specifications in 5G NR wireless standards based on the CMS algorithm with pipeline layered scheduling is presented. Enhanced versions of check node-based processor architectures are proposed to reduce the complexity of the LDPC decoders. An efficient minimum-finder for the check node unit architecture that reduces the hardware required for the computation of the first two minima is introduced. Moreover, a low complexity a posteriori information update unit architecture, which only requires one adder array for its operations, is presented. The proposed architecture shows significant improvements in terms of area and throughput compared to other QC-LDPC decoder architectures available in the literature.
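The minimum-finder matters because a min-sum check node never needs fully sorted inputs: the message to each edge is the minimum magnitude over the *other* edges, which equals the second minimum for the edge holding the overall minimum and the first minimum everywhere else. A single-pass software sketch of this search (illustrative only; the paper's contribution is a cheaper hardware realization of it):

```python
def two_minima(llrs):
    """One pass over the input magnitudes, returning (min1, min2, pos)
    where pos is the index of the first minimum -- everything a min-sum
    check node needs to form all outgoing messages."""
    m1 = m2 = float("inf")
    pos = -1
    for i, v in enumerate(llrs):
        a = abs(v)
        if a < m1:
            m1, m2, pos = a, m1, i   # new overall minimum; old one demotes
        elif a < m2:
            m2 = a                   # new runner-up
    return m1, m2, pos

print(two_minima([-3.0, 0.5, 2.0, -0.25]))  # (0.25, 0.5, 3)
```

The offset and normalized min-sum variants then shift or scale these two values before applying the sign product; single-minimum designs like the HOMS decoder above go further and approximate the second minimum from the first.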
16

Raveendran, Nithin, and Bane Vasić. "Trapping Sets of Quantum LDPC Codes". Quantum 5 (October 14, 2021): 562. http://dx.doi.org/10.22331/q-2021-10-14-562.

Abstract:
Iterative decoders for finite length quantum low-density parity-check (QLDPC) codes are attractive because their hardware complexity scales only linearly with the number of physical qubits. However, they are impacted by short cycles, detrimental graphical configurations known as trapping sets (TSs) present in a code graph, as well as symmetric degeneracy of errors. These factors significantly degrade the decoding performance and cause the so-called error floor. In this paper, we establish a systematic methodology by which one can identify and classify quantum trapping sets (QTSs) according to their topological structure and the decoder used. The conventional definition of a TS from classical error correction is generalized to address the syndrome decoding scenario for QLDPC codes. We show that the knowledge of QTSs can be used to design better QLDPC codes and decoders. Frame error rate improvements of two orders of magnitude in the error floor regime are demonstrated for some practical finite-length QLDPC codes without requiring any post-processing.
18

ZHENG, XIA, FRANCIS C. M. LAU, CHI K. TSE, and S. C. WONG. "STUDY OF BIFURCATION BEHAVIOR OF LDPC DECODERS". International Journal of Bifurcation and Chaos 16, no. 11 (November 2006): 3435–49. http://dx.doi.org/10.1142/s0218127406016926.

Abstract:
The use of low-density-parity-check (LDPC) codes in coding digital messages has aroused much research interest because of their excellent bit-error performance. The behavior of the iterative LDPC decoders of finite length, however, has not been fully evaluated under different signal-to-noise conditions. By considering the finite-length LDPC decoders as high-dimensional nonlinear dynamical systems, we attempt to investigate their dynamical behavior and bifurcation phenomena for a range of signal-to-noise ratios (SNRs). Extensive simulations have been performed on both regular and irregular LDPC codes. Moreover, we derive the Jacobian of the system and calculate the corresponding eigenvalues. Results show that bifurcations, including fold, flip and Neimark–Sacker bifurcations, are exhibited by the LDPC decoder. Results are useful for optimizing the choice of parameters that may enhance the effectiveness of the decoding algorithm and improve the convergence rates.
19

Biazaran, Reza, and Hermann Joseph Helgert. "Improved noisy gradient descent bit-flipping algorithm over Rayleigh fading channel". International Journal of Electrical and Computer Engineering (IJECE) 12, no. 3 (June 1, 2022): 2699. http://dx.doi.org/10.11591/ijece.v12i3.pp2699-2710.

Abstract:
Gradient descent bit flipping (GDBF) and its many variants have offered remarkable improvements over legacy or modified bit-flipping decoding techniques for low-density parity-check (LDPC) codes. The GDBF method and its many variants, such as noisy gradient descent bit flipping (NGDBF), have been extensively studied, and their performance has been assessed over multiple channels such as the binary symmetric channel (BSC), the binary erasure channel (BEC), and the additive white Gaussian noise (AWGN) channel. However, the performance of these decoders under more realistic channels or channel conditions has not been equally studied. An improved noisy gradient descent bit flipping algorithm is proposed in this paper that optimally decodes LDPC-encoded codewords over the Rayleigh fading channel under various fade rates. Compared to the NGDBF method, our proposed decoder provides substantial improvements in both the error performance of the code and the number of iterations required to achieve that error performance. It subsequently reduces the end-to-end latency in applications with low or ultra-low latency requirements.
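The deterministic core that NGDBF perturbs with noise is brief: with bipolar decisions x and channel values y, each bit k gets an inversion function equal to its channel correlation plus the sum of its adjacent bipolar check values, and the least reliable bit is flipped. A minimal single-bit-flip sketch (the toy Hamming code and received vector in the test are illustrative; the noise perturbation and fading-channel refinements of NGDBF and this paper are omitted):

```python
import numpy as np

def gdbf_decode(H, y, max_iter=50):
    """Single-bit gradient descent bit flipping over bipolar values:
    flip the bit with the smallest inversion function until all
    parity checks are satisfied."""
    H = np.asarray(H)
    y = np.asarray(y, dtype=float)
    x = np.where(y >= 0, 1, -1)                 # hard decisions in {+1, -1}
    for _ in range(max_iter):
        checks = np.array([np.prod(x[row.astype(bool)]) for row in H])
        if (checks == 1).all():                 # every check multiplies to +1
            return (x < 0).astype(int), True
        delta = x * y + H.T @ checks            # inversion function per bit
        x[np.argmin(delta)] *= -1               # flip the least reliable bit
    return (x < 0).astype(int), False
```

Noisy variants add a random term to `delta` each iteration, which helps the search escape the local minima that trap plain GDBF.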
20

Yao, Chang-Kun, Yun-Ching Tang, and Hongchin Lin. "Energy-Efficient and Area-Efficient QC-LDPC with RS Decoders Using 2M-LMSA". Journal of Circuits, Systems and Computers 24, no. 02 (November 27, 2014): 1550026. http://dx.doi.org/10.1142/s0218126615500267.

Abstract:
This study proposes an energy-efficient and area-efficient dual-path low-density parity-check (LDPC) with Reed–Solomon (RS) decoder for communication systems. Hardware complexity is reduced by applying a dual-path 2-bit modified layered min-sum algorithm (2M-LMSA) to a (2550, 2040) quasi-cyclic LDPC (QC-LDPC) code with the column and row weights of 3 and 15, respectively. The simplified check node units (CNUs) reduce memory and routing complexity as well as the energy needed to decode each bit. A throughput of 11 Gb/s is achieved by using 90-nm CMOS technology at a clock frequency of 208 MHz at 0.9 V with average power of 244 mW on a chip area of 3.05 mm². Decoding performance is further improved by appending the (255, 239) RS decoder after the LDPC decoder. The LDPC plus RS decoder consumes the power of 434 mW on the area of 3.45 mm².
21

Arnone, L. J., J. Castiñeira Moreira, and P. G. Farrell. "Field programmable gate arrays implementations of low complexity soft-input soft-output low-density parity-check decoders". IET Communications 6, no. 12 (2012): 1670. http://dx.doi.org/10.1049/iet-com.2011.0767.

22

Senthilpari, Chinnaiyan, Rosalind Deena, and Lee Lini. "Low power, less occupying area, and improved speed of a 4-bit router/rerouter circuit for low-density parity-check (LDPC) decoders". F1000Research 11 (January 5, 2022): 7. http://dx.doi.org/10.12688/f1000research.73404.1.

Abstract:
Background: Low-density parity-check (LDPC) codes are more error-resistant than other forward error-correcting codes. Existing circuits exhibit high power dissipation, low speed, and large occupied area. This work aimed to propose a circuit with better design and performance, even in the presence of noise in the channel. Methods: In this research, the multiplexer and demultiplexer were designed using pass transistor logic. The target parameters were low power dissipation, improved throughput, and negligible delay with a minimum area. Among the essential connecting circuits in a decoder architecture are the multiplexer (MUX) and demultiplexer (DEMUX) circuits. The design of the MUX and DEMUX contributes significantly to the performance of the decoder. The aim of this paper was the design of a 4 × 1 MUX to route the data bits received from the bit update blocks to the parallel adder circuits, and a 1 × 4 DEMUX to receive the input bits from the parallel adder and distribute the output to the bit update blocks in a layered-architecture LDPC decoder. The design uses pass transistor logic and reduces the number of transistors used. The proposed circuit was designed using the Mentor Graphics CAD tool for 180 nm technology. Results: Power dissipation, area, and delay were considered crucial parameters for a low-power decoder. The circuits were simulated using computer-aided design (CAD) tools, and the results showed a significantly low power dissipation of 7.06 nW and 5.16 nW for the multiplexer and demultiplexer, respectively. The delay was found to be 100.5 ns (MUX) and 80 ns (DEMUX). Conclusion: This decoder's potential use may be in low-power communication circuits such as handheld devices and Internet of Things (IoT) circuits.
23

Senthilpari, Chinnaiyan, Rosalind Deena, and Lee Lini. "Low power, less occupying area, and improved speed of a 4-bit router/rerouter circuit for low-density parity-check (LDPC) decoders". F1000Research 11 (November 14, 2022): 7. http://dx.doi.org/10.12688/f1000research.73404.2.

Abstract:
Background: Low-density parity-check (LDPC) codes are more error-resistant than other forward error-correcting codes. Existing circuits exhibit high power dissipation, low speed, and large occupied area. This work aimed to propose a circuit with better design and performance, even in the presence of noise in the channel. Methods: In this research, the multiplexer and demultiplexer were designed using pass transistor logic. The target parameters were low power dissipation, improved throughput, and negligible delay with a minimum area. Among the essential connecting circuits in a decoder architecture are the multiplexer (MUX) and demultiplexer (DEMUX) circuits. The design of the MUX and DEMUX contributes significantly to the performance of the decoder. The aim of this paper was the design of a 4 × 1 MUX to route the data bits received from the bit update blocks to the parallel adder circuits, and a 1 × 4 DEMUX to receive the input bits from the parallel adder and distribute the output to the bit update blocks in a layered-architecture LDPC decoder. The design uses pass transistor logic and reduces the number of transistors used. The proposed circuit was designed using the Mentor Graphics CAD tool for 180 nm technology. Results: Power dissipation, area, and delay were considered crucial parameters for a low-power decoder. The circuits were simulated using computer-aided design (CAD) tools, and the results showed a significantly low power dissipation of 7.06 nW and 5.16 nW for the multiplexer and demultiplexer, respectively. The delay was found to be 100.5 ns (MUX) and 80 ns (DEMUX). Conclusion: This decoder's potential use may be in low-power communication circuits such as handheld devices and Internet of Things (IoT) circuits.
24

Kuc, Mateusz, Wojciech Sułek, and Dariusz Kania. "FPGA-Oriented LDPC Decoder for Cyber-Physical Systems". Mathematics 8, no. 5 (May 4, 2020): 723. http://dx.doi.org/10.3390/math8050723.

Abstract:
A potentially useful Cyber-Physical Systems element is a modern forward error correction (FEC) coding system, utilizing a code selected from the broad class of Low-Density Parity-Check (LDPC) codes. In this paper, the development of a hardware implementation in an FPGA of the decoder for the Quasi-Cyclic (QC-LDPC) subclass of codes is presented. The decoder can be configured to support the typical decoding algorithms: Min-Sum or Normalized Min-Sum (NMS). A novel method of normalization in the NMS algorithm is proposed, one that utilizes combinational logic instead of arithmetic units. A comparison of decoders with different bit-lengths of data (beliefs that are messages propagated between computing units) is also provided. The presented decoder has been implemented with a distributed control system. Experimental studies were conducted using the Intel Cyclone V FPGA module, which is a part of the developed testing environment for LDPC coding systems.
25

TSANG, TONY. "A METHOD FOR PERFORMANCE MODELING AND EVALUATION OF LDPC DECODER ARCHITECTURE". International Journal of Modeling, Simulation, and Scientific Computing 04, no. 02 (June 2013): 1350003. http://dx.doi.org/10.1142/s1793962313500037.

Abstract:
This paper presents a high-throughput, memory-efficient decoder for low-density parity-check (LDPC) codes in the high-rate wireless personal area network application. Novel techniques applicable to the selected LDPC code are proposed, including a parallel blocked layered decoding architecture and simplifications for WiGig networks. State-of-the-art flexible LDPC decoders cannot simultaneously achieve the high throughput mandated by these standards and the low power needed for mobile applications. This work develops a flexible, fully pipelined architecture for the IEEE 802.11ad standard capable of achieving both goals. We use Real Time–Performance Evaluation Process Algebra (RT-PEPA) to evaluate the performance of a typical LDPC decoder system. The approach is more convenient, flexible, and lower cost than the former simulation method, which requires special hardware and software tools to be developed. Moreover, we can easily analyze how performance depends on a particular mode by supplying ranges for parameter values.
26

Raza, Hasnain, Syed Azhar Ali Zaidi, Aamir Rashid, and Shafiq Haider. "An area efficient and high throughput implementation of layered min-sum iterative construction a posteriori probability LDPC decoder". PLOS ONE 16, no. 3 (March 29, 2021): e0249269. http://dx.doi.org/10.1371/journal.pone.0249269.

Abstract:
Area-efficient and high-speed forward error correcting code decoders are in demand for many high-speed next-generation communication standards. This paper explores a low-complexity decoding algorithm for low-density parity-check codes, called the min-sum iterative construction a posteriori probability (MS-IC-APP), for this purpose. We performed an error-performance analysis of MS-IC-APP for a (648,1296) regular QC-LDPC code and proposed an area- and throughput-optimized hardware implementation of MS-IC-APP. We proposed to use the layered scheduling of MS-IC-APP and performed other optimizations at the architecture level to reduce the area and to increase the throughput of the decoder. Synthesis results show 6.95 times less area and 4 times higher throughput compared to the standard min-sum decoder. The area and throughput are also comparable to improved variants of hard-decision bit-flipping (BF) decoders, whereas the simulation results show a coding gain of 2.5 over the best implementation of a BF decoder in terms of error performance.
27

Li, Yunfeng, Yingchun Li, Nan Ye, Tianyang Chen, Zhijie Wang, and Junjie Zhang. "High Throughput Priority-Based Layered QC-LDPC Decoder with Double Update Queues for Mitigating Pipeline Conflicts". Sensors 22, no. 9 (May 5, 2022): 3508. http://dx.doi.org/10.3390/s22093508.

Abstract:
A high-throughput layered decoder for quasi-cyclic (QC) low-density parity-check (LDPC) codes is required for communication systems. The preferred way to improve the throughput is to insert pipeline stages and increase the operating frequency, which suffers from pipeline conflicts at the same time. A priority-based layered schedule is proposed to keep the updates of log-likelihood ratios (LLRs) as frequent as possible when pipeline conflicts happen. To reduce pipeline conflicts, we also propose double update queues for layered decoders. The proposed double update queues improve the percentage of updated LLRs per iteration. Benefitting from these, the performance loss of the proposed decoder for the fifth generation (5G) new radio (NR) is reduced from 0.6 dB to 0.2 dB using the same quantization compared with the state-of-the-art work. As a result, the throughput of the proposed decoder improved up to 2.85 times when the signal-to-noise ratio (SNR) was equal to 5.9 dB.
Style APA, Harvard, Vancouver, ISO itp.
28

Old, Josias, i Manuel Rispler. "Generalized Belief Propagation Algorithms for Decoding of Surface Codes". Quantum 7 (7.06.2023): 1037. http://dx.doi.org/10.22331/q-2023-06-07-1037.

Pełny tekst źródła
Streszczenie:
Belief propagation (BP) is well known as a low-complexity decoding algorithm with strong performance for important classes of quantum error-correcting codes, notably for the quantum low-density parity-check (LDPC) code class of random expander codes. However, it is also well known that the performance of BP breaks down when facing topological codes such as the surface code, where naive BP fails entirely to reach a below-threshold regime, i.e., the regime where error correction becomes useful. Previous works have shown that this can be remedied by resorting to post-processing decoders outside the framework of BP. In this work, we present a generalized belief propagation method with an outer re-initialization loop that successfully decodes surface codes; as opposed to naive BP, it recovers the sub-threshold regime known from decoders tailored to the surface code and from statistical-mechanical mappings. We report a threshold of 17% under independent bit- and phase-flip data noise (to be compared to the ideal threshold of 20.6%) and a threshold value of 14% under depolarizing data noise (compared to the ideal threshold of 18.9%), which are on par with thresholds achieved by non-BP post-processing methods.
Style APA, Harvard, Vancouver, ISO itp.
29

Yun, In-Woo, Hee-ran Lee i Joon Tae Kim. "An Alternative Approach Obtaining a Normalization Factor in Normalized Min-Sum Algorithm for Low-Density Parity-Check Code". Wireless Communications and Mobile Computing 2018 (17.10.2018): 1–7. http://dx.doi.org/10.1155/2018/1398191.

Pełny tekst źródła
Streszczenie:
The min-sum algorithm (MSA) for decoding Low-Density Parity-Check (LDPC) codes is an approximation algorithm that can greatly reduce the computational complexity of the belief propagation algorithm (BPA). To reduce the error between MSA and BPA, improved MSAs such as the normalized min-sum algorithm (NMSA), which applies a normalization factor when updating the check node, are used in many LDPC decoders. Density evolution (DE) is usually used to obtain an optimal normalization factor. However, not only does the DE method require a large number of calculations, it may also be suboptimal for obtaining a normalization factor because of the theoretical assumptions that need to be satisfied. This paper proposes a new method of obtaining a normalization factor for NMSA. We first examine the relationship between the minimum magnitude of the variable node messages and the magnitudes of the BPA check node outputs using the check node message distribution (CMD) chart. We then find a normalization factor that minimizes the error between the check node output magnitudes of NMSA and BPA, using the least squares method (LSM) to minimize the error. Simulation on ATSC 3.0 LDPC codes demonstrates that the normalization factor obtained by the proposed method yields better decoding performance than the one obtained by DE.
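The scalar least-squares fit used in this kind of approach has a simple closed form, sketched below (an illustration of LSM for a single normalization factor, assuming paired magnitude samples from min-sum and BP check-node outputs are available):

```python
def ls_normalization_factor(ms_mags, bp_mags):
    """Scalar least squares: the alpha minimizing sum((alpha*ms - bp)^2)
    has the closed form <ms, bp> / <ms, ms>."""
    num = sum(m * b for m, b in zip(ms_mags, bp_mags))
    den = sum(m * m for m in ms_mags)
    return num / den
```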
Style APA, Harvard, Vancouver, ISO itp.
30

Dai, Jingxin, Hang Yin, Yansong Lv, Weizhang Xu i Zhanxin Yang. "Multi-Gbps LDPC Decoder on GPU Devices". Electronics 11, nr 21 (25.10.2022): 3447. http://dx.doi.org/10.3390/electronics11213447.

Pełny tekst źródła
Streszczenie:
To meet the high throughput requirement of communication systems, the design of high-throughput low-density parity-check (LDPC) decoders has attracted significant attention. This paper proposes a high-throughput GPU-based LDPC decoder aimed at large-scale data processing scenarios, which optimizes the decoder from the perspectives of decoding parallelism and the data scheduling strategy. For decoding parallelism, intra-codeword parallelism is fully exploited by combining the characteristics of the flooding-based decoding algorithm with the GPU programming model, and inter-codeword parallelism is improved using single-instruction multiple-data (SIMD) instructions. For the data scheduling strategy, the utilization of off-chip memory is optimized to satisfy the demands of large-scale data processing. The experimental results demonstrate that the decoder achieves 10 Gbps throughput by incorporating an early termination mechanism on general-purpose GPU (GPGPU) devices and can also achieve high-throughput, high-power-efficiency performance on low-power embedded GPU (EGPU) devices. Compared with the state-of-the-art work, the proposed decoder achieves a 1.787× normalized throughput speedup at the same error-correcting performance.
Style APA, Harvard, Vancouver, ISO itp.
31

Nguyen, Tuy Tan, Tram Thi Bao Nguyen i Hanho Lee. "Low-Complexity Multi-Size Circular-Shift Network for 5G New Radio LDPC Decoders". Sensors 22, nr 5 (24.02.2022): 1792. http://dx.doi.org/10.3390/s22051792.

Pełny tekst źródła
Streszczenie:
This paper presents a low-complexity multi-size circular-shift network (MCSN) structure for 5th-generation (5G) New Radio (NR) quasi-cyclic low-density parity-check (QC-LDPC) decoders. In particular, a fine-coarse multi-size cyclic shift network, which decomposes the cyclic shift size into a fine part and a coarse part, is introduced. The proposed MCSN structure is composed of a pre-rotator performing the fine part of the cyclic shift and a main rotator executing the coarse part. In addition, a forward routing circular-shift (FRCS) network based on the barrel shifter and the forward routing process is presented. The proposed switch network is able to support all 51 submatrix sizes defined in the 5G NR standard through an efficient forward routing switch network and helps reduce hardware complexity through the cyclic shift size decomposition. Analysis of the proposed MCSN indicates a substantial reduction in hardware complexity. Experimental results in TSMC 65-nm CMOS technology show that the proposed MCSN structure for the 5G NR LDPC decoder offers area savings of up to 56.75% compared to related works in the literature.
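The fine/coarse decomposition can be illustrated in software as two successive rotations; `fine_base` below is an assumed constant for illustration, not a parameter from the paper:

```python
def decomposed_cyclic_shift(vec, shift, fine_base=8):
    """Two-stage cyclic left shift mirroring a fine/coarse decomposition:
    shift = coarse + fine with fine = shift mod fine_base. The pre-rotator
    applies the fine part, the main rotator the coarse part."""
    n = len(vec)
    fine = shift % fine_base
    coarse = (shift - fine) % n
    step1 = vec[fine % n:] + vec[:fine % n]    # pre-rotator (fine part)
    return step1[coarse:] + step1[:coarse]     # main rotator (coarse part)
```

Splitting the shift this way lets each stage be built from a smaller, simpler switch network than a single full-range barrel shifter.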
Style APA, Harvard, Vancouver, ISO itp.
32

Ganin, Dmitriy V., Mokhammed A. Y. Damdam i Aleksandr L. Savkin. "PERMUTATION DECODING IN LOW-POWER WIRELESS SENSOR NETWORKS". Автоматизация процессов управления 2, nr 68 (2022): 37–44. http://dx.doi.org/10.35752/1991-2927-2022-2-68-37-44.

Pełny tekst źródła
Streszczenie:
Wireless sensor networks are currently considered effective communication systems for many subject areas, including, for example, industrial and agricultural facilities and enterprises, monitoring of transport security sensors, medical applications, and environmental and weather monitoring. In such systems, a correcting code (CC) is the classic approach to increasing the reliability of the data transmission channel while reducing the required transmitted signal power; the reduction is called the coding energy gain. At the same time, the reduction in transmitter power comes at the cost of additional power consumption in the receiver's decoder. Stronger codes provide better performance with lower power requirements but have more complex decoders with higher power consumption than relatively simple error-control schemes. If the additional power consumed by the decoder exceeds the savings in transmitted power due to the use of the CC, such a code is not energy efficient compared to a system with a non-redundant code. Various codes can be used to protect data; according to a number of studies, the best codes, for which the critical distance turns out to be large, are codes with a low parity-check density. The paper proposes an alternative variant based on the synthesis of a permutation decoding system using the method of localizing a limited number of permutations and organizing additional parity checks exclusively for the check bits of the CC, using a Bayesian approach to iterative transformations of integer soft symbol decisions.
Style APA, Harvard, Vancouver, ISO itp.
33

Gurskiy, S. S., i N. S. Mogilevskaya. "On the modification of bit-flipping decoder of LDPC-codes". Advanced Engineering Research 21, nr 1 (4.04.2021): 96–104. http://dx.doi.org/10.23947/2687-1653-2021-21-1-96-104.

Pełny tekst źródła
Streszczenie:
Introduction. In all types of digital communication, error-control coding techniques are used. Many digital communication standards, such as Wi-Fi and 5G, use low-density parity-check (LDPC) codes. These codes are popular because they allow building encoders and decoders with low computational complexity. This work's objective is to increase the error-correcting capability of the well-known bit-flipping (BF) decoder of LDPC codes. For this purpose, a modification of the decoder is built which makes it possible to dynamically control one of its main parameters, whose choice significantly affects the quality of decoding. Materials and Methods. The well-known bit-flipping decoder of binary LDPC codes is considered. This decoder has several parameters that are not rigidly bound to the code parameters. The dependence of the decoding quality on the selection of the decoder's parameters was investigated through simulation modeling. It is shown that the decoding results are significantly affected by the input parameter of the decoder, the threshold T. A modification of the BF decoder of binary LDPC codes has been developed, in which the threshold is set dynamically during the execution of the algorithm depending on the error rate. A comparative analysis of the error-correcting capability of the decoders is carried out by simulation modeling. Results. A lemma on the maximum value of the decoder threshold T is formulated and proved. Upper bounds for the number of operations are found for the original and modified decoders. A simulation model that implements a digital noise-immune communication channel has been built. In the model, the initial data is encoded with a given LDPC code, then corrupted by additive uniformly distributed errors, and thereafter decoded in turn by the bit-flipping algorithm with different threshold parameters T, as well as by the modified decoder. Based on the input and output data, the error-correcting capability of the decoders is estimated. Experiments have shown that the error-correcting capability of the modified decoder in the range of realistic error rates is higher than that of the original decoder, regardless of the selection of its parameters. Discussion and Conclusions. The lemma proved in the paper sets the upper bound on the threshold value in the original decoder, which simplifies its adjustment. The developed modification has better error-correcting capability than the original decoder, although its complexity is slightly increased compared to the original algorithm. It has been noted that the decoding quality of the modified decoder improves with a decrease in the number of cycles in the Tanner graph and an increase in the code length. Keywords: LDPC codes, error-correcting capability, dynamic threshold, binary symmetric channel, experimental research.
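A minimal sketch of Gallager-style bit flipping with a dynamically chosen threshold, assuming a small binary parity-check matrix; the flip rule below is a stand-in for illustration, not the exact modified decoder of the paper:

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Bit flipping over GF(2): each iteration flips every bit that
    participates in the current maximum number of unsatisfied checks,
    i.e., the threshold T is chosen dynamically as that maximum."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x, True                # all checks satisfied
        counts = H.T.dot(syndrome)        # unsatisfied checks per bit
        T = counts.max()                  # dynamic threshold
        x = (x + (counts == T).astype(int)) % 2
    return x, False
```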
Style APA, Harvard, Vancouver, ISO itp.
34

El maammar, Nejwa, Seddik Bri i Jaouad Foshi. "A comparative simulation study of different decoding schemes in LDPC coded OFDM systems for NB-PLC channel". Indonesian Journal of Electrical Engineering and Computer Science 15, nr 1 (1.07.2019): 306. http://dx.doi.org/10.11591/ijeecs.v15.i1.pp306-313.

Pełny tekst źródła
Streszczenie:
In this paper, we study the performance of low-density parity-check (LDPC) coded orthogonal frequency-division multiplexing (COFDM) systems when they are applied to the short data blocks of a narrowband (NB) power line communications (PLC) channel. Simulations of the modelled system have been performed using different code lengths. It is assumed that the channel has multipath propagation with two different noise scenarios: AWGN background noise with and without the presence of impulsive noise. The performance of various soft- and hard-decision LDPC decoder schemes, such as belief propagation (BP), weighted bit flipping (WBF), improved weighted bit flipping (IWBF), and implementation-efficient reliability-ratio-based weighted bit flipping (IRRWBF), was investigated. All simulations performed with the PLC channel model showed that remarkable performance improvement can be achieved by using short-length LDPC codes. The improvements are especially striking when BP decoding algorithms are employed on the receiver side.
Style APA, Harvard, Vancouver, ISO itp.
35

Brkić, Srđan, Predrag Ivaniš i Bane Vasić. "On guaranteed correction of error patterns with artificial neural networks". Telfor Journal 14, nr 2 (2022): 51–55. http://dx.doi.org/10.5937/telfor2202051b.

Pełny tekst źródła
Streszczenie:
In this paper, we analyze the applicability of single- and two-hidden-layer feed-forward artificial neural networks (SLFNs and TLFNs, respectively) in decoding linear block codes. Based on the provable capability of SLFNs and TLFNs to approximate discrete functions, we discuss the network sizes capable of performing maximum-likelihood decoding. Furthermore, we propose a decoding scheme which uses artificial neural networks (ANNs) to lower the error floors of low-density parity-check (LDPC) codes. By learning a small number of error patterns that are uncorrectable with typical LDPC decoders, an ANN can lower the error floor by an order of magnitude, with only a marginal increase in average complexity.
Style APA, Harvard, Vancouver, ISO itp.
36

El-Abbasy, Karim, Ramy Taki Eldin, Salwa El Ramly i Bassant Abdelhamid. "Optimized Polar Codes as Forward Error Correction Coding for Digital Video Broadcasting Systems". Electronics 10, nr 17 (3.09.2021): 2152. http://dx.doi.org/10.3390/electronics10172152.

Pełny tekst źródła
Streszczenie:
Polar codes are distinguished by their low encoding/decoding complexity for symmetric binary-input discrete memoryless channels. Recently, flexible generic Successive Cancellation List (SCL) decoders for polar codes were proposed to provide different throughput, latency, and decoding performance trade-offs. In this paper, we propose to use polar codes with flexible fast-adaptive SCL decoders in Digital Video Broadcasting (DVB) systems to meet the growing demand for higher bitrates; in addition, they can provide more interactive services with less latency and more throughput. First, we start with the construction of polar codes and propose a new mathematical relation to obtain the optimized design point for the polar code. We prove that our optimized design point is very close to the one that achieves the minimum Bit Error Rate (BER). Then, we compare the performance of polar and Low-Density Parity-Check (LDPC) codes in terms of BER, encoder/decoder latency, and throughput. The results show that both channel coding techniques have comparable BER; however, polar codes are superior to LDPC codes in terms of decoding latency and system throughput. Finally, we present the possible performance enhancement of DVB systems in terms of decoding latency and complexity when using optimized polar codes as a Forward Error Correction (FEC) technique instead of the Bose-Chaudhuri-Hocquenghem (BCH) and LDPC codes currently adopted in DVB standards.
Style APA, Harvard, Vancouver, ISO itp.
37

Mondal, Arijit, Satyannarayana Thatimattala, Vamshi Krishna Yalamaddi i Shayan Srinivasa Garani. "Efficient Coding Architectures for Reed–Solomon and Low-Density Parity-Check Decoders for Magnetic and Other Data Storage Systems". IEEE Transactions on Magnetics 54, nr 2 (luty 2018): 1–15. http://dx.doi.org/10.1109/tmag.2017.2778053.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

Monsees, Tobias, Oliver Griebel, Matthias Herrmann, Dirk Wübben, Armin Dekorsy i Norbert Wehn. "Minimum-Integer Computation Finite Alphabet Message Passing Decoder: From Theory to Decoder Implementations towards 1 Tb/s". Entropy 24, nr 10 (12.10.2022): 1452. http://dx.doi.org/10.3390/e24101452.

Pełny tekst źródła
Streszczenie:
In Message Passing (MP) decoding of Low-Density Parity-Check (LDPC) codes, extrinsic information is exchanged between Check Nodes (CNs) and Variable Nodes (VNs). In a practical implementation, this information exchange is limited by quantization using only a small number of bits. In recent investigations, a novel class of Finite Alphabet Message Passing (FA-MP) decoders has been designed to maximize the Mutual Information (MI) using only a small number of bits per message (e.g., 3 or 4 bits) with a communication performance close to high-precision Belief Propagation (BP) decoding. In contrast to the conventional BP decoder, operations are given as discrete-input discrete-output mappings which can be described by multidimensional LUTs (mLUTs). A common approach to avoiding an exponential increase in the size of mLUTs with the node degree is the sequential LUT (sLUT) design approach, i.e., using a sequence of two-dimensional LUTs for the design, at the cost of a slight performance degradation. Recently, approaches such as Reconstruction-Computation-Quantization (RCQ) and Mutual Information-Maximizing Quantized Belief Propagation (MIM-QBP) have been proposed to avoid the complexity drawback of mLUTs by using pre-designed functions that require calculations over a computational domain. It has been shown that these calculations are able to represent the mLUT mapping exactly when executed with infinite precision over the real numbers. Based on the framework of MIM-QBP and RCQ, the Minimum-Integer Computation (MIC) decoder design generates low-bit integer computations, derived from the Log-Likelihood Ratio (LLR) separation property of the information maximizing quantizer, that replace the mLUT mappings either exactly or approximately. We derive a novel criterion for the bit resolution required to represent the mLUT mappings exactly. Furthermore, we show that our MIC decoder matches the communication performance of the corresponding mLUT decoder, but with much lower implementation complexity. We also perform an objective comparison between state-of-the-art Min-Sum (MS) and FA-MP decoder implementations for throughput towards 1 Tb/s in a state-of-the-art 28 nm Fully-Depleted Silicon-on-Insulator (FD-SOI) technology. Finally, we demonstrate that our new MIC decoder implementation outperforms previous FA-MP and MS decoders in terms of reduced routing complexity, area efficiency and energy efficiency.
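The idea of replacing LLR arithmetic with discrete lookup tables can be illustrated with a toy pairwise check-node LUT; the combining rule and quantization levels below are assumptions for illustration, whereas the paper designs its mLUTs by mutual-information maximization:

```python
import numpy as np

def build_pairwise_cn_lut(levels):
    """Toy discrete check-node table: messages are indices into a list of
    quantized LLR levels; two messages combine with the min-sum-like rule
    sign(a)*sign(b)*min(|a|,|b|), and the result is requantized to the
    nearest level. Higher-degree nodes would chain such 2D tables, as in
    the sLUT approach described above."""
    L = np.array(levels, dtype=float)
    lut = np.empty((len(L), len(L)), dtype=int)
    for i, a in enumerate(L):
        for j, b in enumerate(L):
            v = np.sign(a) * np.sign(b) * min(abs(a), abs(b))
            lut[i, j] = int(np.abs(L - v).argmin())  # nearest-level index
    return lut
```

Once the table is built offline, decoding needs only integer indexing, with no floating-point arithmetic in the message-passing loop.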
Style APA, Harvard, Vancouver, ISO itp.
39

Scruby, T. R., i K. Nemoto. "Local Probabilistic Decoding of a Quantum Code". Quantum 7 (29.08.2023): 1093. http://dx.doi.org/10.22331/q-2023-08-29-1093.

Pełny tekst źródła
Streszczenie:
flip is an extremely simple and maximally local classical decoder which has been used to great effect in certain classes of classical codes. When applied to quantum codes there exist constant-weight errors (such as half of a stabiliser) which are uncorrectable for this decoder, so previous studies have considered modified versions of flip, sometimes in conjunction with other decoders. We argue that this may not always be necessary, and present numerical evidence for the existence of a threshold for flip when applied to the looplike syndromes of a three-dimensional toric code on a cubic lattice. This result can be attributed to the fact that the lowest-weight uncorrectable errors for this decoder are closer (in terms of Hamming distance) to correctable errors than to other uncorrectable errors, and so they are likely to become correctable in future code cycles after transformation by additional noise. Introducing randomness into the decoder can allow it to correct these "uncorrectable" errors with finite probability, and for a decoding strategy that uses a combination of belief propagation and probabilistic flip we observe a threshold of ~5.5% under phenomenological noise. This is comparable to the best known threshold for this code (~7.1%), which was achieved using belief propagation and ordered statistics decoding [Higgott and Breuckmann, 2022], a strategy with a runtime of O(n^3) as opposed to the O(n) (O(1) when parallelised) runtime of our local decoder. We expect that this strategy could be generalised to work well in other low-density parity-check codes, and hope that these results will prompt investigation of other previously overlooked decoders.
Style APA, Harvard, Vancouver, ISO itp.
40

Hasan, Fadhil S., Mahmood F. Mosleh i Aya H. Abdulhameed. "FPGA implementation of LDPC soft-decision decoders based DCSK for spread spectrum applications". International Journal of Electrical and Computer Engineering (IJECE) 11, nr 6 (1.12.2021): 4794. http://dx.doi.org/10.11591/ijece.v11i6.pp4794-4809.

Pełny tekst źródła
Streszczenie:
Spread spectrum (SS) communications have attracted interest because of their immunity to channel attenuation and low probability of intercept. With some extra features, such as simple transceiver structures, chaotic communication can be regarded as the analog alternative to digital SS systems. In this brief, differential chaos shift keying (DCSK) systems, whose chaotic carriers have non-periodic and random characteristics, are combined with low-density parity-check (LDPC) codes because of the codes' simple structure and excellent error-correcting ability. Using the Xilinx Kintex-7 FPGA development kit, we investigate the hardware performance and resource requirements of the DCSK communication system based on LDPC decoding algorithms (probability domain, log domain, and min-sum) over an AWGN channel. The results indicate that the proposed system model substantially improves bit error rate (BER) performance and real-time processing. The min-sum decoder requires relatively fewer FPGA resources than the other decoders. The implemented system achieves a 10^-4 BER with an associated coding gain of 5 dB in Eb/No.
Style APA, Harvard, Vancouver, ISO itp.
41

Shah, Stark i Bauch. "Coarsely Quantized Decoding and Construction of Polar Codes Using the Information Bottleneck Method". Algorithms 12, nr 9 (10.09.2019): 192. http://dx.doi.org/10.3390/a12090192.

Pełny tekst źródła
Streszczenie:
The information bottleneck method is a generic clustering framework from the field of machine learning which allows compressing an observed quantity while retaining as much of the mutual information it shares with the quantity of primary relevance as possible. The framework was recently used to design message-passing decoders for low-density parity-check codes in which all the arithmetic operations on log-likelihood ratios are replaced by table lookups of unsigned integers. This paper presents, in detail, the application of the information bottleneck method to polar codes, where the framework is used to compress the virtual bit channels defined in the code structure, and shows that the benefits are twofold. On the one hand, the compression restricts the output alphabet of the bit channels to a manageable size. This facilitates computing the capacities of the bit channels in order to identify the ones with larger capacities. On the other hand, the intermediate steps of the compression process can be used to replace the log-likelihood ratio computations in the decoder with table lookups of unsigned integers. Hence, a single procedure produces a polar encoder as well as its tailored, quantized decoder. Moreover, we also use a technique called message alignment to reduce the space complexity of the quantized decoder obtained using the information bottleneck framework.
Style APA, Harvard, Vancouver, ISO itp.
42

Chen, Rong, i Lan Chen. "Dual Threshold Self-Corrected Minimum Sum Algorithm for 5G LDPC Decoders". Information 11, nr 7 (7.07.2020): 355. http://dx.doi.org/10.3390/info11070355.

Pełny tekst źródła
Streszczenie:
Fifth generation (5G) is a new-generation mobile communication system developed to meet the growing demand for mobile communication. Channel coding is an indispensable part of most modern digital communication systems, as it improves transmission reliability and interference resistance. In order to meet the requirements of 5G communication, a dual threshold self-corrected minimum sum (DT-SCMS) algorithm for low-density parity-check (LDPC) decoders is proposed in this paper, and an architecture of LDPC decoders is designed. By setting thresholds to judge the reliability of messages, the DT-SCMS algorithm erases unreliable messages, improving decoding performance and efficiency. Simulation results show that the performance of DT-SCMS is better than that of SCMS. When the code rate is 1/3, DT-SCMS improves performance by 0.2 dB at a bit error rate of 10^-4 compared with SCMS. In terms of convergence, when the code rate is 2/3, the number of iterations of DT-SCMS can be reduced by up to 20.46% compared with SCMS, with an average reduction of 18.68%.
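The self-correction rule that DT-SCMS builds on can be sketched in a few lines (base SCMS rule only; the paper's dual thresholds are not modeled here):

```python
def scms_vn_message(new_msg, old_msg):
    """Self-correction rule of SCMS: a variable-to-check message whose
    sign flipped since the previous iteration is judged unreliable and
    erased (set to 0). DT-SCMS additionally applies two thresholds to
    the message reliability before deciding to erase."""
    if old_msg != 0 and new_msg * old_msg < 0:
        return 0.0
    return new_msg
```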
Style APA, Harvard, Vancouver, ISO itp.
43

Skliarova, Iouliia. "Analysis and Comparison of Different Approaches to Implementing a Network-Based Parallel Data Processing Algorithm". Journal of Low Power Electronics and Applications 12, nr 3 (9.07.2022): 38. http://dx.doi.org/10.3390/jlpea12030038.

Pełny tekst źródła
Streszczenie:
It is well known that network-based parallel data processing algorithms are well suited to implementation in reconfigurable hardware using either Field-Programmable Gate Arrays (FPGA) or Programmable Systems-on-Chip (PSoC). The intrinsic parallelism of these devices makes it possible to execute several data-independent network operations in parallel. However, the approaches to designing the respective systems vary significantly with the experience and background of the engineer in charge. In this paper, we analyze and compare the pros and cons of using an embedded processor, high-level synthesis methods, and low-level register-transfer design in terms of design effort, performance, and power consumption for implementing a parallel algorithm that finds the two smallest values in a dataset. This problem is easy to formulate, has a number of practical applications (for instance, in low-density parity-check decoders), and is very well suited to parallel implementation based on comparator networks.
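The two-smallest-values problem admits a simple comparator-tree solution, sketched here in software for reference (the hardware designs in the paper realize the comparisons as a parallel network rather than sequential loops):

```python
def two_smallest_tree(values):
    """Tournament of comparators: the minimum wins all of its matches,
    so the second smallest must be one of the elements it defeated."""
    players = list(enumerate(values))      # (index, value) pairs
    beaten_by = {}                         # winner index -> values it beat
    while len(players) > 1:
        nxt = []
        for k in range(0, len(players) - 1, 2):
            a, b = players[k], players[k + 1]
            w, l = (a, b) if a[1] <= b[1] else (b, a)
            beaten_by.setdefault(w[0], []).append(l[1])
            nxt.append(w)
        if len(players) % 2:               # odd count: last player gets a bye
            nxt.append(players[-1])
        players = nxt
    idx, smallest = players[0]
    return smallest, min(beaten_by[idx])
```

The tree needs n-1 comparators for the minimum plus about log2(n) more for the runner-up, which is why this structure appears inside min-sum check-node units.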
Style APA, Harvard, Vancouver, ISO itp.
44

Benhayoun, Mhammed, Mouhcine Razi, Anas Mansouri i Ali Ahaitouf. "Low-Complexity LDPC Decoding Algorithm Based on Layered Vicinal Variable Node Scheduling". Modelling and Simulation in Engineering 2022 (19.01.2022): 1–12. http://dx.doi.org/10.1155/2022/1407788.

Pełny tekst źródła
Streszczenie:
The informed dynamic scheduling (IDS) strategies for low-density parity-check (LDPC) decoding have shown superior performance in error correction and convergence speed, particularly those based on reliability measures and residual belief propagation (RBP). However, the search for the most unreliable variable nodes and the residual precomputation required at each iteration increase the complexity of the decoding process, which becomes more sequential, making it hard to exploit the parallelism of signal processing algorithms available on multicore platforms. To overcome this problem, a new low-complexity scheduling scheme, called layered vicinal variable node scheduling (LWNS), is presented in this paper. With LWNS, each variable node is updated by exchanging intrinsic information with all its associated check and variable nodes before moving to the next variable node update. The proposed scheduling order is fixed by a preprocessing step on the parity-check matrix instead of computing residual values, and by identifying the most influential variable node instead of the most unreliable one. It also allows parallel processing of independent Tanner graph subbranches, identified and grouped in layers. Our simulation results show that LWNS-BP has an attractive convergence rate and better error-correction performance with low complexity when compared to previous IDS decoders under the additive white Gaussian noise (AWGN) channel.
Style APA, Harvard, Vancouver, ISO itp.
45

Salih, Layla M., Thuraya Mahmoud Ali Al-Qaradaghi i Jalal Jamal Hamad Ameen. "Performance Analysis of Different Flexible Decoding Algorithms for NR-LDPC Codes". Tikrit Journal of Engineering Sciences 29, nr 4 (11.12.2022): 10–18. http://dx.doi.org/10.25130/tjes.29.4.2.

Pełny tekst źródła
Streszczenie:
Channel coding is a fundamental building block in any modern communication system, needed to realize reliable, fast, and secure data transmission. At the same time, it is a challenging and crucial task, as data transmission happens in a channel where noise, fading, and other impairments are present. Low-Density Parity-Check (LDPC) codes give substantial results close to the Shannon limit when the complexity and processing delay time are unlimited. In this paper, the performance of LDPC decoding with four algorithms was investigated: Belief Propagation (BP), Layered Belief Propagation (LBP), Normalized min-sum (NMS), and Offset min-sum (OMS). These algorithms were examined for code rates ranging from 1/3 to 9/10 and message block lengths of 64, 512, 1024, and 5120 bits. The simulation results revealed the flexibility of these decoders in supporting these code rates and block lengths, which enables their use in a wide range of applications and scenarios for fifth-generation (5G) wireless communication. In addition, the effect of the maximum number of decoding iterations on error-correction performance was investigated: a gain of 5.6 dB can be obtained at BER = 2×10^-3 by using 32 decoding iterations instead of one. The results showed that the decoders performed better for longer message blocks than for short ones, and less power was required for transmitting longer messages. Finally, comparison of their performance in terms of bit error rate (BER) under the same conditions showed a gain of 0.8 dB using LBP at BER = 10^-5 compared with the NMS decoding algorithm.
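The NMS and OMS corrections compared above differ only in how they shrink the min-sum check-node magnitude; a two-function sketch, where the factor and offset values are illustrative assumptions, not those used in the paper:

```python
def nms_check_magnitude(min_mag, alpha=0.8):
    """Normalized min-sum: scale the min magnitude by a factor alpha
    in (0, 1]. alpha = 0.8 is an illustrative choice."""
    return alpha * min_mag

def oms_check_magnitude(min_mag, beta=0.3):
    """Offset min-sum: subtract an offset beta, clipping at zero.
    beta = 0.3 is an illustrative choice."""
    return max(min_mag - beta, 0.0)
```

Both corrections compensate for the systematic overestimation of reliability that plain min-sum exhibits relative to belief propagation.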
Style APA, Harvard, Vancouver, ISO itp.
46

Pham, Huyen Thi, Hung Tuan Dao i Nghia Xuan Pham. "Simplified Variable Node Unit Architecture for Nonbinary LDPC Decoder". Journal of Science and Technology on Information security 9, nr 01 (9.04.2020): 12–19. http://dx.doi.org/10.54654/isj.v9i01.36.

Pełny tekst źródła
Streszczenie:
Abstract— Nonbinary low-density parity-check (NB-LDPC) codes outperform their binary counterparts in terms of error-correcting performance and error-floor properties when the code length is moderate. However, the drawback of NB-LDPC decoders is high complexity, and the complexity increases considerably as the Galois-field order grows. In this paper, a simplified basic-set trellis min-max (sBS-TMM) algorithm, which is especially efficient for high-order Galois fields, is proposed for the variable node processing to reduce the complexity of the variable node unit (VNU) as well as the whole decoder. The decoder architecture corresponding to the proposed algorithm is designed for the (837, 726) NB-LDPC code over GF(32). Implementation results using 90-nm CMOS technology show that the proposed decoder architecture reduces the gate count by 21.35% and 9.4% with almost identical error-correcting performance compared to up-to-date works.
APA, Harvard, Vancouver, ISO, and other styles
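The min-max processing that the abstract above builds on can be illustrated with a minimal sketch. This is not the paper's sBS-TMM architecture, only the elementary check-node combination step of a generic min-max NB-LDPC decoder over GF(2^p), where field addition in a polynomial basis reduces to bitwise XOR of symbol indices; the function name and message format are illustrative assumptions:

```python
def minmax_combine(m1, m2, q=4):
    """Elementary check-node step of a min-max NB-LDPC decoder over GF(q),
    q = 2^p: combine two incoming message vectors (one reliability per
    field symbol, lower = more reliable). Field addition is XOR here."""
    INF = float("inf")
    out = [INF] * q
    for a in range(q):
        for b in range(q):
            c = a ^ b  # GF(2^p) addition in a polynomial basis
            out[c] = min(out[c], max(m1[a], m2[b]))
    return out

# Combining two length-4 message vectors over GF(4):
print(minmax_combine([0, 1, 2, 3], [0, 2, 1, 3]))  # → [0, 1, 1, 1]
```

Simplifications such as the basic-set approach in the cited paper aim to avoid exactly this O(q^2) pairwise scan, which dominates the cost at high field orders like GF(32).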
47

Zhang, Chuan, Lulu Ge, Xingchi Zhang, Wei Wei, Jing Zhao, Zaichen Zhang, Zhongfeng Wang i Xiaohu You. "A Uniform Molecular Low-Density Parity Check Decoder". ACS Synthetic Biology 8, nr 1 (4.12.2018): 82–90. http://dx.doi.org/10.1021/acssynbio.8b00304.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
48

Sułek, W. "Pipeline processing in low-density parity-check codes hardware decoder". Bulletin of the Polish Academy of Sciences: Technical Sciences 59, nr 2 (1.06.2011): 149–55. http://dx.doi.org/10.2478/v10175-011-0019-9.

Full text source
Abstract:
Low-Density Parity-Check (LDPC) codes are among the best-known error-correcting coding methods. This article concerns a hardware iterative decoder for the implementation-oriented subclass of LDPC codes known as Architecture-Aware LDPC. The decoder has been implemented as a synthesizable VHDL description. To achieve a high clock frequency in the hardware implementation, and consequently a high data throughput, a large number of pipeline registers is used in the processing chain. However, these registers lengthen the processing path, since data need more clock cycles to propagate, so in general idle cycles must be inserted between decoding subiterations. This paper studies the conditions under which idle cycles are necessary and provides a method for calculating their exact number from the parity-check matrix of the code. It then proposes a parity-check matrix optimization that minimizes the total number of required idle cycles and hence maximizes the decoder throughput. The proposed optimization, which sorts rows and columns of the matrix, does not change the code's properties. Results presented in the paper show that the decoder throughput can be significantly increased with the proposed optimization method.
APA, Harvard, Vancouver, ISO, and other styles
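The idle-cycle condition described in the abstract above can be sketched with a toy dependency model. This is an assumption-laden illustration, not the paper's method: it supposes each layered-decoding subiteration processes one block column per cycle through a pipeline of fixed depth, and that a column updated late in one layer and read early in the next must first clear the pipeline; all names and the timing model are hypothetical:

```python
def idle_between_layers(cols_prev, cols_next, pipe_depth):
    """Toy estimate of idle cycles needed between two consecutive
    layered-decoding subiterations. cols_prev / cols_next list the block
    columns each layer touches, in processing order (one per cycle);
    pipe_depth is the number of pipeline stages a result needs to emerge."""
    worst = 0
    for c in set(cols_prev) & set(cols_next):  # shared columns create hazards
        cycles_after_write = len(cols_prev) - cols_prev.index(c)
        cycles_before_read = cols_next.index(c)
        worst = max(worst, pipe_depth - cycles_after_write - cycles_before_read)
    return max(0, worst)

# Column 2 is written third-to-last by the first layer and read first by
# the second, so with a 6-stage pipeline 4 idle cycles are needed:
print(idle_between_layers([0, 1, 2, 3], [2, 4, 5], pipe_depth=6))  # → 4
```

Under this toy model, reordering rows and columns so that shared columns sit late in the reading layer and early in the writing layer drives the hazard term down, which is the intuition behind the matrix-sorting optimization the abstract describes.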
49

Oh, Daesun, i Keshab K. Parhi. "Low Complexity Decoder Architecture for Low-Density Parity-Check Codes". Journal of Signal Processing Systems 56, nr 2-3 (30.05.2008): 217–28. http://dx.doi.org/10.1007/s11265-008-0231-5.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
50

Khittiwitchayakul, Sirawit, Watid Phakphisut i Pornchai Supnithi. "Associated Sectors of Magnetic Recording Systems Using Spatially Coupled LDPC Codes". ECTI Transactions on Electrical Engineering, Electronics, and Communications 20, nr 1 (18.02.2022): 10–21. http://dx.doi.org/10.37936/ecti-eec.2022201.246094.

Full text source
Abstract:
In traditional magnetic recording systems, non-associated sectors are mainly adopted, whereby two consecutive sectors are decoded independently by the low-density parity-check (LDPC) codes. In this paper, we propose a magnetic recording system with associated sectors, constructed using spatially coupled low-density parity-check (SC-LDPC) codes. If the SC-LDPC decoder cannot correct the erroneous bits in the current sector, it can request information stored in previous sectors to improve decoding performance. Moreover, we modify protograph-based extrinsic information transfer (P-EXIT) charts to examine the theoretical performance of SC-LDPC codes applied to both non-associated and associated sectors. Our theoretical results show that the associated sectors achieve significant performance gains compared to the traditional non-associated sectors.
APA, Harvard, Vancouver, ISO, and other styles
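The structural link between consecutive sectors that the abstract above exploits comes from spatial coupling: component base matrices are placed diagonally so that adjacent positions share check nodes. A minimal sketch of that construction, under the assumption of a plain diagonal protograph coupling (function name and matrix layout are illustrative, not taken from the paper):

```python
def coupled_base_matrix(B_parts, L):
    """Build a spatially coupled base matrix by diagonally placing the
    component matrices B_parts = [B0, B1, ..., B_{w-1}] over L coupled
    positions (sectors). Consecutive positions overlap in check rows,
    which is the cross-sector dependency associated decoding exploits."""
    w = len(B_parts)                     # coupling width
    rows, cols = len(B_parts[0]), len(B_parts[0][0])
    H = [[0] * (cols * L) for _ in range((L + w - 1) * rows)]
    for pos in range(L):                 # one position per sector
        for k, Bk in enumerate(B_parts):
            for i in range(rows):
                for j in range(cols):
                    H[(pos + k) * rows + i][pos * cols + j] = Bk[i][j]
    return H

# Two sectors coupled with width-2 components: the middle check row is
# shared by both sectors.
print(coupled_base_matrix([[[1, 1]], [[1, 1]]], L=2))
# → [[1, 1, 0, 0], [1, 1, 1, 1], [0, 0, 1, 1]]
```

The shared middle row is exactly why a failed sector can, as the abstract describes, request information from a previously decoded sector: both contribute to the same coupled check constraints.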