


LDPC Options for Next Generation Wireless Systems

    T. Lestable* and E. Zimmermann#

*Advanced Technology Group, Samsung Electronics Research Institute, UK
#Technische Universität Dresden, Vodafone Chair Mobile Communications Systems

Abstract: Low-Density Parity-Check (LDPC) codes have recently drawn much attention due to their near-capacity error correction performance, and are currently the focus of many standardization activities, e.g., IEEE 802.11n, IEEE 802.16e, and ETSI DVB-S2. In this contribution, we discuss several aspects related to the practical application of such codes to wireless communication systems. We consider flexibility, memory requirements, encoding and decoding complexity, and different variants of decoding algorithms for LDPC codes that make it possible to effectively trade off error correction performance against implementation simplicity. We conclude that many of what have been considered significant disadvantages of LDPC codes (inflexibility, high encoding complexity, etc.) can be overcome by appropriate use of recently developed algorithms and strategies, making LDPC codes a highly attractive option for forward error correction in B3G/4G systems.

Index Terms: LDPC, Belief Propagation, Bit Flipping, Scheduling, Complexity, TGnSync, 4G.

INTRODUCTION

Recently, many standards proposals, namely TGnSync [15][27] and WWise [28] for IEEE 802.11n, together with IEEE 802.16e [14], have considered LDPC coding schemes as a key component of their system features. The adoption of LDPC codes by such standardization activities demonstrates the increasing maturity of the technology, in particular the now affordable joint complexity of encoder and decoder implementation. From sub-optimal lower-complexity decoding algorithms [16] to completely flexible architecture designs [6][7][8], pragmatic and realistic implementation solutions make LDPC codes more and more attractive as an enhancement of current (B3G) and next generation wireless systems (4G) [29].

The aim of this paper is thus to present and evaluate a non-exhaustive set of solutions that decrease the overall encoding/decoding complexity. The first part presents basic properties of LDPC codes, together with the message-passing principle. We then tackle the encoder complexity issue, assessing hardware (HW) requirements in terms of dimensioning, relying on the Block-LDPC approach. The decoder side is left for the final part, as it represents the most resource-hungry element of the joint design. We review and evaluate the performance/complexity trade-offs of sub-optimal low-complexity decoding algorithms, and highlight common trends in the architecture of such LDPC codes by evaluating their HW requirements.


    LDPC Codes: Fundamentals

In this part, we briefly introduce the basic principles and notation of LDPC codes; for further reading, see [1][16][17]. LDPC codes are linear block codes whose parity-check matrix H has the favourable property of being sparse, i.e., it contains only a small number of non-zero elements. Tanner graphs of such codes are bipartite graphs containing two different kinds of nodes: code (bit, or variable) nodes and check nodes. An (n, k) LDPC code is thus represented by an m × n parity-check matrix H, where m = n − k is the number of redundancy (parity) bits of the coding scheme. We can then distinguish regular from irregular LDPC codes, depending on the degree distributions of code nodes (column weights) and check nodes (row weights). In a regular scheme, these distributions are constant along columns and rows, and are usually represented by the notation (dv, dc). For such a code, the number of non-zero elements is given either by n·dv or m·dc, leading to the code rate relation Rc = 1 − m/n = 1 − dv/dc. The decoding of LDPC codes relies on the Belief Propagation Algorithm (BPA) framework extensively discussed in the literature [19]. It involves two major steps, the check node update and the bit node update (Fig. 1): intrinsic values from the channel first feed the bit nodes (parents); extrinsic information is then processed and forwarded to the check nodes (children), which in turn produce new extrinsic information based on the parity-check constraints, feeding their connected bit nodes.
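As a quick illustration of these relations, the following Python sketch builds a toy (dv, dc) = (2, 4) regular parity-check matrix (a hypothetical example, not a code from any standard) and verifies the degree and rate relations stated above.

```python
import numpy as np

# Toy (dv=2, dc=4) regular parity-check matrix of an (n=8, k=4) code,
# so m = n - k = 4 and Rc = 1 - dv/dc = 1/2. Illustrative only.
H = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],
    [1, 0, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 0, 0, 1, 1, 1],
], dtype=np.uint8)

m, n = H.shape
col_w = H.sum(axis=0)  # bit (variable) node degrees dv
row_w = H.sum(axis=1)  # check node degrees dc
assert (col_w == 2).all() and (row_w == 4).all()  # regular (2, 4) code
assert H.sum() == n * 2 == m * 4                  # count of non-zero elements
print("Rc =", 1 - m / n)                          # 0.5 = 1 - dv/dc

# c is a codeword iff it satisfies all parity checks: H c^T = 0 (mod 2)
def is_codeword(c):
    return not (H @ c % 2).any()
```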

Figure 1: Message-passing illustration (check node update followed by bit node update)

The way of switching between bit and check node updates is referred to as scheduling; it will be discussed later on, as it can impact the decoder complexity.
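To make the two update phases and the flooding schedule concrete, here is a minimal Python sketch of a message-passing decoder using the min-sum approximation of BP (one of the reduced-complexity variants discussed later). The function name, the LLR sign convention (positive means bit 0) and the per-edge loops are illustrative choices, not an implementation from the paper.

```python
import numpy as np

def decode_min_sum(H, llr_in, max_iter=50):
    """Flooding schedule: all check nodes update, then all bit nodes update.

    H: (m, n) binary parity-check matrix (every check degree >= 2 assumed);
    llr_in: channel (intrinsic) LLRs, log(P(bit=0)/P(bit=1)), one per bit.
    """
    m, n = H.shape
    rows, cols = np.nonzero(H)            # one entry per Tanner graph edge
    edge_idx = np.arange(len(rows))
    msg_vc = llr_in[cols].astype(float)   # bit-to-check messages (init: intrinsic)
    msg_cv = np.zeros_like(msg_vc)        # check-to-bit messages

    c_hat = (llr_in < 0).astype(np.uint8)
    for _ in range(max_iter):
        # Check node update (min-sum): sign product and minimum magnitude
        # over all *other* edges of the same check node.
        for e in edge_idx:
            others = (rows == rows[e]) & (edge_idx != e)
            msg_cv[e] = np.prod(np.sign(msg_vc[others])) * np.abs(msg_vc[others]).min()
        # Bit node update: posterior = intrinsic + all incoming check messages;
        # the outgoing extrinsic message excludes the message on the same edge.
        llr_post = llr_in.astype(float)
        np.add.at(llr_post, cols, msg_cv)
        msg_vc = llr_post[cols] - msg_cv
        # Tentative hard decision; stop early once the syndrome is zero.
        c_hat = (llr_post < 0).astype(np.uint8)
        if not (H @ c_hat % 2).any():
            break
    return c_hat
```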

Joint Design Methodology

With parallelization holding the promise of keeping delays low while data rates continue to increase, the major attraction from a design point of view is that LDPC codes are inherently well suited to fully parallel processing.

Nevertheless, a fully parallel implementation is prohibitive for large block lengths. Consequently, a strong trend is currently towards semi-parallel architectures [6][8], with Block-LDPC being the centrepiece of this approach. Another important practical issue when dealing with coding schemes for adaptive air interfaces is flexibility in terms of block sizes and code rates. When designing LDPC codes, we therefore have to keep in mind the direct and strong relation between the structure of the parity-check matrix and the total encoding, decoding and implementation complexity. Indeed, a completely random LDPC code might achieve better performance at the expense of a very complex interconnection (shuffle) network that might be prohibitive for large block lengths in terms of HW wiring, together with leading to potentially high encoding complexity, a low achievable parallelization level and, most importantly, low flexibility in terms of block sizes and code rates. The sequel therefore intends to highlight and assess the most relevant performance/HW requirements trade-offs.

Random-Like LDPC

One typical way of constructing good LDPC codes is to take a degree distribution that promises good error correction performance [17] (e.g., by EXIT chart curve matching of variable and check node decoder transfer curves) as a starting point, and then use, e.g., progressive edge growth (PEG) algorithms [18] to ensure a good distance spectrum, i.e., low error floors (see the sketch below). Codes constructed following this framework usually come very close to the bounds of what is achievable in terms of error correction performance [17]. They are hence considered as the baseline for performance assessment. The disadvantage of this approach for practical implementation is that a new code needs to be designed for each block length and code rate, leading to the above-mentioned low flexibility. The obtained codes are often non-systematic, thus requiring appropriate preprocessing to enable near linear-time encoding [2].
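A compact sketch of the PEG edge-placement loop is given below, under simplifying assumptions (check degrees are only balanced greedily, and no structure beyond the target variable degrees is enforced). The function peg_construct and its interface are hypothetical, intended to convey the principle rather than reproduce the algorithm of [18].

```python
import numpy as np

def peg_construct(n, m, var_degrees):
    """PEG sketch: place each edge of each variable node so that it connects
    to a check node currently unreachable from that variable node (or as far
    away as possible), thereby avoiding short cycles."""
    check_nbrs = [set() for _ in range(m)]  # check node -> variable nodes
    var_nbrs = [set() for _ in range(n)]    # variable node -> check nodes

    for v in range(n):
        for k in range(var_degrees[v]):
            if k == 0:
                # First edge: lowest-degree check, to keep row weights balanced.
                c = min(range(m), key=lambda cc: len(check_nbrs[cc]))
            else:
                # Breadth-first expansion of the subgraph around v: 'reached'
                # collects all check nodes reachable through existing edges.
                reached = set(var_nbrs[v])
                frontier = set(var_nbrs[v])
                while frontier:
                    nxt = set()
                    for c2 in frontier:
                        for v2 in check_nbrs[c2]:
                            nxt |= var_nbrs[v2]
                    nxt -= reached
                    # Stop before the tree spans all checks, so the deepest
                    # layer remains available as candidates.
                    if not nxt or len(reached | nxt) == m:
                        break
                    reached |= nxt
                    frontier = nxt
                candidates = [cc for cc in range(m) if cc not in reached]
                if not candidates:  # graph saturated: accept a shorter cycle
                    candidates = [cc for cc in range(m) if cc not in var_nbrs[v]]
                c = min(candidates, key=lambda cc: len(check_nbrs[cc]))
            check_nbrs[c].add(v)
            var_nbrs[v].add(c)

    H = np.zeros((m, n), dtype=np.uint8)
    for cc in range(m):
        H[cc, list(check_nbrs[cc])] = 1
    return H

# e.g. a small regular graph: H = peg_construct(n=16, m=8, var_degrees=[3] * 16)
```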


Structured LDPC (Block-LDPC)

Structured (Block-)LDPC codes, on the other hand, such as the Pi-Construction codes [15] or LDPC array codes [14] proposed in the framework of IEEE 802, have been shown to offer good performance and, at the same time, high flexibility in terms of code rates and block sizes. The parity-check matrix H of such codes can be seen as an array of square sub-block matrices. These sub-block matrices are obtained by circular shift and/or rotation of the identity matrix, or are all-zero matrices. The parity-check matrix is hence fully determined by these circular shifts and the dimension p of the square sub-block matrices. LDPC codes defined by such standards rely on the concept of a base model matrix, introduced by Zhong and Zhang [7], which contains the circular shift values. Figure 2 shows the base model matrix Hb with (Mb, Nb) = (12, 24) for the Rc = 1/2 LDPC code defined in TGnSync [27]:

Figure 2: Base model matrix for TGnSync Rc = 1/2 [27]

Adaptation to different block lengths can, e.g., be done by expanding the elements of the base matrix (e.g., by replacing each 1 in the parity-check matrix by an identity matrix and each 0 by an all-zero matrix). Different code rates are obtained by appending more elements to the matrix in only one dimension (i.e., adding more variable nodes, but no check nodes). Note that the decoder must be flexible enough to support such changes of the code structure. Using such base matrices hence adds flexibility in terms of packet length while maintaining the degree distributions of H. Indeed, for a given block length N, the expansion factor p (denoted Zf in the standards) is obtained through p = N/Nb. As N must therefore be a multiple of Nb, the maximum achievable granularity is obviously bounded by the size of the base model matrix. In the case of the TGnSync LDPC code, the block length is hence scalable in steps of 24 bits; it is not very likely that a finer granularity would be required. Another interesting aspect is that the whole expansion process is independent of the circular shift values, leading to many different possible LDPC code designs [10][11][12] with different performance, but all capable of being mapped onto the same semi-parallel decoding architecture. A sketch of this expansion is given below.
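The following Python sketch illustrates the expansion step. It assumes the base matrix convention used in the IEEE 802 proposals (an entry of -1 denotes a p x p all-zero block, an entry s >= 0 denotes the identity matrix circularly shifted by s mod p); the function name is illustrative.

```python
import numpy as np

def expand_base_matrix(Hb, p):
    """Expand an (Mb x Nb) base model matrix Hb into the (Mb*p x Nb*p)
    parity-check matrix H, using expansion factor p (Zf in the standards)."""
    Mb, Nb = Hb.shape
    H = np.zeros((Mb * p, Nb * p), dtype=np.uint8)
    I = np.eye(p, dtype=np.uint8)
    for i in range(Mb):
        for j in range(Nb):
            s = Hb[i, j]
            if s >= 0:  # -1 entries stay all-zero blocks
                H[i*p:(i+1)*p, j*p:(j+1)*p] = np.roll(I, s % p, axis=1)
    return H

# For the TGnSync Rc = 1/2 code, (Mb, Nb) = (12, 24); a block length of,
# say, N = 1296 bits gives the expansion factor p = N / Nb = 54.
```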

Encoding Complexity

One considerable challenge for the application of LDPC codes in practical wireless systems has long been encoding complexity (it is in fact a still quite widespread misconception that this remains an open issue). If the parity-check matrix is in non-systematic form, straightforward encoding methods destroy (or do not exploit) the sparse nature of the matrix, leading to an encoding complexity quadratic in the block length. However, in his famous paper [2], Richardson presented several pre-processing techniques to transform H into an approximate lower triangular (and thus approximately systematic) form, so that the quadratic encoding complexity applies to only a small fraction of the block length n. The resulting