
Channel Coding

Reliable communication over a noisy channel is achievable when the transmission rate remains below the channel capacity, a fundamental limit defining the maximum rate for error-free data transfer.

This reliability is facilitated by channel coding, a process that assigns messages to specific blocks of channel inputs, selectively using only a subset of all possible blocks to enhance error resilience.

Specific mappings between messages and channel input sequences have not been explored in detail here, as the focus is on theoretical bounds rather than practical implementations.

Channel capacity $C$ and channel cutoff rate $R_0$ are evaluated using random coding, a method that avoids specifying an optimal mapping.

Instead, random coding averages the error probability across all possible mappings, showing that when the transmission rate is less than $C$, the ensemble average error probability approaches zero as block length increases, due to the statistical likelihood of favorable mappings.

This implies the existence of at least one mapping where the error probability diminishes with longer blocks, providing a theoretical basis for practical code design.
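To make the argument concrete, here is a minimal Monte Carlo sketch of the random-coding idea: each trial draws a fresh random codebook, transmits over a binary symmetric channel, and decodes by minimum Hamming distance, averaging the block error rate over codebooks. The block lengths, rate, and crossover probability are illustrative choices, not values from the text.

```python
# A minimal Monte Carlo sketch of the random-coding argument over a
# binary symmetric channel (BSC); block lengths, rate, and crossover
# probability are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def random_coding_error_rate(n, rate, p, trials=2000):
    """Average block error rate over random codebooks of 2**(n*rate)
    codewords of length n, sent over a BSC(p) and decoded by minimum
    Hamming distance."""
    M = max(2, int(2 ** (n * rate)))            # number of messages
    errors = 0
    for _ in range(trials):
        code = rng.integers(0, 2, size=(M, n))  # fresh random codebook
        msg = rng.integers(0, M)
        noise = (rng.random(n) < p).astype(int) # BSC bit flips
        received = code[msg] ^ noise
        dist = np.count_nonzero(code ^ received, axis=1)
        errors += int(np.argmin(dist) != msg)
    return errors / trials

# For rate < C = 1 - H(p), the average error rate falls as n grows:
for n in (8, 16, 24):
    print(n, random_coding_error_rate(n, rate=0.25, p=0.05))
```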

Block Codes

Channel codes are classified into two primary types: block codes and convolutional codes, each with distinct encoding strategies.

In block codes, one of $M = 2^k$ messages, each a binary sequence of length $k$ called the information sequence, is mapped to a binary sequence of length $n$, known as the codeword, where $n \geq k$ to allow redundancy for error correction.

For example, a (7,4) Hamming code maps each $k = 4$-bit information sequence to one of $M = 2^4 = 16$ codewords of length $n = 7$.
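As a minimal sketch of this mapping, the snippet below encodes 4 information bits into a 7-bit codeword using one standard systematic generator matrix for the (7,4) Hamming code; the function name `encode` is ours.

```python
# A minimal sketch of block encoding: the (7,4) Hamming code maps each
# 4-bit information sequence to a 7-bit codeword.
import numpy as np

# G = [I_4 | P]: four identity columns carry the information bits,
# three parity columns add the redundancy.
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def encode(info_bits):
    """Map a length-4 information sequence to a length-7 codeword (mod 2)."""
    return np.asarray(info_bits) @ G % 2

print(encode([1, 0, 1, 1]))   # one of the M = 2**4 = 16 codewords
```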

Transmission occurs by sending $n$ binary symbols over the channel, often using modulation like BPSK (Binary Phase Shift Keying) to represent bits as signal phases.

Block coding schemes are memoryless, meaning the encoding of each codeword is independent of prior transmissions.

After encoding and sending a codeword, the system processes a new set of $k$ information bits, producing a codeword based solely on the current input, unaffected by previous codewords, which simplifies implementation but limits error correction across blocks.

Code Rate

The code rate $R_c$ of a block or convolutional code is defined as:

$$
R_c = \frac{k}{n}
$$

where $k$ is the number of information bits in the input message, and $n$ is the total number of bits in the output codeword (for block codes) or output bits per stage (for convolutional codes).

This ratio quantifies the efficiency of information transfer, representing the average number of information bits per transmitted bit in the coded sequence.

For error-correcting codes, where $n > k$, the code rate satisfies $R_c < 1$, reflecting the addition of redundancy to enhance error resilience.

When combined with a modulation scheme that maps $m$ bits to a transmitted symbol, the effective information rate becomes $m \times R_c$ bits per symbol.

For example, using the (7,4) Hamming code, $R_c = 4/7$; paired with 8-PSK ($m = 3$ bits per symbol), the effective information rate is $3 \times 4/7 \approx 1.71$ bits per symbol.
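A small worked check of this arithmetic, with the modulation choices assumed for illustration:

```python
# A small check of the effective information rate m * Rc.
def effective_rate(k, n, m):
    """Information bits per transmitted symbol for an (n, k) block code
    and a modulation carrying m bits per symbol."""
    return m * (k / n)

print(effective_rate(k=4, n=7, m=3))   # 8-PSK with the (7,4) code: ~1.71
print(effective_rate(k=4, n=7, m=2))   # QPSK: ~1.14
```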

For a codeword of length $n$ transmitted via an $N$-dimensional constellation of size $M$ (a power of 2), $L = \frac{n}{\log_2 M}$ symbols are sent per codeword, where $L$ is an integer.

Discussion

Recall that $N$ is the number of dimensions of the signal space in which the constellation points are defined.

The size $M$ of the constellation is the number of distinct points (symbols) in this space, and since $M$ is a power of 2, each symbol can represent $\log_2 M$ bits.

Specifically, each symbol in the constellation is a point in an $N$-dimensional Euclidean space. For example, BPSK uses $N = 1$ (points on the real line), while QPSK uses $N = 2$ (points in the plane).

The requirement that $L$ is an integer ensures that the codeword's bits can be exactly mapped to an integer number of symbols. For example, if $M = 4$, then $\log_2 4 = 2$, and $n$ must be a multiple of 2 for $L$ to be an integer.
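A minimal helper that applies this constraint; the function name is ours, not from any library:

```python
# A minimal helper for the integer constraint on L = n / log2(M).
from math import log2

def symbols_per_codeword(n, M):
    """Return L = n / log2(M) if it is an integer, else None."""
    L = n / log2(M)
    return int(L) if L.is_integer() else None

print(symbols_per_codeword(n=8, M=4))   # 4: 8 is a multiple of log2(4) = 2
print(symbols_per_codeword(n=7, M=4))   # None: 7 / 2 is not an integer
```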

Role of $N$

The parameter $N$ specifies the dimensionality of the signal space for each transmitted symbol.

However, $N$ does not directly affect the calculation of $L$, which depends only on $n$ and $M$.

Instead, $N$ defines the structure of the constellation and how bits are mapped to physical signals.

Example with a (7,4) Hamming Code

Consider a (7,4) Hamming code, where $n = 7$, and suppose we use a constellation to transmit the codeword:
  • Let the constellation be $N = 2$-dimensional with $M = 8$ (e.g., 8-PSK, where points are equally spaced on a circle in the complex plane).

  • Each symbol carries $\log_2 M = \log_2 8 = 3$ bits.
  • Number of symbols per codeword: $L = \frac{n}{\log_2 M} = \frac{7}{3} \approx 2.333$
Since $L$ is not an integer, this constellation cannot carry a single (7,4) codeword without additional processing, such as padding bits or bundling multiple codewords. Instead, let's try $M = 4$ (e.g., QPSK, $N = 2$):
$$
\log_2 4 = 2 \text{ bits}
$$
$$
L = \frac{7}{2} = 3.5
$$
Again, $L$ is not an integer. To satisfy the integer constraint, we might transmit two codewords ($2 \times 7 = 14$ bits):
$$
L = \frac{14}{2} = 7 \text{ symbols}
$$
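The bundling step generalizes: the smallest number of codewords $c$ such that $c \cdot n / \log_2 M$ is an integer is $c = b / \gcd(n, b)$ with $b = \log_2 M$. A short sketch, with a helper name of our own choosing:

```python
# A sketch of the bundling idea: the smallest number of codewords c with
# c * n / log2(M) an integer is c = b // gcd(n, b), where b = log2(M).
from math import gcd

def codewords_to_bundle(n, M):
    """Smallest c so that c codewords of length n fill whole symbols."""
    b = M.bit_length() - 1              # log2(M) for M a power of 2
    return b // gcd(n, b)

print(codewords_to_bundle(7, 4))  # 2 codewords -> 14 bits -> 7 QPSK symbols
print(codewords_to_bundle(7, 8))  # 3 codewords -> 21 bits -> 7 8-PSK symbols
```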
Here, $N = 2$ indicates that each of the 7 QPSK symbols is a 2D point, but $N$ does not enter the $L$ calculation directly. With symbol duration $T_s$, the transmission time for the $k$ information bits is $T = LT_s$, and the rate becomes (Proakis, 2007, Eq. (7.1-2)):
$$
\begin{split}
R &= \frac{k}{LT_s} = \frac{k}{n} \times \frac{\log_2 M}{T_s} \\
  &= R_c \frac{\log_2 M}{T_s} \quad \text{bits/s}
\end{split}
$$
This equation ties $R_c$ to modulation complexity ($\log_2 M$) and symbol rate ($1/T_s$), showing how coding impacts overall data throughput.
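A numeric check of this rate expression, with illustrative parameter values:

```python
# A numeric check of R = Rc * log2(M) / Ts; the code and modulation
# parameters are illustrative.
from math import log2

def bit_rate(k, n, M, Ts):
    """Information rate in bits/s for an (n, k) code, constellation
    size M, and symbol duration Ts."""
    return (k / n) * log2(M) / Ts

# (7,4) Hamming code over QPSK at 1 Msymbol/s:
print(bit_rate(k=4, n=7, M=4, Ts=1e-6))   # ~1.14e6 bits/s
```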

Spectral Bit Rate

The encoded and modulated signals span a space of dimension $LN$. Per the dimensionality theorem, the minimum transmission bandwidth is:
$$
W = \frac{N}{2T_s} = \frac{RN}{2R_c \log_2 M} \quad \text{Hz}
$$
The spectral bit rate, or bandwidth efficiency, is:
$$
r = \frac{R}{W} = \frac{2 \log_2 M}{N} R_c
$$
Here, $N$ is the signal space dimension per symbol, and $\log_2 M$ is the number of bits per symbol. Compared to an uncoded system with identical modulation, coding reduces the bit rate by a factor of $R_c$ (due to redundancy) and increases the bandwidth by a factor of $\frac{1}{R_c}$, as more symbols are needed per information bit. This trade-off enhances reliability at the cost of spectral efficiency.
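A sketch computing $W$ and $r$ from the expressions above, using assumed values for the constellation and symbol rate:

```python
# A sketch of the bandwidth and spectral-efficiency expressions; the
# QPSK parameters and symbol rate are assumed for illustration.
from math import log2

def min_bandwidth(N, Ts):
    """Minimum transmission bandwidth W = N / (2 Ts), in Hz."""
    return N / (2 * Ts)

def spectral_bit_rate(N, M, Rc):
    """Bandwidth efficiency r = (2 log2(M) / N) * Rc, in bits/s/Hz."""
    return 2 * log2(M) / N * Rc

# QPSK (N = 2, M = 4) with a rate-4/7 code at 1 Msymbol/s:
print(min_bandwidth(N=2, Ts=1e-6))           # 1.0e6 Hz
print(spectral_bit_rate(N=2, M=4, Rc=4/7))   # ~1.14 bits/s/Hz
```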

Energy and Transmitted Power

Given an average constellation energy $\mathcal{E}_{\text{avg}}$, the energy per codeword is:
$$
\mathcal{E} = L \mathcal{E}_{\text{avg}} = \frac{n}{\log_2 M} \mathcal{E}_{\text{avg}}
$$
This accounts for $L$ symbols per codeword, each carrying $\mathcal{E}_{\text{avg}}$. The energy per codeword component is:
$$
\mathcal{E}_c = \frac{\mathcal{E}}{n} = \frac{\mathcal{E}_{\text{avg}}}{\log_2 M}
$$
This distributes the energy across the $n$ codeword components. The energy per information bit is:
$$
\mathcal{E}_b = \frac{\mathcal{E}}{k} = \frac{\mathcal{E}_{\text{avg}}}{R_c \log_2 M}
$$
Since $k < n$, $\mathcal{E}_b > \mathcal{E}_c$, reflecting redundancy's energy cost per bit. Combining these:
$$
\mathcal{E}_c = R_c \mathcal{E}_b
$$
This shows that $\mathcal{E}_c$ scales with $R_c$, linking code efficiency to energy allocation.
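A numeric verification of this energy bookkeeping, using a hypothetical (8, 4) code (chosen so that $L$ is an integer) and an assumed $\mathcal{E}_{\text{avg}} = 1$:

```python
# A numeric check of the energy relations, with a hypothetical (8, 4)
# code and an assumed average constellation energy.
from math import log2, isclose

n, k, M = 8, 4, 4
E_avg = 1.0                       # assumed average constellation energy (J)

L = n / log2(M)                   # symbols per codeword: 4
E = L * E_avg                     # energy per codeword
E_c = E / n                       # energy per codeword component
E_b = E / k                       # energy per information bit
Rc = k / n

print(E_c, E_b)                   # 0.5 1.0
assert isclose(E_c, Rc * E_b)     # E_c = Rc * E_b holds
```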

Transmitted Power

Transmitted power is:
$$
P = \frac{\mathcal{E}}{LT_s} = \frac{\mathcal{E}_{\text{avg}}}{T_s} = R \frac{\mathcal{E}_{\text{avg}}}{R_c \log_2 M} = R \mathcal{E}_b
$$
Note that since joules per second (J/s) equals watts, the units are consistent:
$$
[R \mathcal{E}_b] = \frac{\text{bits}}{\text{s}} \times \frac{\text{J}}{\text{bit}} = \frac{\text{J}}{\text{s}} = \text{W}
$$
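A quick identity check that $\mathcal{E}_{\text{avg}}/T_s$ and $R\mathcal{E}_b$ agree, again with assumed toy values:

```python
# A consistency check that P = E_avg / Ts equals R * E_b, with assumed
# toy values (not meant to be realistic).
from math import log2, isclose

n, k, M, Ts = 8, 4, 4, 1e-6
E_avg = 1.0                         # assumed average constellation energy (J)

Rc = k / n
R = Rc * log2(M) / Ts               # information rate, bits/s
E_b = E_avg / (Rc * log2(M))        # energy per information bit, J

assert isclose(E_avg / Ts, R * E_b) # both equal the transmitted power in W
print(R * E_b)                      # 1e6 W for these toy numbers
```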

Common Schemes

  • BPSK: $W = \frac{R}{R_c}$, $r = R_c$ (1 bit/symbol, $N = 1$)

  • BFSK: $W = \frac{R}{R_c}$, $r = R_c$ (1 bit/symbol, $N = 1$, frequency-based)

  • QPSK: $W = \frac{R}{2R_c}$, $r = 2R_c$ (2 bits/symbol, $N = 2$)

These reflect how modulation affects bandwidth (halved for QPSK due to its higher bits/symbol) and efficiency, with coding scaling the bandwidth by $\frac{1}{R_c}$ and the spectral bit rate by $R_c$.
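The snippet below tabulates these figures for an assumed bit rate and code rate; the per-scheme multipliers follow the bullets above.

```python
# A small table reproducing the common-scheme figures for an assumed
# bit rate and code rate; multipliers follow the bullets in the text.
schemes = {
    # name: (bits/symbol, W as a multiple of R/Rc, r as a multiple of Rc)
    "BPSK": (1, 1.0, 1.0),
    "BFSK": (1, 1.0, 1.0),
    "QPSK": (2, 0.5, 2.0),
}

R, Rc = 1e6, 4/7                    # assumed bit rate (bits/s) and code rate
for name, (bits, w_mult, r_mult) in schemes.items():
    W = w_mult * R / Rc             # required bandwidth, Hz
    r = r_mult * Rc                 # spectral bit rate, bits/s/Hz
    print(f"{name}: {bits} bit(s)/symbol, W = {W:.3g} Hz, r = {r:.3g}")
```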
References
  1. Proakis, J. (2007). Digital Communications (5th ed.). McGraw-Hill Professional.