Channel Coding¶
Reliable communication over a noisy channel is achievable when the transmission rate remains below the channel capacity, a fundamental limit defining the maximum rate for error-free data transfer.
This reliability is facilitated by channel coding, a process that assigns messages to specific blocks of channel inputs, selectively using only a subset of all possible blocks to enhance error resilience.
Specific mappings between messages and channel input sequences have not been explored in detail here, as the focus is on theoretical bounds rather than practical implementations.
Channel capacity and channel cutoff rate are evaluated using random coding, a method that avoids specifying an optimal mapping.
Instead, random coding averages the error probability across all possible mappings, showing that when the transmission rate is less than the channel capacity $C$, the ensemble average error probability approaches zero as the block length increases, due to the statistical likelihood of favorable mappings.
This implies the existence of at least one mapping where the error probability diminishes with longer blocks, providing a theoretical basis for practical code design.
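As a toy illustration of this averaging argument (a sketch, not a proof), the following Monte Carlo draws a fresh random codebook per trial over a binary symmetric channel and estimates the ensemble-average block error probability with minimum-distance decoding; the crossover probability, rate, and block lengths are illustrative assumptions.

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def random_code_error_rate(n, rate, p, trials=1000):
    """Estimate the ensemble-average block error probability of random
    codes of block length n at the given rate over a BSC with crossover
    probability p, using minimum-Hamming-distance decoding."""
    k = max(1, int(n * rate))
    errors = 0
    for _ in range(trials):
        # Draw a fresh random codebook of 2^k codewords per trial.
        codebook = [tuple(random.getrandbits(1) for _ in range(n))
                    for _ in range(2 ** k)]
        sent = random.randrange(2 ** k)
        # Pass the chosen codeword through the BSC.
        received = tuple(bit ^ (random.random() < p) for bit in codebook[sent])
        # Decode to the closest codeword in Hamming distance.
        decoded = min(range(2 ** k),
                      key=lambda i: sum(a != b
                                        for a, b in zip(codebook[i], received)))
        errors += decoded != sent
    return errors / trials

# At a rate well below capacity, longer blocks tend to give a lower
# average error probability across the ensemble.
estimates = {n: random_code_error_rate(n, rate=0.25, p=0.05) for n in (8, 16)}
print(estimates)
```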
Block Codes¶
Channel codes are classified into two primary types: block codes and convolutional codes, each with distinct encoding strategies.
In block codes, one of $M = 2^k$ messages, each a binary sequence of length $k$ called the information sequence, is mapped to a binary sequence of length $n$, known as the codeword, where $n > k$ to allow redundancy for error correction.
For example:
A (7,4) block code is a linear code where each codeword consists of $n = 7$ bits, encoding $k = 4$ information bits (the message), with $n - k = 3$ parity-check bits added for error detection and correction.
The code maps the $2^4 = 16$ possible messages to 7-bit codewords. The total number of possible 7-bit sequences (blocks) is $2^7 = 128$.
The code uses only a subset of 16 out of these 128 possible blocks, chosen to maximize error resilience by ensuring a minimum Hamming distance between codewords.
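To make this concrete, a short sketch can enumerate the 16 codewords and verify the minimum Hamming distance; the particular systematic generator matrix below is one common convention for the (7,4) Hamming code, assumed here for illustration.

```python
from itertools import combinations, product

# Generator matrix for the (7,4) Hamming code in systematic form [I_4 | P].
# This parity choice is one common convention, assumed for illustration.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
]

def encode(msg):
    """Map a 4-bit message to its 7-bit codeword (mod-2 arithmetic)."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

# Encode all 2^4 = 16 messages.
codewords = [encode(msg) for msg in product([0, 1], repeat=4)]

# Minimum Hamming distance over all distinct codeword pairs; for a
# linear code this equals the minimum nonzero codeword weight.
d_min = min(sum(a != b for a, b in zip(c1, c2))
            for c1, c2 in combinations(codewords, 2))

print(len(codewords), 2 ** 7, d_min)  # -> 16 128 3
```

The code occupies 16 of the 128 possible 7-bit blocks, and the minimum distance of 3 is what allows single-bit error correction.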
Transmission occurs by sending binary symbols over the channel, often using modulation like BPSK (Binary Phase Shift Keying) to represent bits as signal phases.
Block coding schemes are memoryless, meaning the encoding of each codeword is independent of prior transmissions.
After encoding and sending a codeword, the system processes a new set of $k$ information bits and produces a codeword based solely on the current input, unaffected by previous codewords; this simplifies implementation but limits error correction across blocks.
Code Rate¶
The code rate $R_c$ of a block or convolutional code is defined as

$$R_c = \frac{k}{n}$$

where $k$ is the number of information bits in the input message, and $n$ is the total number of bits in the output codeword (for block codes) or output bits per stage (for convolutional codes).
This ratio quantifies the efficiency of information transfer, representing the average number of information bits per transmitted bit in the coded sequence.
For error-correcting codes, where $n > k$, the code rate satisfies $R_c < 1$, reflecting the addition of redundancy to enhance error resilience.
When combined with a modulation scheme that maps $\log_2 M$ bits to a transmitted symbol, the effective information rate becomes $R_c \log_2 M$ bits per symbol.
For example, using the (7,4) Hamming code:
Code rate: $R_c = 4/7 \approx 0.571$, meaning each transmitted bit carries about 0.571 information bits.
If modulated with BPSK ($M = 2$), information bits per symbol = $R_c \log_2 2 = 4/7 \approx 0.571$.
If modulated with QPSK ($M = 4$), information bits per symbol = $R_c \log_2 4 = 8/7 \approx 1.143$.
Bits per transmission (if interpreted as one codeword) = 4 information bits.
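These numbers can be reproduced in a few lines:

```python
from math import log2

k, n = 4, 7          # (7,4) Hamming code
R_c = k / n          # code rate = 4/7

# Effective information bits per symbol for each modulation: R_c * log2(M)
for name, M in [("BPSK", 2), ("QPSK", 4)]:
    print(name, round(R_c * log2(M), 3))  # -> BPSK 0.571, QPSK 1.143
```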
For a codeword of length $n$ transmitted via an $N$-dimensional constellation of size $M$ (a power of 2), $n/\log_2 M$ symbols are sent per codeword, where $n/\log_2 M$ is an integer.
Discussion¶
Recall that $N$ is the number of dimensions of the signal space in which the constellation points are defined.
The size $M$ of the constellation is the number of distinct points (symbols) in this space, and since $M$ is a power of 2, each symbol can represent $\log_2 M$ bits.
Specifically, each symbol in the constellation is a point in an $N$-dimensional Euclidean space. For example:
$N = 1$: A one-dimensional constellation, such as Pulse Amplitude Modulation (PAM), where points lie on a line (e.g., 4-PAM with points at $\pm 1$, $\pm 3$).
$N = 2$: A two-dimensional constellation, such as QAM or PSK, where points are in a plane (e.g., QPSK with $M = 4$, points at $(\pm 1, \pm 1)$).
$N > 2$: Higher-dimensional constellations, often used in advanced schemes like lattice codes or MIMO systems, where points are defined in $N$-dimensional space to increase efficiency or robustness.
Size ($M$): The number of points in the constellation. Since $M$ is a power of 2, each symbol carries: $\log_2 M$ bits.
Codeword length ($n$): The number of bits in the coded sequence (e.g., the output of a block code like the (7,4) Hamming code, where $n = 7$).
Symbols per codeword ($n/\log_2 M$): The number of constellation symbols needed to transmit the $n$-bit codeword. Since each symbol carries $\log_2 M$ bits, the number of symbols is:

$$\frac{n}{\log_2 M}$$
The requirement that $n/\log_2 M$ is an integer ensures that the codeword's bits can be exactly mapped to an integer number of symbols. For example, if $M = 4$, then $\log_2 M = 2$, and $n$ must be a multiple of 2 for $n/\log_2 M$ to be an integer.
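A small helper (hypothetical, for illustration) makes the integer constraint explicit:

```python
from math import log2

def symbols_per_codeword(n, M):
    """Return n / log2(M) if it is an integer, else None (constraint violated)."""
    bits_per_symbol = log2(M)
    count = n / bits_per_symbol
    return int(count) if count.is_integer() else None

print(symbols_per_codeword(8, 4))   # 8 bits over QPSK -> 4 symbols
print(symbols_per_codeword(7, 4))   # 7 bits over QPSK -> None (3.5 symbols)
```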
Role of $N$¶
The parameter $N$ specifies the dimensionality of the signal space for each transmitted symbol.
However, $N$ does not directly affect the calculation of $n/\log_2 M$, which depends only on $n$ and $M$.
Instead, $N$ defines the structure of the constellation and how bits are mapped to physical signals.
Example with a (7,4) Hamming Code. Consider a (7,4) Hamming code, where $n = 7$ and $k = 4$, and suppose we use a constellation to transmit the codeword. Let the constellation be $N = 2$-dimensional with $M = 8$ (e.g., 8-PSK, where points are equally spaced on a circle in the complex plane).
Each symbol carries: $\log_2 8 = 3$ bits.
Number of symbols per codeword:

$$\frac{n}{\log_2 M} = \frac{7}{3} \approx 2.33$$

Since $7/3$ is not an integer, this constellation is not suitable for transmitting a single (7,4) codeword without additional processing (e.g., combining multiple codewords or padding bits). Thus, the transmission scheme needs padding or bundling of codewords.

Instead, let's try $M = 4$ (e.g., QPSK, $N = 2$):

$$\frac{n}{\log_2 M} = \frac{7}{2} = 3.5$$

Again, not an integer. To satisfy the integer constraint, we might transmit two codewords ($2n = 14$ bits):

$$\frac{2n}{\log_2 M} = \frac{14}{2} = 7 \text{ symbols}$$

Here, $N = 2$ indicates that each of the 7 QPSK symbols is a 2D point, but $N$ does not enter the calculation directly. With symbol duration $T$, the transmission time for $2n$ bits is $7T$, and the information rate becomes (Proakis 2007, Eq. (7.1-2)):

$$R = \frac{R_c \log_2 M}{T} = \frac{2k}{7T} = \frac{8}{7T} \text{ bits/s}$$

This equation ties $R$ to modulation complexity ($\log_2 M$) and symbol rate ($1/T$), showing how coding impacts overall data throughput.

Spectral Bit Rate¶
The encoded and modulated signals span a space of dimension $nN/\log_2 M$ per codeword. Per the dimensionality theorem, the minimum transmission bandwidth is:

$$W = \frac{N}{2T}$$

The spectral bit rate, or bandwidth efficiency, is:

$$r = \frac{R}{W} = \frac{2 R_c \log_2 M}{N} \text{ bits/s/Hz}$$

Here, $N$ is the signal space dimension per symbol, and $\log_2 M$ is the number of bits per symbol. Compared to an uncoded system with identical modulation, coding reduces the bit rate by the factor $R_c$ (due to redundancy) and increases the bandwidth per information bit by $1/R_c$, as more symbols are needed per information bit. This trade-off enhances reliability at the cost of spectral efficiency.

Energy and Transmitted Power¶
Given an average constellation energy $\mathcal{E}_{avg}$ per symbol, the energy per codeword is:

$$\mathcal{E}_{cw} = \frac{n}{\log_2 M}\,\mathcal{E}_{avg}$$

This accounts for the $n/\log_2 M$ symbols per codeword, each carrying $\mathcal{E}_{avg}$. The energy per codeword component is:

$$\mathcal{E}_c = \frac{\mathcal{E}_{cw}}{n} = \frac{\mathcal{E}_{avg}}{\log_2 M}$$

This distributes the codeword energy across its $n$ components. The energy per information bit is:

$$\mathcal{E}_b = \frac{\mathcal{E}_{cw}}{k} = \frac{n}{k \log_2 M}\,\mathcal{E}_{avg}$$

Since $k < n$, $\mathcal{E}_b > \mathcal{E}_c$, reflecting redundancy's energy cost per bit. Combining these:

$$\mathcal{E}_b = \frac{\mathcal{E}_{avg}}{R_c \log_2 M}$$

This shows that $\mathcal{E}_b$ scales with $1/R_c$, linking code efficiency to energy allocation.

Transmitted Power¶
Transmitted power is:

$$P = \frac{\mathcal{E}_{cw}}{T_{cw}} = R\,\mathcal{E}_b = \frac{\mathcal{E}_{avg}}{T}$$

where $T_{cw} = (n/\log_2 M)\,T$ is the codeword duration. Note that since joules per second (J/s) equals watts, the units are consistent:

$$\frac{\text{bits}}{\text{s}} \cdot \frac{\text{J}}{\text{bit}} = \frac{\text{J}}{\text{s}} = \text{W}$$

Common schemes¶
BPSK: $r = 2R_c$ (1 bit/symbol, $N = 1$)
BFSK: $r = R_c$ (1 bit/symbol, $N = 2$, frequency-based)
QPSK: $r = 2R_c$ (2 bits/symbol, $N = 2$)
These reflect how modulation affects bandwidth (halved in QPSK relative to BFSK due to more bits per symbol at the same dimensionality) and efficiency, with coding scaling the spectral bit rate by $R_c$ in every case.
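The rate, bandwidth, and energy relations above chain together, and can be sanity-checked numerically for the (7,4) code with QPSK; the symbol duration and average symbol energy below are illustrative assumptions.

```python
from math import isclose, log2

k, n = 4, 7                   # (7,4) Hamming code
M, N = 4, 2                   # QPSK constellation
T = 1e-6                      # assumed symbol duration (s), illustrative
E_avg = 2.0                   # assumed average symbol energy (J), illustrative
R_c = k / n

R = R_c * log2(M) / T         # information rate (bits/s)
W = N / (2 * T)               # minimum bandwidth (Hz), dimensionality theorem
r = R / W                     # spectral bit rate (bits/s/Hz)

E_cw = (n / log2(M)) * E_avg  # energy per codeword
E_b = E_cw / k                # energy per information bit
P = R * E_b                   # transmitted power (W)

# Each derived quantity matches its closed form.
assert isclose(r, 2 * R_c * log2(M) / N)
assert isclose(E_b, E_avg / (R_c * log2(M)))
assert isclose(P, E_avg / T)

# Spectral bit rates of the common schemes, r = 2 * R_c * log2(M) / N:
for name, M_s, N_s in [("BPSK", 2, 1), ("BFSK", 2, 2), ("QPSK", 4, 2)]:
    print(name, 2 * R_c * log2(M_s) / N_s)
```

With $R_c = 4/7$, BPSK and QPSK both give $r = 2R_c \approx 1.14$ bits/s/Hz while BFSK gives $r = R_c \approx 0.57$, matching the list above.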
- Proakis, J. (2007). Digital Communications (5th ed.). McGraw-Hill Professional.