Performance Analysis of Linear Block Codes¶
Weight and Distance for Linear Block Codes¶
The weight of a codeword $\mathbf{c}_m$ in a code $\mathcal{C}$, denoted $w(\mathbf{c}_m)$, is the count of its nonzero components.
Since the all-zero vector $\mathbf{0}$ is a codeword in every linear block code (due to linearity), each such code includes exactly one codeword with weight zero.
The Hamming distance between two codewords $\mathbf{c}_m$ and $\mathbf{c}_{m'}$, denoted $d(\mathbf{c}_m, \mathbf{c}_{m'})$, is the number of positions where they differ, quantifying their separation.
Notably, a codeword’s weight is its Hamming distance from $\mathbf{0}$, linking these concepts directly in the context of linear structures.
Relationship between Weight and Distance¶
In linear block codes, the distance between codewords $\mathbf{c}_m$ and $\mathbf{c}_{m'}$ equals the weight of their difference: $d(\mathbf{c}_m, \mathbf{c}_{m'}) = w(\mathbf{c}_m - \mathbf{c}_{m'})$.
Since $\mathbf{c}_m - \mathbf{c}_{m'}$ is itself a codeword (due to closure under addition), this establishes a one-to-one correspondence between weights and distances.
Consequently, the set of distances from any codeword $\mathbf{c}_m$ to all others matches the set of weights of all codewords, a property independent of the chosen $\mathbf{c}_m$.
This uniformity simplifies analysis, as the distance profile remains consistent across perspectives.
In binary codes, subtraction and addition are equivalent modulo 2, so $\mathbf{c}_m - \mathbf{c}_{m'} = \mathbf{c}_m \oplus \mathbf{c}_{m'}$, reinforcing this relationship.
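To make the equivalence concrete, here is a minimal Python sketch (function names are illustrative, not from the source) that computes weight and Hamming distance for binary tuples and checks $d(\mathbf{c}_m, \mathbf{c}_{m'}) = w(\mathbf{c}_m \oplus \mathbf{c}_{m'})$:

```python
def weight(c):
    """Number of nonzero components of a binary vector."""
    return sum(c)

def hamming_distance(c1, c2):
    """Number of positions in which two equal-length vectors differ."""
    return sum(a != b for a, b in zip(c1, c2))

c1 = (1, 0, 1, 1, 0, 0, 1)
c2 = (0, 0, 1, 0, 1, 0, 1)
c_diff = tuple(a ^ b for a, b in zip(c1, c2))  # modulo-2 difference

assert hamming_distance(c1, c2) == weight(c_diff)
print(hamming_distance(c1, c2))  # 3
```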
Minimum Distance and Minimum Weight¶
The minimum distance of a code, $d_{\min}$, is the smallest Hamming distance between any two distinct codewords Proakis (2007, Eq. (7.2-11)):

$$
d_{\min} = \min_{m \neq m'} d(\mathbf{c}_m, \mathbf{c}_{m'})
$$

It determines the code’s error-correcting capability.
The minimum weight, $w_{\min}$, is the smallest weight among nonzero codewords Proakis (2007, Eq. (7.2-11)):

$$
w_{\min} = \min_{\mathbf{c}_m \neq \mathbf{0}} w(\mathbf{c}_m)
$$

For linear block codes, $d_{\min} = w_{\min}$, as the distance between any two codewords corresponds to the weight of a nonzero codeword, a key property simplifying code evaluation.
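The identity $d_{\min} = w_{\min}$ can be verified by brute force for any small code; the sketch below assumes a hypothetical (5, 2) generator matrix chosen only for illustration:

```python
from itertools import combinations, product

# Hypothetical (5, 2) generator matrix, chosen only for illustration.
G = [(1, 0, 1, 1, 0),
     (0, 1, 0, 1, 1)]

def encode(u, G):
    """Modulo-2 combination of the rows of G selected by the info bits u."""
    return tuple(sum(u[i] * G[i][j] for i in range(len(G))) % 2
                 for j in range(len(G[0])))

codewords = [encode(u, G) for u in product((0, 1), repeat=len(G))]

w_min = min(sum(c) for c in codewords if any(c))
d_min = min(sum(a != b for a, b in zip(x, y))
            for x, y in combinations(codewords, 2))

assert d_min == w_min
print(d_min)  # 3 for this G
```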
Relationship between Minimum Weight and Parity Check Matrix $\mathbf{H}$¶
The minimum weight (or equivalently ) of a linear block code is intricately tied to the parity check matrix .
A vector $\mathbf{c}$ is a codeword if $\mathbf{c}\mathbf{H}^{\mathsf{T}} = \mathbf{0}$.
For a codeword of minimum weight $w_{\min}$, this implies that $w_{\min}$ columns of $\mathbf{H}$ are linearly dependent, as their modulo-2 combination yields the zero vector.
Conversely, no fewer than $d_{\min}$ columns can be linearly dependent, since no nonzero codeword has a weight less than $d_{\min}$.
Thus, $d_{\min}$ is the smallest number of linearly dependent columns in $\mathbf{H}$, and the dimension of $\mathbf{H}$’s column space is $n - k$, offering a structural insight into the code’s strength.
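A direct, if exponential-time, way to see this numerically is to search for the smallest set of columns of $\mathbf{H}$ that sums to zero modulo 2. The sketch below uses one common parity-check matrix of the (7, 4) Hamming code; this is an assumed form, not necessarily the matrix from the prior example:

```python
from itertools import combinations

# One parity-check matrix of the (7, 4) Hamming code (assumed form).
H = [(1, 0, 1, 1, 1, 0, 0),
     (1, 1, 1, 0, 0, 1, 0),
     (0, 1, 1, 1, 0, 0, 1)]

cols = [tuple(row[j] for row in H) for j in range(len(H[0]))]

def sums_to_zero(subset):
    """True if the chosen columns add to the zero vector modulo 2."""
    return all(sum(col[i] for col in subset) % 2 == 0 for i in range(len(H)))

# d_min equals the size of the smallest set of columns summing to zero.
d_min = next(s for s in range(1, len(cols) + 1)
             if any(sums_to_zero(sub) for sub in combinations(cols, s)))
print(d_min)  # 3
```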
Relationship Between Hamming Distance and Euclidean Distance of the Codewords¶
In binary antipodal signaling, such as BPSK modulation, the binary components (0 and 1) of a codeword are mapped to $-\sqrt{\mathcal{E}_c}$ and $+\sqrt{\mathcal{E}_c}$, respectively, where $\mathcal{E}_c$ is the energy per codeword component.
For a modulated sequence $\mathbf{s}_m$ corresponding to codeword $\mathbf{c}_m$, the components are:

$$
s_{mj} = (2c_{mj} - 1)\sqrt{\mathcal{E}_c}, \qquad j = 1, 2, \ldots, n
$$

This mapping transforms binary differences into signal amplitudes.
The squared Euclidean distance between two modulated sequences $\mathbf{s}_m$ and $\mathbf{s}_{m'}$ relates to the Hamming distance as Proakis (2007, Eq. (7.2-14)):

$$
d_E^2(\mathbf{s}_m, \mathbf{s}_{m'}) = 4\mathcal{E}_c\, d_H(\mathbf{c}_m, \mathbf{c}_{m'})
$$

Each differing bit contributes a distance of $2\sqrt{\mathcal{E}_c}$, squared to $4\mathcal{E}_c$.
Consequently, the minimum Euclidean distance of the modulated sequences is Proakis (2007, Eq. (7.2-15)):

$$
d_{E,\min}^2 = 4\mathcal{E}_c\, d_{\min} = 4 R_c \mathcal{E}_b\, d_{\min}
$$

where $d_{\min}$ is the minimum Hamming distance, $R_c = k/n$ is the code rate, and $\mathcal{E}_b$ is the energy per bit, linking physical signal separation to code structure.
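A quick numerical check of $d_E^2 = 4\mathcal{E}_c d_H$, with an assumed $\mathcal{E}_c = 1$:

```python
import math

Ec = 1.0  # energy per codeword component (assumed value)

def bpsk(c):
    """Map bits {0, 1} to amplitudes {-sqrt(Ec), +sqrt(Ec)}."""
    return [(2 * b - 1) * math.sqrt(Ec) for b in c]

c1 = (1, 0, 1, 1, 0, 0, 1)
c2 = (0, 0, 1, 0, 1, 0, 1)

d_E_sq = sum((a - b) ** 2 for a, b in zip(bpsk(c1), bpsk(c2)))
d_H = sum(a != b for a, b in zip(c1, c2))

assert math.isclose(d_E_sq, 4 * Ec * d_H)
print(d_E_sq)  # 12.0 when d_H = 3
```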
The Weight Distribution Polynomial¶
An $(n, k)$ linear block code comprises $2^k$ codewords with weights ranging from 0 to $n$.
The presence of the all-zero codeword ensures one codeword of weight 0, while nonzero codewords have weights from $d_{\min}$ (the minimum distance) to $n$.
The weight distribution polynomial (WEP), or weight enumeration function (WEF), denoted $A(Z)$, quantifies this distribution Proakis (2007, Eq. (7.2-16)):

$$
A(Z) = \sum_{i=0}^{n} A_i Z^i
$$

Here, $A_i$ is the number of codewords with weight $i$.
The polynomial encodes the code’s weight profile, starting with $A_0 = 1$ (for the all-zero codeword) and detailing the frequency of each nonzero weight, a critical tool for analyzing error-correcting performance.
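The coefficients $A_i$ can be tabulated by enumerating all $2^k$ codewords; a minimal sketch, reusing the hypothetical (5, 2) code from above:

```python
from collections import Counter
from itertools import product

def weight_distribution(G):
    """Return {i: A_i}, the number of codewords of each weight i."""
    k, n = len(G), len(G[0])
    A = Counter()
    for u in product((0, 1), repeat=k):
        c = tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
        A[sum(c)] += 1
    return A

G = [(1, 0, 1, 1, 0),
     (0, 1, 0, 1, 1)]
print(dict(weight_distribution(G)))  # {0: 1, 3: 2, 4: 1}, i.e. A(Z) = 1 + 2Z^3 + Z^4
```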
Input-Output Weight Enumeration Function¶
The input-output weight enumeration function (IOWEF), denoted $B(Y, Z)$, extends the WEP by incorporating both codeword weights and the weights of their corresponding information sequences Proakis (2007, Eq. (7.2-25)):

$$
B(Y, Z) = \sum_{i=0}^{k} \sum_{j=0}^{n} B_{ij} Y^i Z^j
$$

where $B_{ij}$ represents the number of codewords of weight $j$ generated by information sequences of weight $i$.
This bivariate polynomial relates input and output weights, with:

$$
A_j = \sum_{i=0}^{k} B_{ij}
$$

summing contributions across all input weights to yield the WEP’s coefficients.
For linear block codes, $B_{00} = 1$ (the zero input maps to the zero codeword), and:

$$
A(Z) = B(Y, Z)\big|_{Y=1}
$$

collapsing $B(Y, Z)$ to $A(Z)$ by summing over all input weights, providing a comprehensive view of code behavior.
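Extending the same enumeration to track input weights gives the $B_{ij}$ directly, and summing over $i$ recovers the $A_j$; again a sketch over the hypothetical (5, 2) code:

```python
from collections import Counter
from itertools import product

def iowef(G):
    """Return {(i, j): B_ij} for input weight i and codeword weight j."""
    k, n = len(G), len(G[0])
    B = Counter()
    for u in product((0, 1), repeat=k):
        c = tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
        B[(sum(u), sum(c))] += 1
    return B

G = [(1, 0, 1, 1, 0),
     (0, 1, 0, 1, 1)]
B = iowef(G)

A = Counter()          # A_j = sum over i of B_ij
for (i, j), count in B.items():
    A[j] += count
print(dict(B))  # {(0, 0): 1, (1, 3): 2, (2, 4): 1}
print(dict(A))  # {0: 1, 3: 2, 4: 1} -- matches the WEP
```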
Conditional Weight Enumeration Function¶
The conditional weight enumeration function (CWEF), denoted $B_i(Z)$, focuses on codewords tied to information sequences of a specific weight $i$ Proakis (2007, Eq. (7.2-28)):

$$
B_i(Z) = \sum_{j=0}^{n} B_{ij} Z^j
$$

It enumerates the weight distribution of all codewords generated by inputs of weight $i$, offering a detailed breakdown by input weight.
The relationship to the IOWEF is derived as:

$$
B_i(Z) = \frac{1}{i!} \left. \frac{\partial^i B(Y, Z)}{\partial Y^i} \right|_{Y=0}
$$

This partial derivative extracts the coefficient of the $Y^i$ term from $B(Y, Z)$, isolating the contribution of inputs with weight $i$.
Together, these functions provide a layered analysis of the code’s structure, connecting input patterns to output characteristics.
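The coefficient-extraction identity can be checked symbolically; the sketch below assumes sympy is available and uses the (7, 4) code's IOWEF worked out in the example that follows:

```python
import sympy as sp

Y, Z = sp.symbols("Y Z")

# IOWEF of the (7, 4) code from the example below.
B = (1 + Y * (3 * Z**3 + Z**4) + Y**2 * (3 * Z**3 + 3 * Z**4)
     + Y**3 * (Z**3 + 3 * Z**4) + Y**4 * Z**7)

for i in range(5):
    # B_i(Z) = (1/i!) * d^i B / dY^i, evaluated at Y = 0
    B_i = sp.diff(B, Y, i).subs(Y, 0) / sp.factorial(i)
    print(i, sp.expand(B_i))
```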
EXAMPLE: (7, 4) Linear Block Code Weight Distribution¶
For the (7, 4) linear block code from a prior example, there are $2^4 = 16$ codewords, with weights ranging from 0 to 7.
By generating all codewords from the 16 information sequences $\mathbf{u}$, the minimum distance is verified as $d_{\min} = 3$.
The weight distribution reveals one codeword of weight 0 (the all-zero codeword), one of weight 7, seven of weight 3, and seven of weight 4.
Thus, the weight distribution polynomial is:

$$
A(Z) = 1 + 7Z^3 + 7Z^4 + Z^7
$$
This polynomial encapsulates the frequency of each weight, providing a concise summary of the code’s structure and its error-correcting potential based on weight diversity.
Input-Output Weight Enumeration Function¶
For the same code, the input-output weight enumeration function (IOWEF) details both input and output weights.
Coefficients are: $B_{00} = 1$, $B_{13} = 3$, $B_{14} = 1$, $B_{23} = 3$, $B_{24} = 3$, $B_{33} = 1$, $B_{34} = 3$, $B_{47} = 1$, yielding:

$$
B(Y, Z) = 1 + Y\left(3Z^3 + Z^4\right) + Y^2\left(3Z^3 + 3Z^4\right) + Y^3\left(Z^3 + 3Z^4\right) + Y^4 Z^7
$$

The conditional weight enumeration functions (CWEF) are:

$$
B_0(Z) = 1,\quad
B_1(Z) = 3Z^3 + Z^4,\quad
B_2(Z) = 3Z^3 + 3Z^4,\quad
B_3(Z) = Z^3 + 3Z^4,\quad
B_4(Z) = Z^7
$$
These functions map input weights (0 to 4) to output weight distributions, offering a detailed view of how information sequence weights influence codeword weights.
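These enumerators can be reproduced by exhaustive encoding. The sketch below assumes one common systematic generator matrix for the (7, 4) Hamming code; the prior example's matrix is not reproduced here, but this assumed form yields the same counts:

```python
from collections import Counter
from itertools import product

# A systematic (7, 4) Hamming generator matrix (assumed form).
G = [(1, 0, 0, 0, 1, 1, 0),
     (0, 1, 0, 0, 0, 1, 1),
     (0, 0, 1, 0, 1, 1, 1),
     (0, 0, 0, 1, 1, 0, 1)]

B = Counter()
for u in product((0, 1), repeat=4):
    c = tuple(sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7))
    B[(sum(u), sum(c))] += 1

print(sorted(B.items()))
# [((0, 0), 1), ((1, 3), 3), ((1, 4), 1), ((2, 3), 3), ((2, 4), 3),
#  ((3, 3), 1), ((3, 4), 3), ((4, 7), 1)]
```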
Error Probability of Linear Block Codes¶
Linear block codes involve two key error probabilities.
The block error probability $P_e$ (or word error probability) is the likelihood of transmitting a codeword $\mathbf{c}_m$ and decoding it as a different codeword $\mathbf{c}_{m'} \neq \mathbf{c}_m$, reflecting whole-codeword errors.
The bit error probability $P_b$ is the probability that an individual information bit is received incorrectly, focusing on per-bit accuracy.
These metrics assess different aspects of code performance, with block errors impacting entire messages and bit errors affecting data integrity at a finer level.
Block Error Probability¶
Due to the linearity of the code, the set of distances from any codeword $\mathbf{c}_m$ to all others is the same for every choice of $\mathbf{c}_m$.
Thus, analysis can assume the all-zero codeword $\mathbf{0}$ is transmitted without loss of generality.
A block error occurs when the receiver decodes any nonzero codeword $\mathbf{c}_m \neq \mathbf{0}$.
This is bounded by the sum of pairwise error probabilities $P_2(\mathbf{0} \to \mathbf{c}_m)$, the probability of mistaking $\mathbf{0}$ for $\mathbf{c}_m$:

$$
P_e \le \sum_{\mathbf{c}_m \neq \mathbf{0}} P_2(\mathbf{0} \to \mathbf{c}_m)
$$
This union bound accounts for all possible error events, providing an upper limit on the likelihood of decoding errors.
Upper Bound of $P_e$¶
The pairwise error probability depends on the Hamming distance from $\mathbf{0}$ to $\mathbf{c}_m$, which is $w(\mathbf{c}_m)$, and varies with modulation.
For codewords of equal weight, $P_2$ is identical, leading to Proakis (2007, Eq. (7.2-34)):

$$
P_e \le \sum_{d=d_{\min}}^{n} A_d\, P_2(d)
$$

where $P_2(d)$ is the pairwise error probability for distance $d$, and $A_d$ is the number of codewords of weight $d$.
Using the Bhattacharyya bound for a memoryless channel:

$$
P_2(d) \le \left[\sum_{y} \sqrt{p(y \mid 0)\, p(y \mid 1)}\right]^d
$$

Defining $\Delta = \sum_{y} \sqrt{p(y \mid 0)\, p(y \mid 1)}$, this simplifies to:

$$
P_2(d) \le \Delta^d
$$

Thus:

$$
P_e \le \sum_{d=d_{\min}}^{n} A_d\, \Delta^d
$$

Since $\Delta^d \le \Delta^{d_{\min}}$ for $d \ge d_{\min}$ (from $\Delta \le 1$), and $\sum_{d=d_{\min}}^{n} A_d = 2^k - 1$, a looser bound is Proakis (2007, Eq. (7.2-43)):

$$
P_e \le \left(2^k - 1\right) \Delta^{d_{\min}}
$$
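As a numerical illustration, consider a binary symmetric channel with crossover probability $p$, for which the Bhattacharyya parameter is $\Delta = 2\sqrt{p(1-p)}$; the sketch below evaluates both bounds for the (7, 4) Hamming code at an assumed operating point $p = 0.01$:

```python
import math

# Weight distribution of the (7, 4) Hamming code: A_3 = A_4 = 7, A_7 = 1.
A = {3: 7, 4: 7, 7: 1}
k, d_min = 4, 3

p = 0.01  # assumed BSC crossover probability
Delta = 2 * math.sqrt(p * (1 - p))  # Bhattacharyya parameter of the BSC

union_bound = sum(A_d * Delta**d for d, A_d in A.items())
loose_bound = (2**k - 1) * Delta**d_min

print(f"union bound:  {union_bound:.3e}")
print(f"looser bound: {loose_bound:.3e}")
```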
Bit Error Probability¶
Bit error probabilities vary across the $k$ positions in an information sequence.
The average bit error probability $\bar{P}_b$ is defined as the mean error rate over these positions.
Assuming $\mathbf{0}$ is transmitted, the probability of decoding a particular codeword of weight $d$ is $P_2(d)$, with $B_{id}$ codewords of weight $d$ arising from input weight $i$.
The expected number of bit errors is Proakis (2007, Eq. (7.2-44)):

$$
\bar{n}_b \le \sum_{i=1}^{k} i \sum_{d=d_{\min}}^{n} B_{id}\, P_2(d)
$$

This counts erroneous bits weighted by input weight $i$.
Since $P_2(d) \le \Delta^d$ for every $d$, the bound extends over all input and output weights:

$$
\bar{n}_b \le \sum_{i=1}^{k} i \sum_{d=d_{\min}}^{n} B_{id}\, \Delta^d = \left.\frac{\partial B(Y, Z)}{\partial Y}\right|_{Y=1,\, Z=\Delta}
$$

The average bit error probability is Proakis (2007, Eq. (7.2-46)):

$$
\bar{P}_b \le \frac{1}{k} \left.\frac{\partial B(Y, Z)}{\partial Y}\right|_{Y=1,\, Z=\Delta}
$$

This normalizes the expected errors by $k$, using $\Delta$ to bound the error rate, reflecting the code’s per-bit reliability.
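Continuing the BSC illustration, the bit error bound follows by weighting the IOWEF coefficients with their input weights; a minimal sketch, assuming the same $p = 0.01$ operating point:

```python
import math

# IOWEF coefficients {(i, d): B_id} of the (7, 4) Hamming code.
B = {(1, 3): 3, (1, 4): 1, (2, 3): 3, (2, 4): 3,
     (3, 3): 1, (3, 4): 3, (4, 7): 1}
k = 4

p = 0.01  # assumed BSC crossover probability
Delta = 2 * math.sqrt(p * (1 - p))

# P_b <= (1/k) * sum_{i,d} i * B_id * Delta^d
#      = (1/k) * dB(Y, Z)/dY at Y = 1, Z = Delta
P_b_bound = sum(i * B_id * Delta**d for (i, d), B_id in B.items()) / k
print(f"bit error bound: {P_b_bound:.3e}")
```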
- Proakis, J. (2007). Digital Communications (5th ed.). McGraw-Hill Professional.