Hard Decision Decoding¶
Soft decision bounds assume unquantized matched filter outputs, optimizing performance but requiring $2^k$ correlation metrics, which becomes computationally intensive for large $k$.
Hard decision decoding (HDD) quantizes these outputs digitally, reducing complexity.
By making a binary decision (0 or 1) for each bit, HDD trades some performance for practicality; this is especially beneficial when the number of codewords is large, simplifying receiver design at the cost of precision.
HDD with Binary Decision¶
In HDD, each bit’s matched filter output is quantized to two levels (0 or 1), so the modulator, AWGN channel, and demodulator together form a binary symmetric channel (BSC) with crossover probability Proakis (2007, Eq. (7.5-1)):

$$p = Q\!\left(\sqrt{2 R_c \gamma_b}\right)$$

Here, $Q(\cdot)$ is the Gaussian tail function, $R_c = k/n$ is the code rate, and $\gamma_b = \mathcal{E}_b/N_0$ is the SNR per bit; the expression ties the error probability to SNR and code rate, modeling the AWGN channel’s effect post-quantization.
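As a quick numerical check, a minimal Python sketch (standard library only) evaluates this crossover probability; the code rate and SNR values are illustrative choices, not taken from the text.

```python
import math

def q_func(x: float) -> float:
    """Gaussian tail function Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bsc_crossover(rate: float, ebn0_db: float) -> float:
    """Crossover probability p = Q(sqrt(2 * Rc * Eb/N0)) after hard decisions."""
    ebn0 = 10 ** (ebn0_db / 10)          # SNR per bit, converted from dB
    return q_func(math.sqrt(2 * rate * ebn0))

# Example: a rate-2/5 code (like the (5, 2) code used later) at 6 dB per bit
print(bsc_crossover(2 / 5, 6.0))
```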
Minimum-Distance (Maximum-Likelihood) Decoding¶
In HDD, the decoder receives $n$ bits and compares them to all $2^k$ possible codewords, selecting the one with the smallest Hamming distance to the received sequence.
This minimum-distance decoding rule is optimal for a BSC, minimizing block error probability by choosing the most likely transmitted codeword based on bit differences, aligning with maximum-likelihood principles under binary quantization.
Minimum-Distance Decoding Rule Implementation¶
A straightforward but inefficient method computes the error vectors $\mathbf{e}_m = \mathbf{y} \oplus \mathbf{c}_m$ (modulo-2) for all $2^k$ codewords $\mathbf{c}_m$, where $\mathbf{y}$ is the received sequence.
Each $\mathbf{e}_m$ represents the error pattern that would transform $\mathbf{c}_m$ into $\mathbf{y}$, with its weight (number of 1s) equaling the Hamming distance between them.
Selecting the $\mathbf{c}_m$ whose error pattern has the smallest weight implements minimum-distance decoding, though this exhaustive approach scales poorly with $k$.
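A minimal sketch of this exhaustive rule in Python/NumPy follows; the function names and interface are illustrative, and the codebook is enumerated by brute force from a caller-supplied generator matrix.

```python
import itertools
import numpy as np

def codebook(G: np.ndarray) -> np.ndarray:
    """All 2^k codewords of the linear code generated by the k x n matrix G (mod 2)."""
    k = G.shape[0]
    messages = np.array(list(itertools.product([0, 1], repeat=k)))
    return messages @ G % 2

def min_distance_decode(y: np.ndarray, G: np.ndarray) -> np.ndarray:
    """Exhaustive minimum-distance (ML on a BSC) decoding of the received vector y."""
    C = codebook(G)
    dists = np.sum((C + y) % 2, axis=1)   # Hamming distance = weight of y XOR c
    return C[np.argmin(dists)]
```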
Syndrome¶
A more efficient HDD method uses the $(n-k) \times n$ parity check matrix $\mathbf{H}$.
If $\mathbf{c}_m$ is transmitted and $\mathbf{y} = \mathbf{c}_m \oplus \mathbf{e}$ is received (where $\mathbf{e}$ is the error vector), the syndrome is:

$$\mathbf{s} = \mathbf{y}\mathbf{H}^{\mathsf{T}} = \mathbf{c}_m\mathbf{H}^{\mathsf{T}} \oplus \mathbf{e}\mathbf{H}^{\mathsf{T}} = \mathbf{e}\mathbf{H}^{\mathsf{T}}$$

Since $\mathbf{c}_m\mathbf{H}^{\mathsf{T}} = \mathbf{0}$ for every codeword, $\mathbf{s}$ depends only on $\mathbf{e}$.
This $(n-k)$-dimensional vector identifies which parity checks fail, enabling error pattern detection without exhaustively testing all $2^k$ codewords, significantly enhancing decoding efficiency.
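In NumPy the syndrome computation is a one-liner, sketched here under the assumption that `y` and `H` are 0/1 integer arrays:

```python
import numpy as np

def syndrome(y: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Syndrome s = y H^T over GF(2); a nonzero result flags a detected error."""
    return y @ H.T % 2
```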
Detectable and Correctable Capability of HDD¶
The syndrome reflects the error pattern rather than the transmitted codeword, since $\mathbf{s} = \mathbf{e}\mathbf{H}^{\mathsf{T}}$.
A zero syndrome ($\mathbf{s} = \mathbf{0}$) indicates that $\mathbf{e}$ is itself a codeword, leading to an undetected error whenever $\mathbf{e} \neq \mathbf{0}$.
Of the $2^n - 1$ nonzero error patterns, $2^k - 1$ match nonzero codewords and are undetectable.
The remaining $2^n - 2^k$ are detectable, as their syndromes are nonzero, but not all are correctable.
With only $2^{n-k}$ distinct syndromes, multiple error patterns map to the same syndrome, limiting correction to one pattern per coset.
Maximum-likelihood (ML) decoding seeks the least-weight error pattern per syndrome, optimizing error correction within this constraint.
Standard Array¶
A standard array is a decoding table for an $(n, k)$ code, listing all $2^k$ codewords in the first row, starting with the all-zero codeword $\mathbf{c}_1 = \mathbf{0}$.
The second row begins with $\mathbf{e}_2$, a minimum-weight sequence that is not a codeword, followed by $\mathbf{c}_m \oplus \mathbf{e}_2$ under each codeword $\mathbf{c}_m$.
Subsequent rows follow similarly: select a minimum-weight sequence $\mathbf{e}_i$ not yet listed, place it in the first column, and complete the row with $\mathbf{c}_m \oplus \mathbf{e}_i$.
This process continues until all $2^n$ sequences of length $n$ are included, systematically organizing received sequences by error pattern for decoding purposes.
Standard Array Structure¶
The resulting table is a $2^{n-k} \times 2^k$ array:

$$\begin{array}{cccc}
\mathbf{c}_1 = \mathbf{0} & \mathbf{c}_2 & \cdots & \mathbf{c}_{2^k} \\
\mathbf{e}_2 & \mathbf{c}_2 \oplus \mathbf{e}_2 & \cdots & \mathbf{c}_{2^k} \oplus \mathbf{e}_2 \\
\vdots & \vdots & & \vdots \\
\mathbf{e}_{2^{n-k}} & \mathbf{c}_2 \oplus \mathbf{e}_{2^{n-k}} & \cdots & \mathbf{c}_{2^k} \oplus \mathbf{e}_{2^{n-k}}
\end{array}$$
Each row is a coset: its members share the same error pattern, and the sequence in the first column is called the coset leader.
By construction, coset leaders have the lowest weight in their coset, aligning with ML decoding’s preference for minimal error correction.
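The construction can be sketched in a few lines of Python; the helper `codebook` is the brute-force enumeration from the earlier sketch, and all names are illustrative.

```python
import itertools
import numpy as np

def standard_array(G: np.ndarray) -> list:
    """Standard array of the code generated by G: one list of n-tuples per coset."""
    n = G.shape[1]
    C = [tuple(int(b) for b in c) for c in codebook(G)]   # first row: the codewords
    all_seqs = sorted(itertools.product([0, 1], repeat=n), key=sum)
    listed, rows = set(C), [C]
    for e in all_seqs:                                     # candidates by increasing weight
        if e in listed:
            continue                                       # already placed in an earlier coset
        row = [tuple((ci + ei) % 2 for ci, ei in zip(c, e)) for c in C]
        rows.append(row)
        listed.update(row)
    return rows
```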
Example: Standard Array for (5, 2) Code¶
For a (5, 2) systematic code with generator matrix

$$\mathbf{G} = \begin{bmatrix} 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 1 \end{bmatrix}$$

the standard array (with coset leaders chosen to match the syndrome table below) is:

$$\begin{array}{cccc}
00000 & 01011 & 10101 & 11110 \\
00001 & 01010 & 10100 & 11111 \\
00010 & 01001 & 10111 & 11100 \\
00100 & 01111 & 10001 & 11010 \\
01000 & 00011 & 11101 & 10110 \\
10000 & 11011 & 00101 & 01110 \\
11000 & 10011 & 01101 & 00110 \\
10010 & 11001 & 00111 & 01100
\end{array}$$

With $2^{n-k} = 8$ cosets, the coset leaders comprise one weight-0 pattern (00000), five weight-1 patterns (e.g., 00001), and two weight-2 patterns (11000 and 10010), filling the eight rows; the remaining double-error patterns cannot serve as coset leaders because only eight syndromes are available.
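Using the `standard_array` sketch above with this generator matrix reproduces these cosets; note that the particular weight-2 coset leaders it picks may differ from 11000 and 10010, since ties among equal-weight candidates can be broken arbitrarily.

```python
import numpy as np

G = np.array([[1, 0, 1, 0, 1],
              [0, 1, 0, 1, 1]])

for row in standard_array(G):                  # one printed line per coset
    print("  ".join("".join(str(b) for b in seq) for seq in row))
```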
Error Detection¶
For a transmitted codeword $\mathbf{c}_m$ and error pattern $\mathbf{e}_i$ (a coset leader), the received sequence is $\mathbf{y} = \mathbf{c}_m \oplus \mathbf{e}_i$.
The syndrome is:

$$\mathbf{s} = \mathbf{y}\mathbf{H}^{\mathsf{T}} = (\mathbf{c}_m \oplus \mathbf{e}_i)\mathbf{H}^{\mathsf{T}} = \mathbf{e}_i\mathbf{H}^{\mathsf{T}}$$

All sequences in a coset share the same syndrome, as it depends solely on the coset leader $\mathbf{e}_i$.
With $2^{n-k}$ cosets, each has a unique syndrome, establishing a one-to-one mapping between cosets (or their leaders) and syndromes, enabling precise error pattern identification within the coset framework.
Decoding with Syndromes¶
Decoding involves finding the least-weight error pattern $\mathbf{e}$ such that $\mathbf{e}\mathbf{H}^{\mathsf{T}} = \mathbf{s}$.
Each syndrome uniquely identifies a coset, and the coset leader $\mathbf{e}_i$, its lowest-weight member, is the ML error estimate.
Adding $\mathbf{e}_i$ to $\mathbf{y}$ yields the decoded codeword $\hat{\mathbf{c}} = \mathbf{y} \oplus \mathbf{e}_i$. Only the coset leaders (up to $2^{n-k} - 1$ nonzero patterns) are correctable.
Of the $2^n - 1$ nonzero error patterns, $2^k - 1$ are undetectable (they are codewords), $2^n - 2^k$ are detectable, and at most $2^{n-k} - 1$ are correctable, delineating the code’s error-handling limits.
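For the (5, 2) running example ($n = 5$, $k = 2$), these counts evaluate to:

$$2^n - 1 = 31 \text{ nonzero patterns}, \quad 2^k - 1 = 3 \text{ undetectable}, \quad 2^n - 2^k = 28 \text{ detectable}, \quad 2^{n-k} - 1 = 7 \text{ correctable}.$$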
Example: Decoding Error in (5, 2) Code¶
For the (5, 2) code with the standard array previously defined, consider a received sequence corrupted by a double-error pattern that is not a coset leader.
The syndrome, computed as $\mathbf{s} = \mathbf{y}\mathbf{H}^{\mathsf{T}}$, matches that of a weight-1 coset leader in the syndrome table.
Adding that leader to the received sequence yields an incorrect codeword, as the actual error (weight 2) differs from the decoded error (weight 1), illustrating a decoding failure due to syndrome ambiguity.
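The following sketch makes this failure concrete; the double-error pattern 01010 is my own illustrative choice (the source’s specific error vector is not shown), and the parity check matrix is derived from the systematic generator matrix given above.

```python
import numpy as np

# Parity check matrix H = [P^T | I_3] for G = [I_2 | P], with P = [[1,0,1],[0,1,1]]
H = np.array([[1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [1, 1, 0, 0, 1]])

# Syndrome -> coset leader mapping (Table 1), keyed by the syndrome bits as a string
leaders = {"000": "00000", "001": "00001", "010": "00010", "100": "00100",
           "011": "01000", "101": "10000", "110": "11000", "111": "10010"}

c = np.array([1, 0, 1, 0, 1])          # transmitted codeword
e = np.array([0, 1, 0, 1, 0])          # assumed weight-2 error, not a coset leader
y = (c + e) % 2                        # received sequence

s = y @ H.T % 2                        # syndrome depends only on e
e_hat = np.array(list(leaders["".join(map(str, s))]), dtype=int)
c_hat = (y + e_hat) % 2                # a valid codeword, but not c: decoding failure

print("syndrome:", s, "decoded error:", e_hat, "decoded codeword:", c_hat)
```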
Syndromes and Coset Leaders¶
The syndrome-to-coset-leader mapping for this code is presented in Table 1.
Table 1: Syndrome to Error Pattern Mapping

| Syndrome | Error Pattern (Coset Leader) |
|----------|------------------------------|
| 000 | 00000 |
| 001 | 00001 |
| 010 | 00010 |
| 100 | 00100 |
| 011 | 01000 |
| 101 | 10000 |
| 110 | 11000 |
| 111 | 10010 |
This code corrects all five single-error patterns (weight 1) and two specific double-error patterns (11000 and 10010), limited by the $2^{n-k} = 8$ available syndromes.
MATLAB’s error detection and correction tools can simulate such scenarios, confirming the code’s capabilities.
Error Detection Capability¶
A zero syndrome ($\mathbf{s} = \mathbf{0}$) indicates the received sequence is a valid codeword, one of $2^k$ possibilities.
Since $d_{\min}$ is the minimum separation between codewords, an error pattern of weight $d_{\min}$ or more can transform one codeword into another, resulting in an undetected error.
For fewer than $d_{\min}$ errors, the syndrome is nonzero, signaling detection.
Thus, an $(n, k)$ code detects up to $d_{\min} - 1$ errors, enabling use with automatic repeat-request (ARQ) systems to request retransmission upon detection.
Error Correction Capability¶
The error correction capability hinges on $d_{\min}$, constrained by the $2^{n-k}$ available syndromes.
Viewing the $2^k$ codewords as points in an $n$-dimensional space, each is the center of a sphere of Hamming radius $t$.
The maximum $t$ that avoids sphere overlap is:

$$t = \left\lfloor \frac{d_{\min} - 1}{2} \right\rfloor$$

Sequences within radius $t$ of a codeword are corrected to it, ensuring unique decoding.
Thus, the code corrects up to $t$ errors, determined by $d_{\min}$, balancing the trade-off between detection and correction.
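For the (5, 2) code with the generator matrix given above, the nonzero codewords 01011, 10101, and 11110 have weights 3, 3, and 4, so $d_{\min} = 3$ and:

$$t = \left\lfloor \frac{3 - 1}{2} \right\rfloor = 1 \ \text{(guaranteed correctable)}, \qquad d_{\min} - 1 = 2 \ \text{(detectable)}.$$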
Trade-off Between Detection and Correction ($e_d$ and $e_c$)¶
Correcting $t$ errors implies detecting at least $t$ errors, but a code can detect more errors by giving up some correction capability.
For $d_{\min} = 7$, up to $t = 3$ errors are correctable.
Reducing the decoding sphere radius to 2 allows detection of 4 errors while correcting only 2; reducing it to 1 allows detecting 5 while correcting 1.
Generally, for a code with minimum distance $d_{\min}$, the number of detectable errors $e_d$ and correctable errors $e_c$ (with $e_c \le e_d$) satisfy:

$$e_c + e_d \le d_{\min} - 1$$
This flexibility allows tailoring the code for detection-heavy (e.g., ARQ) or correction-focused applications, optimizing performance based on channel needs.
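A tiny sketch that enumerates the admissible $(e_c, e_d)$ pairs for a given minimum distance (here 7, as in the example above):

```python
def detection_correction_pairs(d_min: int) -> list:
    """All (e_c, e_d) pairs with e_c <= e_d and e_c + e_d <= d_min - 1."""
    return [(ec, ed)
            for ec in range((d_min - 1) // 2 + 1)
            for ed in range(ec, d_min - ec)]

print(detection_correction_pairs(7))   # includes (3, 3), (2, 4), (1, 5), (0, 6)
```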
Block and Bit Error Probability for HDD¶
In hard decision decoding (HDD) of linear binary block codes, error probability bounds are derived based solely on the code’s error correction capability.
The optimum decoder for a binary symmetric channel (BSC) is guaranteed to decode correctly whenever the number of errors is less than half the minimum distance $d_{\min}$, though some heavier error patterns may also be corrected.
The guaranteed correctable error count is:

$$t = \left\lfloor \frac{d_{\min} - 1}{2} \right\rfloor$$

This represents the maximum number of errors the code can always correct, leveraging $d_{\min}$ to ensure disjoint codeword spheres in Hamming space.
Error Probability Bounds¶
Given the BSC’s memoryless nature, bit errors occur independently, and the probability of $m$ errors in a block of $n$ bits is Proakis (2007, Eq. (7.5-5)):

$$P(m, n) = \binom{n}{m} p^m (1-p)^{n-m}$$

where $p$ is the crossover probability.
The block error probability $P_e$ is upper-bounded by the probability of more than $t$ errors Proakis (2007, Eq. (7.5-6)):

$$P_e \le \sum_{m=t+1}^{n} \binom{n}{m} p^m (1-p)^{n-m}$$

For high SNR (small $p$), this is dominated by its first term:

$$P_e \approx \binom{n}{t+1} p^{t+1} (1-p)^{n-t-1}$$

This binomial approximation highlights the steep decline in error probability as $p$ decreases, reflecting the code’s robustness at high SNR.
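A short sketch evaluating this bound and its high-SNR approximation; the block length, correction radius, and crossover probability below are illustrative values.

```python
from math import comb

def block_error_bound(n: int, t: int, p: float) -> float:
    """Upper bound on block error probability: P(more than t errors in n bits)."""
    return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(t + 1, n + 1))

def block_error_approx(n: int, t: int, p: float) -> float:
    """Dominant (m = t + 1) term of the bound, accurate for small p."""
    return comb(n, t + 1) * p**(t + 1) * (1 - p)**(n - t - 1)

# Example: the (5, 2) code (t = 1) on a BSC with p = 0.01
print(block_error_bound(5, 1, 0.01), block_error_approx(5, 1, 0.01))
```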
Perfect Code and Quasi-Perfect Code¶
Equality in the block error bound holds for perfect codes, where every one of the $2^n$ possible sequences lies within Hamming distance $t$ of exactly one codeword, fully packing the Hamming space.
Hamming codes ($n = 2^m - 1$, $k = n - m$, $d_{\min} = 3$) exemplify this, correcting all single errors and nothing more.
Quasi-perfect codes have disjoint spheres of radius $t$ around the codewords, with every sequence at most distance $t + 1$ from some codeword.
Their error probability is Proakis (2007, Eq. (7.5-14)):

$$P_e = \sum_{m=t+2}^{n} \binom{n}{m} p^m (1-p)^{n-m} + \left[\binom{n}{t+1} - \beta_{t+1}\right] p^{t+1} (1-p)^{n-t-1}$$

where $\beta_{t+1} = 2^{n-k} - \sum_{i=0}^{t} \binom{n}{i}$ is the number of weight-$(t+1)$ patterns that serve as coset leaders.
Here, $\beta_{t+1}$ adjusts for sequences at distance $t+1$ that are not uniquely correctable, refining the expression for such codes.
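Treating the (5, 2) example as quasi-perfect (its radius-1 spheres cover 24 of the 32 sequences, and the remaining 8 all lie at distance 2 from some codeword, giving $\beta_2 = 2^3 - (1 + 5) = 2$), a sketch of this expression:

```python
from math import comb

def quasi_perfect_pe(n: int, k: int, t: int, p: float) -> float:
    """Block error probability of a quasi-perfect code (Proakis Eq. (7.5-14))."""
    beta = 2 ** (n - k) - sum(comb(n, i) for i in range(t + 1))
    tail = sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(t + 2, n + 1))
    return tail + (comb(n, t + 1) - beta) * p**(t + 1) * (1 - p)**(n - t - 1)

# The (5, 2) example code: n = 5, k = 2, t = 1
print(quasi_perfect_pe(5, 2, 1, 0.01))
```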
Alternative Upper and Lower Bounds of $P_e$¶
Considering two codewords separated by the minimum distance $d_{\min}$, a lower bound on $P_e$ is the probability of mistaking a codeword for its nearest neighbor Proakis (2007, Eq. (7.5-15)):

$$P_e \ge P_2(d_{\min}) = \sum_{m=(d_{\min}+1)/2}^{d_{\min}} \binom{d_{\min}}{m} p^m (1-p)^{d_{\min}-m} \qquad (d_{\min} \text{ odd})$$

This reflects the minimum error rate arising from confusion between the closest pair of codewords.
The upper bound multiplies this pairwise error probability by the number of possible incorrect codewords Proakis (2007, Eq. (7.5-16)):

$$P_e \le (2^k - 1)\, P_2(d_{\min})$$

For large $k$, these bounds widen significantly, as the upper bound scales with the codebook size $2^k$, making them loose but still illustrative of the range of $P_e$.
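A sketch computing the pairwise probability $P_2(d_{\min})$ and both bounds (odd $d_{\min}$ assumed; the parameter values are illustrative):

```python
from math import comb

def pairwise_error(d_min: int, p: float) -> float:
    """P2(d_min): probability of favoring a codeword at distance d_min (d_min odd)."""
    return sum(comb(d_min, m) * p**m * (1 - p)**(d_min - m)
               for m in range((d_min + 1) // 2, d_min + 1))

k, d_min, p = 2, 3, 0.01                      # illustrative (5, 2) code values
lower = pairwise_error(d_min, p)
upper = (2**k - 1) * lower
print(lower, upper)
```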
General Bounds on Block and Bit Error Probabilities with HDD¶
For hard decision decoding (HDD), the parameter $\Delta = \sqrt{4p(1-p)}$ (where $p$ is the BSC crossover probability), which upper-bounds each pairwise error probability as $P_2(d) \le \Delta^d$, enables general bounds.
The block error probability is bounded using the weight enumerating polynomial $A(Z) = \sum_{d} A_d Z^d$ Proakis (2007, Eq. (7.5-17)):

$$P_e \le \sum_{d=d_{\min}}^{n} A_d\, \Delta^{d} = A(Z)\Big|_{Z=\sqrt{4p(1-p)}} - 1$$

A simpler, looser bound uses only the minimum distance Proakis (2007, Eq. (7.5-18)):

$$P_e \le (2^k - 1)\,[4p(1-p)]^{d_{\min}/2}$$
The bit error probability is bounded via the input-output weight enumerating function $B(Y, Z)$ Proakis (2007, Eq. (7.5-19)):

$$P_b \le \frac{1}{k}\,\frac{\partial B(Y, Z)}{\partial Y}\bigg|_{Y=1,\; Z=\sqrt{4p(1-p)}}$$
These bounds quantify HDD performance, reflecting the impact of quantization on error rates.
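For the (5, 2) code above, the weight distribution is $A_0 = 1$, $A_3 = 2$, $A_4 = 1$ (derived from the generator matrix reconstructed earlier, so treat the numbers as illustrative); the weight-enumerator bound can then be evaluated directly:

```python
import math

def hdd_union_bound(weights: dict, p: float) -> float:
    """P_e <= sum over nonzero weights d of A_d * [4p(1-p)]^(d/2)."""
    delta = math.sqrt(4 * p * (1 - p))
    return sum(a_d * delta**d for d, a_d in weights.items() if d > 0)

weights = {0: 1, 3: 2, 4: 1}      # weight distribution of the (5, 2) example code
print(hdd_union_bound(weights, 0.01))
```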
- Proakis, J. (2007). Digital Communications (5th ed.). McGraw-Hill Professional.