## Soft Decision Decoding
For linear binary block codes on an AWGN channel, soft decision decoding minimizes the probability of a codeword error by operating directly on the unquantized (soft) receiver outputs.
With coherent PSK or orthogonal FSK (coherent or noncoherent), an optimal receiver employs a bank of $2^k$ matched filters, one matched to each codeword waveform, and selects the codeword with the largest output.
Alternatively, a single filter matched to the waveform of one code bit, followed by a bank of cross-correlators, computes the same decision variables, offering equivalent performance with a different implementation complexity.
### Soft Decision Signal Model
For binary coherent PSK, the $j$-th matched filter output for the $m$-th codeword is

$$
r_j = \sqrt{\mathcal{E}_c}\,(2c_{mj} - 1) + n_j, \qquad j = 1, 2, \ldots, n,
$$

where $n_j$ is AWGN with zero mean and variance $N_0/2$, modeling the noise added to the signal component of energy $\mathcal{E}_c$ per code bit.
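As a minimal sketch of this signal model (assuming NumPy; the names `matched_filter_outputs`, `Ec`, and `N0` are illustrative, not from the text), the matched-filter outputs for one codeword can be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def matched_filter_outputs(c, Ec, N0):
    """Simulate r_j = sqrt(Ec) * (2*c_j - 1) + n_j for one codeword.

    c  : binary codeword (sequence of 0/1)
    Ec : energy per code bit
    N0 : one-sided noise spectral density (noise variance is N0/2)
    """
    c = np.asarray(c)
    signal = np.sqrt(Ec) * (2 * c - 1)                       # antipodal mapping 0 -> -1, 1 -> +1
    noise = rng.normal(0.0, np.sqrt(N0 / 2), size=c.shape)   # zero-mean AWGN, variance N0/2
    return signal + noise

# Example: a length-7 codeword at Ec/N0 = 2 (about 3 dB)
r = matched_filter_outputs([1, 0, 1, 1, 0, 0, 1], Ec=1.0, N0=0.5)
```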
### Correlation Metrics
The decoder computes the correlation metrics (Proakis, 2007, Eq. (7.4-3)):

$$
CM_m = \sum_{j=1}^{n} (2c_{mj} - 1)\, r_j, \qquad m = 1, 2, \ldots, 2^k.
$$

Here, the mapping $2c_{mj} - 1$ takes a code bit of 1 to $+1$ and a 0 to $-1$, aligning the metric with the signal component of $r_j$.
The correct codeword's metric has mean $n\sqrt{\mathcal{E}_c}$, exceeding that of every other codeword on average, so selecting the codeword with the largest metric is the optimal decision.
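A brute-force sketch of this decoding rule is shown below (assuming NumPy; the generator matrix `G` and function names are illustrative). It enumerates all $2^k$ codewords, which is only feasible for small $k$, and returns the message whose codeword maximizes the correlation metric:

```python
import numpy as np
from itertools import product

def soft_decision_decode(r, G):
    """Pick the codeword maximizing CM_m = sum_j (2*c_mj - 1) * r_j (brute force)."""
    k, n = G.shape
    best = (-np.inf, None, None)
    for bits in product((0, 1), repeat=k):        # enumerate all 2^k messages
        c = np.mod(np.array(bits) @ G, 2)         # encode the message
        metric = float(np.sum((2 * c - 1) * r))   # correlation metric CM_m
        if metric > best[0]:
            best = (metric, bits, c)
    return best  # (metric, message bits, codeword)

# Example with one systematic generator of the (7,4) Hamming code
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
msg = np.array([1, 1, 1, 1])
codeword = np.mod(msg @ G, 2)
r = (2 * codeword - 1) + np.random.default_rng(1).normal(0, 0.5, 7)  # noisy received vector
metric, decoded_msg, decoded_cw = soft_decision_decode(r, G)
```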
### Block and Bit Error Probability in SDD
Recall that the general bound on the block error probability is

$$
P_e \le \sum_{m=2}^{2^k} \Delta^{w_m},
$$

where $w_m$ is the Hamming weight of the $m$-th codeword and $\Delta$ is the channel- and modulation-dependent parameter introduced earlier.
The block error probability for soft decision decoding (SDD) can be bounded using this general bound, adjusted for the specific modulation.
For BPSK, the parameter $\Delta$, defined earlier, is $\Delta = e^{-\mathcal{E}_c/N_0} = e^{-R_c\mathcal{E}_b/N_0}$, where $\mathcal{E}_c = R_c \mathcal{E}_b$ relates the energy per code bit to the energy per information bit via the code rate $R_c = k/n$.
Substituting $\Delta$ into the weight enumerating polynomial $A(Z)$, the bound becomes (Proakis, 2007, Eq. (7.4-4)):

$$
P_e \le A(Z)\Big|_{Z = e^{-R_c\mathcal{E}_b/N_0}} - 1 = \sum_{w=d_{\min}}^{n} A_w\, e^{-w R_c \mathcal{E}_b/N_0}.
$$

This leverages the code's weight distribution $\{A_w\}$ to estimate the error probability under AWGN with BPSK modulation.
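A small sketch of evaluating this bound numerically (assuming NumPy; the helper name `sdd_union_bound` and the example parameters are illustrative) sums $A_w\,\Delta^w$ over the nonzero weights of the code:

```python
import numpy as np

def sdd_union_bound(weight_distribution, Rc, EbN0_dB):
    """P_e <= sum_{w>0} A_w * Delta^w with Delta = exp(-Rc * Eb/N0) (BPSK, SDD)."""
    gamma_b = 10 ** (EbN0_dB / 10)
    delta = np.exp(-Rc * gamma_b)
    return sum(A_w * delta ** w for w, A_w in weight_distribution.items() if w > 0)

# (7,4) Hamming code: A_3 = 7, A_4 = 7, A_7 = 1, rate Rc = 4/7
p_e_bound = sdd_union_bound({0: 1, 3: 7, 4: 7, 7: 1}, Rc=4/7, EbN0_dB=6.0)
```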
### Simplified Bounds and Bit Error Probability
A simpler bound on $P_e$ for SDD, obtained by replacing every codeword weight with $d_{\min}$, is

$$
P_e \le (2^k - 1)\, e^{-d_{\min} R_c \mathcal{E}_b/N_0}.
$$

Using $2^k - 1 < 2^k = e^{k\ln 2}$, this refines to

$$
P_e < e^{-\gamma_b\left(R_c d_{\min} - k\ln 2/\gamma_b\right)},
$$

where $\gamma_b = \mathcal{E}_b/N_0$ is the SNR per bit, highlighting the exponential decay of the error probability with the SNR and the code parameters.
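For comparison, this looser bound can be evaluated the same way (again a sketch; `sdd_simple_bound` and the example code parameters are not from the text):

```python
import numpy as np

def sdd_simple_bound(n, k, d_min, EbN0_dB):
    """Looser bound P_e < exp(-gamma_b * (Rc*d_min - k*ln(2)/gamma_b))."""
    gamma_b = 10 ** (EbN0_dB / 10)
    Rc = k / n
    return np.exp(-gamma_b * (Rc * d_min - k * np.log(2) / gamma_b))

# (7,4) Hamming code at 6 dB SNR per bit
p_e_simple = sdd_simple_bound(7, 4, 3, EbN0_dB=6.0)
```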
The bit error probability for BPSK is bounded as (Proakis, 2007, Eq. (7.4-9)):

$$
P_b \le \frac{1}{k}\,\frac{\partial B(Y, Z)}{\partial Y}\bigg|_{Y = 1,\; Z = e^{-R_c\gamma_b}},
$$

where $B(Y, Z)$ is the input-output weight enumerating function (IOWEF) of the code.
This averages the number of information-bit errors over the codewords, weighted by their pairwise error bounds, and is computable with tools such as MATLAB's `bercoding` function.
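As a sketch of how this bound can be evaluated without a toolbox (assuming NumPy; enumerating the code to build the IOWEF is only practical for small $k$, and the helper names are illustrative), one can count the IOWEF coefficients $B_{i,w}$ directly and form $\tfrac{1}{k}\sum_{i,\,w>0} i\,B_{i,w}\,\Delta^w$:

```python
import numpy as np
from itertools import product

def iowef_coefficients(G):
    """B_{i,w}: number of codewords of output weight w produced by input weight i."""
    k, n = G.shape
    B = {}
    for bits in product((0, 1), repeat=k):
        m = np.array(bits)
        c = np.mod(m @ G, 2)
        key = (int(m.sum()), int(c.sum()))
        B[key] = B.get(key, 0) + 1
    return B

def sdd_bit_error_bound(G, EbN0_dB):
    """P_b <= (1/k) * sum_{i, w>0} i * B_{i,w} * exp(-w * Rc * gamma_b)."""
    k, n = G.shape
    gamma_b = 10 ** (EbN0_dB / 10)
    delta = np.exp(-(k / n) * gamma_b)
    return sum(i * B_iw * delta ** w
               for (i, w), B_iw in iowef_coefficients(G).items() if w > 0) / k
```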
### Coding Gain
Comparing the SDD bound $e^{-\gamma_b(R_c d_{\min} - k\ln 2/\gamma_b)}$ with the corresponding bound for uncoded BPSK, $P_e < e^{-\gamma_b}$, coding offers a gain of approximately $10\log_{10}\!\left(R_c d_{\min} - k\ln 2/\gamma_b\right)$ dB, termed the coding gain.
This gain, which depends on $R_c$, $d_{\min}$, $k$, and $\gamma_b$, quantifies the performance improvement over uncoded transmission.
For high $\gamma_b$, the term $k\ln 2/\gamma_b$ becomes negligible and the coding gain approaches the asymptotic coding gain $10\log_{10}(R_c d_{\min})$ dB, representing the maximum achievable benefit of coding as the noise diminishes.
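A quick numerical illustration of these two quantities (a sketch; the function name and example SNR are illustrative):

```python
import numpy as np

def coding_gain_dB(n, k, d_min, EbN0_dB):
    """Return (coding gain, asymptotic coding gain) in dB for SDD with BPSK."""
    gamma_b = 10 ** (EbN0_dB / 10)
    Rc = k / n
    gain = 10 * np.log10(Rc * d_min - k * np.log(2) / gamma_b)
    asymptotic = 10 * np.log10(Rc * d_min)
    return gain, asymptotic

# (7,4) Hamming code: asymptotic gain 10*log10(12/7) ~ 2.3 dB; less at finite SNR
print(coding_gain_dB(7, 4, 3, EbN0_dB=8.0))
```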
- Proakis, J. (2007). Digital Communications (5th ed.). McGraw-Hill Professional.