
Soft Decision Decoding

For linear binary block codes on an AWGN channel, soft decision decoding minimizes the probability of error by operating directly on the unquantized (soft) receiver outputs.

With coherent PSK or orthogonal FSK (coherent or noncoherent), an optimal receiver employs $M = 2^k$ matched filters, each tuned to a codeword waveform, and selects the codeword with the largest output.

Alternatively, a single matched filter per bit, followed by $M$ cross-correlators, computes the same decision variables, offering equivalent performance with a different implementation complexity.

Soft Decision Signal Model

For binary coherent PSK, the $j$-th matched filter output $r_j$ for a codeword is:

$$
r_j = \sqrt{\mathcal{E}_c} + n_j \quad (\text{if bit } 1), \qquad r_j = -\sqrt{\mathcal{E}_c} + n_j \quad (\text{if bit } 0)
$$

where $n_j$ is a zero-mean AWGN sample with variance $N_0/2$ and $\mathcal{E}_c$ is the transmitted energy per coded bit.
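As a minimal illustration of this signal model, the sketch below generates the soft matched-filter outputs for one transmitted codeword over an AWGN channel; the codeword, $\mathcal{E}_c$, and $N_0$ values are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example parameters (arbitrary choices for illustration)
Ec = 1.0                              # energy per coded bit, E_c
N0 = 0.5                              # one-sided noise PSD, N_0
c  = np.array([1, 0, 1, 1, 0, 0, 1])  # one transmitted codeword (bits)

# BPSK mapping: bit 1 -> +sqrt(Ec), bit 0 -> -sqrt(Ec)
s = np.sqrt(Ec) * (2 * c - 1)

# Matched-filter noise samples: zero mean, variance N0/2
n = rng.normal(0.0, np.sqrt(N0 / 2), size=c.size)

r = s + n                             # unquantized (soft) outputs r_j
print(np.round(r, 3))
```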

Correlation Metrics

The decoder computes $M$ correlation metrics (Proakis, 2007, Eq. (7.4-3)):

$$
CM_m \triangleq C(\vec{r}, \vec{c}_m) = \sum_{j=1}^{n} (2c_{mj} - 1)\, r_j, \quad m = 1, 2, \ldots, M
$$

Here, $2c_{mj} - 1$ maps bit 1 to $+1$ and bit 0 to $-1$, so each term is aligned with the signal component of $r_j$ when $\vec{c}_m$ is the transmitted codeword.

The metric of the correct codeword has mean $n\sqrt{\mathcal{E}_c}$, which exceeds the mean of every other metric, so selecting the codeword with the largest metric is the optimal decision.
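A small sketch of this decision rule, assuming a hypothetical four-word codebook for illustration: it forms the $M$ correlation metrics as inner products between the bipolar codewords and the soft outputs, then selects the largest.

```python
import numpy as np

def soft_decision_decode(r, codebook):
    """Pick the codeword maximizing CM_m = sum_j (2*c_mj - 1) * r_j."""
    bipolar = 2 * codebook - 1       # map bits {0, 1} -> {-1, +1}
    metrics = bipolar @ r            # one correlation metric per codeword
    return int(np.argmax(metrics)), metrics

# Hypothetical toy codebook (a small (4, 2) linear code) and soft received vector
codebook = np.array([[0, 0, 0, 0],
                     [0, 1, 1, 1],
                     [1, 0, 1, 1],
                     [1, 1, 0, 0]])
r = np.array([0.9, -0.4, 1.1, 0.7])  # unquantized matched-filter outputs

best, metrics = soft_decision_decode(r, codebook)
print(best, metrics)                 # index 2, i.e. codeword [1, 0, 1, 1]
```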

Block and Bit Error Probability in SDD

Recall that the general bound on the block error probability is

$$
P_e \leq (2^k - 1)\,\Delta^{d_{\min}}
$$

The block error probability $P_e$ for soft decision decoding (SDD) can be bounded using this general bound, adjusted for the specific modulation.

For BPSK, the parameter $\Delta$, defined earlier, is $\Delta = e^{-\mathcal{E}_c/N_0}$, where $\mathcal{E}_c = R_c \mathcal{E}_b$ relates the component (coded-bit) energy to the bit energy via the code rate $R_c$.

Substituting into the weight enumerating polynomial $A(Z)$, the bound becomes (Proakis, 2007, Eq. (7.4-4)):

$$
P_e \leq \left( A(Z) - 1 \right) \Big|_{Z = e^{-R_c \mathcal{E}_b / N_0}}
$$

This bound leverages the code’s full weight distribution to upper-bound the block error probability under AWGN with BPSK modulation.
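As a concrete check of this bound, the sketch below evaluates $(A(Z)-1)$ at $Z = e^{-R_c\mathcal{E}_b/N_0}$ for the (7,4) Hamming code, whose weight enumerator is $A(Z) = 1 + 7Z^3 + 7Z^4 + Z^7$; the SNR values are arbitrary.

```python
import numpy as np

# Weight distribution of the (7,4) Hamming code: A_0=1, A_3=7, A_4=7, A_7=1
A = {0: 1, 3: 7, 4: 7, 7: 1}
k, n = 4, 7
Rc = k / n

def block_error_bound(EbN0_dB):
    gamma_b = 10 ** (EbN0_dB / 10)        # E_b / N_0 in linear scale
    Z = np.exp(-Rc * gamma_b)
    # (A(Z) - 1): the subtraction removes the all-zero codeword term A_0 * Z^0 = 1
    return sum(A_d * Z ** d for d, A_d in A.items()) - 1

for snr_dB in (4, 6, 8, 10):
    print(snr_dB, block_error_bound(snr_dB))
```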

Simplified Bounds and Bit Error Probability

A simpler bound for $P_e$ in SDD is:

$$
P_e \leq (2^k - 1)\, e^{-R_c d_{\min} \mathcal{E}_b / N_0}
$$

Using $2^k - 1 < 2^k = e^{k \ln 2}$, this can be further bounded as:

$$
P_e \leq e^{-\gamma_b \left( R_c d_{\min} - k \ln 2 / \gamma_b \right)}
$$

where $\gamma_b = \mathcal{E}_b / N_0$ is the SNR per bit. This form highlights the exponential decay of the error probability with SNR and the code parameters.
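The sketch below evaluates both forms of this simplified bound for the same (7,4) Hamming code used above (an illustrative choice); the two expressions differ only by the factor $2^k - 1$ versus $2^k$.

```python
import numpy as np

def simple_bounds(k, Rc, dmin, EbN0_dB):
    gamma_b = 10 ** (EbN0_dB / 10)
    # P_e <= (2^k - 1) * exp(-Rc * dmin * gamma_b)
    loose = (2 ** k - 1) * np.exp(-Rc * dmin * gamma_b)
    # P_e <= exp(-gamma_b * (Rc*dmin - k*ln2/gamma_b)), using 2^k - 1 < 2^k
    exponent_form = np.exp(-gamma_b * (Rc * dmin - k * np.log(2) / gamma_b))
    return loose, exponent_form

print(simple_bounds(k=4, Rc=4/7, dmin=3, EbN0_dB=8.0))
```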

The bit error probability $P_b$ for BPSK is bounded as (Proakis, 2007, Eq. (7.4-9)):

$$
P_b \leq \frac{1}{k} \left. \frac{\partial}{\partial Y} B(Y, Z) \right|_{Y=1,\; Z = e^{-R_c \mathcal{E}_b / N_0}}
$$

This uses the input–output weight enumerating function (IOWEF) $B(Y, Z)$ to account for the number of information-bit errors contributed by each codeword error; related bounds can be evaluated numerically with tools such as MATLAB’s `bercoding`.
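A sketch of this bit-error bound: since $\partial B(Y,Z)/\partial Y$ evaluated at $Y=1$ equals $\sum_{w,d} w\,B_{w,d} Z^d$, the IOWEF coefficients $B_{w,d}$ of a small code can be built by brute-force enumeration of all $2^k$ messages. The systematic generator matrix below is one illustrative choice for a (7,4) Hamming code.

```python
import numpy as np
from itertools import product
from collections import defaultdict

# One systematic generator matrix for a (7,4) Hamming code (illustrative choice)
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
k, n = G.shape
Rc = k / n

# Build IOWEF coefficients B_{w,d} by enumerating all 2^k messages
B = defaultdict(int)
for bits in product([0, 1], repeat=k):
    m = np.array(bits)
    c = m @ G % 2
    B[(m.sum(), c.sum())] += 1       # input weight w, output weight d

def bit_error_bound(EbN0_dB):
    Z = np.exp(-Rc * 10 ** (EbN0_dB / 10))
    # (1/k) * dB(Y,Z)/dY at Y=1  =  (1/k) * sum_{w,d} w * B_{w,d} * Z^d
    return sum(w * Bwd * Z ** d for (w, d), Bwd in B.items()) / k

print(bit_error_bound(8.0))
```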

Coding Gain

Comparing the SDD bound $e^{-\gamma_b (R_c d_{\min} - k \ln 2 / \gamma_b)}$ with the bound for uncoded BPSK, $\frac{1}{2}e^{-\gamma_b}$, coding offers a gain of approximately $10 \log_{10}\!\left( R_c d_{\min} - k \ln 2 / \gamma_b \right)$ dB, termed the coding gain.

This gain, dependent on $R_c$, $d_{\min}$, $k$, and $\gamma_b$, quantifies the performance improvement.

For high $\gamma_b$, the term $k \ln 2 / \gamma_b$ becomes negligible and the coding gain approaches its asymptotic value $10 \log_{10}(R_c d_{\min})$ dB, representing the maximum achievable benefit of coding as the noise diminishes.
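A short sketch of these coding-gain expressions, again using the (7,4) Hamming code as an arbitrary example: the gain grows with $\gamma_b$ and approaches the asymptotic value $10\log_{10}(R_c d_{\min})$ dB.

```python
import numpy as np

def coding_gain_dB(k, Rc, dmin, EbN0_dB):
    # Gain relative to uncoded BPSK: 10*log10(Rc*dmin - k*ln2/gamma_b)
    gamma_b = 10 ** (EbN0_dB / 10)
    return 10 * np.log10(Rc * dmin - k * np.log(2) / gamma_b)

def asymptotic_coding_gain_dB(Rc, dmin):
    # Limit as gamma_b -> infinity
    return 10 * np.log10(Rc * dmin)

# Example: (7,4) Hamming code, Rc = 4/7, dmin = 3
for snr_dB in (6, 8, 10, 12):
    print(snr_dB, "dB ->", round(coding_gain_dB(4, 4/7, 3, snr_dB), 2), "dB gain")
print("asymptotic:", round(asymptotic_coding_gain_dB(4/7, 3), 2), "dB")
```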

References
  1. Proakis, J. (2007). Digital Communications (5th ed.). McGraw-Hill Professional.