I am studying LDPC codes and I got stuck calculating symbol-to-check-node messages in the sum-product algorithm (SPA). We have a graph representation of the decoding process with check nodes (representing rows of the parity-check matrix $H$) and symbol nodes (representing columns of $H$).
Consider the $m$-th row and the $n$-th column. Suppose we have a received word $y=[y_1,y_2,\dots,y_n]$. First we do the initialization: for every position $(m,n)$ such that $H_{mn}=1$ we set $$\lambda_{n\to m}(u_n)=L(u_n)$$ $$\Lambda_{m\to n}(u_n)=0$$ where $L(u_n)=2\frac{y_n}{\sigma^2}$, $\lambda_{n\to m}(u_n)$ is the symbol-to-check-node message and $\Lambda_{m\to n}(u_n)$ is the check-to-symbol-node message.
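In code I picture the initialization roughly like this (the small $H$, $\sigma^2$ and $y$ below are made-up example values, not from any particular reference):

```python
import numpy as np

# Hypothetical small example: parity-check matrix, noise variance, received values.
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
sigma2 = 0.5
y = np.array([0.9, -1.1, 0.8, 1.2])

L_u = 2 * y / sigma2                    # channel LLRs L(u_n)
lam = H * L_u                           # lambda_{n->m}: one entry per edge H_mn = 1
Lam = np.zeros_like(lam, dtype=float)   # Lambda_{m->n} initialized to 0
```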
At step (i) we calculate $\Lambda_{m\to n}(u_n)$ from $\lambda_{n\to m}(u_n)$ using "some hyperbolic functions". That step is clear to me.
The real problem for me comes at step (ii), where we update $\lambda_{n\to m}(u_n)$ as follows: $$\lambda_{n\to m}(u_n)=L(u_n)+\sum_{m'\in M(n)\backslash m}\Lambda_{m'\to n}(u_n)$$ where $M(n)\backslash m$ denotes the set of check nodes connected to symbol node $n$, excluding the $m$-th one.
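In code I picture the step (ii) update like this (the $\Lambda$ values below are made up; the column-sum-minus-own-edge trick is just a compact way to exclude the $m$-th check node):

```python
import numpy as np

# Step (ii) sketch: extrinsic sum, excluding the check node m being updated.
# Lam holds Lambda_{m->n} on edges where H[m, n] = 1 (hypothetical values).
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
L_u = np.array([3.6, -4.4, 3.2, 4.8])           # channel LLRs, kept fixed
Lam = np.array([[0.5, -0.2, 0.0, 0.1],
                [0.0,  0.3, 0.7, -0.4]])

# lambda_{n->m} = L(u_n) + sum of Lambda_{m'->n} over all m' in M(n), m' != m
col_sum = (H * Lam).sum(axis=0)                 # total sum over M(n)
lam = H * (L_u + col_sum - Lam)                 # subtracting the own edge excludes m
```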
The first summand in this equation is the channel LLR $L(u_n)=2\frac{y_n}{\sigma^2}$. Suppose we have done the update at step (ii). After that we form the vector of total LLRs $\lambda_n(u_n)$ and make a hard decision on the received bits, creating a vector $\hat u=[\hat u_1,\hat u_2,\dots,\hat u_n]$, where $\hat u_n=0$ if $\lambda_n(u_n)\ge 0$ and $\hat u_n=1$ otherwise.
After making the decision, we either return to the first step of the algorithm and run the whole procedure again or, if we are lucky, we are done and have the decoded bits. This depends on whether $\hat u H^T=0$ is satisfied.
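The decision and stopping rule as I understand them, in code (the totals `lam_total` are made-up numbers):

```python
import numpy as np

# Hard decision and stopping test on hypothetical total LLRs lambda_n(u_n).
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
lam_total = np.array([3.9, -4.0, 3.5, 4.5])

u_hat = (lam_total < 0).astype(int)   # u_n = 0 if lambda_n(u_n) >= 0, else 1
syndrome = H @ u_hat % 2
done = not syndrome.any()             # stop when u H^T = 0
```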
So I have a few questions:
- Step (ii): does $L(u_n)$ remain unchanged throughout the whole algorithm? (I.e., if we complete all of the steps and return to step (i) to repeat the procedure, is $L(u_n)$ always equal to $2\frac{y_n}{\sigma^2}$?)
- If the answer is yes, then I have another big misunderstanding about the trellis topology, which lets us compute step (i) efficiently without hyperbolic functions. There $\Lambda_{m\to n}(u_n)$ is computed recursively instead of via hyperbolic functions: $$\Lambda_{m\to n_i}(u_{n_i})=L(f_{i-1}\oplus b_{i+1})$$ $$\Lambda_{m\to n_1}(u_{n_1})=L(b_2)$$ $$\Lambda_{m\to n_{d_c}}(u_{n_{d_c}})=L(f_{d_c-1})$$ Here $f$ and $b$ are auxiliary binary random variables, defined as $$f_1=u_{n_1},\quad f_2=f_1\oplus u_{n_2},\quad\dots,\quad f_{d_c}=f_{d_c-1}\oplus u_{n_{d_c}}$$ $$b_{d_c}=u_{n_{d_c}},\quad b_{d_c-1}=b_{d_c}\oplus u_{n_{d_c-1}},\quad\dots,\quad b_1=b_2\oplus u_{n_1}$$
For computing $L$ we use the following formula: $$L(U\oplus V)=\log\left(\frac{1+e^{L(U)+L(V)}}{e^{L(U)}+e^{L(V)}}\right)$$
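Putting the recursion and this formula together, here is a sketch of the forward-backward check-node update for one check of degree $d_c$ (assuming the LLR convention $L=\log\frac{P(u=0)}{P(u=1)}$; the incoming message values `lam_in` are hypothetical):

```python
import numpy as np

def box_plus(La, Lb):
    """L(U xor V) = log((1 + e^{La+Lb}) / (e^{La} + e^{Lb})), written stably."""
    return np.logaddexp(0.0, La + Lb) - np.logaddexp(La, Lb)

# Incoming messages lambda_{n_i -> m} for one check node (hypothetical values).
lam_in = np.array([3.6, -4.1, 2.9])
d = len(lam_in)

# Forward recursion L(f_i) and backward recursion L(b_i).
f = np.empty(d)
b = np.empty(d)
f[0] = lam_in[0]                              # L(f_1) = L(u_{n_1})
b[-1] = lam_in[-1]                            # L(b_{d_c}) = L(u_{n_{d_c}})
for i in range(1, d):
    f[i] = box_plus(f[i - 1], lam_in[i])      # L(f_i) = L(f_{i-1} xor u_{n_i})
    b[d - 1 - i] = box_plus(b[d - i], lam_in[d - 1 - i])

# Outgoing messages Lambda_{m -> n_i}.
Lam_out = np.empty(d)
Lam_out[0] = b[1]                             # Lambda_{m->n_1} = L(b_2)
Lam_out[-1] = f[-2]                           # Lambda_{m->n_{d_c}} = L(f_{d_c-1})
for i in range(1, d - 1):
    Lam_out[i] = box_plus(f[i - 1], b[i + 1])
```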
So here is my second question:
When we run the procedure for the first time, $\lambda_{n\to m}(u_n)$ is initialized with $L(u_n)$, and using it we obtain $\Lambda_{m\to n}$ recursively (because we know $L(u_n)=2\frac{y_n}{\sigma^2}$, which initializes $\lambda_{n\to m}(u_n)$). But at step (ii) we update $\lambda_{n\to m}(u_n)$.
Judging from the formulas for $\Lambda$ in the recursive algorithm, the only thing we need to know to compute it is $L(u_n)$ (which, by my suggestion, stays constant at $L(u_n)=2\frac{y_n}{\sigma^2}$). So what is the point of updating $\lambda$ at step (ii) if the recursive algorithm doesn't need those updated numbers, but only the constant $L(u_n)$?
I am totally confused at this point.