I'm working through a pair of papers on Simultaneous Localization and Mapping (SLAM), and I'm having trouble with some of the notation, as I lack some of the formal math background.
The papers can be found
here: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.128.4195
and here: http://robots.stanford.edu/papers/thrun.seif.pdf
$P_{k|k}$ and $\Sigma_{t}$ are both covariance matrices, so $\Sigma_{t}^{-1}$ is the inverse of a covariance matrix (what the second paper calls the information matrix).
What does the subscript $_{k|k}$ mean? I recall from probability that $P(y \mid x)$ means the probability of $y$ given $x$, but that reading doesn't seem to make sense for a matrix subscript.
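For reference, the subscript shows up in expressions like the standard Kalman-filter covariance prediction step, which I'm writing out from my own notes rather than quoting the paper ($F_{k}$ is the state-transition matrix and $Q_{k}$ the process-noise covariance; both labels are mine, not the paper's):

$$
P_{k|k-1} = F_{k} \, P_{k-1|k-1} \, F_{k}^{T} + Q_{k}
$$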
With $\Sigma_{t}^{-1}$, I thought that $\Sigma$ was usually used for summation (and I initially confused it with a sum!). Is there any significance to $\Sigma$ being used to represent the covariance matrix, or is it just historical accident?
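To check that I at least have the objects straight, here is a small NumPy sketch of my current understanding; the variable names and numbers are mine, not from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 1000 samples of a 2-D state (e.g. a robot's x, y position).
samples = rng.multivariate_normal(mean=[0.0, 0.0],
                                  cov=[[2.0, 0.5],
                                       [0.5, 1.0]],
                                  size=1000)

# Sample covariance matrix: what the papers write as P or Sigma.
sigma = np.cov(samples, rowvar=False)

# Its inverse: what the second paper calls the information matrix,
# H_t = Sigma_t^{-1}.
information = np.linalg.inv(sigma)

print(sigma)
print(information)
```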
Some other questions came up while I was formulating this one (they have since been answered in the comments below):
Later, on page 6, there is a formula $P(x_{2} \mid x_1^{(i)})$; I didn't know what the superscript $^{(i)}$ represented.
Answer: $x_1^{(i)} \sim P(x_1)$ is the $i$th sample drawn from the distribution $P(x_1)$.
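In code, that answer corresponds to something like the following sketch; the choice of a 1-D Gaussian for $P(x_1)$ is hypothetical, not from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose P(x_1) is a Gaussian with mean 3.0 and standard deviation 0.5.
# Drawing N samples gives x_1^{(1)}, ..., x_1^{(N)}: the superscript (i)
# just indexes which sample is meant.
N = 5
x1_samples = rng.normal(loc=3.0, scale=0.5, size=N)

for i, x in enumerate(x1_samples, start=1):
    print(f"x_1^({i}) = {x:.3f}")
```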
The information vector $b_{t} = \mu_{t}^{T} H_{t}$ looks like the mean times the information matrix, but I thought the mean was a scalar, so I didn't understand the transpose symbol.
Answer: $\mu_{t}$ is a vector, not a scalar, because it is the mean of the state $\zeta$; the transpose turns the column vector into a row vector so the product with $H_{t}$ is defined.
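A quick dimension check in NumPy made this click for me; the 3-D state and the numbers are made up for illustration:

```python
import numpy as np

# Made-up 3-D state: mean vector mu_t and information matrix H_t.
mu = np.array([1.0, 2.0, 0.5])            # shape (3,)
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])            # symmetric, shape (3, 3)

# Information vector b_t = mu_t^T H_t: a row vector of shape (3,).
# (.T is a no-op on a 1-D NumPy array; it is written here only to
# mirror the paper's notation.)
b = mu.T @ H
print(b)
```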
Thanks to everyone who's helped me refine these questions!