It seems my comment answered the question, so I'm posting it here with a little elaboration.
Let $A = \begin{bmatrix}2 & 1 \\ 1 & 2\end{bmatrix}$. Its singular value decomposition is $A = U\Sigma V^T$, where
$$\begin{align} U = V &= \frac1{\sqrt{2}}\begin{bmatrix}1 & -1 \\ 1 & 1\end{bmatrix}, \\ \Sigma &= \begin{bmatrix}3 & 0 \\ 0 & 1\end{bmatrix}. \end{align}$$
Assuming that by $u_k$ and $v_k$ you mean the $k$th columns of $U$ and $V$, we have
$$\begin{align} u_1\sigma_1v_1^T &= \frac32\begin{bmatrix}1 & 1 \\ 1 & 1\end{bmatrix}, \\ u_2\sigma_2v_2^T &= \frac12\begin{bmatrix}1 & -1 \\ -1 & 1\end{bmatrix}. \end{align}$$
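If you want to check this numerically, here is a minimal sketch using NumPy (my addition, not part of the question; note that `np.linalg.svd` may return singular vectors with opposite signs to the ones above, which leaves each product $u_k\sigma_k v_k^T$ unchanged):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Full SVD: A = U @ np.diag(s) @ Vt, with the singular values in s
# sorted in descending order.
U, s, Vt = np.linalg.svd(A)
print(s)  # [3. 1.]

# The rank-one terms u_k * sigma_k * v_k^T, as outer products.
term1 = s[0] * np.outer(U[:, 0], Vt[0, :])  # 3/2 * [[1, 1], [1, 1]]
term2 = s[1] * np.outer(U[:, 1], Vt[1, :])  # 1/2 * [[1, -1], [-1, 1]]

# Their sum reconstructs A.
print(np.allclose(term1 + term2, A))  # True
```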
In general, $u_k\sigma_kv_k^T$ is not a matrix whose $k$th row or column equals that of $A$ with zeroes everywhere else. Rather, it is a rank-one matrix that represents the following linear transformation: given an input vector $x$, take its component along $v_k$ and discard the rest, scale that component by $\sigma_k$, and make it point along $u_k$ instead. (Apart from the scaling by $\sigma_k$, the map $x \mapsto u_kv_k^Tx$ is a rank-one partial isometry: it acts as an isometry from the span of $v_k$ to the span of $u_k$.) What the singular value decomposition tells you is how to write your rank-$r$ matrix as a linear combination of $r$ such rank-one transformations, with the singular values as coefficients: $$A = \sum_{k=1}^r \sigma_k u_k v_k^T.$$
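To see the "project, scale, redirect" description in action, here is a short continuation of the NumPy sketch above (the input vector `x` is just an arbitrary example of mine), checking that $u_1\sigma_1v_1^Tx = \sigma_1(v_1^Tx)\,u_1$:

```python
import numpy as np

u1 = np.array([1.0, 1.0]) / np.sqrt(2)  # first left singular vector
v1 = u1                                 # A is symmetric, so U = V here
sigma1 = 3.0

x = np.array([0.7, -0.2])               # an arbitrary input vector

# Applying the rank-one matrix sigma_1 * u_1 * v_1^T to x ...
y = sigma1 * np.outer(u1, v1) @ x

# ... is the same as: take x's component along v_1, scale it by
# sigma_1, and point the result along u_1.
component = v1 @ x                      # scalar projection of x onto v_1
print(np.allclose(y, sigma1 * component * u1))  # True
```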