I'm adding some assumptions to the question:
Assume $A$ is an $n \times n$ invertible matrix and $d$ is a random vector with mean $\mu$ and covariance matrix $C$, whose diagonal is the vector $\sigma^2$. Then we seek the variance of $x$, the unique solution to $Ax=d$. But $x=A^{-1}d=A^{-1}(\mu+C^{1/2}\epsilon)$ where $\epsilon$ is the noise vector, assumed uncorrelated and with unit variance (no other assumptions are required). Now $A^{-1} \mu$ is a fixed vector, so the covariance of $x$ is that of the random vector $A^{-1} C^{1/2} \epsilon$, namely $A^{-1} C^{1/2} (C^{1/2})^T (A^{-1})^T = A^{-1} C (A^{-1})^T$.
If we now assume additionally that $C$ is diagonal (i.e. the noise in $d$ is uncorrelated), then we can read off the variance from here: the variance of $x_i$ is $\sum_{j=1}^n (A^{-1})_{ij}^2 \sigma^2_j$, and the standard deviation of $x_i$ is the square root of that. When $C$ is not diagonal you can proceed similarly, taking the variances from the diagonal of the full covariance matrix of $x$, but the off-diagonal entries of $C$ contribute cross terms, so the expressions get significantly messier.
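A quick numerical sketch of the diagonal case, with made-up example data (the matrix and variances below are arbitrary, just for illustration): the per-component formula $\sum_j (A^{-1})_{ij}^2 \sigma_j^2$ is checked against the diagonal of the full covariance $A^{-1} C (A^{-1})^T$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a well-conditioned 4x4 system.
n = 4
A = rng.normal(size=(n, n)) + 4 * np.eye(n)  # diagonally dominant, so invertible
sigma2 = np.array([0.1, 0.4, 0.2, 0.3])      # diagonal of C (noise variances on d)

Ainv = np.linalg.inv(A)

# Var(x_i) = sum_j (A^{-1})_{ij}^2 * sigma2_j
var_x = (Ainv**2) @ sigma2
std_x = np.sqrt(var_x)

# Cross-check against the full covariance A^{-1} C A^{-T}
C = np.diag(sigma2)
cov_x = Ainv @ C @ Ainv.T
assert np.allclose(var_x, np.diag(cov_x))
```

The elementwise square of $A^{-1}$ times the vector of variances is exactly the diagonal of $A^{-1} C (A^{-1})^T$ when $C$ is diagonal, which is what the assertion confirms.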
I'm not sure getting the SVD of $A$ involved here does you any good from the mathematical perspective (as opposed to the numerical perspective).
Apparently the $A$ here is actually $m \times n$ with $m>n$, so that $Ax=d$ has no exact solution for most $d$. Then the least squares solution is given by $x=A'd$ where $A'$ is the Moore-Penrose pseudoinverse (assuming $A$ has full column rank, so the least squares solution is unique). One can repeat the above analysis exactly as written with $A'$ replacing $A^{-1}$, and the result turns out the same.
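The same sketch for the overdetermined case, again with arbitrary illustrative data: replace $A^{-1}$ by the pseudoinverse $A'$ (here `np.linalg.pinv`) and the per-component variance formula carries over unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical overdetermined system: m > n, full column rank.
m, n = 8, 3
A = rng.normal(size=(m, n))
sigma2 = rng.uniform(0.1, 0.5, size=m)  # noise variances on d

Ap = np.linalg.pinv(A)  # Moore-Penrose pseudoinverse, n x m

# Same formula with A' in place of A^{-1}:
var_x = (Ap**2) @ sigma2

# Sanity checks: A'd is the least-squares solution,
# and var_x is the diagonal of A' C A'^T.
d = rng.normal(size=m)
x_pinv = Ap @ d
x_lstsq, *_ = np.linalg.lstsq(A, d, rcond=None)
assert np.allclose(x_pinv, x_lstsq)
assert np.allclose(var_x, np.diag(Ap @ np.diag(sigma2) @ Ap.T))
```

Note that `pinv` is computed via the SVD internally, which is where the SVD earns its keep numerically even if it adds nothing to the algebra above.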