
I've got a linear system of equations in complex numbers:

$\mathbf{A}\mathbf{x}=\mathbf{b}$,

where $\mathbf{A}$ is fully populated, $\mathbf{b}$ is not equal to zero, and the vector $\mathbf{x}$ is unknown and complex. Is it possible to decouple the real and imaginary parts of matrix $\mathbf{A}$ and treat them separately? In other words, is it possible to rewrite this equation as two equations with $Re(\mathbf{A})$ and $Im(\mathbf{A})$ separated? If yes, how can I do that?

The reason why I'm asking is that some elements of $\mathbf{A}$ have very different real and imaginary parts: $Im({A}_{ij})\approx10^{-9}Re({A}_{ij})$, which I believe causes problems for numerical solvers.

  • 3
    Of course, it's linear after all...2017-02-21
  • 0
    As long as you work with finite-dimensional vector spaces, you can easily rewrite a linear system of $n$ complex equations with $m$ complex variables as a linear system of $2n$ real equations with $2m$ real variables. Just write the original system through the weighted sums on components and use the multiplication rule of complex numbers.2017-02-21
  • 0
    By the way, yes, large differences in magnitudes of the coefficients may indeed severely damage precision of floating-point computations, in certain cases. (See, for example, how https://en.wikipedia.org/wiki/Hilbert_matrix behaves. Try to invert it and multiply back...)2017-02-21
  • 0
    Indeed. And the solution precision depends on the solution method, as I can see. But my problem is a bit different. In my case, it is not that different elements of $\mathbf{A}$ have different magnitudes, but that there are some elements in $\mathbf{A}$ whose real parts are $10^9$ times larger than their imaginary parts. Can it cause problems? Do you have any examples and recipes for that?2017-02-21
  • 0
    I can compute the elements with 3-digit precision (otherwise it is too costly). Having this difference in one element (when $Im(A_{ij})≈10^{−9}Re(A_{ij})$) can mix the imaginary part with the insignificant figures of the real part when performing operations like multiplication. Then the solution simply does not see these parts.2017-02-21
  • 0
    Not only the method. Rounding errors as such are an inherent property of floating-point arithmetic: sometimes the errors are negligible, sometimes they aren't. And it's technically possible to estimate them after the computations (interval arithmetic) and even before them (but that is somewhat difficult).2017-02-21
  • 0
    The idea here is that the real and imaginary parts of all the complex numbers involved may be viewed as separate coefficients/variables of an equivalent linear system. \\ By the way. Costly? Are you calculating these by hand?2017-02-21
  • 0
    The transformation from $n$ complex into $2n$ real equations, and hence treating the real and imaginary parts as separate coefficients, may work; I'll try that. Not by hand, of course. I'm writing a boundary element program where the elements of $\mathbf{A}$ are surface integrals, which I'm estimating numerically. Calculating with precision higher than what I have now would take too long.2017-02-21
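The Hilbert-matrix behavior mentioned in the comments above is easy to reproduce. A minimal NumPy sketch (the size $n=12$ is an arbitrary choice large enough to show the effect in double precision):

```python
import numpy as np

n = 12
# Hilbert matrix: H[i, j] = 1 / (i + j + 1)
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)

# The condition number bounds how much relative input error
# can be amplified in the solution; for Hilbert matrices it is enormous.
print(np.linalg.cond(H))

# Inverting and multiplying back should give the identity, but the
# residual is far from zero at this size.
residual = np.abs(np.linalg.inv(H) @ H - np.eye(n)).max()
print(residual)
```

This illustrates that the damage comes from the conditioning of the matrix as a whole, not from any single badly scaled entry.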

1 Answer


As Andrew said, this works by linearity. As an example, take \begin{equation} Ax=\begin{bmatrix}1 & 0\\ 0 & i\end{bmatrix}\begin{bmatrix}1+i\\1+i\end{bmatrix}=\begin{bmatrix}1+i\\i-1\end{bmatrix}=b. \end{equation} Then you can decouple the vector, i.e. $x=Re(x)+i\,Im(x)$: \begin{equation} Ax=\begin{bmatrix}1 & 0\\ 0 & i\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}+\begin{bmatrix}1 & 0\\ 0 & i\end{bmatrix}\begin{bmatrix}i\\i\end{bmatrix}=\begin{bmatrix}1+i\\i-1\end{bmatrix}=b, \end{equation} but you could also decouple the matrix, i.e. $A=Re(A)+i\,Im(A)$: \begin{equation} Ax=\begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix}1+i\\1+i\end{bmatrix}+\begin{bmatrix}0 & 0\\ 0 & i\end{bmatrix}\begin{bmatrix}1+i\\1+i\end{bmatrix}=\begin{bmatrix}1+i\\i-1\end{bmatrix}=b. \end{equation}

edit: I thought this example would clarify how things work, but to again adopt what Andrew said: the real part of the example becomes \begin{align} &Re(A)Re(x)-Im(A)Im(x)=Re(b)\\ &\begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}-\begin{bmatrix}0 & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}1\\-1\end{bmatrix}, \end{align} whereas the imaginary part becomes \begin{align} &Im(A)Re(x)+Re(A)Im(x)=Im(b)\\ &\begin{bmatrix}0 & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}+\begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}1\\1\end{bmatrix}. \end{align}
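These two real equations can be assembled into a single real block system and handed to an ordinary real solver. A minimal NumPy sketch, using the $2\times2$ example above:

```python
import numpy as np

A = np.array([[1, 0], [0, 1j]])   # complex system matrix from the example
b = np.array([1 + 1j, -1 + 1j])   # right-hand side b = A x for x = (1+i, 1+i)

# Equivalent real 2n x 2n block system:
# [ Re(A)  -Im(A) ] [ Re(x) ]   [ Re(b) ]
# [ Im(A)   Re(A) ] [ Im(x) ] = [ Im(b) ]
A_real = np.block([[A.real, -A.imag],
                   [A.imag,  A.real]])
b_real = np.concatenate([b.real, b.imag])

y = np.linalg.solve(A_real, b_real)
n = A.shape[0]
x = y[:n] + 1j * y[n:]            # reassemble the complex solution

print(x)                          # solution of the original complex system
print(np.allclose(A @ x, b))
```

Note that this doubles the dimension but does not by itself change the conditioning of the problem; it merely lets you inspect and scale the real and imaginary blocks separately.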

  • 0
    Thank you for the answer! This is an obvious solution. But I do not think this will really help me since both the real and imaginary parts of $\mathbf{A}$ are still in one equation, and when I solve the equation I use them both simultaneously. Or don't I?2017-02-21
  • 0
    @ivkarpov7 If you prefer matrix notation: $A x = (\Re(A) + i \Im(A)) (\Re(x) + i \Im(x)) = \Re(A) \Re(x) - \Im(A) \Im(x) + i (\Re(A) \Im(x) + \Im(A) \Re(x)) = \Re(b) + i \Im(b) = b$. So the real system is $\Re(A) \Re(x) - \Im(A) \Im(x) = \Re(b)$ together with $\Re(A) \Im(x) + \Im(A) \Re(x) = \Im(b)$. Actually $A$ may be viewed as a block matrix.2017-02-21
  • 0
    @ivkarpov7 I made an edit to include Andrew's clarification; hopefully it is clear.2017-02-21
  • 0
    @WalterJ Thank you. And the answer is consistent with Andrew's comment on transforming $n$ complex equations into $2n$ real equations. Hope it works.2017-02-21
  • 0
    @AndrewMiloradovsky Thank you!2017-02-21