I have a spectroscopy problem that boils down to a matrix equation $X A = C$. I take $N$ observations, each consisting of 3 detector readings, and my detectors suffer from some amount of cross-talk (some percentage of the signal from detector 1 spills over into detector 2, etc.). In my specific case, $C$ is an $N \times 3$ matrix of detector readings, $X$ is an $N \times 2$ matrix of my unknown true signals, and $A$ is a $2 \times 3$ matrix of constant coefficients that represent how much each signal source gets into each detector. So I have:
$\begin{bmatrix} X_{11} & X_{12} \\ \vdots & \vdots \\ X_{N1} & X_{N2} \end{bmatrix} \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ \vdots & \vdots & \vdots \\ C_{N1} & C_{N2} & C_{N3} \end{bmatrix}$
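For concreteness, here is a small NumPy sketch of the measurement model as I picture it (the sizes, noise level, and variable names are just illustrative):

```python
import numpy as np

N = 200                                       # number of observations (illustrative)
rng = np.random.default_rng(0)

X_true = rng.random((N, 2))                   # N x 2 true signals (unknown in practice)
A_true = rng.random((2, 3))                   # 2 x 3 cross-talk coefficients (unknown in practice)
noise = 0.01 * rng.standard_normal((N, 3))    # zero-mean Gaussian detector noise

C = X_true @ A_true + noise                   # N x 3 observed detector readings
```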
This is a system of 3N equations in 2N + 6 unknowns (bilinear rather than linear, since elements of X multiply elements of A). When N = 6, the system should be exactly determined, and in real-world experimentation with noise contributions, taking N > 6 should let me start to compensate for the noise. In practice I will take hundreds of observations, so N will usually be 200 to 1000. Without noise contributions this would of course be an overconstrained problem, though still a consistent one, since one column of C would have to be a linear combination of the other two. With every element of C carrying a zero-mean Gaussian noise contribution, however, no exact solution exists and one can only find best estimates.
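Spelling out that counting: each observation contributes 3 equations but only 2 new unknowns, while A contributes a fixed 6, so the counts balance exactly when

$3N = 2N + 6 \iff N = 6.$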
Intuitively, this should be solvable for the 6 elements of A and the 2N elements of X, but I cannot find a treatment for this formulation of the problem. I have been searching for linear algebra approaches that address having two matrices of unknowns as I have described, but I haven't found anything appropriate yet. I can rearrange the matrix equation to $X = C A^{+}$ (a pseudoinverse rather than a true inverse, since A is $2 \times 3$ and not square), but I haven't seen a treatment for that construction either.
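For illustration only, here is a minimal NumPy sketch of that rearrangement, assuming A were already known (which of course it isn't in my problem):

```python
import numpy as np

rng = np.random.default_rng(1)
X_true = rng.random((200, 2))                      # true signals
A = rng.random((2, 3))                             # cross-talk matrix, assumed known here
C = X_true @ A + 0.01 * rng.standard_normal((200, 3))

# A is 2x3, so it has no true inverse; the Moore-Penrose pseudoinverse gives
# the least-squares estimate of X for a *known* A.
X_est = C @ np.linalg.pinv(A)                      # (N x 3) @ (3 x 2) -> N x 2
print(np.abs(X_est - X_true).max())                # error depends on noise and conditioning of A
```

The sticking point, of course, is that in my problem A and X are both unknown at the same time.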
Any suggestions or insights into solving this? Thanks in advance.