
Suppose that I have two real-valued matrices $\bf{A}$ and $\bf{B}$ of exactly the same size. I multiply the matrices together elementwise, as in MATLAB's A .* B operation.

Under what conditions can I approximately separate $\bf{A}$ and $\bf{B}$ using Principal Component Analysis (PCA)? Would it be possible to remove some components of the product A .* B to get an approximation of $\bf{A}$ or $\bf{B}$?

What algorithm might be best suited for this operation?

I am not looking for an exact separation of the matrices, but a separation using some sort of (statistical or numerical?) constraints. How would I set this problem up, and is there a good example of how to do this?

2 Answers

2

It seems to me that you can't separate the matrices from their pointwise multiplication (Hadamard/Schur product) without additional constraints.

Consider some matrix C. Every entry of C can be written in infinitely many ways as a product of two real numbers, which gives you infinitely many "perfect" decompositions.

For example, you can always decompose C into 1 (a matrix of ones) and C itself. In fact, for any choice of A you can find a B such that their Schur product equals any specified C (ignoring some problems when zeros appear in A)...
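This non-uniqueness is easy to demonstrate numerically. A minimal NumPy sketch (matrix sizes and names are illustrative): given an observed product C, both the trivial pair (1, C) and a pair built from an arbitrary nonzero A reproduce C exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.random((4, 4))  # the observed pointwise product

# Decomposition 1: the trivial one, A = matrix of ones, B = C.
A1 = np.ones_like(C)
B1 = C

# Decomposition 2: pick *any* A bounded away from zero,
# then B = C / A (elementwise division).
A2 = rng.random((4, 4)) + 0.5
B2 = C / A2

# Both pairs reproduce C exactly, so C alone cannot identify A and B.
assert np.allclose(A1 * B1, C)
assert np.allclose(A2 * B2, C)
```

This also illustrates the comment below: if A is fully specified, B is recovered by simple elementwise division.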

  • 0
    Thanks, Bitwise. But which constraints could I use? And could NMF be used with the proper constraints? (2012-11-10)
  • 0
    Is there a reference that documents how to set up the constraints? Ideally I would like to find a numerical procedure (and perhaps a sample problem) that can separate the two matrices in a non-exact way. (2012-11-10)
  • 1
    To state it more clearly: you **cannot** find A and B from their Schur product in general. You can do so only if you assume something about what A and B should look like. For example, if you specify A exactly, then you can find B (by simple elementwise division). (2012-11-10)
  • 0
    Thanks, Bitwise. Is there some numerical technique that can be used to encode such an assumption? How do I apply constraints if I know "something" about A and B? Is there perhaps a published example of the technique? (2012-11-10)
  • 0
    @NicholasKinar this is highly dependent on what kind of assumptions you want to make. What kind of knowledge do you have about A and B? For example, do you know what kind of process generated these matrices? Another example is to assume that A is some kind of signal and B is a noise matrix (both with some assumptions about the signal and the noise). (2012-11-10)
  • 0
    I have examples of A and B simulated using a mathematical model, but how might I identify assumptions that can be used to separate the matrices? I can say that A has less variability than B. I've tried low-pass filtering A .* B, but this doesn't work particularly well. Maybe a statistical method would work? When minimizing a norm, which norm should I use? That's why I am looking for an example of how to set the problem up, and it's also why I thought PCA might be useful. (2012-11-10)
  • 0
    @NicholasKinar the best approach might be to start a new question (on a separate page) of the following form: "I have a matrix C such that A .* B = C. I know that A, B, and C are generated by a mathematical model..." and so on. The point is that you should give as much information as possible about what A, B, and C are. More information means more constraints, and without enough constraints you won't be able to reach a meaningful solution. (2012-11-11)
  • 0
    Sure, that sounds good - I will see which constraints I can use and open another question. (2012-11-11)
0

If the entries are non-negative, then you could use NMF (non-negative matrix factorization). Alternatively, let $\textbf{C} = \textbf{A} \textbf{B}$; then you could apply a singular value decomposition to $\textbf{C}$.
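As a hedged sketch of both suggestions (note the caveat raised in the comments below: these methods factorize an ordinary matrix product $C \approx WH$, not the Hadamard product from the question). The NMF here uses plain-NumPy Lee-Seung multiplicative updates for self-containment; a library implementation would normally be preferred, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.random((20, 15))  # non-negative observed matrix
k, eps = 5, 1e-9          # target rank; eps guards against division by zero

# --- NMF via Lee-Seung multiplicative updates: C ~ W @ H with W, H >= 0.
# This factorizes an ordinary matrix product, NOT a pointwise product.
W = rng.random((20, k))
H = rng.random((k, 15))
for _ in range(500):
    H *= (W.T @ C) / (W.T @ W @ H + eps)
    W *= (C @ H.T) / (W @ H @ H.T + eps)

# --- Truncated SVD: the best rank-k approximation of C in the
# Frobenius norm (Eckart-Young), though its factors may be negative.
U, s, Vt = np.linalg.svd(C, full_matrices=False)
C_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

err_nmf = np.linalg.norm(C - W @ H)
err_svd = np.linalg.norm(C - C_k)
```

By Eckart-Young, the SVD reconstruction error can never exceed the NMF error at the same rank; NMF trades some accuracy for non-negative, often more interpretable factors.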

  • 0
    Thank you so much, lerije! Yes, both matrices are indeed non-negative. Does this apply to pointwise multiplication of the matrices? Could you suggest a good reference with example applications showing how to set up this class of methods and apply constraints? (2012-11-10)
  • 1
    I think this type of factorization isn't for pointwise multiplication (the Hadamard/Schur product). (2012-11-10)
  • 0
    This report on non-negative matrix factorization (linked from http://www.cs.virginia.edu/~jdl/nmf/) talks about a component product (http://www.cs.kuleuven.ac.be/publicaties/rapporten/cw/CW440.pdf). Is this the same as the Hadamard/Schur product? (2012-11-10)
  • 0
    It is, but the document addresses a different problem, which factorizes a combination of the Schur product with ordinary matrix multiplication. Also, see my answer. (2012-11-10)