I'm not sure if these questions best fit here or somewhere else, since they are mathematical rather than programmatic.
1) I know that linear transformations rotate and stretch (or shrink) vectors when applied to them. The covariance matrix is supposed to encode the scatter of the data samples, but what does it do to a vector when applied to it (at least on an intuitive level)?
2) When we use PCA for face recognition, the first thing we do is subtract the mean image from each image in the working set. Do we do it just so the covariance matrix is more easily computed, or is there a more fundamental reason?
3) The recognition step requires us to project images onto the eigenspace. Do we normalize the eigenfaces? If we do, we get a vector with entries smaller than 1, but image entries are pixels with discrete values, so how do the two fit together?
Several questions about applying PCA to face recognition
Tags: eigenvalues-eigenvectors, covariance
2) You probably mean PCA instead of CPA? – 2017-02-20
2 Answers
The pixels don't have to be integers. This video (https://youtu.be/8BTv-KZ2Bh8?list=LLhLyzYkcDApIMFlax04Gjog) shows a professor presenting an eigenface algorithm in Matlab/Octave in which he uses double-precision numbers. I wonder whether the numbers get rounded or truncated, since even in Matlab grayscale is represented from 0 to 255. If you have found an answer, please let me know, as eigenfaces is my capstone project.
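To illustrate the point about pixels not needing to be integers, here is a minimal sketch in NumPy rather than Matlab (the 4x4 image is made up): converting integer pixels to doubles in [0, 1], as Matlab's `im2double` does, gives real-valued entries, so projecting onto unit-norm eigenfaces produces ordinary real coefficients with no rounding involved.

```python
import numpy as np

# A hypothetical 4x4 grayscale image with integer pixel values in 0-255.
img = np.array([[  0,  64, 128, 255],
                [ 32,  96, 160, 224],
                [ 16,  80, 144, 208],
                [  8,  72, 136, 200]], dtype=np.uint8)

# Convert to double precision in [0, 1], as Matlab's im2double does.
img_double = img.astype(np.float64) / 255.0

print(img_double.dtype)  # float64
print(img_double.max())  # 1.0
```

Rounding back to 0-255 integers only matters when displaying or saving the image; all the eigenface arithmetic happens in floating point.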
- Covariance matrices are symmetric and can be visualized as ellipses. Multiplying a vector by the covariance matrix scales it by the variance along each principal "direction" of the random vector, and multiplying by the inverse normalizes it.
- If we remove the means we get covariances; if the means are not removed, we compute second moments (correlations) instead.
- Some normalization is usually required to get algorithms like these to work.
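The first two bullets can be sketched numerically (a toy example with synthetic 2-D data, not the answerer's code): after subtracting the mean, the covariance matrix's eigenvectors give the ellipse axes, and repeatedly applying the matrix to an arbitrary vector pulls it toward the direction of largest variance, which is one intuition for question 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data: large spread along one axis, small along the
# other, then rotated by 30 degrees and shifted away from the origin.
raw = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = raw @ R.T + np.array([10.0, -5.0])

# Mean subtraction (question 2), then the sample covariance matrix.
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (len(X) - 1)

# Applying C repeatedly to any vector rotates and stretches it toward
# the top eigenvector -- the direction of maximum variance (question 1).
v = np.array([1.0, 1.0])
for _ in range(20):
    v = C @ v
    v /= np.linalg.norm(v)

evals, evecs = np.linalg.eigh(C)
top = evecs[:, -1]          # eigenvector with the largest eigenvalue
print(abs(v @ top))         # close to 1: v has aligned with that direction
```

Without the mean subtraction, `X.T @ X` would be dominated by the mean offset rather than the scatter, which is the fundamental reason for centering before PCA.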