
Imagine an $N$-dimensional vector space. There are several points in it. I pick $m$ of them. They form a hyperplane in the $N$-dimensional vector space. For example $N = 10$, $m = 2$. Now I want to project all points onto this $m$-dimensional hyperplane so I can work in $m$ dimensions (or, as in the example, in 2D). How do I do that?

2 Answers


Say you picked $m$ $N$-vectors $v_1, \dots, v_m$ that define the projection subspace. You can construct an orthonormal basis $e_1, \dots, e_k$ in this subspace using the Gram–Schmidt process. The process terminates after $k$ steps, $k \le m$, and $k = m$ if and only if the selected vectors are linearly independent. Once you have the orthonormal basis, you project a vector $x$ onto the subspace as

$$x_{\text{proj}} = \langle x, e_1 \rangle \, e_1 + \dots + \langle x, e_k \rangle \, e_k,$$

i.e. the scalar product $\langle x, e_i \rangle$ is the coordinate of $x$ on the $e_i$ axis.
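A minimal NumPy sketch of this recipe (my own illustration, not code from the answer; the function names `gram_schmidt` and `project` are just placeholders): it orthonormalizes the chosen vectors, then reads off the coordinates $\langle x, e_i \rangle$ and the projected point.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Orthonormalize the given N-vectors, dropping (near-)dependent ones."""
    basis = []
    for v in vectors:
        # subtract the components already captured by the basis built so far
        w = v - sum(np.dot(v, e) * e for e in basis)
        norm = np.linalg.norm(w)
        if norm > tol:              # keep only independent directions, so k <= m
            basis.append(w / norm)
    return basis                    # orthonormal e_1, ..., e_k

def project(x, basis):
    """Coordinates <x, e_i> of x in the subspace, and the projected point."""
    coords = np.array([np.dot(x, e) for e in basis])
    x_proj = sum(c * e for c, e in zip(coords, basis))
    return coords, x_proj

# example with N = 10, m = 2
rng = np.random.default_rng(0)
v1, v2 = rng.random(10), rng.random(10)
x = rng.random(10)
coords, x_proj = project(x, gram_schmidt([v1, v2]))  # coords is 2D, x_proj lives in R^10
```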


First you take any $m$ $N$-dimensional vectors that span that particular hyperplane. So in your example, two 10-dimensional vectors. Just for simplicity, let this hyperplane be the $x$-$y$ plane in this 10-dimensional $x$-$y$-$f$-$r$-$h$-$w$-$j$-$b$-$v$-$q$ space.

Then you can see where this is going. You take the first two components, right? To make it technical, you put the two vectors together as the columns of a matrix $P$, the projection matrix, and transpose it. Back to our example, $$ \begin{pmatrix} \bullet\\\bullet \end{pmatrix} = \begin{bmatrix}1 &0 &0 &0 &0 &0 &0&0&0&0\\ 0 &1 &0 &0 &0 &0 &0&0&0&0 \end{bmatrix}\begin{pmatrix} \bullet\\ \bullet \\ \vdots \\\bullet \end{pmatrix} $$ You can feel that the rows of $P^T$, i.e. the vectors that we picked, should be "linearly independent" to actually do the trick.

Now you can see that if we have a hyperplane which is not as simple as the $x$-$y$ plane (a subspace if the plane passes through the origin) but rather something spanned by a vector like $2x-3y-6j-(-b)$ and another complicated one, then you simply write the vectors into $P$ and perform the projection. Hence $x_P = P^T x_o$: the projected point is the original times a matrix. If your hyperplane is not a subspace but a translation of one, so that it does not pass through the origin, then you simply find a point $x_t$ lying on it and project the translated point, $x_P = P^T (x_o - x_t)$.
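A short sketch of this matrix picture (again my own illustration under the answer's $x$-$y$-plane example, not code from the answer). The final commented line, using the normal equations for spanning vectors that are independent but not orthonormal, is an addition beyond what the answer spells out.

```python
import numpy as np

# Columns of P are the chosen spanning vectors; here the x-y plane of R^10.
P = np.zeros((10, 2))
P[0, 0] = 1.0    # first spanning vector
P[1, 1] = 1.0    # second spanning vector

x_o = np.random.rand(10)     # an original point in R^10
x_P = P.T @ x_o              # its 2D coordinates: picks out components 0 and 1

# Affine case: the hyperplane is shifted away from the origin; x_t is any
# point lying on it, so translate back before projecting.
x_t = np.random.rand(10)
x_P_affine = P.T @ (x_o - x_t)

# If the spanning vectors are linearly independent but not orthonormal,
# plain P.T @ x_o no longer gives the coordinates; solve the normal equations:
# coords = np.linalg.solve(P.T @ P, P.T @ x_o)
```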

Of course, this is a very informal, straight-to-the-point and rather sloppy argument. What you should do now is find a reliable technical source and try to match this intuition with their boring but crucial definitions.

  • No, no. Maybe the example was stupid of me. Think about the vectors $x_1^T = \begin{bmatrix}1 & 1\end{bmatrix}$, $x_2^T = \begin{bmatrix}1 & 0\end{bmatrix}$. They are not orthogonal, but they span 2D, that is to say every vector can be decomposed into a linear combination of these two. – 2011-08-26
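To make the comment's claim concrete (a worked illustration added here, not part of the original comment): any vector $(a, b)^T$ can be written as

$$\begin{pmatrix} a \\ b \end{pmatrix} = b \begin{pmatrix} 1 \\ 1 \end{pmatrix} + (a - b) \begin{pmatrix} 1 \\ 0 \end{pmatrix},$$

so $x_1$ and $x_2$ span $\mathbb{R}^2$ even though they are not orthogonal.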