
Suppose B represents the matrix of orthogonal (perpendicular) projection of $\mathbb{R}^{3}$ onto the plane $x_{2} = x_{1}$. Compute the eigenvalues and eigenvectors of B and explain their geometric meaning.

Here is what I have come up with so far as an attempt to work through this question. If we pick an arbitrary point in space ($\mathbb{R}^{3}$), then we must project this point onto a plane (in particular $x_{2} = x_{1}$), which I imagine in a three-dimensional $(x,y,z)$ coordinate system. If we choose $x_{1}$ to represent the $x$-axis, $x_{2}$ to represent the $y$-axis, and $x_{3}$ to represent the $z$-axis, then we would have a plane whose equation looks like $y=z$, or equivalently $x_{2}=x_{1}$. Once this point is projected onto the plane, I see that the segment from the point to its projection is perpendicular to the plane. My trouble is finding the coordinates of the new point that is projected onto the plane.

Here is a skeleton sketch of what I had in math-ese.

$\left[\begin{array}{c} ?\\ ?\\ ? \end{array} \right] = \left[\begin{array}{ccc} \Box & \Box & \Box \\ \Box & \Box & \Box \\ \Box & \Box & \Box \end{array} \right] \left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] $, $~~$ where B is the matrix with empty boxes for elements.

The question marks inside the first matrix represent the coordinates I am trying to find. Once these are found, appropriate entries for the coefficient matrix B can be chosen, so that when B is multiplied by the last matrix $\left(\left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]~\right)$, we get back the matrix with the question marks in its entries. I believe this is known as a linear transformation. I didn't know how to include graphics, but I hope the words give enough detail to duplicate what I am describing on paper. If not, please let me know how I can clear anything up. Some help would be very appreciated.

Thanks

  • @Kristi: Please do not so thoroughly erase and change questions that it renders the question unintelligible and the answers useless. See http://meta.math.stackexchange.com/questions/1343/what-to-do-with-a-user-who-is-editing-existing-questions-and-replacing-with-entir (2011-03-19)

6 Answers


Kristi, first of all if you are projecting $\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}$ onto the $x_{1}=x_{2}$ plane, then you are projecting onto the plane $x=y$, not $y=z$ (since you defined $x=x_{1}$, $y=x_{2}$, and $z=x_{3}$).

Now, as you are trying to find the coordinates of the projection vector, think of the geometric meaning: $z$, the 'height' of the vector, will never change, as it is not involved in the equation of the plane, but $x$ and $y$ will, depending on where the vector lies. When we are trying to find a projection onto an $n$-dimensional subspace $W$, we can use the formula $\operatorname{proj}_{W}\vec{x}=(\vec{u_1}\cdot \vec{x})\,\vec{u_1}+(\vec{u_2}\cdot \vec{x})\,\vec{u_2}+\cdots +(\vec{u_n}\cdot \vec{x})\,\vec{u_n}$, where $\vec{u_1}, \vec{u_2},\dots, \vec{u_n}$ form an orthonormal basis of the subspace $W$.

Here, $W$ is defined by $x=y$, meaning it can be spanned by the vectors $\vec{v_1} = \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$ and $\vec{v_2} = \begin{bmatrix} 1\\ 1\\ 2 \end{bmatrix}$, for example. To find an orthonormal basis of our subspace (meaning that all vectors in it are mutually orthogonal/perpendicular, as well as of length one), let's use the Gram-Schmidt process. A normalized version of the vector $\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$ is $\vec{u_1} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$, which has length one. Now, by Gram-Schmidt, $\vec{u_2}=\vec{v_2}-\frac{\vec{v_2}\cdot\vec{u_1}}{\vec{u_1}\cdot\vec{u_1}}\, \vec{u_1}$, since we are basically subtracting the $\vec{u_1}$ component from our second vector in order to get a vector perpendicular to $\vec{u_1}$ as a result. The calculation gives \[ \vec{u_2}= \begin{bmatrix} 1\\ 1\\ 2 \end{bmatrix} -\frac{\begin{bmatrix} 1\\ 1\\ 2 \end{bmatrix} \cdot \frac{1}{\sqrt{2}} \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}}{\frac{1}{\sqrt{2}} \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} \cdot \frac{1}{\sqrt{2}} \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}}\, \frac{1}{\sqrt{2}} \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} = \begin{bmatrix} 1\\ 1\\ 2 \end{bmatrix} - \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}=\begin{bmatrix} 0\\ 0\\ 2 \end{bmatrix}. \]
Normalizing the resulting vector, we get $\vec{u_2} = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}$.

Now that we have an orthonormal basis $\vec{u_1} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$ and $\vec{u_2} = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}$, we can calculate the projection.

So, to find the projection of your vector $\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}$ we use our orthonormal basis and the projection formula: $\operatorname{proj}_{W}\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}=\left(\frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} \cdot \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}\right)\frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}+\left(\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} \cdot \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}\right)\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}$. After the arithmetic, this becomes $\operatorname{proj}_{W}\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}=\frac{x_1+x_2}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}+\begin{bmatrix} 0\\ 0\\ x_3 \end{bmatrix}=\begin{bmatrix} \frac{x_1+x_2}{2}\\ \frac{x_1+x_2}{2}\\ x_3 \end{bmatrix}$.

So, now you have your coordinates.
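As a quick sanity check, the projection formula above can be verified numerically. This is an illustrative sketch (numpy is assumed; the name `proj` is my own), not part of the original answer:

```python
import numpy as np

# Orthonormal basis of the plane x = y, as derived above.
u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
u2 = np.array([0.0, 0.0, 1.0])

def proj(x):
    """Orthogonal projection of x onto span{u1, u2}."""
    return (u1 @ x) * u1 + (u2 @ x) * u2

# Projecting (3, 1, 5) should give ((3+1)/2, (3+1)/2, 5) = (2, 2, 5).
print(proj(np.array([3.0, 1.0, 5.0])))  # [2. 2. 5.]
```

A vector already in the plane, such as $(1,1,0)$, is left unchanged by `proj`, which anticipates the eigenvalue discussion below.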

To find the eigenvalues, think of the nature of the transformation: the projection does nothing to a vector that lies in the plane onto which you are projecting, and it collapses to the zero vector any vector perpendicular to that plane. So, your eigenvalues are $1$ and $0$. A basis of the eigenspace $\xi_{1}$ for eigenvalue $1$ will have two vectors, as the plane is spanned by two of them. You could choose them to be your original $\vec{v_1}$ and $\vec{v_2}$, which were $\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$ and $\begin{bmatrix} 1\\ 1\\ 2 \end{bmatrix}$. To find a basis of the eigenspace $\xi_{0}$ for eigenvalue $0$, you need a vector perpendicular to this plane. You could use the property of the cross product that $\vec{v_1} \times \vec{v_2}$ produces a vector $\vec{v_3}$ perpendicular to both. Crossing the aforementioned vectors, you get $\vec{v_3}=\begin{bmatrix} 2\\ -2\\ 0 \end{bmatrix}$.

Now that you know all of this, finding the matrix B is very easy by inspection; consider $\begin{bmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & 1 \end{bmatrix}$.
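The matrix and its spectrum are easy to confirm numerically. A minimal sketch, assuming numpy (`eigh` applies because B is symmetric):

```python
import numpy as np

B = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]])

vals, vecs = np.linalg.eigh(B)   # eigenvalues in ascending order: 0, 1, 1

# The 0-eigenvector is normal to the plane, e.g. any multiple of (1, -1, 0):
print(B @ np.array([1.0, -1.0, 0.0]))   # the zero vector

# Vectors spanning the plane are fixed:
print(B @ np.array([1.0, 1.0, 2.0]))    # [1. 1. 2.]
```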

  • Now go and do your own h/w! You were helpful enough for today :) (2011-03-18)

Hint: First solve the same problem for the orthogonal projection onto the plane $P$ of equation $z=0$. Surely you know what the projection of a point $(x,y,z)$ onto $P$ is, right? So, what is the matrix translation of the formula you just wrote? Now, the eigenvectors and the eigenvalues of this matrix should be obvious.

If you write a complete solution of this case, the solution of the case you ask for should be within your reach.
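Following the hint, the $z=0$ case can be sketched concretely (numpy is assumed here purely for illustration):

```python
import numpy as np

# Projection onto the plane z = 0 sends (x, y, z) to (x, y, 0),
# so its matrix is diag(1, 1, 0).
P = np.diag([1.0, 1.0, 0.0])

vals = np.linalg.eigvalsh(P)   # ascending: 0, 1, 1
# e3 is sent to zero (eigenvalue 0); e1 and e2 are fixed (eigenvalue 1).
print(P @ np.array([0.0, 0.0, 1.0]))
print(P @ np.array([1.0, 0.0, 0.0]))
```

The case of the plane $x_1 = x_2$ is the same picture, just with a tilted plane.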


An orthogonal projection of $3$-dimensional space onto any plane in it has the property that applying it twice gives the same result as applying it once. Symbolically, if $p:{\Bbb R}^3\rightarrow\pi$ denotes the orthogonal projection onto a plane $\pi$ through the origin, then $p^2=p\circ p=p$.

This means that if $v\in{\Bbb R}^3$ is an eigenvector, i.e. $p(v)=\lambda v$, then we must have $p^2(v)=p(p(v))=p(\lambda v)=\lambda^2 v$ but also $p^2(v)=p(v)=\lambda v$, thus $\lambda^2=\lambda$, which says that $\lambda$ is either $0$ or $1$.

With this intuition we see immediately that $\pi$ itself must be the $1$-eigenspace and the line perpendicular to $\pi$ is the $0$-eigenspace.
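The idempotency argument is easy to check on the concrete matrix that appears in the other answers (bringing that matrix in here is my assumption; numpy is used for illustration):

```python
import numpy as np

# Projection matrix onto the plane x1 = x2, taken from the other answers.
B = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]])

# p^2 = p:
print(np.allclose(B @ B, B))   # True

# Hence every eigenvalue satisfies lambda^2 = lambda, i.e. lambda in {0, 1}:
vals = np.linalg.eigvalsh(B)
print(np.round(vals))
```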


To sum it all up:

First the matrix: Its columns are the images of the basis vectors. As $p({\bf e}_1) =p({\bf e}_2)=({1\over2},{1\over2},0)$ and $p({\bf e}_3)=(0,0,1)$ we obtain the matrix $[p]=\left[\matrix{1/2 & 1/2 & 0 \cr 1/2 & 1/2 &0 \cr 0&0&1\cr}\right]\>.$ Since $p$ is a projection its eigenvalues are $0$ and $1$. The eigenspace $E_0$ consists of the vectors which are parallel to the direction of the projection, i.e. the scalar multiples of $(1, -1,0)$, and the eigenspace $E_1$ consists of all vectors that stay fixed under $p$, i.e., $E_1=P$. A basis of $E_1$ is given by, e.g., $\bigl((1,1,0),(0,0,1)\bigr)$.
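The column-by-column construction described above can be sketched in code. This is an illustrative snippet (numpy assumed; `proj` is my name for the coordinate formula from the other answers):

```python
import numpy as np

def proj(x):
    """Projection onto the plane x1 = x2: ((x1+x2)/2, (x1+x2)/2, x3)."""
    return np.array([(x[0] + x[1]) / 2, (x[0] + x[1]) / 2, x[2]])

# The columns of [p] are the images of the standard basis vectors.
P = np.column_stack([proj(e) for e in np.eye(3)])
print(P)
# [[0.5 0.5 0. ]
#  [0.5 0.5 0. ]
#  [0.  0.  1. ]]
```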

  • Thank you. I apologize for the slip. (2011-03-18)

You really should think of this geometrically:

The plane $P = \{x_{1}=x_{2}\}$ is defined by its normal vector $n = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}$ (the square root appears in order to achieve $\langle n, n \rangle = 1$). A vector $v$ lies in the plane $P$ if and only if $\langle n,v \rangle = 0$.

Projecting orthogonally onto the plane by $p: \mathbb{R}^3 \to \mathbb{R}^{3}$ means that $p(v) \in P$ and $v - p(v) = \lambda(v) n$ for some $\lambda(v) \in \mathbb{R}$.

The condition $p(v) \in P$ is the equation $\langle p(v), n \rangle = 0$ and we can determine $\lambda(v)$ by \[ \lambda(v) = \lambda(v) \langle n, n \rangle = \langle \lambda(v) n, n \rangle = \langle v - p(v), n \rangle = \langle v,n \rangle - \langle p(v), n \rangle = \langle v,n \rangle. \] Solving the equation $v - p(v) = \lambda(v) n$ for $p(v)$ and plugging in $\lambda(v) = \langle v,n \rangle$ gives us the formula \[ p(v) = v - \langle v,n \rangle \cdot n = v - \frac{1}{2} \langle v, \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}\rangle \, \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}. \] This last expression is easily converted into matrix form by writing $v = \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \end{bmatrix}$ and computing.

As for the eigenvectors, simply observe that $p(n) = 0$ and $p(v) = v$ for all $v \in P$.


Added:

As Prof. Blatter gave away the solution, there is no harm in showing what I left for you:

We have \begin{align*} p\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) & = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} - \frac{1}{2} (x_1 - x_2) \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} - \frac{1}{2} \begin{bmatrix} x_1 - x_2 \\ x_{2} - x_{1} \\ 0 \end{bmatrix} \\ & = \begin{bmatrix} \frac{1}{2} (x_{1} + x_{2}) \\ \frac{1}{2} (x_{1} + x_{2}) \\ x_{3} \end{bmatrix} = \begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}. \end{align*} You can also compute $p$ directly on the basis vectors to confirm what is written in other answers. However, the method I described above always works (since it is coordinate-free up to the end), and it is very rare that you can read the matrix off as easily as in Prof. Blatter's approach.

Finally, the vector $n = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \\ 0\end{bmatrix}$ is an eigenvector to the eigenvalue $0$ (geometrically it is a normal vector to the plane and hence it must be sent to zero). Every vector of the plane $P$ is an eigenvector to the eigenvalue $1$: geometrically, this means that because it lies in the plane onto which you project, it remains unaffected by the projection. If you want a basis of eigenvectors, take $n$ and any two vectors that span the plane, e.g. $\frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$.
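The coordinate-free formula $p(v) = v - \langle v,n \rangle\, n$ lends itself to a direct numerical sketch (numpy assumed; `p` is my name for the map):

```python
import numpy as np

# Unit normal of the plane x1 = x2.
n = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)

def p(v):
    """Coordinate-free projection: subtract the normal component of v."""
    return v - (v @ n) * n

# The normal vector is sent to zero (eigenvalue 0):
print(p(n))
# A vector in the plane is fixed (eigenvalue 1):
v = np.array([2.0, 2.0, 7.0])
print(p(v))
```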

  • @Kristi: 2) No, it was a multiplication sign (scalar times vector). I removed it since it caused confusion. Note that the formula in the comment above is *not yet* a matrix expression. You'll have to do this conversion yourself. (2011-03-16)

I think that Kristi is just looking for the new coordinates that she would get after projecting her arbitrary point $(x_1,x_2,x_3)$ onto the plane $x_2=x_1$. I don't think she was looking for the other responses in particular, though they may have been of some help. Once the new coordinates are found, which she indicates with the question marks in the first matrix, she would be able to find the entries of the coefficient matrix B and from those the eigenvalues and eigenvectors.

So, restating: I think she is just having a troublesome time finding those new coordinates after projecting the arbitrary point onto the plane. I would answer, but I am not sure I know how to do this. I just thought I should clear up the question for everyone else who views and responds.