13

This is an exercise from Spivak's "Calculus on Manifolds".

Edit: There was a typo in the exercise as is noted below in the answers. The statement has been edited to reflect this.

Given $x,y\in\mathbb{R}^{n}$, the angle between $x$ and $y$ is defined by

$\angle(x,y) = \arccos\left(\frac{\langle x,y \rangle}{|x|\cdot |y|}\right),$ where $\langle x,y \rangle$ denotes the standard Euclidean inner product.

A linear operator $T:\mathbb{R}^{n}\to\mathbb{R}^{n}$ is said to be angle-preserving if $\angle(T(x),T(y)) = \angle(x,y)$ for every $x,y\in\mathbb{R}^{n}$.
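For readers who want to experiment, here is a minimal NumPy sketch of these definitions (my own illustration, not part of Spivak's text); the helper `angle` and the sample rotation are chosen just for the example:

```python
import numpy as np

def angle(x, y):
    """Angle between nonzero x, y in R^n, per the definition above."""
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards against round-off

# A rotation should be angle-preserving; spot-check on random vectors.
t = 0.7
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
rng = np.random.default_rng(1)
x, y = rng.standard_normal(2), rng.standard_normal(2)
print(np.isclose(angle(x, y), angle(R @ x, R @ y)))  # True
```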

The exercise as stated:

Let $\{x_{1},\dots, x_{n}\}$ be a basis for $\mathbb{R}^{n}$, and suppose that $\lambda_{1}, \dots, \lambda_{n}\in \mathbb{R}$ are such that $Tx_{j} = \lambda_{j}x_{j}$ for each $j = 1,\dots, n$.

Then $T$ is angle-preserving only if (not if and only if!) $|\lambda_{i}| = |\lambda_{j}|$ for every $1\leq i\leq j\leq n$.

I'm having problems with the $(\Rightarrow)$ direction.

My best attempt (which seems to lead nowhere) is to suppose that $|\lambda_{j}|\neq |\lambda_{k}|$. On the one hand, \begin{align*} \angle(Tx_{j},Tx_{k}) & = \arccos\left(\frac{\langle Tx_{j},Tx_{k} \rangle}{|Tx_{j}|\cdot |Tx_{k}|}\right)\\ & = \arccos\left(\frac{\langle \lambda_{j}x_{j},\lambda_{k}x_{k} \rangle}{|\lambda_{j}x_{j}|\cdot |\lambda_{k}x_{k}|}\right)\\ & = \arccos\left(\frac{\lambda_{j}\lambda_{k}\langle x_{j},x_{k} \rangle}{|\lambda_{j}|\cdot|\lambda_{k}|\cdot|x_{j}|\cdot |x_{k}|}\right)\\ & = \arccos\left(\text{sign}(\lambda_{j})\text{sign}(\lambda_{k})\frac{\langle x_{j},x_{k} \rangle }{|x_{j}|\cdot |x_{k}|}\right); \end{align*} on the other hand, since $T$ is angle-preserving, \begin{align*} \angle(Tx_{j},Tx_{k}) & = \angle(x_{j},x_{k})\\ & = \arccos\left(\frac{\langle x_{j},x_{k} \rangle}{|x_{j}|\cdot |x_{k}|}\right). \end{align*}

Then, since $\arccos$ is injective, I believe I can make the jump that $\text{sign}(\lambda_{j})\text{sign}(\lambda_{k}) = 1$, at least when $\langle x_{j},x_{k}\rangle \neq 0$, which does not resemble the conclusion that I should arrive at.
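As a sanity check on this computation, here is a short numerical experiment with a concrete non-orthogonal basis and eigenvalues of opposite sign (both chosen by me for illustration):

```python
import numpy as np

def angle(x, y):
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Basis of R^2 with <x_j, x_k> != 0; eigenvalues of opposite sign.
xj, xk = np.array([1.0, 0.0]), np.array([1.0, 1.0])
lj, lk = 2.0, -3.0

print(angle(xj, xk))            # arccos(1/sqrt(2)) ~ 0.7854
print(angle(lj * xj, lk * xk))  # arccos(-1/sqrt(2)) ~ 2.3562
```

So opposite signs do change the angle of a non-orthogonal pair; for an orthogonal pair, however, both cosines are $0$ and the computation says nothing about $|\lambda_j|$ versus $|\lambda_k|$.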

Note: I wasn't sure what tag to put this under, so anyone who knows better please feel free to adjust.

Thanks for any help you can give.

  • 0
    I will check first thing in the morning! :) I forgot to bring the text home with me. (2012-07-31)

3 Answers

11

If necessary, rescale each $x_i$ so they all have unit length (this does not affect the relations $Tx_i = \lambda_i x_i$), and, arguing for a contradiction, relabel the indices so that $|\lambda_1|\neq |\lambda_2|$.

Now, consider the vectors $v_1 = x_1 + x_2$ and $v_2 = x_1-x_2$.

First, I claim that $v_1$ and $v_2$ are orthogonal, for \begin{align*} \langle v_1, v_2\rangle &= \langle x_1+x_2,\; x_1-x_2\rangle \\ &= \langle x_1, x_1\rangle - \langle x_1, x_2\rangle + \langle x_2,x_1\rangle - \langle x_2, x_2\rangle \\ &= |x_1|^2-|x_2|^2 \\ &=0\end{align*} where the last equality follows since both $x_1$ and $x_2$ have unit length.

Next, I claim that $Tv_1$ and $Tv_2$ are not orthogonal. For, \begin{align*} \langle Tv_1, Tv_2\rangle &= \langle \lambda_1 x_1 + \lambda_2 x_2,\; \lambda_1 x_1 - \lambda_2 x_2\rangle \\ &= \lambda_1^2 \langle x_1, x_1\rangle -\lambda_1 \lambda_2 \langle x_1, x_2\rangle + \lambda_2\lambda_1 \langle x_2,x_1\rangle - \lambda_2^2 \langle x_2, x_2\rangle \\ &= \lambda_1^2 |x_1|^2 - \lambda_2^2 |x_2|^2 \\ &= \lambda_1^2 - \lambda_2^2\end{align*}

and this last line is $0$ iff $|\lambda_1| = |\lambda_2|$. So $T$ sends the orthogonal pair $v_1, v_2$ to a non-orthogonal pair, and hence is not angle-preserving.
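A minimal numerical check of this argument (the unit basis and eigenvalues below are my own example data):

```python
import numpy as np

# Unit-length, non-orthogonal basis of R^2, with |lam1| != |lam2|.
x1 = np.array([1.0, 0.0])
x2 = np.array([0.6, 0.8])
lam1, lam2 = 1.0, 2.0

v1, v2 = x1 + x2, x1 - x2
Tv1, Tv2 = lam1 * x1 + lam2 * x2, lam1 * x1 - lam2 * x2  # T on eigenvectors

print(np.dot(v1, v2))    # 0.0: v1 and v2 are orthogonal
print(np.dot(Tv1, Tv2))  # lam1^2 - lam2^2 = -3.0: their images are not
```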

  • 0
    Fortunately for me the error was not mine for a $c$hange. Very nice solution by the way. (2012-07-31)
10

This is not true as originally stated with "if and only if": the "if" direction fails.

Let $T=\begin{bmatrix} 1 & -2 \\ 0 & -1 \end{bmatrix}$. Then $x_1=\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $x_2=\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ form a basis, and $T x_1 = x_1$, $T x_2 = - x_2$, hence the eigenvector requirement is satisfied.

However, take $x=\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $y=\begin{bmatrix} 0 \\ 1 \end{bmatrix}$, then $\langle x , y \rangle = 0$, but $\langle T x , T y \rangle = -2$. Since $\arccos$ is bijective on the domain $[-1,1]$, it follows that $\arccos \frac{\langle x , y \rangle}{\|x\| \|y\|} \neq \arccos \frac{\langle T x , T y \rangle}{\|T x\| \|T y\|}$.
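A quick NumPy verification of this counterexample (the script is mine; the matrix and vectors are from the answer):

```python
import numpy as np

def angle(x, y):
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))

T = np.array([[1.0, -2.0],
              [0.0, -1.0]])
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

print(angle(x, y))          # pi/2 ~ 1.5708
print(angle(T @ x, T @ y))  # arccos(-2/sqrt(5)) ~ 2.6779: not angle preserving
```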

Here is a slightly more satisfactory answer:

$T$ is angle preserving iff $T^T T = \sigma^2 I$, for some $\sigma >0$.

($\Leftarrow$) Suppose $T^T T = \sigma^2 I$. Then $T' = \frac{1}{\sigma} T $ is orthogonal. It follows that $\langle T'x, T'y \rangle = \langle x, y \rangle$ and $\|T'v \| = \|v\|$, hence $\frac{\langle T'x, T'y \rangle}{\|T'x \| \|T'y \|} = \frac{\langle Tx, Ty \rangle}{\|Tx \| \|Ty \|} = \frac{\langle x, y \rangle}{\|x \| \|y \|}$.

($\Rightarrow$) Let $e_1, \dots, e_n$ be the standard basis of $\mathbb{R}^n$, and suppose $T$ is angle preserving. It follows immediately that $\frac{\langle T e_i, T e_j \rangle}{\|Te_i \| \|Te_j \|} = \delta_{ij}$, hence the vectors $\frac{T e_i}{\|Te_i \|}$ form an orthonormal basis. Let $ Q = \begin{bmatrix} \frac{T e_1}{\|Te_1 \|} & \cdots & \frac{T e_n}{\|Te_n \|} \end{bmatrix}$, and let $\Lambda = \operatorname{diag}(\|Te_1 \|, \dots, \|Te_n \|)$. It should be clear that $Q$ is orthogonal, and since $Tx = \sum_i x_i \|Te_i \| \frac{T e_i}{\|Te_i \|} = Q \Lambda x$, we have $T = Q \Lambda$.

Now, adapting @Jason DeVito's trick, notice that $\langle e_i+e_j, e_i-e_j \rangle = 0$, so, since $T$ is angle preserving (and $Q$ preserves inner products), $\langle T(e_i+e_j), T(e_i-e_j) \rangle = \langle \Lambda(e_i+e_j), \Lambda(e_i-e_j) \rangle = \langle \Lambda e_i, \Lambda e_i \rangle - \langle \Lambda e_j, \Lambda e_j \rangle = 0$. It follows that $ \|T e_i \|^2 = \|T e_j\|^2$, so we can write $\Lambda = \sigma M$, with $M$ a diagonal matrix of $\pm1$ entries (note $M^2=I$) and $\sigma = \|T e_1 \|$. Hence $T = \sigma QM$, from which it follows that $T^T T = \sigma^2 I$.
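Here is a sketch that checks this characterization numerically for a scaled orthogonal map (the random test matrix, dimension, and tolerance checks are my own choices, not part of the proof):

```python
import numpy as np

def angle(x, y):
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))

rng = np.random.default_rng(0)

# Build T = sigma * Q * M with Q orthogonal, M = diag(+/-1): T^T T = sigma^2 I.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
M = np.diag([1.0, -1.0, 1.0])
sigma = 2.5
T = sigma * Q @ M

print(np.allclose(T.T @ T, sigma**2 * np.eye(3)))  # True

# Such a T preserves the angle between random vectors.
x, y = rng.standard_normal(3), rng.standard_normal(3)
print(np.isclose(angle(x, y), angle(T @ x, T @ y)))  # True
```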

  • 0
    @copper.hat: I have worked through the proof given in a PDF document I have found in this answer: http://math.stackexchange.com/q/354848/152538 (link to PDF: cs.bsu.edu/homepages/fischer/math445/angles.pdf ), which also clarified the trouble I had with your answer. It's a lot clearer now. Thanks. (2014-06-05)
-1

There is a bug in the above arguments! If $AA^T=\sigma ^2I$ we cannot say $A=\sigma I$. Counterexample: take a diagonal matrix with $+\sigma$ or $-\sigma$ entries on the diagonal and $0$ elsewhere.
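For what it's worth, a quick check of this example (the script is mine):

```python
import numpy as np

sigma = 2.0
A = np.diag([sigma, -sigma])  # the proposed counterexample

print(np.allclose(A @ A.T, sigma**2 * np.eye(2)))  # True: A A^T = sigma^2 I
print(np.allclose(A, sigma * np.eye(2)))           # False: yet A != sigma I
```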

The very final answer to this circle of ideas is [image of the equivalent conditions, not recoverable here], which I have proved with ALL details and optimal care for such issues as orientation. For 1 => 2 see Orthogonality preserving linear maps from $\mathbb R^n$ to $\mathbb R^n$? Please challenge.

  • 0
    I didn't downvote, but the answers above do not conclude that if $AA^T = \sigma^2 I$ then $A = \sigma I$. Any rotation suffices as a counterexample. Where is the bug in the argument? (2016-12-04)