In $\mathbb{R}^n$, can we find an orthogonal basis $v_1,v_2,\ldots,v_n$ with $\|v_i\|=1$ such that
$$\|\bar{v}_1\|=\|\bar{v}_2\|= \cdots =\|\bar{v}_n\|$$
where $\bar{v}_i\in\mathbb{R}^m$ is the vector formed by the first $m$ coordinates of $v_i$ with respect to the standard basis, and $m$ is a given integer with $m<n$?
A specific type of orthogonal basis
-
0The question still is a bit unclear to me, even after your edit. It seems that $P$ is supposed to be the orthogonal projection of $\mathbb{R}^{n}$ onto the subspace isomorphic to $\mathbb{R}^{m}$ spanned by the first $m$ vectors $v_{1},\ldots,v_{m}$. If that is the case we clearly have $\|P(v_{1})\| = \cdots = \|P(v_{m})\| = 1$ and $\|P(v_{m+1})\| = \cdots = \|P(v_{n})\| = 0$. – 2011-05-19
-
0I think the operator $P(v_i)$ makes the question unclear. I hope the new question is clear. – 2011-05-19
-
0Okay, I think I see where you're heading. Is your question the following: Let $v_{j} = (x_{1}^{(j)}, x_{2}^{(j)}, \ldots, x_{n}^{(j)})$ with respect to the standard basis. Suppose $v_{1}, \ldots, v_{n}$ form an orthonormal basis. Fix $m$ between $1$ and $n-1$ and put $\bar{v}_{j} = (x_{1}^{(j)}, \ldots, x_{m}^{(j)}, 0, \ldots, 0)$. Is it possible that $\|\bar{v}_{1}\| = \cdots = \|\bar{v}_n\|$, and if so, how can I find such a basis? (I interpreted "elements of $v_{i}$" as "coordinates of $v_{i}$ with respect to the standard basis".) – 2011-05-19
-
0Yes! That's exactly what I mean! – 2011-05-19
-
0I slightly rephrased your question in the hope of making it more transparent. If you disagree with the edits, you can roll it back. – 2011-05-19
-
0Thank you very much for your edit. – 2011-05-19
-
0An obvious remark : if you have a solution for a given $m$, you get a solution for $n-m$ by "flipping the order of coordinates" (since $|\!|v_1 - \overline{v}_1|\!| = \ldots = |\!|v_n - \overline{v}_n|\!|$). – 2011-05-19
4 Answers
[Edit:] I just realized that this only works for $mk\le n$, so it's also only a partial answer.
Note first that the orthonormal column vectors $v_i$ form an orthogonal matrix whose rows are also orthonormal. Since we can expand any set of $m$ orthonormal row vectors to a set of $n$ orthonormal row vectors, we only have to worry about filling in the first $m$ rows with orthonormal row vectors such that the truncated column vectors satisfy the condition.
If $m\mid n$, this is easy; just fill the $m$ rows with $n/m$ copies of the $m\times m$ identity matrix. If $m\nmid n$, write $n=qm+k$ and put $q$ copies of the identity matrix into the $m$ rows (say, beginning at the left). That produces orthogonal rows, but they don't fulfill the condition, since the norm of the first $qm$ truncated column vectors is $1$ and that of the last $k$ is $0$. To fix this, rotate the row vectors in the plane formed by the two row vectors consisting of $1$s in the first $qm$ columns and in the last $k$ columns, respectively. That leaves the row vectors orthogonal (since they're all rotated by the same isometry), but shifts the excess weight from the first $qm$ columns to the last $k$ columns. By choosing the rotation angle suitably, we can make the weights in all columns coincide. The result looks something like this:
$$\begin{pmatrix}
1+\alpha & \alpha & \alpha & \cdots & 1+\alpha & \alpha & \alpha & \cdots & \beta & \cdots & \beta\\
\alpha & 1+\alpha & \alpha & \cdots & \alpha & 1+\alpha & \alpha & \cdots & \beta & \cdots & \beta\\
\alpha & \alpha & 1+\alpha & \cdots & \alpha & \alpha & 1+\alpha & \cdots & \beta & \cdots & \beta\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots&&\vdots
\end{pmatrix}$$
Here the block has $m$ rows and $n$ columns: the first $qm$ columns consist of $q$ copies of $I_m+\alpha J_m$ (where $J_m$ is the $m\times m$ all-ones matrix), and the last $k$ columns are filled with $\beta$,
with $\alpha=(\sqrt{1-mk/n}-1)/m$ and $\beta=\sqrt{q/n}$. (One caveat, noted in the comments below: as written these rows have norm $\sqrt q$, so each row should additionally be scaled by $1/\sqrt q$ to make them orthonormal.)
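For concreteness, here is a small NumPy sketch of this construction (my own code, not from the answer; the helper name `truncated_rows` is mine). It assumes $mk\le n$, as the edit above requires, and includes the extra scaling of the rows by $1/\sqrt q$ that makes them exactly orthonormal.

```python
import numpy as np

def truncated_rows(m, n):
    """Build the m x n block of rows described above: q copies of
    I_m shifted by alpha, then k all-beta columns, with every row
    rescaled by 1/sqrt(q) so the rows are orthonormal.
    Requires m*k <= n, where n = q*m + k."""
    q, k = divmod(n, m)
    assert m * k <= n, "the construction needs m*k <= n"
    alpha = (np.sqrt(1 - m * k / n) - 1) / m
    beta = np.sqrt(q / n)
    blocks = [np.eye(m) + alpha * np.ones((m, m))] * q
    if k > 0:
        blocks.append(beta * np.ones((m, k)))
    return np.hstack(blocks) / np.sqrt(q)

A = truncated_rows(2, 7)                      # m = 2, n = 7: q = 3, k = 1
assert np.allclose(A @ A.T, np.eye(2))        # rows are orthonormal
assert np.allclose((A**2).sum(axis=0), 2/7)   # every column has norm^2 = m/n
```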
-
0I think the formulae might still not be correct. For $m=2$, $n=3$ we have $k=q=1$, so $\beta=1/\sqrt3$ and $\alpha=-1+\beta$ so you seem to get the matrix $$\frac1{\sqrt3}\begin{pmatrix}1&1-\sqrt3&1\\1-\sqrt3&1&1\end{pmatrix}.$$ But don't you want the rows of this matrix to be orthogonal? – 2011-05-19
-
0@mac: You're quite right -- I was missing another factor of $m$ -- I've corrected it -- sorry about the mistakes. – 2011-05-19
-
0I find the idea to work on the rows very clever. – 2011-05-20
I think there's a trick that can make joriki's idea work in every case (I hope I'm not mistaken). Let $A_{m,n}$ denote the assertion that there exists a solution to the problem for the values $m \le n$ (we take the convention that $A_{0,n}$ is always true). Obviously, $A_{m,m}$ is true for all $m$. By flipping the order of the rows, we see that
$A_{m, n} \Longleftrightarrow A_{n-m, n}$
And joriki showed that $A_{m,n}$ is equivalent to the existence of a (nonzero) $m \times n$ matrix whose rows are orthonormal and whose columns all have the same norm. So by concatenating two such matrices (with suitable scaling factors, so that the rows remain of norm $1$ and the columns of both blocks end up with the same norm), we get
$(A_{m, n_1} \textrm{ and } A_{m, n_2} ) \Longrightarrow A_{m, n_1 + n_2}$
Let $m < n$ be two integers, and let $r$ be the remainder of $n$ modulo $m$. Using the properties above, we see that
$A_{r, m} \Longrightarrow A_{r, m+r} \Longrightarrow A_{m, m+r} \Longrightarrow A_{m, n}$ (concatenate with $A_{r,r}$, then flip, then concatenate $q-1$ further copies of $A_{m,m}$, where $n = qm + r$).
So checking whether $A_{m,n}$ is true reduces to checking whether $A_{r, m}$ is. Applying Euclid's algorithm, we can repeat this until the remainder is $0$, in which case there is a solution.
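Here is a NumPy sketch of this reduction (my code, not from the answer). It represents a witness for $A_{m,n}$ as an $m\times n$ matrix with orthonormal rows whose columns all have squared norm $m/n$; the helper names `flip`, `concat`, and `solve` are mine.

```python
import numpy as np

def concat(B1, B2):
    """Concatenate witnesses for A_{m,n1} and A_{m,n2} into one for
    A_{m,n1+n2}, rescaling so rows stay unit and column norms match."""
    n1, n2 = B1.shape[1], B2.shape[1]
    return np.hstack([np.sqrt(n1 / (n1 + n2)) * B1,
                      np.sqrt(n2 / (n1 + n2)) * B2])

def flip(B):
    """Turn a witness for A_{m,n} into one for A_{n-m,n} by completing
    the rows to an orthogonal matrix and keeping the other n-m rows."""
    m = B.shape[0]
    Q, _ = np.linalg.qr(B.T, mode='complete')  # Q is n x n orthogonal
    return Q[:, m:].T

def solve(m, n):
    """Euclid-style recursion: a witness for A_{m,n} (m <= n)."""
    q, r = divmod(n, m)
    if r == 0:                                 # base case: m divides n
        return np.hstack([np.eye(m)] * q) / np.sqrt(q)
    B = solve(r, m)                            # A_{r,m}
    B = concat(B, np.eye(r))                   # A_{r,m+r}
    B = flip(B)                                # A_{m,m+r}
    for _ in range(q - 1):                     # pad up to n = qm + r
        B = concat(B, np.eye(m))
    return B

M = solve(3, 7)
assert np.allclose(M @ M.T, np.eye(3))         # orthonormal rows
assert np.allclose((M**2).sum(axis=0), 3/7)    # equal column norms
```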
-
0@Joel: This is also pretty nifty :-) I was slightly sloppy about the normalization, and you don't explicate that either, but I think it should be possible to match the solutions up like you're doing it, possibly with some rescaling at each step. – 2011-05-20
-
0@ joriki : Yes, there is a rescaling involved at each step. But it becomes messy very fast. – 2011-05-20
-
0@Joel: This looks very nice! But I don't understand what you mean by "flipping rows", or see why $A_{m,n}\iff A_{n-m,m}$. Please could you explain? – 2011-05-27
-
0@mac : If $(a_{i,j})_{1 \le i, j \le n}$ is an orthogonal matrix, then so is $(a_{n+1-i,j})_{1 \le i, j \le n}$ (this matrix is obtained by flipping the order of the rows). If the first one is a solution for $(m, n)$, then the "flipped" one is a solution for $(n-m, n)$ (because $|\!|\overline{v}_1 - v_1|\!| = \ldots = |\!|\overline{v}_n - v_n|\!|$ by Pythagoras' theorem). This gives you an equivalence because of symmetry ($A_{m,n} \Rightarrow A_{n-m, n} \Rightarrow A_{m,n}$). – 2011-05-27
-
0@Joel: thanks for clearing that up! So I guess it should be $A_{m,n}\iff A_{n-m,n}$ above (not $A_{n-m,m}$). – 2011-05-27
-
0@mac : Yes, I corrected the mistake. – 2011-05-28
I only have partial answers: for when $n=2^k$, for when there is an $n\times n$ Hadamard matrix, and for $n=3$.
When $n=2^k$ we can find one such basis that works for every $m<n$. For $n=2$, we can take $v_1=(1,1)/\sqrt2$ and $v_2=(1,-1)/\sqrt{2}$. For $k>1$, a suitable basis for $\mathbb{R}^{2^k}$ is given by the $2^k$ vectors $w=w_1\otimes w_2\otimes\dots\otimes w_k$ where $w_j\in \{v_1,v_2\}$ for $1\le j\le k$, and $\otimes$ is the Kronecker product. Every entry of such a vector $w$ is $\pm 1/2^{k/2}$, so $\|\overline w\|=\sqrt{m/2^k}$, regardless of how the $w_j$'s are chosen from $\{v_1,v_2\}$.

This construction is a special case of the following: if you can find an $n\times n$ Hadamard matrix, then its columns give a basis of $\mathbb{R}^n$ which you can scale by $1/\sqrt n$ to answer your question in the affirmative. The size of a Hadamard matrix is either $1$, $2$ or a multiple of $4$, and it's conjectured that they exist for every multiple of $4$. (Remark: the same argument shows that if you change the question from $\mathbb{R}^n$ to $\mathbb{C}^n$, then the answer is always yes, because Fourier matrices can be used instead of Hadamard matrices in this argument, and they exist for every $n$.)

An ad hoc calculation for $n=3$ shows that you can find a suitable basis there too, although this time my choice of basis depends on whether $m=1$ or $m=2$. For $m=2$ we can take $v_1=(\sqrt{2/3},0,1/\sqrt3)$, $v_2=(1/\sqrt6,1/\sqrt2,-1/\sqrt3)$, $v_3=(1/\sqrt6,-1/\sqrt2,-1/\sqrt3)$, and for $m=1$ you can turn each of these vectors back to front.
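As a quick numerical sanity check (a sketch of my own, using the standard Sylvester construction of Hadamard matrices of size $2^k$), one can verify that the scaled columns are orthonormal and that every truncation has norm $\sqrt{m/n}$ for every $m$:

```python
import numpy as np

# Sylvester's construction: H_{2^k} is the k-fold Kronecker product
# of the 2 x 2 Hadamard matrix with itself.
H2 = np.array([[1.0, 1.0],
               [1.0, -1.0]])
k = 3
H = np.array([[1.0]])
for _ in range(k):
    H = np.kron(H, H2)

n = 2 ** k
V = H / np.sqrt(n)                        # columns form an orthonormal basis
assert np.allclose(V.T @ V, np.eye(n))

# Every entry of V is +-1/sqrt(n), so truncating each column to its
# first m coordinates gives norm sqrt(m/n), for every m.
for m in range(1, n):
    norms = np.linalg.norm(V[:m, :], axis=0)
    assert np.allclose(norms, np.sqrt(m / n))
```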
The answer is yes. Assume that $m=n-1$; the same construction can be done for other $m$ as well.
Denote by $W$ the subspace of $V=\mathbb{R}^n$ spanned by $e_1,\ldots,e_m$, the first $m$ vectors of the standard basis (hence $W \cong \mathbb{R}^m$).
Denote by $p:V\to W$ the projection (i.e. $p(a_1,\ldots,a_n)=(a_1,\ldots,a_m,0,\ldots,0)$ w.r.t. the standard basis). Let the unit sphere in $V$ be denoted by $S_1=\{v\in V : \|v\|=1\}$.
Denote $U={\rm Span}(e_2-e_1,\ldots,e_n-e_1)$, and finally let $A=e_1+U$, the affine subspace passing through the endpoints of $e_1,\ldots,e_n$.
Find a linear operator (rotation) $T:V\to V$ such that $\|T(v)\|=\|v\|$ for all $v\in V$, $T(U)=W$, and $T(e_1)\in W^\perp$. Hence, denoting $v=T(e_1)$, we get $v+W=T(A)$ (this makes $T(A)$ "parallel" to $W$). Such an operator exists since $U$ is of codimension $1$, and hence can be rotated to coincide with any other subspace of codimension $1$.
Now consider $T(e_1),\ldots,T(e_n)$. Since $T$ is a rotation, they form an orthonormal basis of $V$.
Now, if $w_1,w_2\in S_1\cap T(A)$ then $\|w_1\|=\|w_2\|=1$ and $w_1=T(e_1)+T(u_1)$, $w_2=T(e_1)+T(u_2)$ where $u_1,u_2\in U$. From the choice of $v\in W^\perp$, we have $p(w_1)=w_1-v$, $p(w_2)=w_2-v$. Hence $$\|p(w_1)\|^2=\|w_1-v\|^2=\|w_1\|^2-\|v\|^2=\|w_2\|^2-\|v\|^2=\|p(w_2)\|^2$$
(by the Pythagorean theorem, since $v\perp w_i-v$).
Since $T(e_1),\ldots,T(e_n)\in S_1\cap T(A)$, we have $\|p(T(e_1))\|=\cdots=\|p(T(e_n))\|$, as required.
-
0For other values of $m$, you will need to find an operator $T$ such that $v+W\subseteq T(A)$ – 2011-05-19
-
2Why is $T(e_1)$ orthogonal to $W$? This would mean that $e_1$ is orthogonal to $U$, but it doesn't appear to be. – 2011-05-19