7

In $\mathbb{R}^n$, can we find an orthogonal basis $v_1,v_2,\ldots,v_n$ with $\|v_i\|=1$ such that $\|\bar{v}_1\|=\|\bar{v}_2\|= \cdots =\|\bar{v}_n\|$, where $\bar{v}_i\in\mathbb{R}^m$ is the vector formed by the first $m$ coordinates of $v_i$ with respect to the standard basis and $m$ is a given integer with $1\le m\le n$?

  • 0
    An obvious remark: if you have a solution for a given $m$, you get a solution for $n-m$ by "flipping the order of coordinates" (since $\|v_1 - \overline{v}_1\| = \ldots = \|v_n - \overline{v}_n\|$). (2011-05-19)

4 Answers

6

[Edit:] I just realized that this only works for $mk\le n$, so it's also only a partial answer.

Note first that the orthonormal column vectors $v_i$ form an orthogonal matrix whose rows are also orthonormal. Since we can expand any set of $m$ orthonormal row vectors to a set of $n$ orthonormal row vectors, we only have to worry about filling in the first $m$ rows with orthonormal row vectors such that the truncated column vectors satisfy the condition.

If $m\mid n$, this is easy; just fill the $m$ rows with $n/m$ copies of the $m\times m$ identity matrix. If $m\nmid n$, write $n=qm+k$ and put $q$ copies of the identity matrix into the $m$ rows (say, beginning at the left). That produces orthogonal rows, but they don't fulfill the condition, since the norm of the first $qm$ truncated column vectors is $1$ and that of the last $k$ is $0$. To fix this, rotate the row vectors in the plane formed by the two row vectors consisting of $1$s in the first $qm$ columns and in the last $k$ columns, respectively. That leaves the row vectors orthogonal (since they're all rotated by the same isometry), but shifts the excess weight from the first $qm$ columns to the last $k$ columns. By choosing the rotation angle suitably, we can make the weights in all columns coincide. The result looks something like this:

$ \begin{array}{l} \overbrace{\hphantom{\begin{array}{ccccccccccc} 1+\alpha & 1+\alpha & 1+\alpha & \cdots & 1+\alpha & 1+\alpha & 1+\alpha & \cdots & \beta & \cdots & \beta \end{array}}}^{\displaystyle n}\\ \overbrace{\hphantom{\begin{array}{ccccccccccc} 1+\alpha & 1+\alpha & 1+\alpha & \cdots & 1+\alpha & 1+\alpha & 1+\alpha&\cdots \end{array}}}^{\displaystyle qm} \hphantom{\cdots} \overbrace{\hphantom{\begin{array}{ccccccccccc} \beta & \cdots & \beta \end{array}}}^{\displaystyle k}\\ \overbrace{\hphantom{\begin{array}{ccccccccccc} 1+\alpha & 1+\alpha & 1+\alpha & \cdots \end{array}}}^{\displaystyle m} \hphantom{\cdots} \overbrace{\hphantom{\begin{array}{ccccccccccc} 1+\alpha & 1+\alpha & 1+\alpha&\cdots \end{array}}}^{\displaystyle m}\\ \left. \begin{array}{ccccccccccc} 1+\alpha & \alpha & \alpha & \cdots & 1+\alpha & \alpha & \alpha & \cdots & \beta & \cdots & \beta\\ \alpha & 1+\alpha & \alpha & \cdots & \alpha & 1+\alpha & \alpha & \cdots & \beta & \cdots & \beta\\ \alpha & \alpha & 1+\alpha & \cdots & \alpha & \alpha & 1+\alpha & \cdots & \beta & \cdots & \beta\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots&&\vdots \end{array} \;\;\;\right\}m\\ \vphantom{q} \end{array} $

with $\alpha=(\sqrt{1-mk/n}-1)/m$ and $\beta=\sqrt{q/n}$. (As written, each row of this matrix has norm $\sqrt{q}$; dividing the whole matrix by $\sqrt{q}$ makes the rows orthonormal while keeping all the column norms equal, namely $\sqrt{m/n}$ each.)
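For concreteness, here is a small numerical sketch of this construction in NumPy (the function name `truncated_block` is my own, and it assumes $mk\le n$):

    import numpy as np

    def truncated_block(m, n):
        """First m rows of an orthogonal n x n matrix whose columns all have
        norm sqrt(m/n); requires m*k <= n, where n = q*m + k."""
        q, k = divmod(n, m)
        assert k == 0 or m * k <= n, "construction requires m*k <= n"
        # q copies of the m x m identity, zeros in the last k columns
        M = np.hstack([np.eye(m)] * q + [np.zeros((m, k))])
        if k > 0:
            # unit vectors spanning the plane in which we rotate
            u1 = np.concatenate([np.ones(q * m), np.zeros(k)]) / np.sqrt(q * m)
            u2 = np.concatenate([np.zeros(q * m), np.ones(k)]) / np.sqrt(k)
            c, s = np.sqrt(1 - m * k / n), np.sqrt(m * k / n)  # cos/sin of the rotation angle
            # rotate every row by the same angle in the (u1, u2)-plane
            M = M + np.outer(M @ u1, (c - 1) * u1 + s * u2)
        return M / np.sqrt(q)  # scale so the rows are orthonormal

    M = truncated_block(2, 5)               # n = 5 = 2*2 + 1, and m*k = 2 <= 5
    print(np.allclose(M @ M.T, np.eye(2)))  # rows orthonormal: True
    print(np.linalg.norm(M, axis=0))        # every column has norm sqrt(2/5)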

  • 0
    I find the idea of working on the rows very clever. (2011-05-20)
4

I think there's a trick that can make joriki's idea work in any case (I hope I'm not mistaken). Let's denote by $A_{m,n}$ the assertion that there exists a solution to the problem for the values $m \le n$ (with the convention that $A_{0,n}$ is always true). Obviously, $A_{m,m}$ is true for all $m$. By flipping the order of the coordinates (as in the remark above), we see that

$A_{m, n} \Longleftrightarrow A_{n-m, n}$

And joriki showed that $A_{m,n}$ is equivalent to the existence of a (nonzero) $m \times n$ matrix whose rows are orthonormal and whose columns all have the same norm. So by concatenating two such matrices (with suitable scaling factors so that the rows remain of norm $1$ and the columns of both blocks end up with the same norm), we get

$(A_{m, n_1} \textrm{ and } A_{m, n_2} ) \Longrightarrow A_{m, n_1 + n_2}$

Let $m < n$ be two integers, and let $r$ be the remainder of $n$ modulo $m$. Using the properties above, we see that

$A_{r, m} \Longrightarrow A_{r, m+r} \Longrightarrow A_{m, m+r} \Longrightarrow A_{m, n}$

So checking whether $A_{m,n}$ is true reduces to checking whether $A_{r,m}$ is. Iterating as in Euclid's algorithm, we eventually reach a remainder of $0$, in which case there is a solution.
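To make the recursion concrete, here is a sketch in NumPy (the helper names `concat`, `flip` and `witness` are mine; `flip` uses an SVD as one way of completing the orthonormal rows to an orthonormal basis of $\mathbb{R}^n$):

    import numpy as np

    def concat(M1, M2):
        """A_{m,n1} and A_{m,n2} imply A_{m,n1+n2}: concatenate with scaling
        factors chosen so the rows stay orthonormal and all columns keep a
        common norm."""
        n1, n2 = M1.shape[1], M2.shape[1]
        return np.hstack([np.sqrt(n1 / (n1 + n2)) * M1,
                          np.sqrt(n2 / (n1 + n2)) * M2])

    def flip(M):
        """A_{m,n} implies A_{n-m,n}: complete the m orthonormal rows to an
        orthonormal basis of R^n; the remaining n - m rows again have columns
        of equal norm, by Pythagoras."""
        m, n = M.shape
        _, _, Vt = np.linalg.svd(M)  # last n - m rows of Vt span the complement of the row space
        return Vt[m:]

    def witness(m, n):
        """An m x n matrix with orthonormal rows and columns all of norm
        sqrt(m/n), built by the Euclidean recursion A_{r,m} => A_{m,n}."""
        if m == n:
            return np.eye(n)
        q, r = divmod(n, m)
        # A_{r,m} and A_{r,r} give A_{r,m+r}; flipping gives A_{m,m+r}
        M = np.eye(m) if r == 0 else flip(concat(witness(r, m), np.eye(r)))
        for _ in range(q - 1):  # concatenate copies of A_{m,m} up to width n
            M = concat(M, np.eye(m))
        return M

    M = witness(3, 7)
    print(np.allclose(M @ M.T, np.eye(3)))  # rows orthonormal: True
    print(np.linalg.norm(M, axis=0))        # every column has norm sqrt(3/7)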

  • 0
    @mac: Yes, I corrected the mista$k$e. (2011-05-28)
3

I only have partial answers: for when $n=2^k$, for when there is an $n\times n$ Hadamard matrix, and for $n=3$.

When $n=2^k$ we can find one such basis that works for every $m$.

For $n=2$, we can take $v_1=(1,1)/\sqrt2$ and $v_2=(1,-1)/\sqrt{2}$.

For $k>1$, a suitable basis for $\mathbb{R}^{2^k}$ is given by the $2^k$ vectors $w=w_1\otimes w_2\otimes\dots\otimes w_k$ where $w_j\in \{v_1,v_2\}$ for $1\le j\le k$, and $\otimes$ is the Kronecker product. Every entry of such a vector $w$ is $\pm 1/2^{k/2}$, so $\|\overline w\|=\sqrt{m/2^k}$, regardless of how the $w_j$'s are chosen from $\{v_1,v_2\}$.

This construction is a special case of the following: if you can find an $n\times n$ Hadamard matrix then its columns give a basis of $\mathbb{R}^n$ which you can scale by $1/\sqrt n$ to answer your question in the affirmative. The size of a Hadamard matrix is either $1$, $2$ or a multiple of $4$, and it's conjectured that they exist for every multiple of $4$. (Remark: the same argument shows that if you change the question from $\mathbb{R}^n$ to $\mathbb{C}^n$, then the answer is always yes, because Fourier matrices can be used instead of Hadamard matrices in this argument, and they exist for every $n$.)
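For what it's worth, here is a short numerical sketch of the $n=2^k$ construction in NumPy (the helper name `hadamard_basis` is mine):

    import numpy as np
    from functools import reduce

    def hadamard_basis(k):
        """Columns of the k-fold Kronecker power of the 2x2 Hadamard matrix,
        scaled by 1/sqrt(n): an orthonormal basis of R^n, n = 2^k, in which
        every entry is +-1/sqrt(n)."""
        H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
        H = reduce(np.kron, [H2] * k)  # n x n matrix with entries +-1
        return H / np.sqrt(2 ** k)

    V = hadamard_basis(3)                   # n = 8
    n = V.shape[0]
    print(np.allclose(V.T @ V, np.eye(n)))  # orthonormal basis: True
    for m in range(1, n):                   # each truncation has norm sqrt(m/n)
        print(m, np.linalg.norm(V[:m, :], axis=0))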

An ad hoc calculation for $n=3$ shows that you can find a suitable basis then, although this time my choice of basis depends on whether $m=1$ or $m=2$. For $m=2$ we can take $v_1=(\sqrt{2/3},0,1/\sqrt3)$, $v_2=(1/\sqrt6,1/\sqrt2,-1/\sqrt3)$, $v_3=(1/\sqrt6,-1/\sqrt2,-1/\sqrt3)$ and for $m=1$ you can turn each of these vectors back to front.
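And a quick numerical check of this $n=3$ basis (NumPy):

    import numpy as np

    v1 = np.array([np.sqrt(2/3), 0, 1/np.sqrt(3)])
    v2 = np.array([1/np.sqrt(6), 1/np.sqrt(2), -1/np.sqrt(3)])
    v3 = np.array([1/np.sqrt(6), -1/np.sqrt(2), -1/np.sqrt(3)])
    V = np.column_stack([v1, v2, v3])

    print(np.allclose(V.T @ V, np.eye(3)))   # orthonormal basis: True
    print(np.linalg.norm(V[:2, :], axis=0))  # m = 2: every norm is sqrt(2/3)
    W = V[::-1, :]                           # the vectors turned back to front
    print(np.linalg.norm(W[:1, :], axis=0))  # m = 1: every norm is 1/sqrt(3)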

0

The answer is yes: Assume that $m=n-1$; the same construction can be done for other $m$ as well. Denote by $W$ the subspace of $V=\mathbb{R}^n$ spanned by $e_1,\ldots,e_m$, the first $m$ vectors of the standard basis (hence $W \cong \mathbb{R}^m$).
Denote by $p:V\to W$ the projection (i.e. $p(a_1,\ldots,a_n)=(a_1,\ldots,a_m,0,\ldots,0)$ w.r.t. the standard basis). Let the unit sphere in $V$ be denoted by $S_1=\{v\in V : \|v\|=1\}$.
Denote $U={\rm Span}(e_2-e_1,\ldots,e_n-e_1)$, and finally $A=e_1+U$, an affine subspace passing through the endpoints of $e_1,\ldots,e_n$.
Find a linear operator (rotation) $T:V\to V$ such that $\|T(v)\|=\|v\|$ for all $v\in V$, $T(U)=W$, and $T(e_1)\in W^\perp$. Hence, denoting $v=T(e_1)$, we have $v+W=T(A)$ (this makes $T(A)$ "parallel" to $W$). Such an operator exists since $U$ is of codimension $1$, and hence can be rotated to coincide with any other subspace of codimension $1$.
Now consider $T(e_1),...,T(e_n)$. Since $T$ is a rotation, they form an orthonormal basis of $V$.
Now, if $w_1,w_2\in S_1\cap T(A)$ then $\|w_1\|=\|w_2\|=1$ and $w_1=T(e_1)+T(u_1)$, $w_2=T(e_1)+T(u_2)$ where $u_1,u_2\in U$. From the choice of $v\in W^\perp$, we have $p(w_1)=w_1-v$, $p(w_2)=w_2-v$. Hence $\|p(w_1)\|^2=\|w_1-v\|^2=\|w_1\|^2-\|v\|^2=\|w_2\|^2-\|v\|^2=\|p(w_2)\|^2$ (Pythagoras' theorem).
Since $T(e_1),\ldots,T(e_n)\in S_1\cap T(A)$, we have $\|p(T(e_1))\|=\ldots=\|p(T(e_n))\|$, as required.

  • 2
    Why is $T(e_1)$ orthogonal to $W$? This would mean that $e_1$ is orthogonal to $U$, but it doesn't appear to be. (2011-05-19)