3

Let a homogeneous system be given by:

$$\left(\begin{matrix} 0 & \pm1 & \pm1 & \ldots & \pm1 \\ \pm1 & 0 & \pm1 & \ldots & \pm1 \\ \pm1 & \pm1 & 0 & \ldots & \pm1 \\ &&\ldots \\ \pm1 & \pm1 & \pm1 & \ldots & 0 \\ \end{matrix}\right){\bf{x}}={\bf{0}}$$

where this matrix is $n\times n$ with $n$ odd, and the sum of each row is zero; that is, each row contains exactly $(n-1)/2$ ones, $(n-1)/2$ minus ones, and a single zero.

I wish to prove that the solution set consists exactly of the vectors $\bf{x}$ whose entries are all equal. Is there a nice matrix-based way of proving this? I think it may be proved by induction, but a non-inductive method would be preferable.

  • 0
    Sorry, I didn't understand that. But now I see. (2017-01-24)
  • 1
    If you are correct, the matrix should row reduce to $$\left( \begin{matrix} 1 & -1 & 0 & \ldots & 0 \\ 0 & 1 & -1 & \ldots & 0 \\ 0 & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ldots & 0 & 1 & -1 \\ 0 & \ldots & {} & {} & 0 \\ \end{matrix} \right)$$ so maybe looking at LPU factorization is the way to go. (2017-01-24)
  • 0
    @Paul, maybe I don't understand what you wrote, but that matrix is not row reduced; you should get all $-1$'s in the last column. (2017-01-24)
  • 0
    For the matrix $B$ I wrote down, the solution of $Bx = 0$ will be your solution, $x = t(1,1,\dots,1)^T$. So, can you row reduce your original matrix to this form? It is an upper triangular matrix, hence the idea that LU or LPU factorization might be useful (I don't know, though). By the way, are your rows always ordered $1, -1, 1, \dots, 1, -1$ so that the matrix is unique? Your original question only had the weaker assumption of the row elements adding to $0$. (2017-01-24)
  • 0
    @Paul - I stand by my original assumption, as stated. You don't know the order, just which elements each row contains, and therefore the sum. (2017-01-24)
  • 0
    Your $x$ is always a solution, as it is orthogonal to each row of the matrix. You then need to show that one, and only one, row is linearly dependent on the others. I can't see how to do that either, though it looks likely given the positions of the $0$'s. (2017-01-24)
  • 0
    Can you tell us the origin of this question? (2017-01-24)
  • 0
    I think I have a solution. Am I allowed to use knowledge about the Galois field $\mathbb{F}_2$? (2017-01-24)
  • 0
    @ReneSchipperus - An interview question which I chose to formulate in this manner. The inductive solution to the original problem is much easier, but I thought the matrix formulation would make for a more interesting problem. (2017-01-24)
  • 0
    @ReinhardMeier - Give it a shot :) (2017-01-24)
  • 0
    What was the original problem? (2017-01-24)

2 Answers

3

We only have to show that the matrix (let's call it $M$) has rank $n-1$. Then the vectors with identical components are the only solutions; there cannot be any further solutions linearly independent of those. If we can show that one of the diagonal minors is invertible, we are done.

We use the fact that the invertibility of $A$ in $\mathbb{Q}^{m\times m}$ follows from the invertibility of the matrix in $\mathbb{F}_2^{m\times m}$. (Assume that $A$ is singular in $\mathbb{Q}^{m\times m}$. Then there must be a vector $v\neq 0$ with integer elements such that $Av=0$. We can assume that there is at least one odd element in this vector; otherwise we could divide all elements by $2$. If we interpret the elements of this vector and the elements of $A$ as elements of $\mathbb{F}_2$, we have obviously found a non-trivial vector for which $Av=0$ in $\mathbb{F}_2^m$. Therefore $A$ is also singular in $\mathbb{F}_2^{m\times m}$.)

Now we only have to show that the diagonal minors are invertible in $\mathbb{F}_2^{(n-1)\times (n-1)}$ if $n$ is odd. This is easy, because they are all their own matrix inverse: over $\mathbb{F}_2$ each diagonal minor equals $J - I$ (ones everywhere except for zeros on the diagonal) of even size $m = n-1$, and $(J-I)^2 = J^2 - 2J + I = mJ - 2J + I \equiv I \pmod 2$.
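As a quick numerical sanity check, here is a small experiment with NumPy (a sketch; the helper `random_row_sum_zero_matrix` is my own naming, not part of the argument) confirming that random matrices of the stated form have rank $n-1$, and that their diagonal minors square to the identity modulo $2$:

```python
import random
import numpy as np

def random_row_sum_zero_matrix(n, rng):
    """Random n x n matrix with zero diagonal and, in each row,
    (n-1)/2 ones and (n-1)/2 minus ones (n odd), so all rows sum to zero."""
    assert n % 2 == 1
    M = np.zeros((n, n), dtype=int)
    half = (n - 1) // 2
    for i in range(n):
        off = [j for j in range(n) if j != i]
        rng.shuffle(off)
        M[i, off[:half]] = 1
        M[i, off[half:]] = -1
    return M

rng = random.Random(0)
for n in (3, 5, 7, 9):
    for _ in range(25):
        M = random_row_sum_zero_matrix(n, rng)
        assert np.linalg.matrix_rank(M) == n - 1
        # The diagonal minor (delete row 0 and column 0) reduces mod 2
        # to J - I of even size, which squares to the identity mod 2.
        A = M[1:, 1:] % 2
        assert ((A @ A) % 2 == np.eye(n - 1, dtype=int)).all()
```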

  • 0
    Great solution. (2017-01-24)
  • 0
    I just noticed that there is an easier way: if you multiply one of the diagonal minors with itself, you get a matrix with an odd determinant, because the only odd elements are on the diagonal. Therefore the determinant cannot be $0$, and the original factors must have been invertible. (2017-01-24)
  • 0
    I'm not sure I follow, though that's probably due to my lack of familiarity with some of the theorems you used. (2017-01-24)
  • 0
    If $A$ is one of the diagonal minors of $M$, it can easily be shown that the diagonal elements of $A^2$ are odd while all other elements are even. Now take the Leibniz formula for determinants. The only way of getting an odd addend is the permutation that yields the product of all diagonal elements. Therefore the determinant of $A^2$ is odd. This means that $A^2$ is invertible, which in turn means that $A$ must have been invertible, too. (2017-01-24)
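The parity claim about $A^2$ is easy to check numerically. A minimal sketch in plain Python, using one concrete $5\times 5$ instance of the matrix from the question (this particular matrix is my choice; any valid instance works):

```python
# One concrete 5x5 instance: zero diagonal, two +1's and two -1's per row.
M = [
    [ 0,  1, -1, -1,  1],
    [ 1,  0, -1,  1, -1],
    [ 1, -1,  0, -1,  1],
    [-1,  1, -1,  0,  1],
    [ 1, -1, -1,  1,  0],
]
assert all(sum(row) == 0 for row in M)

# Diagonal minor A: delete row 0 and column 0.
A = [row[1:] for row in M[1:]]
m = len(A)

# A^2 has odd diagonal entries and even off-diagonal entries, so by the
# Leibniz formula det(A^2) is odd, hence nonzero, and A is invertible.
A2 = [[sum(A[i][k] * A[k][j] for k in range(m)) for j in range(m)]
      for i in range(m)]
for i in range(m):
    for j in range(m):
        assert A2[i][j] % 2 == (1 if i == j else 0)
```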
2

Let me describe a solution which is pretty much equivalent to Reinhard's solution (the main idea is the same) but doesn't involve the determinant.

The question doesn't specify which field we are working over, so if we want to simplify the problem, we can ask whether the result is true when we work over the simplest field, $\mathbb{Z}_2$. Since over $\mathbb{Z}_2$ we have $+1 = -1$, this simplifies the question considerably, so let us solve the simple version first.

Call your matrix $A$ and let $B$ be the all-ones matrix, so that over $\mathbb{Z}_2$ we have $A = B - I$. The matrix $B$ has rank one, so we can analyze it easily. It has an $(n-1)$-dimensional kernel (consisting of the vectors $(x_1,\dots,x_n)^T$ that satisfy $x_1 + \dots + x_n = 0$), and the vector $(1,\dots,1)^T$ is an eigenvector of $B$ corresponding to the eigenvalue $\lambda = 1$ (this is the only place we use the fact that $n$ is odd). Hence $\dim \ker(A) = \dim \ker(B - I) = 1$, and we have solved the problem over $\mathbb{Z}_2$.
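These two kernel dimensions can be verified directly; here is a sketch using a small rank-over-$\mathbb{F}_2$ routine (the bitmask elimination trick; the helper names are mine):

```python
def gf2_rank(rows):
    """Rank over F_2 of a matrix whose rows are given as integer bitmasks."""
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot  # lowest set bit of the pivot row
            # Eliminate that bit from every remaining row.
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def to_bitmasks(matrix):
    return [int("".join(str(x % 2) for x in row), 2) for row in matrix]

n = 7  # any odd n
B = [[1] * n for _ in range(n)]                                  # all-ones matrix
A = [[(1 - (i == j)) % 2 for j in range(n)] for i in range(n)]   # B - I mod 2

assert gf2_rank(to_bitmasks(B)) == 1      # so dim ker(B) = n - 1
assert gf2_rank(to_bitmasks(A)) == n - 1  # so dim ker(A) = 1
```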

How can we use this to solve the problem over $\mathbb{Q}$? If $n = 1$, the problem is trivial, so assume $n > 1$ and that $Ax = 0$ has a solution over $\mathbb{Q}$ with $x_i \neq x_j$ for some $i, j$. I will show that given such an $x$, we can construct another solution $x' \in \mathbb{Z}^n$ with $Ax' = 0$ such that $x'$ has an even coordinate and an odd coordinate. But then, reducing $Ax' = 0$ modulo $2$, we get a solution over $\mathbb{Z}_2$ which is neither $(0,\dots,0)^T$ nor $(1,\dots,1)^T$, a contradiction. The reduction modulo $2$ is done by defining $y'_i = x'_i \bmod 2$ and $(A')_{ij} = A_{ij} \bmod 2$. Since $\operatorname{mod}$ respects addition and multiplication, the equation $Ax' = 0$ over $\mathbb{Z}$ implies that $A' y' = 0$ over $\mathbb{Z}_2$.

So let $x \in \mathbb{Q}^n$ solve $Ax = 0$ with $x_i \neq x_j$. By multiplying $x$ by an integer, we can assume that $x \in \mathbb{Z}^n$. By subtracting $(x_i, \dots, x_i)^T$ from $x$ we can assume that $x_i = 0$; this is still a solution, since every row of $A$ sums to zero and hence $A(1,\dots,1)^T = 0$. Since $x_i = 0$, it cannot be that all the coordinates of $x$ are odd, and since $x_i \neq x_j$, it cannot be that all the coordinates of $x$ are zero. Consider the following process:

  1. If $x$ has an odd coordinate, we are done.
  2. If all the coordinates in $x$ are even and some coordinate is non-zero, divide $x$ by $2$ and go to step one.

After finitely many steps, we will get a solution in $\mathbb{Z}^n$ with an even coordinate ($x_i = 0$) and an odd coordinate, obtaining the required contradiction.
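The descent above is effectively an algorithm. A minimal sketch in Python (the function name `normalize` is mine; it assumes, as in the argument, that $x$ solves $Ax = 0$ for a matrix whose rows sum to zero, so each step preserves being a solution):

```python
from fractions import Fraction
from math import gcd

def normalize(x):
    """2-adic descent: turn a rational solution whose coordinates are not
    all equal into an integer vector with a zero coordinate and an odd
    coordinate."""
    x = [Fraction(v) for v in x]
    # Clear denominators by multiplying with their least common multiple.
    lcm = 1
    for v in x:
        lcm = lcm * v.denominator // gcd(lcm, v.denominator)
    x = [int(v * lcm) for v in x]
    # Subtract the first coordinate, so x[0] = 0 (uses A(1,...,1)^T = 0).
    x = [v - x[0] for v in x]
    # While every coordinate is even and some is nonzero, divide by 2.
    while any(x) and all(v % 2 == 0 for v in x):
        x = [v // 2 for v in x]
    return x

y = normalize([Fraction(3, 2), Fraction(3, 2), Fraction(7, 2),
               Fraction(11, 2), Fraction(3, 2)])
assert y[0] == 0 and any(v % 2 == 1 for v in y)
```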


How about other fields? Since our matrix has entries $0,\pm 1$, and the rank of a matrix over a field is the same as its rank over the subfield generated by the entries of the matrix, it is enough to investigate the prime fields $\mathbb{Z}_p$ and $\mathbb{Q}$. We have handled $\mathbb{Q}$ and $\mathbb{Z}_2$. For $n = 3$, the matrices are still of rank $n - 1$ over any field. However, for $n = 5$ and $\mathbb{F} = \mathbb{Z}_3$, we can give a counterexample (found using Sage):

$$ A = \begin{pmatrix} 0 & 1 & -1 & -1 & 1 \\ 1 & 0 & -1 & 1 & -1 \\ 1 & -1 & 0 & -1 & 1 \\ -1 & 1 & -1 & 0 & 1 \\ 1 & -1 & -1 & 1 & 0 \end{pmatrix}, x = \begin{pmatrix} 1 \\ 0 \\ 2 \\ 1 \\ 0 \end{pmatrix}. $$

Here, $Ax = 0$ while $x \notin \operatorname{span} (1,\dots, 1)^T$.
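The counterexample can be verified without Sage; a quick check in plain Python:

```python
# The counterexample matrix and vector from above, checked modulo 3.
A = [
    [ 0,  1, -1, -1,  1],
    [ 1,  0, -1,  1, -1],
    [ 1, -1,  0, -1,  1],
    [-1,  1, -1,  0,  1],
    [ 1, -1, -1,  1,  0],
]
x = [1, 0, 2, 1, 0]

Ax = [sum(A[i][j] * x[j] for j in range(5)) % 3 for i in range(5)]
assert Ax == [0, 0, 0, 0, 0]  # x is in the kernel over Z_3
assert len(set(x)) > 1        # but x is not a constant vector
```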

  • 0
    Really nice! Out of curiosity, are there any theorems stating that there can be no other solutions over $\mathbb{R}$ as well, given that there are none over $\mathbb{Q}$? (2017-01-25)
  • 1
    Yep. Given a matrix $A$ over $\mathbb{F}$, denote by $\mathbb{F}'$ the smallest subfield of $\mathbb{F}$ which contains all the entries of the matrix $A$. We can calculate the rank of the matrix $A$ over $\mathbb{F}$ or over $\mathbb{F}'$ using row reduction. Since the explicit algorithm involves operations which use as parameters only the entries of the matrix, the result of the algorithm over $\mathbb{F}$ and over $\mathbb{F}'$ will be the same. In your case, the rank calculation over $\mathbb{Q}$ extends to all characteristic zero fields. (2017-01-25)
  • 0
    @nbubis: I've checked using Sage, and indeed the system may have additional solutions when working over fields of characteristic $2 < p < \infty$. I've edited my answer to give an example. (2017-01-25)
  • 0
    @levap This is a really impressive answer indeed! There's a typo at the end of the third paragraph where there should be ‘$\dim \ker (A) = \dim \ker (B - I) = 1$' instead of ‘$\dim \ker (B) = \dim \ker (A - I) = 1$', which is somewhat confusing; please edit. Also, would you like to explain a little more what it means to reduce $Ax'=0$ modulo $2$? (I think I can understand, but it would be nice to write one more line about it.) Thank you! (2017-01-25)
  • 0
    @Pythagoricus: Thanks for the comment. I've fixed the confusion between $A$ and $B$ and added an explanation regarding the modulo reduction. (2017-01-25)