
Given non-negative integer $n$-vectors $u$ and $v$, how does one find all $n \times n$ non-negative integer matrices $R$ and powers $g$ such that $R^gu=v$?

  • The case $g=1$ is probably the most interesting. If the set of matrices with non-negative integer entries and $Ru=v$ can somehow be described, the case $g>1$ reduces to checking that $R^g$ belongs to this set. (2013-06-29)

1 Answer


All solutions can be found using an exhaustive algorithmic approach. Note:

$ Ru=\left[ \begin {array}{cccc} R_{{1,1}}&R_{{1,2}}&\cdots&R_{{1,n}}\\ \vdots&\vdots&\ddots&\vdots\\ R_{{n,1}}&\cdots&R_{{n,n-1}}&R_{{n,n}} \end {array} \right]\left[ \begin {array}{c} u_{{1}}\\ \vdots \\ u_{{n}}\end {array} \right]=\left[ \begin {array}{c} v_{{1}}\\ \vdots \\ v_{{n}}\end {array} \right]=v, $ implies: $\sum_{j=1}^{n}R_{i,j}u_j=v_i\,,\quad i=1,\dots,n,\,\,\,\,\,\,\,\,\,\,(1)$ so the problem is equivalent to:

Find all (non-negative) solutions to $n$ independent linear Diophantine equations in $n$ variables, $R_{i,1},\dots,R_{i,n}$ (Handbook of Discrete and Combinatorial Mathematics, Rosen & Michaels).

Solution: a particular solution is found using the Euclidean algorithm (or some educated guessing), and from there all solutions are generated algorithmically.
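
For a single equation $ax+by=c$, a particular solution can be produced mechanically with the extended Euclidean algorithm; here is a minimal Python sketch (the helper names are illustrative, not from the reference):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def particular_solution(a, b, c):
    """Return one integer solution (x0, y0) of a*x + b*y = c, or None."""
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None          # no integer solutions exist
    k = c // g
    return x * k, y * k

print(particular_solution(2, 3, 15))   # (-15, 15)
```

For the example worked below, `particular_solution(2, 3, 15)` returns $(-15,15)$, the same particular solution as the educated guess.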

With that said:

  • no solutions exist unless $\text{gcd}(u_1,u_2,\dots,u_n)|v_i\,\,\forall i$ (a quick computational check of this condition is sketched after this list). Proof: $\text{gcd}(u_1,u_2,\dots,u_n)$ divides the left-hand side of Eqn. (1), so it must divide the right-hand side.
  • only non-negative solutions (values for $R_{i,j}$) are allowed, so when $u_j>0 \,\,\forall j$ the number of solutions to each Diophantine equation is finite, and thus the number of distinct matrix solutions is finite. Proof: non-negativity bounds the entries from below, and since each term satisfies $R_{i,j}u_j\le v_i$ with $u_j\ge 1$, the value $v_i$ bounds them from above.
  • if $u_j=0$ for some $j$, then the entries $R_{i,j}$ of column $j$ can be chosen arbitrarily (for every $i$), and thus, if any solution exists, the number of distinct matrix solutions is infinite.
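
The divisibility condition is easy to test; a minimal sketch (assuming $u$ is not the zero vector, and using `math.gcd`, which accepts multiple arguments in Python 3.9+):

```python
from math import gcd

def rows_can_be_solved(u, v):
    """Necessary condition: gcd(u_1,...,u_n) must divide every v_i.
    Assumes u is not the zero vector."""
    d = gcd(*u)
    return all(vi % d == 0 for vi in v)

print(rows_can_be_solved([6, 9], [45, 51]))   # True: gcd = 3 divides 45 and 51
```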


Example: n=2

Consider the simple linear Diophantine equation for the $n=2$ case: $ax+by=c\,:\, \text{gcd}(a,b)|c.$ If $(x_0,y_0)$ is a particular solution, then all solutions are of the form: $x=x_0+\frac{b}{\text{gcd}(a,b)}k,\,y=y_0-\frac{a}{\text{gcd}(a,b)}k\,:\,k\in\mathbb{Z}.$ Proof: Substituting these expressions in shows they are solutions. To prove they are the only solutions, change variables to $x=X+x_0,y=Y+y_0$ to obtain: $\frac{a}{\text{gcd}(a,b)}X+\frac{b}{\text{gcd}(a,b)}Y=0;$ then, because $\frac{a}{\text{gcd}(a,b)}$ and $\frac{b}{\text{gcd}(a,b)}$ are coprime, $\frac{b}{\text{gcd}(a,b)}|X$, so $X=k\frac{b}{\text{gcd}(a,b)}$ for some $k\in \mathbb{Z}$, and substituting back gives $Y=-k\frac{a}{\text{gcd}(a,b)}$.
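
This parametrisation translates directly into code; a short sketch (an illustrative helper of my own, assuming $a,b>0$ and that a particular solution is already known) listing the finitely many non-negative solutions, applied here to the two rows of the example below:

```python
from math import gcd

def nonneg_solutions(a, b, c, x0, y0):
    """All non-negative solutions of a*x + b*y = c, generated from a
    particular solution (x0, y0) via x = x0 + (b/d)k, y = y0 - (a/d)k."""
    assert a * x0 + b * y0 == c, "(x0, y0) must solve the equation"
    d = gcd(a, b)
    step_x, step_y = b // d, a // d
    k_min = -(x0 // step_x)        # smallest k keeping x non-negative
    k_max = y0 // step_y           # largest  k keeping y non-negative
    return [(x0 + step_x * k, y0 - step_y * k) for k in range(k_min, k_max + 1)]

print(nonneg_solutions(2, 3, 15, -15, 15))   # [(0, 5), (3, 3), (6, 1)]
print(nonneg_solutions(2, 3, 17, -17, 17))   # [(1, 5), (4, 3), (7, 1)]
```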


Apply this to the following example: $ \left[ \begin {array}{cc} R_{{1,1}}&R_{{1,2}}\\ R_{{2,1}}&R_{{2,2}}\end {array} \right] \left[ \begin {array}{c} 6\\ 9 \end {array} \right] =\left[ \begin {array}{c} 45\\ 51 \end {array} \right]. $ Divide each row through by $\text{gcd}(6,9)=3$ to simplify things and obtain: $2\,x+3\,y=15:(x,y)=(R_{1,1},R_{1,2})$ $2\,s+3\,t=17:(s,t)=(R_{2,1},R_{2,2}).$ An educated guess leads to the particular solutions: $(x_0=-15,y_0=15),(s_0=-17,t_0=17).$ The general solutions for the first equation are: $x=-15+3k,y=15-2k;$ non-negative solutions occur only for $k=5,6,7$, which give: $(R_{1,1},R_{1,2}):(0,5),(3,3),(6,1).$ Similar reasoning leads to: $(R_{2,1},R_{2,2}):(1,5),(4,3),(7,1),$ and thus the only matrices that solve the problem are formed by all combinations of the row vectors found above, $9$ in total: $ \left[ \begin {array}{cc} 0&5\\ 1&5\end {array} \right],\left[ \begin {array}{cc} 0&5\\ 4&3\end {array} \right],\left[ \begin {array}{cc} 0&5\\ 7&1\end {array} \right],\left[ \begin {array}{cc} 3&3\\ 1&5\end {array} \right],\left[ \begin {array}{cc} 3&3\\ 4&3\end {array} \right],\left[ \begin {array}{cc} 3&3\\ 7&1\end {array} \right],\left[ \begin {array}{cc} 6&1\\ 1&5\end {array} \right],\left[ \begin {array}{cc} 6&1\\ 4&3\end {array} \right],\left[ \begin {array}{cc} 6&1\\ 7&1\end {array} \right]. $
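
For completeness, the nine matrices can be checked mechanically; a short NumPy sketch (the row lists are exactly the ones found above):

```python
import numpy as np
from itertools import product

u, v = np.array([6, 9]), np.array([45, 51])
rows_1 = [(0, 5), (3, 3), (6, 1)]    # solutions of 2x + 3y = 15 (first row)
rows_2 = [(1, 5), (4, 3), (7, 1)]    # solutions of 2s + 3t = 17 (second row)

solutions = [np.array([r1, r2]) for r1, r2 in product(rows_1, rows_2)]
print(len(solutions))                              # 9
print(all((R @ u == v).all() for R in solutions))  # True: every R satisfies Ru = v
```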

If $n>2$, an iterative procedure is needed to solve the Diophantine equations (see reference); a brute-force version is sketched below. Because this problem is equivalent to such a well-studied research area, this is probably the best, if not the only, approach.
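
For general $n$ with every $u_j>0$, the search space for each row is finite, so even a naive bounded recursion enumerates the row solutions; this is only a brute-force stand-in for the procedure in the reference:

```python
def row_solutions(u, target):
    """All non-negative integer tuples (r_1,...,r_n) with sum(r_j * u_j) = target.
    Assumes every u_j > 0, so the recursion is bounded."""
    if len(u) == 1:
        return [(target // u[0],)] if target % u[0] == 0 else []
    sols = []
    for r in range(target // u[0] + 1):
        for tail in row_solutions(u[1:], target - r * u[0]):
            sols.append((r,) + tail)
    return sols

# One row of a hypothetical 3x3 problem: find r with 2*r1 + 3*r2 + 5*r3 = 10.
print(row_solutions((2, 3, 5), 10))   # [(0, 0, 2), (1, 1, 1), (2, 2, 0), (5, 0, 0)]
```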

To see whether $R$ and $R^g$, $g>1$, both solve the equation, find the row vectors for the $g=1$ case following the method outlined above and check for closure under taking powers, i.e. check whether any of the allowed matrices, when raised to the power $g$, is also one of the found solutions; the set of solutions for the $g=1$ case contains all allowed solutions (as noted in the comments by @ˈjuː.zɚ79365).
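
For the $2\times2$ example above, this closure check is only a few lines (a sketch; `numpy.linalg.matrix_power` does the exponentiation), and for small powers it finds no matrix whose power also maps $u$ to $v$:

```python
import numpy as np
from itertools import product

u, v = np.array([6, 9]), np.array([45, 51])
rows_1 = [(0, 5), (3, 3), (6, 1)]
rows_2 = [(1, 5), (4, 3), (7, 1)]
candidates = [np.array([r1, r2]) for r1, r2 in product(rows_1, rows_2)]

# Check whether any solution of Ru = v also satisfies R^g u = v for small g > 1.
for R in candidates:
    for g in range(2, 6):
        if (np.linalg.matrix_power(R, g) @ u == v).all():
            print("closure at power", g, "\n", R)   # nothing is printed for this example
```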

But one important restriction should be noted. If: $Ru=R^gu=v,\,\,\,\,\,\,\,\,\,(2)$ then multiplying by $R^{g-1}$ gives: $R^gu=R^{2g-1}u,$ which by $(2)$ implies: $R^{2g-1}u=Ru=v,$ and multiplying by $R^{g-1}$ again shows that $R^{3g-2}u=v$, etc. Continuing this process shows that $(2)$ implies $R^{m(g-1)+1}u=v$ for every integer $m\ge0.$ Now, instead of non-negative $(\ge0)$, suppose all elements of $R$ are positive integers $(\ge1)$. Then, by the definition of matrix multiplication, $(AB)_{i,j}=\sum_{k=1}^{n}A_{i,k}B_{k,j},$ raising $R$ to higher powers only adds and multiplies positive integers $\ge1$, so the matrix elements of $R^{m(g-1)+1}$ increase to infinity as $m\rightarrow \infty$ (for $n\ge2$: each entry of a product is a sum of $n$ such terms). Eventually, for $m$ large enough, $R^{m(g-1)+1}_{i,j}>\text{max}(v)\,\,\forall \,i,j,$ and since $u$ is a non-negative vector with at least one positive entry (it cannot be the zero vector, because $Ru=v\ne0$), $R^{m(g-1)+1}u$ cannot equal $v$. This is a contradiction, and thus a theorem is obtained.

Theorem: If $R$ is an $n\times n$ ($n\ge2$) positive integer matrix, $u$ and $v$ are non-negative integer vectors, $v$ is not the zero vector and $Ru=v,$ then $R^gu\ne v$ for every integer $g>1$. (For $n=1$ the statement also holds unless $R=[1\,]$, whose powers do not grow.)

This is very restrictive. While it does not extend to all non-negative matrices, it does rule out any solution $R$ of $Ru=R^gu=v$, $g>1$, whenever some power of $R$ is a positive matrix (such a matrix is known as "primitive"), as we could then apply the same argument to that power.
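
Whether a given non-negative $R$ eventually becomes positive can itself be checked mechanically; a small sketch, relying on Wielandt's bound (a primitive $n\times n$ matrix is already positive by the power $n^2-2n+2$):

```python
import numpy as np

def is_primitive(R):
    """True iff some power of the non-negative square matrix R is entrywise
    positive; by Wielandt's bound it suffices to check powers up to n^2 - 2n + 2."""
    n = R.shape[0]
    for g in range(1, n * n - 2 * n + 3):
        if (np.linalg.matrix_power(R, g) > 0).all():
            return True
    return False

R = np.array([[0, 5], [1, 5]])   # one of the nine solutions found above
print(is_primitive(R))           # True: R^2 is positive, so the argument above applies
```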