29

I was just thinking about this problem:

Can every nonsingular $n\times n$ matrix with real entries be made singular by changing exactly one entry?

Thanks for helping me.

  • Can you think how the determinant of the matrix depends on a specific entry? (2012-05-17)

4 Answers

10

Here's a fast solution. The adjugate matrix $\def\adj{\operatorname{adj}}\adj A$ satisfies (more or less by definition) $ \adj A\cdot A=\det(A)I_n, $ and the right hand side is nonzero by the nonsingularity hypothesis. Now looking at what contributes the diagonal entry $\det(A)$ at position $(k,k)$ on the right, one sees that row $k$ of $\adj A$, which I will denote $(c_1~\ldots~c_n)$, and column $k$ of $A$, which I will denote $(x_1~\ldots~x_n)^t$, produce the matrix product $c_1x_1+\cdots+c_nx_n=\det(A)$. Clearly $c_ix_i\neq0$ for at least one $i$. Choose such an $i$, and form a matrix $A'$ by replacing the entry $x_i$ of $A$ (which is at position $(i,k)$) by $x_i-\frac{\det(A)}{c_i}$. Row $k$ of $\adj A'$ is still $(c_1~\ldots~c_n)$ (since it does not depend on column $k$ of $A$), so the $(k,k)$ entry of $\adj A'\cdot A'$ equals $c_1x_1+\cdots+c_nx_n-c_i\frac{\det(A)}{c_i}=0$. But that entry is also equal to $\det(A')$, so $A'$ is singular.

This shows not only that the answer is "yes, you can make any real invertible matrix singular by changing a single entry", but that you can even specify in advance the column in which the entry must change. Or the row, by transposing the argument (but not both at once). And it works over any field.
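
A minimal numerical sketch of this construction, assuming floating-point arithmetic via numpy (the function name and library choice are mine, not part of the answer, and the result is singular only up to rounding):

```python
import numpy as np

def make_singular_via_adjugate(A, k):
    """Make a nonsingular matrix A singular by changing exactly one entry
    in column k (0-indexed), following the adjugate argument above."""
    detA = np.linalg.det(A)
    adjA = detA * np.linalg.inv(A)       # adj(A) = det(A) * A^{-1} for invertible A
    c = adjA[k, :]                       # row k of adj(A): the (c_1 ... c_n) above
    x = A[:, k]                          # column k of A: the (x_1 ... x_n)^t above
    i = int(np.argmax(np.abs(c * x)))    # some i with c_i * x_i != 0; their sum is det(A) != 0
    A2 = A.astype(float)
    A2[i, k] = x[i] - detA / c[i]        # the single changed entry, at position (i, k)
    return A2

A = np.array([[2.0, 1.0], [1.0, 1.0]])
print(np.linalg.det(make_singular_via_adjugate(A, 0)))   # ~0 up to floating-point error
```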

  • Coming back to this question after several years, I realise that I made the mistake I am fond of pointing out to others, namely forgetting about the $0\times0$ matrix. It is definitely nonsingular (it is an identity matrix), but one has a hard time finding any entries to change to make it singular. So the statement to prove _must_ stipulate $n>0$ as a hypothesis in order to be true. Where does my proof implicitly use this assumption? By jumping from $\det(A)\neq0$ (the nonsingularity hypothesis) to the conclusion that $\det(A)I_n$ is nonzero (false for $n=0$: a $0\times0$ matrix cannot be nonzero). (2018-09-01)
16

If $A$ is a nonsingular matrix with rows $r_1,r_2,\ldots,r_n$, then $\{r_2,\ldots,r_n\}$ spans an $(n-1)$-dimensional subspace $P$ of $\mathbb R^n$. At least one of the standard basis vectors $e_1,e_2,\ldots,e_n$ is not in $P$, say $e_i$. Then $\{e_i,r_2,r_3,\ldots,r_n\}$ is a basis of $\mathbb R^n$, and it follows that there is a real number $c$ such that $r_1-ce_i$ is in $P$ (write $r_1$ in this basis and take $c$ to be the coefficient of $e_i$). The matrix $A'$ with rows $(r_1-ce_i),r_2,r_3,\ldots,r_n$ is singular, and it is obtained from $A$ by subtracting $c$ from the entry in the first row and $i^\text{th}$ column.


Here's a way to rephrase this somewhat more geometrically. The subspace $P$ is a hyperplane that divides $\mathbb R^n$ into two half-spaces, and $r_1$ lies in one of these halves. The line through $r_1$ in the direction of a vector $v$ has the form $\{r_1+tv:t\in\mathbb R\}$. This line is parallel to $P$ only if $v$ is in $P$; otherwise, the line will cross $P$. Since $P$ can't be parallel to all of the coordinate directions (or else it would fill up all of $\mathbb R^n$), there must be a line of the form $\{r_1+te_i:t\in\mathbb R\}$ that crosses $P$, where $e_i$ is the standard basis vector with a $1$ in the $i^\text{th}$ position and $0$s elsewhere. This means that there exists $t_0\in \mathbb R$ such that $r_1+t_0e_i\in P$. And then, linear dependence of the vectors $r_1+t_0e_i,r_2,\ldots,r_n$ means that the matrix with those rows is singular.
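
Here is a short numerical sketch of this hyperplane picture, under my own assumptions (numpy, and an SVD to obtain a normal vector of $P$, neither of which appears in the answer itself):

```python
import numpy as np

def make_singular_via_hyperplane(A):
    """Take a normal vector nu of P = span(r_2, ..., r_n), pick i with
    nu[i] != 0 (so e_i is not in P), and slide r_1 along e_i into P."""
    r1 = A[0].astype(float)
    rest = A[1:].astype(float)
    _, _, Vt = np.linalg.svd(rest)    # last right singular vector spans the orthogonal complement of P
    nu = Vt[-1]
    i = int(np.argmax(np.abs(nu)))    # e_i lies in P exactly when nu[i] = 0, so this i works
    t0 = -np.dot(nu, r1) / nu[i]      # chosen so that nu . (r_1 + t0*e_i) = 0, i.e. r_1 + t0*e_i is in P
    A2 = A.astype(float)
    A2[0, i] += t0                    # change exactly one entry of the first row
    return A2

A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
print(np.linalg.det(make_singular_via_hyperplane(A)))   # ~0 up to rounding
```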

15

The determinant is a polynomial of degree at most one in any given entry, so yes (modulo the caveat addressed below).

To see that $\det\begin{pmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \cdots & a_{nn}\end{pmatrix}$ depends linearly on $a_{k\ell}$ for any given $k$ and $\ell$, note that

$\det\begin{pmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \cdots & a_{nn}\end{pmatrix}=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)}=\sum_{\substack{\sigma\in S_n\\ \sigma(k)=\ell}}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)}+\sum_{\substack{\sigma\in S_n\\ \sigma(k)\neq \ell}}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)}$

$=\left(\sum_{\substack{\sigma\in S_n\\ \sigma(k)=\ell}}\operatorname{sgn}(\sigma)\prod_{\substack{i=1\\i\neq k}}^n a_{i,\sigma(i)}\right)a_{k\ell} + \left(\sum_{\substack{\sigma\in S_n\\ \sigma(k)\neq \ell}}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)}\right).$

As JeffE rightly points out below, we might have that the coefficient of $a_{k\ell}$ in the above expression is $0$, and that therefore varying the value of $a_{k\ell}$ won't change the determinant. I don't see any way of guaranteeing that won't happen, but we can show that, given an $\ell$, it can't happen for every $k$: if it did, then varying the entire column $A_\ell=\begin{pmatrix} a_{1\ell} \\ \vdots \\ a_{n\ell}\end{pmatrix}$ in any way we want doesn't change the determinant, so (for example)

$\det(A_1\mid \cdots \mid 2A_\ell\mid \cdots \mid A_n)=\det(A_1\mid \cdots \mid A_\ell\mid \cdots \mid A_n).$

But because the determinant of $A$ is a multilinear function of the columns,

$\det(A_1\mid \cdots \mid 2A_\ell\mid \cdots \mid A_n)=2\det(A_1\mid \cdots \mid A_\ell\mid \cdots \mid A_n),$

so

$2\det(A_1\mid \cdots \mid A_\ell\mid \cdots \mid A_n)=\det(A_1\mid \cdots \mid A_\ell\mid \cdots \mid A_n),$

which is impossible because the assumption that $A$ is non-singular means that $\det(A_1\mid \cdots \mid A_\ell\mid \cdots \mid A_n)\neq0.$ Thus, given an $\ell$, there exists at least one $k$ such that varying $a_{k\ell}$ can produce a singular matrix.
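
The same argument can be checked numerically. The sketch below (my own illustration; the function name, numpy, and the tolerance are assumptions not present in the answer) recovers the affine dependence $\det = \text{coeff}\cdot a_{k\ell} + \text{rest}$ for each entry of a chosen column and solves for the root once a nonzero coefficient is found:

```python
import numpy as np

def make_singular_via_affine_det(A, col):
    """For nonsingular A, find an entry in the given column whose coefficient
    in det(A) is nonzero, then set that entry to the root of the affine polynomial."""
    n = A.shape[0]
    for k in range(n):
        B0 = A.astype(float)
        B0[k, col] = 0.0
        B1 = A.astype(float)
        B1[k, col] = 1.0
        rest = np.linalg.det(B0)            # det with a_{k,col} set to 0
        coeff = np.linalg.det(B1) - rest    # coefficient of a_{k,col} in det
        if abs(coeff) > 1e-12:              # cannot vanish for every k in this column
            A2 = A.astype(float)
            A2[k, col] = -rest / coeff      # root of coeff * a + rest
            return A2
    raise ValueError("no usable entry found; A appears to be (nearly) singular")

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.linalg.det(make_singular_via_affine_det(A, 1)))   # ~0
```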

  • @ZevChonoles Thanks for providing me with such an excellent answer. math.stackexchange.com is my life :) (2012-05-18)
11

You definitely can do that. One way to see it is via the Sherman-Morrison-Woodbury formula

$(\mathbf A+\mathbf u\mathbf v^\top)^{-1}=\mathbf A^{-1}-\frac1{1+\mathbf v^\top\mathbf A^{-1}\mathbf u}(\mathbf A^{-1}\mathbf u\mathbf v^\top\mathbf A^{-1})$

with $\mathbf u$ and $\mathbf v$ taken to be appropriate multiples of columns of the identity matrix ($\mathbf e_k$ denotes its $k$-th column).

In particular, letting $a_{j,k}$ be an entry of the matrix $\mathbf A$, and $c$ some constant, we have

$(\mathbf A+(c-a_{j,k})\mathbf e_j\mathbf e_k^\top)^{-1}=\mathbf A^{-1}-(c-a_{j,k})\frac1{1+(c-a_{j,k})\mathbf e_k^\top\mathbf A^{-1}\mathbf e_j}(\mathbf A^{-1}\mathbf e_j\mathbf e_k^\top\mathbf A^{-1})$

You have to understand what the operation $\mathbf A+(c-a_{j,k})\mathbf e_j\mathbf e_k^\top$ does; as you can see, this operation corresponds to replacing the entry $a_{j,k}$ with $c$. (Write it out yourself if you need more convincing.)
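
For instance (my own $2\times2$ illustration, not part of the original answer), with $j=1$ and $k=2$:

$\mathbf A+(c-a_{1,2})\mathbf e_1\mathbf e_2^\top=\begin{pmatrix}a_{1,1}&a_{1,2}\\a_{2,1}&a_{2,2}\end{pmatrix}+\begin{pmatrix}0&c-a_{1,2}\\0&0\end{pmatrix}=\begin{pmatrix}a_{1,1}&c\\a_{2,1}&a_{2,2}\end{pmatrix},$

so only the $(1,2)$ entry changes, and its new value is $c$.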

What we now want to do is find $c$ such that the denominator in the expression given above is zero; by the matrix determinant lemma, $\det(\mathbf A+\mathbf u\mathbf v^\top)=\det(\mathbf A)\,(1+\mathbf v^\top\mathbf A^{-1}\mathbf u)$, so this is precisely the condition for the modified matrix to be singular. That is, we want

$1+(c-a_{j,k})\mathbf e_k^\top\mathbf A^{-1}\mathbf e_j=0$

First, we tackle the expression $\mathbf e_k^\top\mathbf A^{-1}\mathbf e_j$. Letting $\mathbf W=\mathbf A^{-1}$, $\mathbf e_k^\top\mathbf W\mathbf e_j$ is in fact the $(k,j)$ entry of $\mathbf W$, which we will denote by $w_{k,j}$. Our equation is now

$1+(c-a_{j,k})w_{k,j}=0$

and, provided $w_{k,j}\neq0$ (every row of the invertible matrix $\mathbf W$ has a nonzero entry, so for any given $k$ some choice of $j$ works), we can now solve for the value of $c$ that will make $\mathbf A$ singular if it replaces $a_{j,k}$:

$c=a_{j,k}-\frac1{w_{k,j}}$
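
A short numerical sketch of this recipe (the function name, numpy, and the tolerance are my own assumptions; the answer only derives the formula for $c$):

```python
import numpy as np

def make_singular_via_sherman_morrison(A, j, k):
    """Replace entry (j, k) of a nonsingular A by c = a_{j,k} - 1/w_{k,j},
    where W = A^{-1}, as derived above. Requires w_{k,j} != 0."""
    W = np.linalg.inv(A)
    w_kj = W[k, j]
    if abs(w_kj) < 1e-12:
        raise ValueError("w_{k,j} is (numerically) zero; pick a different entry (j, k)")
    A2 = A.astype(float)
    A2[j, k] = A[j, k] - 1.0 / w_kj   # i.e. A + (c - a_{j,k}) e_j e_k^T with the derived c
    return A2

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.linalg.det(make_singular_via_sherman_morrison(A, 0, 0)))   # ~0: now 1 + (c - a_{j,k}) w_{k,j} = 0
```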

  • @J.M. I was not aware of the Sherman-Morrison-Woodbury formula, but now I am studying it. Sincere thanks to you. (2012-05-18)