
Is the set of all $n\times n$ matrices $A$ such that $AB=BA$, for a fixed matrix $B$, a subspace of the vector space of all $n\times n$ matrices?

Alright, I understand the question and I know what I have to do, basically. I need to show that additive closure and multiplicative closure are satisfied. The problem is, I can't figure out how to do this generally. I tried playing around with $2\times 2$ matrices but that seemed like a dead end. Obviously two such matrices are the 0 matrix and the identity matrix, and those form a subspace, but that doesn't really tell me about all the matrices. Any ideas for how I should be tackling this? I feel like I'm not thinking generally enough.

  • For a specific $B$, it might be hard to find any interesting $A$. To make the exercise easy, just imagine you had found two such matrices, $A$ and $A'$. Is $A+A'$ also one of them? Multiply against $B$ and check, just using distributivity and the like.

3 Answers


Careful: when you say "multiplicative closure", you have to be clear, since when dealing with $n\times n$ matrices there is a "multiplication" that has nothing to do with the vector space structure (the multiplication of matrices). It is clearer if you refer to it as the "scalar multiplication".

So, fix the matrix $B$. You need to show that:

  1. There is at least one matrix $A$ such that $AB=BA$;
  2. If $A_1$ and $A_2$ are two matrices such that each of them commutes with $B$, then $A_1+A_2$ also commutes with $B$ (closure of your set under vector addition).
  3. If $A$ is a matrix that commutes with $B$, and $k$ is any scalar, then $kA$ also commutes with $B$ (closure of your set under scalar multiplication).

So, with that in mind:

  1. Is there a matrix that necessarily commutes with $B$? (If this set is really going to be a subspace, which "vector" [i.e., matrix] had better be in it for sure? Try that matrix.)

  2. Suppose $A_1$ and $A_2$ both commute with $B$. That is, $A_1B=BA_1$ and $A_2B=BA_2$. You want to show that $(A_1+A_2)$ also commutes with $B$. That is, you want to show that $(A_1+A_2)B = B(A_1+A_2).$ Of course, you'll want to use the fact that each of $A_1$ and $A_2$ commutes with $B$, and perhaps some properties you know about matrix multiplication. Is there some property of matrix multiplication that would let you relate $(A_1+A_2)B$ with the products you do know something about, namely $A_1B$ and $A_2B$?

  3. Suppose $A$ commutes with $B$; that is, $AB=BA$. Let $k$ be a scalar. You want to show that $kA$ also commutes with $B$: that is, you need to prove that $(kA)B = B(kA).$ Again, is there some property of matrix multiplication that you know that might help here?

And if you establish these three, you're done: the set in question is a subspace!
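The three conditions above can also be sanity-checked numerically. Here is a small sketch with numpy; the choices $A_1 = B^2$ and $A_2 = I + 3B$ are my own (any polynomial in $B$ commutes with $B$, which gives us easy members of the set to test with):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))

# Any polynomial in B commutes with B, so these are members of the set.
A1 = B @ B              # B^2
A2 = np.eye(n) + 3 * B  # I + 3B

def commutes(A, B):
    """Check AB = BA up to floating-point tolerance."""
    return np.allclose(A @ B, B @ A)

# 1. The zero matrix (and the identity) commute with every B.
assert commutes(np.zeros((n, n)), B) and commutes(np.eye(n), B)
# 2. Closure under addition.
assert commutes(A1, B) and commutes(A2, B) and commutes(A1 + A2, B)
# 3. Closure under scalar multiplication.
assert commutes(2.5 * A1, B)
```

Of course, this is only a spot-check for one random $B$, not a proof; the proof is exactly the three steps laid out above.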

  • @Arturo +1, VERY good point about the difference between an actual "multiplication" in the ring-theoretic sense and scalar multiplication in the R-module sense. This can be a little confusing for the rank amateur. Multiplication in the first sense is an actual operation on the vectors, and this can only happen in a vector space if the vector space is itself the field of scalars (i.e., every field is a vector space over itself), whereas in the latter case we simply multiply the vector by a scalar, which results in a magnification of its length (or norm).

Assume you have two such matrices, $A_1$ and $A_2$; then $(A_1+A_2)B = B(A_1+A_2)$ must hold, so you need to prove this, using the fact that $A_iB=BA_i$ for $i=1,2$.

You then need to show that if $AB=BA$, then it also holds that $(\lambda A)B=B(\lambda A)$.

Tip: use the distributive law for matrix multiplication...
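The tip boils down to identities that hold for *all* matrices, not just commuting ones: $(A_1+A_2)B = A_1B + A_2B$ and $(\lambda A)B = \lambda(AB)$. A quick numerical spot-check with numpy (the random matrices and the scalar $2.5$ are just illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A1, A2, B = (rng.standard_normal((n, n)) for _ in range(3))
lam = 2.5

# Distributive law: (A1 + A2)B = A1 B + A2 B. So if A1 B = B A1 and
# A2 B = B A2, then (A1 + A2)B = B A1 + B A2 = B(A1 + A2).
assert np.allclose((A1 + A2) @ B, A1 @ B + A2 @ B)
assert np.allclose(B @ (A1 + A2), B @ A1 + B @ A2)

# Scalars pull through matrix products: (lam*A)B = lam*(AB) = B(lam*A)
# whenever AB = BA.
assert np.allclose((lam * A1) @ B, lam * (A1 @ B))
assert np.allclose(lam * (B @ A1), B @ (lam * A1))
```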

  • Yeah, perfect, got it. If I had just thought about it like that for the first 40 minutes or so, it would have been no problem, hah. Thanks a lot.

Here is a different kind of answer that is a lot harder than necessary, but it might help you make the transition to thinking generally.

Suppose we had a specific $B$, such as $B = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}$. We want to find all $A$ such that $AB = BA$. Well, we don't know the entries of $A$, so we replace the unknown numbers with... well, unknowns, also known as variables: $A = \begin{bmatrix} x & y \\ z & t \end{bmatrix}$

Ok, now we write down what we know about the variables from $AB = BA$: $ AB = \begin{bmatrix} x & y \\ z & t \end{bmatrix} \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix} = \begin{bmatrix} 2x+4y & 3x+5y \\ 2z+4t & 3z+5t \end{bmatrix} $ $ BA = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} x & y \\ z & t \end{bmatrix} = \begin{bmatrix} 2x+3z & 2y+3t \\ 4x+5z & 4y+5t \end{bmatrix} $ For two matrices to be equal, we need their entries to be equal: $\left\{\begin{align} 2x + 4y &= 2x + 3z \\ 2z+4t &= 4x+5z \\ 3x+5y &= 2y + 3t \\ 3z+5t &= 4y+5t \end{align}\right.$ What do you know? We have a system of linear equations. We can solve it as usual: find a particular solution (oooo let me let me!): $(x=0,y=0,z=0,t=0)$. Now we know all other solutions are found by adding a "homogeneous solution", and those form a subspace.

If we want, we can be even more linear-algebra-y. Let's move all the variables to one side:

$\left\{\begin{align} 0x + 4y - 3z + 0t &= 0 \\ -4x+0y-3z+4t &= 0 \\ 3x+3y +0z -3t &= 0 \\ 0x-4y+3z+0t &= 0 \end{align}\right.$

We can write it as a matrix equation: $\left[\begin{array}{rrrr} 0 & 4 & -3 & 0 \\ -4 & 0 & -3 & 4 \\ 3 & 3 & 0 & -3 \\ 0 & -4 & 3 & 0 \end{array}\right] \begin{bmatrix} x \\ y \\ z \\ t \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$

So we are just asking for the null space of the matrix! Clearly that is a subspace. We could even do Gaussian elimination to find which one. I won't.
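If you don't want to do the elimination by hand, a computer algebra system will happily do it. A sketch with sympy (assuming sympy is available) that builds the same linear system directly from the entries of $AB - BA$ and computes the null space:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
B = sp.Matrix([[2, 3], [4, 5]])
A = sp.Matrix([[x, y], [z, t]])

# The four entries of AB - BA give four linear equations in x, y, z, t.
eqs = list(A * B - B * A)

# Rewrite the system as M * [x, y, z, t]^T = 0 and take the null space.
M, _ = sp.linear_eq_to_matrix(eqs, [x, y, z, t])
basis = M.nullspace()

print(len(basis))  # → 2: the commutant of this B is two-dimensional
for v in basis:
    print(v.T)
```

For this $B$, the null space turns out to be two-dimensional, and you can check that the vectorized identity matrix $(1,0,0,1)$ and the vectorized $B$ itself, $(2,3,4,5)$, both lie in it — which matches the fact that $I$ and $B$ always commute with $B$.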