30

The trace is the sum of the elements on the diagonal of a matrix. Is there a similar operation for the sum of all the elements in a matrix?

8 Answers

27

I don't know if it has a nice name or notation, but for the matrix $\mathbf A$ you could consider the quadratic form $\mathbf e^\top\mathbf A\mathbf e$, where $\mathbf e$ is the column vector whose entries are all $1$'s.
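As a quick numerical illustration of this identity (a minimal sketch using NumPy; the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
e = np.ones(A.shape[0])  # the all-ones vector

# e^T A e equals the sum of all entries of A
print(e @ A @ e)   # 10.0
print(A.sum())     # 10.0
```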

16

The sum of all elements carries no information about the underlying endomorphism (it changes with the choice of basis), which is why you will not find such an operation in the literature.

If something along these lines is of interest, you can get the sum of the squares of all entries using the scalar product $\phi(A,B) := \mathrm{tr}(A^T B)$. In fact, $\mathrm{tr}(A^T A) = \sum_{i=1}^n \sum_{j=1}^n a_{i,j}^2$.
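A quick check of the sum-of-squares identity (a minimal sketch using NumPy; the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# tr(A^T A) equals the sum of the squares of all entries
print(np.trace(A.T @ A))  # 30.0
print((A ** 2).sum())     # 30.0
```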

  • 0
    Trace can be found at the center of many applications of matrices, but I am not aware of a simple intuitive formulation. (2016-10-18)
16

The term "grand sum" is commonly used, if only informally, to represent the sum of all elements.

By the way, the grand sum is a very important quantity in the contexts of Markovian transition matrices and other probabilistic applications of linear algebra.
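For instance, every row of a Markovian transition matrix sums to $1$, so the grand sum of an $n \times n$ transition matrix is exactly $n$ (a minimal sketch using NumPy; the chain below is an arbitrary example):

```python
import numpy as np

# a row-stochastic (Markov transition) matrix: each row sums to 1,
# so the grand sum of an n x n transition matrix is exactly n
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

print(P.sum(axis=1))  # [1. 1. 1.]
print(P.sum())        # 3.0
```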

Regards, Scott

7

You can certainly consider the sum of all the entries in a square matrix. But what would it be good for?

Bear in mind that square matrices are a way of writing endomorphisms (i.e., linear transformations of a space into itself) explicitly, so any quantity you attach to a matrix should actually say something about the endomorphism. Trace and determinant remain unchanged if the matrix $A$ is replaced by the matrix $PAP^{-1}$, where $P$ is any invertible matrix. Thus, trace and determinant are numbers that you can attach to the endomorphism represented by $A$.

This is not the case for the sum of all entries, which does not remain invariant under such a transformation.
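This is easy to see numerically (a minimal sketch using NumPy with random matrices; a random $P$ is invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))  # generically invertible
B = P @ A @ np.linalg.inv(P)     # the conjugate P A P^{-1}

print(np.trace(A), np.trace(B))  # equal, up to floating-point error
print(A.sum(), B.sum())          # generally different
```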

  • 2
    I do not think it is completely clear that the Euclidean norm in $\mathbb{R}^{n^2}$ is invariant under conjugation by orthogonal elements, which are defined using the Euclidean norm in $\mathbb{R}^n$. (2012-07-30)
5

The max norm:

The max norm is the elementwise norm with $p = \infty$ in the family $\Vert A \Vert_{p} = \left( \sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^p \right)^{1/p}$; that is, $ \|A\|_{\text{max}} = \max_{i,j} |a_{ij}|. $ This norm is not sub-multiplicative.

If you want something without absolute values, consider the projection of your matrix onto $E$, $\text{tr}\left(E\cdot A\right)$, where $E$ is the matrix full of $1$'s. This is equivalent to computing the scalar product $\langle e |Ae \rangle$, with $e$ the vector full of $1$'s, since $|e \rangle \langle e|=E$.
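Both formulations agree with the plain entrywise sum, as a small NumPy sketch (with an arbitrary example matrix) confirms:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
E = np.ones_like(A)  # the matrix full of 1's

# tr(E A) equals the sum of all entries of A
print(np.trace(E @ A))  # 10.0
print(A.sum())          # 10.0
```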

3

I refer you to the article Merikoski, "On the trace and the sum of elements of a matrix", Linear Algebra and its Applications, Volume 60, August 1984, pp. 177-185.

  • 3
    Could you give some description of what that article says? Links are fine, but if the whole answer is essentially a link, it is of little value if the link goes stale. The library reference is also good, but not of much use to someone who doesn't have access to a university library. (2013-01-31)
2

I just want to add that the "grandsum" operation, as Scott's answer calls it, does in fact show up in (vector) geometry.

Consider the sequence of formulae:

  • $|x|^2 = x\bullet x$

  • $|x+y|^2 = |x|^2+|y|^2+2x\bullet y$

  • $|x+y+z|^2 = |x|^2+|y|^2+|z|^2+2x\bullet y+2x\bullet z+2y\bullet z$

  • etc.

A nice way of remembering these is to instead remember the following, more intuitive formulae:

$|x|^2 = x\bullet x$

$|x+y|^2 = \mathrm{grandsum}\left( \begin{array}{ccc} x \bullet x & x \bullet y \\ y \bullet x & y \bullet y \end{array} \right)$

$|x+y+z|^2 = \mathrm{grandsum} \left( \begin{array}{ccc} x \bullet x & x \bullet y & x \bullet z \\ y \bullet x & y \bullet y & y \bullet z \\ z \bullet x & z \bullet y & z \bullet z \end{array} \right)$

etc.

Now one might object: "But those aren't really matrices, they're just arrays! The important thing is really matrix multiplication; that's what sets matrices apart from arrays, so if you haven't used matrix multiplication, you're not really using matrices."

Actually, by making use of J.M.'s answer, we can involve matrix multiplication in the proofs of these identities. For example, here's the $n=3$ case.

$|x+y+z|^2 = (x+y+z)^\top (x+y+z) = ([x,y,z]\tilde{1}_3)^\top([x,y,z]\tilde{1}_3) = \tilde{1}_3^\top[x,y,z]^\top[x,y,z] \tilde{1}_3 = \mathrm{grandsum}([x,y,z]^\top[x,y,z]) = \mathrm{grandsum} \left( \begin{array}{ccc} x \bullet x & x \bullet y & x \bullet z \\ y \bullet x & y \bullet y & y \bullet z \\ z \bullet x & z \bullet y & z \bullet z \end{array} \right),$ where $[x,y,z]$ is the matrix with columns $x$, $y$, $z$ and $\tilde{1}_3$ is the $3 \times 1$ column vector of all $1$'s.

Of course, this isn't really necessary, since we can just expand things out by hand. Still, it's nice to know that there's a proof out there that involves matrix multiplication in a very real way, since it reassures us that we're really taking the sum of a matrix, and not just a "mere array."
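The $n = 3$ identity is also easy to verify numerically (a minimal sketch using NumPy; the three vectors are arbitrary):

```python
import numpy as np

x = np.array([1.0, 0.0, 2.0])
y = np.array([0.0, 3.0, 1.0])
z = np.array([2.0, 1.0, 0.0])

M = np.column_stack([x, y, z])  # the matrix [x, y, z]
G = M.T @ M                     # Gram matrix of pairwise dot products

# the grand sum of the Gram matrix equals |x + y + z|^2
print(G.sum())                         # 34.0
print(np.linalg.norm(x + y + z) ** 2)  # 34.0
```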

2

If your matrix $A$ is invertible, then the sum over all of its elements is given by $\sum_{i,j}A_{ij} = 1 - \det (I-AJ)$, where $J$ is the matrix all of whose entries are $1$.

To see why, consider the determinant $\det (B-J)$ for an invertible matrix $B$. If ${\bf b}_i$ are the column vectors of $B$ and ${\bf j}$ is the column vector all of whose entries are $1$, then expanding by multilinearity in each column, and discarding every term that contains two or more copies of ${\bf j}$ (a determinant with two equal columns vanishes), we have \begin{align} \det ({\bf b}_1 - {\bf j}, \ldots, {\bf b}_n - {\bf j}) &= \det ({\bf b}_1, {\bf b}_2 - {\bf j}, \ldots, {\bf b}_n - {\bf j}) - \det ({\bf j}, {\bf b}_2 - {\bf j}, \ldots, {\bf b}_n - {\bf j})\\ &=\det B - \displaystyle\sum_{k=1}^n \det\left( {\bf b}_1, \ldots, {\bf b}_{k-1}, {\bf j}, {\bf b}_{k+1}, \ldots, {\bf b}_n \right). \end{align} Notice that the last sum is the sum over all entries of the adjugate matrix of $B$; since $\operatorname{adj}(B) = \det(B)\,B^{-1}$, we have $ \displaystyle\sum_{k=1}^n \det\left( {\bf b}_1, \ldots, {\bf b}_{k-1}, {\bf j}, {\bf b}_{k+1}, \ldots, {\bf b}_n \right) = \det B \left(\sum_{i,j=1}^n (B^{-1})_{ij} \right), $ and hence $\det(B-J) = \det B \left(1 - \sum_{i,j} (B^{-1})_{ij}\right)$. Setting $B = A^{-1}$ and multiplying both sides by $\det A$ gives $\det(I - AJ) = 1 - \sum_{i,j} A_{ij}$, which is the result.
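A numerical sanity check of the identity (a minimal sketch using NumPy; a random matrix is invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))  # generically invertible
J = np.ones((n, n))              # matrix of all 1's
I = np.eye(n)

# for invertible A: sum of all entries of A = 1 - det(I - A J)
print(A.sum())
print(1 - np.linalg.det(I - A @ J))
```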