My background is in Clifford algebra as applied to physics. I'll do my best to try to phrase this answer in a way that is broadly understandable.
In Clifford algebra, one distinguishes between tensors that represent multivectors and those that represent linear operators on multivectors. A multivector could be a scalar or a vector, but it could also be a bivector--an oriented object representing a planar subspace through the origin, just as a vector is an oriented object representing a one-dimensional subspace through the origin--or something of still higher dimension. Linear operators are the same as you'd expect from more conventional approaches to linear algebra, however.
The reason for distinguishing the two kinds is that the invariants come from different lines of reasoning.
Now, what is an isometric linear operator? I admit I'm not entirely sure of this, but it ought to be an operator $\underline M$ such that $\underline M(a) \cdot \underline M(b) = a \cdot b$ for two vectors $a, b$. If my interpretation is incorrect, please let me know. I believe this is correct, though, for it implies that, if the adjoint operator (or, in Euclidean space, the transpose operator) is denoted $\overline M$, then $\underline M \overline M = \underline I$, the identity--thus $\overline M = \underline M^{-1}$ and the operator is unitary (orthogonal) and corresponds to an isometry of the space.
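A quick numerical sanity check of this interpretation (a sketch using numpy; the QR construction of a random orthogonal matrix is just one convenient way to get an isometry):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random orthogonal matrix M via QR decomposition.
M, _ = np.linalg.qr(rng.standard_normal((3, 3)))

a = rng.standard_normal(3)
b = rng.standard_normal(3)

# Isometry condition: M(a) . M(b) = a . b.
assert np.isclose((M @ a) @ (M @ b), a @ b)

# The adjoint (here, the transpose) is the inverse: M^T M = I.
assert np.allclose(M.T @ M, np.eye(3))
```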
So now that we've talked about multivectors and isometric transformations, consider a general multivector $A$. This multivector can be acted on by some isometric transformation $\underline M$, forming $\underline M(A)$. Any multivector has a scalar product (which we can denote by $\cdot$ just as with vectors), and so we can form $\underline M(A) \cdot \underline M(A) = A \cdot A$. In index notation, this would be denoted by a contraction over all indices. The scalar product of a multivector with itself is always invariant under a unitary transformation, and so any function of only the scalar products of multivectors is also invariant.
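To make "contraction over all indices" concrete, here is a sketch for the bivector case, with $a \wedge b$ represented as the antisymmetric rank-2 tensor $B_{ij} = a_i b_j - a_j b_i$ (my convention and normalization; the invariance doesn't depend on either):

```python
import numpy as np

rng = np.random.default_rng(1)
M, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random isometry

a, b = rng.standard_normal(3), rng.standard_normal(3)

# The bivector a ^ b as an antisymmetric tensor B_ij = a_i b_j - a_j b_i.
B = np.outer(a, b) - np.outer(b, a)

# The isometry acts on each index: B'_ij = M_ik M_jl B_kl.
B_t = M @ B @ M.T

# The scalar product with itself is a full contraction over all indices,
# and it is unchanged by the isometry.
assert np.isclose(np.einsum('ij,ij->', B_t, B_t),
                  np.einsum('ij,ij->', B, B))
```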
But what about invariants of linear operators? Again, these stem from some sort of scalar product. Consider a linear operator $\underline F$. The natural scalar product to form is $\nabla_a \cdot \underline F(a)$. This is the trace. Under an isometry, the operator transforms as $\underline F \mapsto \underline M \underline F \overline M$, and inside the trace the $\underline M$ and its adjoint/inverse $\overline M$ cancel, just as the isometries cancelled in the multivector case. This applies to any linear operator on a vector that returns a vector. If the dimensions of the input and output no longer match--for example a function of a vector that returns a scalar--then the trace is no longer a scalar and no longer an invariant (one could rightly ask whether it has any meaning, even). However, for any "square" operator, there is a well-defined scalar trace, and isometric transformations will hold it invariant. Any function of only the trace is then invariant under unitary transformations.
It should be noted that when one considers a stricter set of linear transformations, the space of invariants may be different. When one considers only the space of rotation operators, for instance, more invariants emerge--in particular, the pseudoscalar of the space--the volume element, if you will--is unchanged in both size and orientation. This captures that rotation operators have determinant 1, instead of -1 as reflection operators have. Some multivectors can combine with themselves to form invariants that are multiples of the pseudoscalar under the set of rotation operators.
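The determinant distinction is simple to check directly. Below, the rotation and reflection matrices are illustrative choices of mine, not anything canonical:

```python
import numpy as np

theta = 0.7
# A rotation in the xy-plane and a reflection across the yz-plane.
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
S = np.diag([-1.0, 1.0, 1.0])

# Rotations preserve the pseudoscalar's orientation (det = +1);
# reflections flip it (det = -1).
assert np.isclose(np.linalg.det(R), 1.0)
assert np.isclose(np.linalg.det(S), -1.0)
```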
Edit: I did neglect that we can also take the trace of the operator extended to higher-grade multivectors via outermorphism. I will illustrate.
Let $\underline A(a) = (a \cdot e^1) f_1 + (a \cdot e^2) f_2 + (a \cdot e^3) f_3$ represent a linear operator on a vector in a 3d vector space. The vectors $f_i$ would be columns of the corresponding matrix representation.
The trace is $\nabla_a \cdot \underline A(a) = e^1 \cdot f_1 + e^2 \cdot f_2 + e^3 \cdot f_3 = {A^1}_1 + {A^2}_2 + {A^3}_3$. This is the first invariant.
Now, we extend this operator via wedge (outer) products. $\underline A(a \wedge b) = \underline A(a) \wedge \underline A(b)$. You need only know that wedge products are anticommutative to evaluate this, though some identities help. Let $e^1 \wedge e^2 \equiv e^{12}$ for notational compactness, and define $B = a \wedge b$ for two vectors $a, b$ to get
$\underline A(B) = (B \cdot e^{21}) f_{12} + (B \cdot e^{32}) f_{23} + (B \cdot e^{13}) f_{31}$
Taking the trace of this operator yields $e^{21} \cdot f_{12} + e^{32} \cdot f_{23} + e^{13} \cdot f_{31}$. In terms of the original matrix coefficients, this is ${A^1}_1 {A^2}_2 - {A^1}_2 {A^2}_1 + \ldots$. This is essentially the same as that discussed in the wiki article, except they explicitly consider a symmetric matrix.
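In matrix language, this second invariant is the sum of the $2 \times 2$ principal minors, and it equals $(\operatorname{tr}(A)^2 - \operatorname{tr}(A^2))/2$. A sketch verifying both the closed form and the invariance (the helper name `second_invariant` is mine):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))

def second_invariant(A):
    """Sum of 2x2 principal minors: A^1_1 A^2_2 - A^1_2 A^2_1 + ..."""
    n = A.shape[0]
    return sum(A[i, i] * A[j, j] - A[i, j] * A[j, i]
               for i in range(n) for j in range(i + 1, n))

# Equivalent closed form: (tr(A)^2 - tr(A^2)) / 2.
assert np.isclose(second_invariant(A),
                  (np.trace(A) ** 2 - np.trace(A @ A)) / 2)

# It is unchanged by an isometric change of frame A -> M A M^T.
M, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.isclose(second_invariant(M @ A @ M.T), second_invariant(A))
```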
Building by outermorphism again yields an operator on the 3d pseudoscalar of the space that simply returns a multiple of it--that multiple is the determinant, and the trace of this one-dimensional operator is just the determinant itself.
I realize the mathematical machinery I've used to discuss these invariants may be strange, but here's a qualitative description: every linear operator on vectors can be extended to act on planes, volumes, and so on. Each of these extensions has its own trace, which is an invariant. Hence, an invertible linear operator has $N$ invariants in an $N$-dimensional space.
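These $N$ traces are (up to sign) the coefficients of the characteristic polynomial, so one way to check the whole family at once is to compare characteristic polynomials before and after an isometric change of frame. A sketch using numpy's `poly`, which returns those coefficients for a square matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))
M, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random isometry

# np.poly(A) gives the characteristic polynomial's coefficients, which
# encode the trace, the second invariant, ..., and the determinant.
coeffs = np.poly(A)
coeffs_t = np.poly(M @ A @ M.T)          # same operator in a rotated frame

# All N invariants agree at once.
assert np.allclose(coeffs, coeffs_t)
```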