
UPD: Equivalent formulation: how do you find all the independent isometric invariants of a tensor?

In what follows $V$ is a real inner product space.

I want to understand how one finds all (scalar) invariants of tensors under isometries. Wikipedia has a relevant article, but it confines itself to tensors of type $V^* \otimes V$ and equivalents.

What I mean by an invariant of a tensor under isometries: an invariant of a $V \otimes V^*$ tensor is a function $I : V \otimes V^* \to \mathbb R$ such that

$I\left( Q(A) \right) = I(A)$

for any isometry $Q$ (transformation) properly lifted to act on tensors.

I've asked a question about the invariants of a tensor of type $V$, but couldn't get the intuition to extend the reasoning to higher-rank tensors.

So far as I understand vectors have invariants of the form

$f(\langle v, v \rangle)$

whereas tensors in $V^* \otimes V$ have

$f(\operatorname{Tr} A, \operatorname{Tr} A^2, \operatorname{Tr} A^3)$

$f$ is meant to be an arbitrary function to $\mathbb R$ of appropriate type.

I don't even see why there are three independent invariants for rank two tensors.
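
To make the definition above concrete, here is a quick numerical sanity check (a sketch in numpy; the particular matrices and the seed are arbitrary choices of mine, not part of the question) that $\langle v, v \rangle$ and $\operatorname{Tr} A^k$ are indeed unchanged when an isometry $Q$ acts as $v \mapsto Qv$ and $A \mapsto Q A Q^{-1}$:

```python
# Numerical sanity check of the invariance claims above (arbitrary example data).
import numpy as np

rng = np.random.default_rng(0)

# A random isometry: Q from the QR decomposition of a random matrix is orthogonal.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

v = rng.standard_normal(3)            # a tensor of type V
A = rng.standard_normal((3, 3))       # a tensor of type V (x) V^*

v2 = Q @ v                            # transformed vector
A2 = Q @ A @ Q.T                      # Q^{-1} = Q^T for an isometry

print(np.isclose(v @ v, v2 @ v2))     # <v, v> is preserved
for k in (1, 2, 3):                   # Tr A^k is preserved
    print(np.isclose(np.trace(np.linalg.matrix_power(A, k)),
                     np.trace(np.linalg.matrix_power(A2, k))))
```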

One might wonder about the importance of the question and why I'm asking it. The above-mentioned Wikipedia article provides a good example: the potential energy (a scalar) in an elastic material is a function of the strain tensor. The coupling between them should be natural in the sense that it should not involve arbitrary tensors or vectors (because there simply are none); one has to construct a scalar from the strain tensor alone. Theories in continuum mechanics abound in tensor fields of various ranks, and one needs to seek the forms of natural couplings between them.

3 Answers

2

This seems too large for a comment, so I'll post it as an answer.

First of all, for matrices (a.k.a. tensors of type $V\otimes V^*$). If the tensor/matrix is symmetric, then it is diagonalizable by an isometry (this is the spectral theorem), so any isometric invariant is a function of the eigenvalues. Moreover, as permutations are isometries, it has to be a symmetric function of the eigenvalues. If one further restricts to polynomial symmetric functions, then all such are well understood: they are all expressible in terms of either $a_1+a_2+\ldots+a_n$, $a_1^2+a_2^2+\ldots+a_n^2$, $\ldots$, $a_1^n+a_2^n+\ldots+a_n^n$ (these are the traces in your question), or in terms of the coefficients of $(x-a_1)(x-a_2)\cdots (x-a_n)$ (those are the invariants in the Wikipedia article; see also http://en.wikipedia.org/wiki/Elementary_symmetric_polynomial). When written not in terms of the eigenvalues but as traces or coefficients of the characteristic polynomial, they become invariants of the matrix entries.
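
A small numerical illustration of this equivalence (my own sketch in numpy, with an arbitrary symmetric matrix, not something from the answer itself): the traces of powers are the power sums of the eigenvalues, and the characteristic-polynomial coefficients are, up to sign, the elementary symmetric polynomials.

```python
# Power sums of eigenvalues vs. traces, and char-poly coefficients (arbitrary example).
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
A = (S + S.T) / 2                      # a symmetric matrix
eig = np.linalg.eigvalsh(A)            # its eigenvalues a_1, ..., a_n

# Power sums a_1^k + ... + a_n^k equal Tr A^k.
for k in (1, 2, 3):
    print(np.isclose(np.sum(eig**k),
                     np.trace(np.linalg.matrix_power(A, k))))

# Coefficients of (x - a_1)(x - a_2)(x - a_3) are (up to sign) the elementary
# symmetric polynomials; they coincide with the characteristic polynomial of A.
print(np.allclose(np.poly(eig), np.poly(A)))
```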

For not-necessarily-symmetric matrices there could be other invariants. Indeed there must be, for the space of all matrices has dimension $n^2$ and the group of isometries has dimension $n(n-1)/2$, so the quotient has dimension $n(n+1)/2>n$; in particular, in 3-D one expects 6 invariants. I don't know the answer, but what one is doing is taking the space (variety) of all matrices and quotienting it by the action of the group of isometries, and then asking what the functions on the resulting space are. In the symmetric case, the quotient is actually $\mathbb{R}^n$ coordinatized in this funny way. In the general case I'm not sure, but I suspect people who do invariant theory or representation theory know this.
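
For what it's worth, here is one concrete illustration of that dimension count (my own example, not something claimed in the answer): $\operatorname{Tr}(A A^T)$ is also invariant under conjugation by isometries, and for non-symmetric matrices it is not determined by $\operatorname{Tr} A$, $\operatorname{Tr} A^2$, $\operatorname{Tr} A^3$ alone.

```python
# Tr(A A^T) as an extra isometric invariant for non-symmetric matrices (my example).
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = rng.standard_normal((3, 3))
A2 = Q @ A @ Q.T

# Invariance under conjugation by an isometry:
print(np.isclose(np.trace(A @ A.T), np.trace(A2 @ A2.T)))

# Two matrices with identical Tr A^k (k = 1, 2, 3) but different Tr(A A^T):
B = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 0.]])          # nilpotent: Tr B^k = 0 for all k >= 1
C = np.zeros((3, 3))                  # all traces of powers are 0 as well
print(np.trace(B @ B.T), np.trace(C @ C.T))   # 1.0 vs 0.0
```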

Note: In the scalar case the quotient $V$ divided by isometries is $\mathbb{R}_+$ coordinatized by the square length.

Note 2: Skew-symmetric matrices can be brought to a canonical block form by isometric transformations (http://en.wikipedia.org/wiki/Skew-symmetric_matrix), so the invariants are given in terms of the squares of the eigenvalues, which are well expressed as invariants of $A^2$. So one again gets either the traces of $A^2$, $A^4$, and so on, or the coefficients of the characteristic polynomial of $A^2$. Of course, when $n=3$ there is only one invariant: $\operatorname{Tr} A^2$.
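
A quick check of Note 2 in three dimensions (a numpy sketch with an arbitrary skew-symmetric matrix of my choosing): $\operatorname{Tr} A$ and $\operatorname{Tr} A^3$ vanish identically, so $\operatorname{Tr} A^2$ is indeed the only trace left.

```python
# Trace invariants of a 3x3 skew-symmetric matrix (arbitrary example entries).
import numpy as np

a, b, c = 1.0, -2.0, 0.5
A = np.array([[ 0.,  a,  b],
              [-a,  0.,  c],
              [-b, -c,  0.]])

print(np.trace(A))                                   # 0
print(np.trace(np.linalg.matrix_power(A, 3)))        # 0 (also true for any odd power)
print(np.trace(A @ A), -2 * (a*a + b*b + c*c))       # equal: Tr A^2 = -2(a^2 + b^2 + c^2)
```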

For "higher" tensors things seem less obvious still. You agin take some vector space (of tensors) and quotient it by the action of the group of isometries and ask for the functions on the resulting quotient.

One other comment: if I understand correctly, in some theories you get scalars that depend not only on the tensor itself but also on its derivatives of various orders. If you allow this generalization, the problem becomes harder.

1

My background is in Clifford algebra as applied to physics. I'll do my best to try to phrase this answer in a way that is broadly understandable.

In Clifford algebra, one distinguishes between tensors that represent multivectors and those that represent linear operators on multivectors. A multivector could be a scalar or a vector, but it could also be a bivector--an oriented object representing a planar subspace through the origin, just as a vector is an oriented object representing a one-dimensional subspace through the origin--or something higher-dimensional still. Linear operators are the same as you'd expect from more conventional approaches to linear algebra, however.

The reason for distinguishing the two kinds is that the invariants come from different lines of reasoning.

Now, what is an isometric linear operator? I admit I'm not entirely sure of this, but it ought to be an operator $\underline M$ such that $\underline M(a) \cdot \underline M(b) = a \cdot b$ for two vectors $a, b$. If my interpretation is incorrect, please let me know. I believe this is correct, though, for it implies that, if the adjoint operator (or, in Euclidean space, the transpose operator) is denoted $\overline M$, then $\underline M \overline M = \underline I$, the identity--thus $\overline M = \underline M^{-1}$ and the operator is unitary (orthogonal) and corresponds to an isometry of the space.

So now that we've talked about multivectors and isometric transformations, consider a general multivector $A$. This multivector can be acted on by some isometric transformation $\underline M$, forming $\underline M(A)$. Any multivector has a scalar product (which we can denote by $\cdot$ just as with vectors), and so we can form $\underline M(A) \cdot \underline M(A) = A \cdot A$. In index notation, this would be denoted by a contraction over all indices. The scalar product of a multivector with itself is always invariant with respect to a unitary transformation, and so any function of only the scalar products of multivectors is also invariant.

But what about invariants of linear operators? Again, these stem from some sort of scalar product. Consider a linear operator $\underline F$. The natural scalar product to form is $\nabla_a \cdot \underline F(a)$. This is the trace. By the same logic as with the multivectors, one ultimately gets the isometric transformation to cancel out by having the operator act on its own adjoint/inverse. This applies to any linear operator on a vector that returns a vector. If the dimensions of the input and output no longer match--for example a function of a vector that returns a scalar--then the trace is no longer a scalar and no longer an invariant (one could rightly ask whether it has any meaning, even). However, for any "square" operator, there is a well-defined scalar trace, and isometric transformations will hold it invariant. Any function of only the trace is then invariant under unitary transformations.

It should be noted that when one considers a stricter set of linear transformations, the space of invariants may be different. When one considers only the space of rotation operators, for instance, more invariants emerge--in particular, the pseudoscalar of the space--the volume element, if you will--is unchanged in both size and orientation. This captures that rotation operators have determinant 1, instead of -1 as reflection operators have. Some multivectors can combine with themselves to form invariants that are multiples of the pseudoscalar under the set of rotation operators.
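
As a concrete instance of that last point (my own illustration, not part of the original answer): the triple product $a \cdot (b \times c)$, which is a multiple of the pseudoscalar, is preserved by rotations but changes sign under a reflection.

```python
# Pseudoscalar behaviour under rotations vs. reflections (arbitrary example vectors).
import numpy as np

rng = np.random.default_rng(3)
a, b, c = rng.standard_normal((3, 3))        # three arbitrary vectors

Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                            # force det(Q) = +1, i.e. a rotation
R = np.diag([1., 1., -1.])                   # a reflection, det(R) = -1

triple = lambda u, v, w: np.dot(u, np.cross(v, w))

t0 = triple(a, b, c)
print(np.isclose(triple(Q @ a, Q @ b, Q @ c), t0))    # rotation: unchanged
print(np.isclose(triple(R @ a, R @ b, R @ c), -t0))   # reflection: sign flips
```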

Edit: I think I did neglect that we can take the trace of the operator extended to its counterparts via outermorphism. I will illustrate.

Let $\underline A(a) = (a \cdot e^1) f_1 + (a \cdot e^2) f_2 + (a \cdot e^3) f_3$ represent a linear operator on a vector in a 3d vector space. The vectors $f_i$ would be columns of the corresponding matrix representation.

The trace is $\nabla_a \cdot \underline A(a) = e^1 \cdot f_1 + e^2 \cdot f_2 + e^3 \cdot f_3 = {A^1}_1 + {A^2}_2 + {A^3}_3$. This is the first invariant.

Now, we extend this operator via wedge (outer) products. $\underline A(a \wedge b) = \underline A(a) \wedge \underline A(b)$. You need only know that wedge products are anticommutative to evaluate this, though some identities help. Let $e^1 \wedge e^2 \equiv e^{12}$ for notational compactness, and define $B = a \wedge b$ for two vectors $a, b$ to get

$\underline A(B) = (B \cdot e^{21}) f_{12} + (B \cdot e^{32}) f_{23} + (B \cdot e^{13}) f_{31}$

Taking the trace of this operator yields $e^{21} \cdot f_{12} + e^{32} \cdot f_{23} + e^{13} \cdot f_{31}$. In terms of the original matrix coefficients, this is ${A^1}_1 {A^2}_2 - {A^1}_2 {A^2}_1 + \ldots$. This is essentially the same as that discussed in the wiki article, except they explicitly consider a symmetric matrix.

Building by outermorphism again yields an operator on the 3d pseudoscalar of the space that simply returns a multiple of it---that multiple is the determinant, and so the trace is trivial.

I realize the mathematical machinery I've used to discuss these invariants may be strange, but here's a qualitative description: every linear operator on vectors can be extended to act on planes, volumes, and so on. Each of these extensions has its own trace, which is an invariant. Hence, an invertible linear operator has $N$ invariants in an $N$-dimensional space.
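
In plain matrix language, the three traces described above are the trace of $A$, the sum of the $2\times 2$ principal minors, and the determinant, i.e. exactly the characteristic-polynomial coefficients. A short numpy sketch of my own (arbitrary example matrix) making that identification:

```python
# The three invariants as traces of the outermorphism extensions, in matrix terms.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))

def principal_minor_sum(M, k):
    """Sum of all k x k principal minors of M."""
    n = M.shape[0]
    return sum(np.linalg.det(M[np.ix_(idx, idx)])
               for idx in combinations(range(n), k))

i1 = np.trace(A)                  # trace of A itself
i2 = principal_minor_sum(A, 2)    # trace of the extension to bivectors
i3 = np.linalg.det(A)             # trace of the pseudoscalar extension

# Characteristic polynomial of a 3x3 matrix: x^3 - i1 x^2 + i2 x - i3
print(np.allclose(np.poly(A), [1., -i1, i2, -i3]))
```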

  • 0
    I find your view, stated basically in your last paragraph, very inspiring and insightful. I'll need some more time to get to it fully, but the time for the bounty is short and I'll reward it now. (2012-11-16)
-1

A second-rank tensor in 3 dimensions has only three eigenvalues $\lambda_1, \lambda_2, \lambda_3$; its characteristic polynomial has 3 different coefficients $J_1, J_2, J_3$; and it has only 3 independent traces of its powers, $\operatorname{Tr}(A)$, $\operatorname{Tr}(A^2)$ and $\operatorname{Tr}(A^3)$. All this amounts to saying that it has only three independent invariants, which can be given in different formats: eigenvalues, principal invariants, or traces.

The relation between the eigenvalues and the principal invariants is very direct, because the first are the roots of the characteristic polynomial and the second are its coefficients. The third relation is much more difficult to see, and it follows from the Cayley-Hamilton theorem. In a few words, it says that "a second-rank tensor satisfies its own characteristic equation". So you have $J_0 A^3 + J_1 A^2 + J_2 A + J_3 I = 0$ (with $J_0 = 1$); taking traces, you find the relation to the last set of independent invariants. The Cayley-Hamilton theorem is valid in $n$ dimensions, that is, $J_0 A^n + J_1 A^{n-1} + \ldots + J_n I = 0$. The proof is really amazing and surprising.

Note that the independence of the traces seems natural because we have only one equation: the characteristic polynomial. On the other hand, the Cayley-Hamilton theorem is surprising because from this single equation it gives $n^2$ equations between the components of the tensor.
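
A quick numerical check of both statements (my own numpy sketch with an arbitrary $3\times 3$ matrix): the matrix annihilates its own characteristic polynomial, and taking the trace of that identity ties the coefficients $J_k$ to the traces of the powers.

```python
# Cayley-Hamilton check and the trace relation it implies (arbitrary example matrix).
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))

J = np.poly(A)        # [J0, J1, J2, J3] with J0 = 1: characteristic-polynomial coefficients

# Cayley-Hamilton: J0*A^3 + J1*A^2 + J2*A + J3*I = 0
CH = sum(J[k] * np.linalg.matrix_power(A, 3 - k) for k in range(4))
print(np.allclose(CH, np.zeros((3, 3))))

# Taking the trace of the same identity relates the J_k to Tr A^3, Tr A^2, Tr A, Tr I:
traces = [np.trace(np.linalg.matrix_power(A, 3 - k)) for k in range(4)]
print(np.isclose(np.dot(J, traces), 0.0))
```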