
The definition I learned of tensors is that they are multilinear maps. But in GR, when you drop into a chart I often see people do the following sort of operation:

$$A^{ab}B_{bc}=C^a_c$$

This never made sense to me. Why would this sort of operation produce a $(1,1)$-tensor?

2 Answers


$A^{ab}$ are the components of a $(2, 0)$-tensor, $B_{dc}$ are the components of a $(0, 2)$-tensor, and $A^{ab}B_{dc}$ are the components of their tensor product which is a $(2, 2)$-tensor. Taking the trace in the $b$ and $d$ indices gives a $(1, 1)$-tensor which has components $A^{ab}B_{bc}$.
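This trace-of-a-tensor-product argument can be checked numerically. The sketch below uses made-up $2\times 2$ component arrays (the values are arbitrary, not from the answer): it forms the $(2,2)$-tensor product $T^{ab}{}_{dc} = A^{ab}B_{dc}$, traces over the $b$ and $d$ slots, and compares the result with the direct contraction $A^{ab}B_{bc}$.

```python
n = 2
A = [[1.0, 2.0], [3.0, 4.0]]   # components A^{ab} (arbitrary example values)
B = [[5.0, 6.0], [7.0, 8.0]]   # components B_{dc} (arbitrary example values)

# Tensor product: a (2,2)-tensor with components T[a][b][d][c] = A^{ab} B_{dc}
T = [[[[A[a][b] * B[d][c] for c in range(n)]
       for d in range(n)]
      for b in range(n)]
     for a in range(n)]

# Trace over the b and d slots: set d = b and sum, giving a (1,1)-tensor
C_trace = [[sum(T[a][b][b][c] for b in range(n)) for c in range(n)]
           for a in range(n)]

# Direct contraction A^{ab} B_{bc} (ordinary matrix product of the arrays)
C_direct = [[sum(A[a][b] * B[b][c] for b in range(n)) for c in range(n)]
            for a in range(n)]

print(C_trace == C_direct)  # True
```

The two computations agree entry by entry, which is the content of the answer: contraction is the trace of the tensor product.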

  • I like how straightforward this proof is. Unfortunately, the set of lectures I've been watching (Schuller's GR lectures on YouTube) doesn't really cover the tensor product, so while I have a vague idea of what it is, I don't know it precisely. Is there a proof that doesn't require the tensor product? (2017-02-26)

The standard convention is that repeated indices are summed over. The $b$ index in $A^{ab}B_{bc}$ gets "summed out", leaving one upper index $a$ and one lower index $c$ — hence a $(1,1)$-tensor.
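Written out explicitly in four spacetime dimensions, the summation convention means

$$A^{ab}B_{bc} \equiv \sum_{b=0}^{3} A^{ab}B_{bc} = A^{a0}B_{0c} + A^{a1}B_{1c} + A^{a2}B_{2c} + A^{a3}B_{3c},$$

which, for each fixed $a$ and $c$, is just the $(a,c)$ entry of the matrix product of the component arrays.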