
I used to think that $$\Lambda_\alpha^\kappa \Lambda_\beta^\lambda \epsilon^{\alpha\beta}$$ is equivalent to $$\underline{\Lambda}\,\underline{\Lambda}\,\underline{\epsilon},$$ where $\underline{\Lambda}$ and $\underline{\epsilon}$ are $2\times 2$ matrices. When I do the calculation, however, the results differ. Did I just make a mistake, or are these expressions fundamentally different?

The reason I ask is that I'd like to have an intuitive understanding of intimidating expressions like $$\Lambda_\alpha^\kappa\Lambda_\beta^\lambda\Lambda_\gamma^\mu\Lambda_\delta^\nu \epsilon^{\alpha\beta\gamma\delta}.$$ What's going on here? I know how to expand this sum (reversing Einstein's convention), but I don't know what it actually means. Is it like taking all the row vectors (covariant vectors) of the first $\Lambda$ and multiplying them somehow by some of the column vectors of $\epsilon$? This really confuses me, and I don't see the benefit of this complicated tensor notation.
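To make this concrete, the four-index contraction can be evaluated mechanically; here is a sketch using NumPy's `einsum` with the 4D Levi-Civita symbol (the matrix `L` is arbitrary). It is a known identity that this contraction reproduces $\det(\Lambda)\,\epsilon^{\kappa\lambda\mu\nu}$, which the code checks numerically:

```python
import numpy as np
from itertools import permutations

# Build the 4D Levi-Civita symbol: sign of each permutation, 0 otherwise
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    # sign via counting inversions
    inv = sum(1 for i in range(4) for j in range(i + 1, 4) if perm[i] > perm[j])
    eps[perm] = (-1) ** inv

# Arbitrary 4x4 matrix standing in for Lambda
L = np.arange(1.0, 17.0).reshape(4, 4) + np.eye(4)

# Lambda^kappa_alpha Lambda^lambda_beta Lambda^mu_gamma Lambda^nu_delta eps^{alpha beta gamma delta}
contracted = np.einsum('ka,lb,mc,nd,abcd->klmn', L, L, L, L, eps)

# The result is again totally antisymmetric: det(L) times eps
assert np.allclose(contracted, np.linalg.det(L) * eps)
```

So in four dimensions the expression "means" the determinant of $\Lambda$ times the Levi-Civita symbol again.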

  • What do you get when you do the calculation? Don't forget that you can only contract with like indices. – 2017-01-17
  • In general the components of the matrix $AB$ are given by $(AB)_{ij} = A_{ik}B_{kj}$. For three matrices it's $(ABC)_{ij} =A_{ik}B_{km}C_{mj}$. Note the repeated indices follow each other: $kk$, then $mm$. – 2017-01-17

1 Answer


Let $\underline{\Lambda}$ and $\underline{\epsilon}$ be the matrices with elements $\underline{\Lambda}_{\,i,j}=\Lambda^i_j$ and $\underline{\epsilon}_{\,i,j}=\epsilon^{ij}$ respectively. Then $(\underline{\Lambda}^T)_{i,j}=\Lambda^j_i$.

So, writing $\Lambda^{\lambda}_{\beta}=(\underline{\Lambda}^T)_{\beta,\lambda}$ so that every contraction runs row-into-column, the appropriate matrix expression is $$ \Lambda^{\kappa}_{\alpha}\Lambda^{\lambda}_{\beta}\epsilon^{\alpha\beta}=\underline{\Lambda}_{\,\kappa,\alpha}\,\underline{\epsilon}_{\,\alpha,\beta}\,(\underline{\Lambda}^T)_{\beta,\lambda}=(\underline{\Lambda}\,\underline{\epsilon}\,\underline{\Lambda}^T)_{\kappa,\lambda}\,. $$
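As a quick numerical check, here is a sketch using NumPy's `einsum` (the particular entries of `L` are arbitrary):

```python
import numpy as np

# Arbitrary 2x2 matrix standing in for Lambda, and the 2D Levi-Civita symbol
L = np.array([[1.0, 2.0],
              [3.0, 4.0]])
eps = np.array([[0.0, 1.0],
                [-1.0, 0.0]])

# Index expression: Lambda^kappa_alpha Lambda^lambda_beta eps^{alpha beta}
index_version = np.einsum('ka,lb,ab->kl', L, L, eps)

# Matrix expression: Lambda eps Lambda^T
matrix_version = L @ eps @ L.T

assert np.allclose(index_version, matrix_version)
```

In two dimensions $\underline{\Lambda}\,\underline{\epsilon}\,\underline{\Lambda}^T=\det(\underline{\Lambda})\,\underline{\epsilon}$, so the result here is $-2\,\underline{\epsilon}$.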

  • Ok, that explains why my calculation was wrong. Can you tell me where this comes from? – 2017-01-17
  • I'm not sure what information you're looking for, sorry. One thing that might be worth saying is the following. When the tensors have two indices or fewer, it's sometimes useful to put this into matrix and vector notation, because this is what people may be familiar with. However, if your tensors have more than two indices, then they cannot be visualised this way. Hence the index notation is in this sense more general. – 2017-01-17
  • Thx. Yes, I know that higher-order tensors cannot be expressed as matrices; that's why I chose a 2D example. In particular, I'm interested in how you came up with the matrix expression. Where does the transpose come from, and why can I rearrange the order of the matrices and $\epsilon$? I know it's quite a stupid question, but I don't really know where to start in order to understand it. EDIT: Of course you don't need to answer. But maybe you can provide me a useful link? – 2017-01-17
  • $\Lambda^1_2$, for example, is just a number, $\Lambda^2_2$ is just some other number, and similarly $\epsilon^{12}$ etc. are just numbers. So the order we put these things in does not matter: numbers can be reordered freely. Only when we want to represent the expression with matrices do we need to worry about ordering, because the order of matrices tells you which indices are contracting with which (which rows with which columns), while in the index expression it's always clear which indices are contracting with which. – 2017-01-17