I used to think that $$\Lambda_\alpha^\kappa \Lambda_\beta^\lambda \epsilon^{\alpha\beta}$$ is equivalent to the matrix product $$\underline{\Lambda}\,\underline{\Lambda}\,\underline{\epsilon},$$ where $\underline{\Lambda}$ and $\underline{\epsilon}$ are $2\times 2$ matrices. When I do the calculation, however, the results differ. Did I just make a mistake, or are these expressions fundamentally different?
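For concreteness, here is a minimal numpy sketch of the comparison I have in mind. The values of `L` and `eps` are made up purely for the check (they are not meant to be a real Lorentz transformation); `np.einsum` performs exactly the index contraction written above, so it can be compared against candidate matrix translations:

```python
import numpy as np

# Arbitrary 2x2 example values; made-up numbers, not physics input.
L = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # stands in for Lambda
eps = np.array([[0.0, 1.0],
                [-1.0, 0.0]])         # stands in for epsilon

# The index expression:
#   T^{kappa lambda} = Lambda^kappa_alpha Lambda^lambda_beta eps^{alpha beta}
# i.e. sum over alpha (a) and beta (b) of L[k,a] * L[l,b] * eps[a,b].
T = np.einsum('ka,lb,ab->kl', L, L, eps)

# Candidate matrix translations of the same expression:
print(np.allclose(T, L @ L @ eps))     # Lambda Lambda eps
print(np.allclose(T, L @ eps @ L.T))   # Lambda eps Lambda^T
```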
The reason I ask is that I'd like an intuitive understanding of intimidating expressions like $$\Lambda_\alpha^\kappa\Lambda_\beta^\lambda\Lambda_\gamma^\mu\Lambda_\delta^\nu \epsilon^{\alpha\beta\gamma\delta}.$$ What's going on here? I know how to expand this sum (reversing Einstein's convention), but I don't know what it actually means. Is it like taking all the row vectors (covariant vectors) of the first $\Lambda$ and multiplying them somehow by some of the column vectors of $\epsilon$? This really confuses me, and I don't see the benefit of this complicated tensor notation.
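In the same spirit, here is a brute-force sketch of the four-index version, with a random matrix standing in for $\Lambda$. It builds the Levi-Civita symbol explicitly and confirms that the compact Einstein notation is nothing more than the fully expanded nested sum:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((4, 4))      # made-up stand-in for Lambda

# Build eps^{alpha beta gamma delta}: the sign of the permutation,
# and 0 whenever an index repeats (the array starts out as zeros).
eps4 = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    inversions = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    eps4[p] = (-1) ** inversions

# Einstein convention in one call: contract alpha, beta, gamma, delta.
T = np.einsum('ka,lb,mc,nd,abcd->klmn', L, L, L, L, eps4)

# The same contraction with the convention "reversed" into explicit
# nested sums, for one fixed choice of the free indices
# (kappa, lambda, mu, nu) = (0, 1, 2, 3):
s = sum(L[0, a] * L[1, b] * L[2, c] * L[3, d] * eps4[a, b, c, d]
        for a in range(4) for b in range(4)
        for c in range(4) for d in range(4))
print(np.isclose(T[0, 1, 2, 3], s))  # True: the notation is just this sum
```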