
I am following the proof of the existence of the Hodge-$*$ operator in Naber's Geometry, Topology and Gauge Fields.

Given a basis $(e_1, \dots, e_n)$ for a vector space and its dual basis $(e^1, \dots, e^n)$, suppose that

$ \sum_{i_1 < \cdots < i_{n-k}} \gamma_{i_1 \dots i_{n-k}}\, e^{l_1}\wedge \cdots \wedge e^{l_k} \wedge e^{i_1} \wedge \cdots \wedge e^{i_{n-k}} = 0. $

The text claims that evaluating this expression at $(e_{l_1}, \dots, e_{l_k}, e_{j_1}, \dots, e_{j_{n-k}})$ shows that $\gamma_{j_1 \dots j_{n-k}}=0$. I am trying to see how this is so; here is my progress thus far:

I know that $ e^1 \wedge \cdots \wedge e^n(e_1, \dots, e_n) = 1 $, so my strategy is to manipulate the given expression into a form to which this identity can be applied. Toward this end, select the permutation $\sigma$ that leaves the $l$ indices fixed and that associates $i_s$ with $j_s$; that is, $\sigma(l_r) = l_r$ and $\sigma(i_s) = j_s$. Then, using this permutation and evaluating the expression above at the indicated point yields

$ \sum_{i_1 < \cdots < i_{n-k}} \gamma_{i_1 \dots i_{n-k}}\, e^{l_1}\wedge \cdots \wedge e^{l_k} \wedge e^{i_1} \wedge \cdots \wedge e^{i_{n-k}}(e_{\sigma(l_1)}, \dots, e_{\sigma(l_k)}, e_{\sigma(i_1)}, \dots, e_{\sigma(i_{n-k})}) $

=

$ \sum_{i_1 < \cdots < i_{n-k}} \gamma_{i_1 \dots i_{n-k}}\, \epsilon(\sigma)\, e^{l_1}\wedge \cdots \wedge e^{l_k} \wedge e^{i_1} \wedge \cdots \wedge e^{i_{n-k}}(e_{l_1}, \dots, e_{l_k}, e_{i_1}, \dots, e_{i_{n-k}}) $

=

$ \sum_{i_1 < \cdots < i_{n-k}} \gamma_{i_1 \dots i_{n-k}}\, \epsilon(\sigma) $

where $\epsilon(\sigma)$ denotes the sign of the permutation $\sigma$.
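As a sanity check on the sign rule used above (this is my own numerical experiment, not from Naber's text; `wedge_eval` and `sign` are made-up helper names), the evaluation of a wedge of dual basis covectors at basis vectors is the determinant of the pairing matrix $(\delta_{u_r, w_s})$, so permuting the arguments picks up exactly $\epsilon(\sigma)$:

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation given as a tuple of 0..m-1."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def wedge_eval(upper, lower):
    """Evaluate e^{u_1} ∧ ... ∧ e^{u_m} at (e_{w_1}, ..., e_{w_m}):
    the determinant of the 0/1 pairing matrix (delta_{u_r, w_s}),
    computed via the Leibniz formula (fine for small m)."""
    m = len(upper)
    total = 0
    for p in permutations(range(m)):
        if all(upper[r] == lower[p[r]] for r in range(m)):
            total += sign(p)
    return total

# e^1 ∧ e^2 ∧ e^3 (e_1, e_2, e_3) = 1
print(wedge_eval((1, 2, 3), (1, 2, 3)))   # 1
# swapping two arguments picks up the sign of the swap
print(wedge_eval((1, 2, 3), (2, 1, 3)))   # -1
# an unmatched covector makes the evaluation vanish
print(wedge_eval((1, 2), (1, 3)))         # 0
```

The last line is the mechanism behind the text's claim: a term can only survive the evaluation when every covector index finds a matching argument.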

So, does this look OK so far? If so, I'm unsure about how to simplify this further.

Update: OK, I think I've got it; any constructive criticism is most welcome.

Note (and I probably should have mentioned this earlier) that $\gamma_{i_1 \dots i_{n-k}} := \gamma(e_{i_1}, \dots, e_{i_{n-k}})$, where $\gamma$ is an alternating $(n-k)$-form. Let $\rho := \sigma^{-1}$ and note that the last expression can be manipulated further to yield

$ \sum_{i_1 < \cdots < i_{n-k}} \gamma_{i_1 \dots i_{n-k}}\, \epsilon(\sigma) = \sum_{i_1 < \cdots < i_{n-k}} \gamma(e_{\rho(j_1)}, \dots, e_{\rho(j_{n-k})})\, \epsilon(\sigma) = \sum_{i_1 < \cdots < i_{n-k}} \gamma_{j_1 \dots j_{n-k}}\, \epsilon(\sigma)\epsilon(\rho) $

Since $\sigma$ and $\rho = \sigma^{-1}$ always have the same sign, $\epsilon(\sigma)\epsilon(\rho) = 1$. Moreover, only the term with $i_s = j_s$ for each $s$ actually contributes: for any other choice of summation indices some $e^{i_s}$ finds no matching argument, so the wedge product vanishes at the given point. Therefore $ \sum_{i_1 < \cdots < i_{n-k}} \gamma_{j_1 \dots j_{n-k}}\, \epsilon(\sigma)\epsilon(\rho) = \gamma_{j_1 \dots j_{n-k}}, $ which proves the claim.
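To convince myself the overall claim holds, here is a small numerical check (again my own sketch, not from the text) with $n = 4$, $k = 2$, $l = (1, 2)$, $j = (3, 4)$, and arbitrary coefficients; `wedge_eval`, `sign`, and `gamma` are made-up names:

```python
from itertools import combinations, permutations

def sign(p):
    """Sign of a permutation given as a tuple of 0..m-1."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def wedge_eval(upper, lower):
    # e^{u_1} ∧ ... ∧ e^{u_m} (e_{w_1}, ..., e_{w_m}) = det(delta_{u_r, w_s})
    m = len(upper)
    return sum(sign(p) for p in permutations(range(m))
               if all(upper[r] == lower[p[r]] for r in range(m)))

n, k = 4, 2
l = (1, 2)   # fixed strictly increasing multi-index of length k
j = (3, 4)   # target multi-index, disjoint from l

# arbitrary test coefficients gamma_{i_1 i_2}, indexed by increasing pairs
gamma = {i: 10 * i[0] + i[1] for i in combinations(range(1, n + 1), n - k)}

# evaluate the sum at (e_{l_1}, e_{l_2}, e_{j_1}, e_{j_2}):
# every term except i = j vanishes, leaving exactly gamma_j
value = sum(gamma[i] * wedge_eval(l + i, l + j)
            for i in combinations(range(1, n + 1), n - k))
print(value == gamma[j])   # True
```

Setting the left-hand side of the original equation to zero therefore forces $\gamma_{j_1 \dots j_{n-k}} = 0$ for each such $j$, as the text claims.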

  • @Sigur I believe I have it now. (2012-07-04)
