
Let $V$ be a real vector space of dimension $2$, and let $\langle\ \ ,\ \ \rangle$ be an inner product on $V$. Define $f:V^4 \to \mathbb{R}$ by $$f(x,y,z,w):=\langle x,y \rangle \langle z,w \rangle- \langle x,w \rangle \langle z,y \rangle$$

Show there's a skew-symmetric bilinear form $g:V^2 \to \mathbb{R}$ such that $$f(x,y,z,w)=g(x,z)g(y,w)$$

My thought: Let $z=y,w=x$ to get $f(x,y,y,x)= \langle x,y \rangle ^2-\langle x,x \rangle \langle y,y \rangle$. If $g$ exists, then $f(x,y,y,x)=g(x,y)g(y,x)=-g(x,y)^2$ (notice that $g$ is skew-symmetric). Equating both identities we get $g(x,y)^2= -\langle x,y \rangle ^2+\langle x,x \rangle \langle y,y \rangle$. Here is where I can't proceed any further...

  • Hint: How many linearly independent skew-symmetric bilinear forms are there on a vector space of dimension $2$? (2012-08-23)
  • Choose an orthonormal basis. What is $f(e_1,e_1,e_2,e_2)$? (2012-08-23)
  • It's $1$, so $g(x,y)=\det(x,y)$. Thanks! Just another query: is $\det(x,y)$ the standard notation in abstract algebra for a basis-independent determinant? (2012-08-23)
  • Actually, the standard way to write this is $e^1\wedge e^2$. The $e^i$ are the covectors (linear forms) dual to the basis $e_1$, $e_2$ (that is, $e^i(e_j)=\delta^i_j$, where the right-hand side is the [Kronecker delta](https://en.wikipedia.org/wiki/Kronecker_delta)), and $\wedge$ is the exterior product or wedge product, which is used to build $n$-forms (skew-symmetric $n$-linear functions on the vector space). In particular, if $\alpha$ and $\beta$ are $1$-forms (linear functions), $(\alpha\wedge\beta)(u,v)=\alpha(u)\beta(v)-\beta(u)\alpha(v)$. (2012-08-24)
  • Of course $e^1\wedge e^2$ is the "determinant form" (the proper name is "volume form") only for a two-dimensional vector space (the definition is of course general, but in more dimensions it doesn't give the determinant). For a general $n$-dimensional vector space, it would be $e^1\wedge e^2\wedge\dots\wedge e^n$. (2012-08-24)
  • Thanks! I understand that picking an orthonormal basis is necessary to construct $g$ explicitly, but I wonder: is there a "basis-independent" way to deduce the existence of $g$? (Is it possible to do this just by using your hint at the start, that the dimension of $\Lambda^{2}V^{*}$ is $1$?) If so, one can avoid the "determinant form", and the argument may be easily generalised to higher dimensions. (2012-08-24)
  • One could start by noting that $f$ is skew-symmetric in the first and third arguments, and also in the second and fourth. I think that this (plus linearity of $f$ in all arguments) should be enough to conclude that $f$ is the product of two $2$-forms. Let $g_1$ and $g_2$ be $2$-forms whose product is $f$. Since the space of $2$-forms has dimension $1$, $g_2=\lambda g_1$ for some number $\lambda\ne 0$. Therefore, if you define $g=\sqrt{\lvert\lambda\rvert}\,g_1$, you have $f(x,y,z,w) = \pm g(x,z)g(y,w)$. To get the sign, you only have to check that $f(x,x,z,z)$ is nonnegative. (2012-08-24)
  • True, and this can be naturally generalised to any rank-$2n$ tensor of similar form to $f$. Thanks for all the comments! (2012-08-24)
  • @celtschk Please consider converting your comments into an answer, so that this question gets removed from the [unanswered tab](http://meta.math.stackexchange.com/q/3138). If you do so, it is helpful to post it to [this chat room](http://chat.stackexchange.com/rooms/9141) to make people aware of it (and attract some upvotes). For further reading on the issue of too many unanswered questions, see [here](http://meta.stackexchange.com/q/143113), [here](http://meta.math.stackexchange.com/q/1148) or [here](http://meta.math.stackexchange.com/a/9868). (2013-06-22)
  • @JulianKuelshammer: OK, I've now expanded the comments into a full answer (which actually got quite long, because I expanded all the hints into the actual calculations). (2013-06-22)

2 Answers


To prove the relation, start by noticing that the space of skew-symmetric bilinear forms (also known as $2$-forms) on a $2$-dimensional vector space is one-dimensional. This is a special case of the general rule that the space of $p$-forms on an $n$-dimensional vector space has dimension $\binom{n}{p}$, but we can also see it quite directly:

Let $u,v\in V$ and let $\alpha,\beta$ be $2$-forms on $V$. Assume that none of the vectors or $2$-forms is $0$. First note that if $u$ and $v$ are linearly dependent, any $2$-form on them vanishes, because then $u=\lambda v$ and $\alpha(u,v) = \alpha(\lambda v,v) = \lambda\,\alpha(v,v)=0$. Therefore assume wlog that $u$ and $v$ are linearly independent. Since the vector space has dimension $2$, $u$ and $v$ thus form a basis of $V$, and any vector $x$ can therefore be written as $x=\lambda u+\mu v$, with real numbers $\lambda$, $\mu$.

Now applying the two $2$-forms to the vectors $x_1=\lambda_1 u + \mu_1 v$ and $x_2=\lambda_2 u + \mu_2 v$ gives, by linearity and skew-symmetry:

$$\alpha(x_1,x_2) = \lambda_1\lambda_2\,\underbrace{\alpha(u,u)}_{=0} + \lambda_1\mu_2\,\alpha(u,v) + \mu_1\lambda_2\,\underbrace{\alpha(v,u)}_{=-\alpha(u,v)} + \mu_1\mu_2\,\underbrace{\alpha(v,v)}_{=0} = (\lambda_1\mu_2-\lambda_2\mu_1)\,\alpha(u,v)$$ $$\beta(x_1,x_2) = \lambda_1\lambda_2\,\underbrace{\beta(u,u)}_{=0} + \lambda_1\mu_2\,\beta(u,v) + \mu_1\lambda_2\,\underbrace{\beta(v,u)}_{=-\beta(u,v)} + \mu_1\mu_2\,\underbrace{\beta(v,v)}_{=0} = (\lambda_1\mu_2-\lambda_2\mu_1)\,\beta(u,v)$$

Since $x_1$ and $x_2$ are arbitrary vectors, and $\alpha(u,v)$ and $\beta(u,v)$ are just numbers independent of $x_1$ and $x_2$, this means that $\alpha$ and $\beta$ only differ by a factor.
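The computation above is easy to check numerically. The sketch below (Python/NumPy; the representation of a skew-symmetric bilinear form by a skew matrix $A$, with $\alpha(x,y)=x^{T}Ay$, and all variable names are illustrative assumptions, not notation from the answer) verifies $\alpha(x_1,x_2)=(\lambda_1\mu_2-\lambda_2\mu_1)\,\alpha(u,v)$ on random data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Skew-symmetric bilinear form alpha(x, y) = x^T A y on R^2,
# represented by a skew-symmetric 2x2 matrix A (illustrative choice).
a = rng.standard_normal()
A = np.array([[0.0, a], [-a, 0.0]])

def alpha(x, y):
    return x @ A @ y

# A (generically independent) basis u, v, and two vectors expanded in it
u, v = rng.standard_normal(2), rng.standard_normal(2)
l1, m1, l2, m2 = rng.standard_normal(4)
x1 = l1 * u + m1 * v
x2 = l2 * u + m2 * v

# alpha(x1, x2) = (l1*m2 - l2*m1) * alpha(u, v)
assert np.isclose(alpha(x1, x2), (l1 * m2 - l2 * m1) * alpha(u, v))
```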

Now, since the space of $2$-forms has dimension $1$, it suffices to show that there are two $2$-forms $g_1$ and $g_2$ so that $f(x,y,z,w)=g_1(x,z)g_2(y,w)$.

Now $f$ is quite obviously linear in each of its arguments. Therefore it can be written in the form $$f(x,y,z,w) = \sum_k c_k \alpha_k(x) \beta_k(y) \gamma_k(z) \delta_k(w)$$ where $c_k$ are real constants and $\alpha_k$, $\beta_k$, $\gamma_k$, $\delta_k$ are linear functions (also known as $1$-forms).

Now we can write the product $\alpha_k(x)\gamma_k(z)$ as the sum of its symmetric and skew-symmetric parts, and the same for $\beta_k(y)\delta_k(w)$: $$ \begin{align} \alpha_k(x)\gamma_k(z) &= s_{\alpha_k,\gamma_k}(x,z) + a_{\alpha_k,\gamma_k}(x,z)\\ \beta_k(y)\delta_k(w) &= s_{\beta_k,\delta_k}(y,w) + a_{\beta_k,\delta_k}(y,w)\\ s_{\eta,\zeta}(u,v) &= \frac{\eta(u)\zeta(v)+\zeta(u)\eta(v)}{2}\\ a_{\eta,\zeta}(u,v) &= \frac{\eta(u)\zeta(v)-\zeta(u)\eta(v)}{2} \end{align} $$ Note that for $a_{\eta,\zeta}$ there exists a standard notation: $$(\eta\wedge\zeta)(u,v) = \eta(u)\zeta(v)-\zeta(u)\eta(v)$$ so that $$a_{\eta,\zeta} = \frac12\,\eta \wedge \zeta$$ The product $\eta\wedge\zeta$ is called the wedge product of $\eta$ and $\zeta$.
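A minimal sketch of the wedge product of two $1$-forms, assuming $1$-forms are represented as covectors acting via the dot product (the function and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def wedge(eta, zeta):
    # (eta ∧ zeta)(u, v) = eta(u)*zeta(v) - zeta(u)*eta(v),
    # with the 1-forms eta, zeta represented as covectors.
    def two_form(u, v):
        return (eta @ u) * (zeta @ v) - (zeta @ u) * (eta @ v)
    return two_form

eta, zeta = rng.standard_normal(2), rng.standard_normal(2)
om = wedge(eta, zeta)

u, v = rng.standard_normal(2), rng.standard_normal(2)
# Skew symmetry: swapping the arguments flips the sign
assert np.isclose(om(u, v), -om(v, u))
# ... and the form vanishes on equal arguments
assert np.isclose(om(u, u), 0.0)
```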

Using those relations, we can write $f$ as $$f(x,y,z,w) = \sum_k c_k \bigl(s_{\alpha_k\gamma_k}(x,z) + a_{\alpha_k\gamma_k}(x,z)\bigr) \bigl(s_{\beta_k\delta_k}(y,w) + a_{\beta_k\delta_k}(y,w)\bigr)$$ or, split into four terms: $$f(x,y,z,w) = \sum_k c_k\,s_{\alpha_k\gamma_k}(x,z)s_{\beta_k\delta_k}(y,w) + \sum_k c_k\,s_{\alpha_k\gamma_k}(x,z)a_{\beta_k\delta_k}(y,w) + \sum_k c_k\,a_{\alpha_k\gamma_k}(x,z)s_{\beta_k\delta_k}(y,w) + \sum_k c_k\,a_{\alpha_k\gamma_k}(x,z)a_{\beta_k\delta_k}(y,w)$$ Note that the first term is symmetric under exchange of $x$ with $z$ and also under exchange of $y$ with $w$, the second term is symmetric under exchange of $x$ and $z$ but skew-symmetric under exchange of $y$ and $w$, and so on.

One easily checks that $f$ is skew-symmetric under exchange of $x$ with $z$, and also of $y$ with $w$. Therefore the first three of the terms above must vanish, because they are symmetric in at least one of those pairs. Therefore we have (now using the standard notation for the antisymmetric products) $$f(x,y,z,w) = \sum_k \frac{c_k}{4} (\alpha_k \wedge \gamma_k)(x,z)\, (\beta_k \wedge \delta_k)(y,w)$$ However, as we have seen, since $V$ has dimension $2$, all $2$-forms are multiples of each other; therefore for an arbitrary non-zero $2$-form $\omega$, there are real numbers $\lambda_k$ and $\mu_k$ so that $$ \begin{align} \alpha_k \wedge \gamma_k &= \lambda_k \omega\\ \beta_k \wedge \delta_k &= \mu_k \omega \end{align} $$ Therefore we can write $f$ as $$f(x,y,z,w) = \sum_k \frac{c_k}{4}\,\lambda_k\mu_k\,\omega(x,z)\,\omega(y,w) = \left(\sum_k \frac{c_k}{4}\lambda_k\mu_k\right)\omega(x,z)\,\omega(y,w)$$ Note that the pre-factor of $\omega(x,z)\,\omega(y,w)$ is a pure number that doesn't depend on the arguments $x,y,z,w$.

Now we define $g = \sqrt{\left|\sum_k \frac{c_k}{4}\lambda_k\mu_k\right|}\;\omega$. Then obviously either $f(x,y,z,w) = g(x,z)g(y,w)$ or $f(x,y,z,w)=-g(x,z)g(y,w)$.

To determine which of the two equations holds, consider the case $x=y$ and $z=w$. The left hand side then reads $\langle x,x\rangle\langle z,z\rangle - \langle x,z\rangle^2$, which by the Cauchy–Schwarz inequality is always $\ge 0$, and strictly positive when $x$ and $z$ are linearly independent. With the minus sign, the right hand side would be $-g(x,z)^2\le 0$, which is impossible. Therefore we have proved that indeed $f(x,y,z,w) = g(x,z)g(y,w)$ for a skew-symmetric bilinear form $g$.

Now to explicitly write down the $g$, introduce an orthonormal basis $\{e_1,e_2\}$ of $V$. Then we can define the dual basis $\{e^1,e^2\}$ of $V^*$, the set of $1$-forms (linear functions) on $V$, through the relation $e^i(e_j)=\delta^i_j$ (where $\delta^i_j$ is the Kronecker delta, which is $1$ for $i=j$ and $0$ otherwise). Note that $e^i$ just maps a vector to its $i$-th component when written in the basis $e_i$.

Now we can write $g$ as $\lambda\, e^1\wedge e^2$, so that $$f(x,y,z,w)=\lambda^2(e^1\wedge e^2)(x,z)\,(e^1\wedge e^2)(y,w)$$ To determine $\lambda$, we just use $x=y=e_1$ and $z=w=e_2$. One easily checks that $f(e_1,e_1,e_2,e_2) = 1$. On the other hand, $(e^1\wedge e^2)(e_1,e_2) = e^1(e_1)e^2(e_2) - e^1(e_2)e^2(e_1) = 1$. Therefore we get $\lambda^2=1$, and choosing $\lambda=1$ gives $$g = e^1\wedge e^2$$
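Since $g=e^1\wedge e^2$ is just the $2\times2$ determinant in an orthonormal basis, the final identity can be sanity-checked numerically; a sketch (function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x, y, z, w):
    # f(x,y,z,w) = <x,y><z,w> - <x,w><z,y>, standard inner product on R^2
    return (x @ y) * (z @ w) - (x @ w) * (z @ y)

def g(u, v):
    # g = e^1 ∧ e^2: the determinant of the 2x2 matrix with columns u, v
    return u[0] * v[1] - u[1] * v[0]

x, y, z, w = (rng.standard_normal(2) for _ in range(4))
assert np.isclose(f(x, y, z, w), g(x, z) * g(y, w))
```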

A few remarks at the end:

If you write the vectors $u$ and $v$ in the standard basis, $(e^1\wedge e^2)(u,v)$ gives the determinant of the $2\times2$ matrix whose columns are the coefficients of $u$ and $v$. As is well known, this determinant gives the (signed) area of the parallelogram spanned by $u$ and $v$.

This can be generalized to higher dimensions. In $n$-dimensional vector spaces, you need $n$ vectors to span a parallelotope (the $n$-dimensional generalization of the parallelogram and parallelepiped). In more than two dimensions, you can define wedge products of more than two linear functions (well, formally you could also do that in $2$ dimensions, but there all those higher products are zero). A wedge product of $k$ linear functions is a completely skew-symmetric function of $k$ vectors.

It turns out that for $n$-dimensional vector spaces, the space of $n$-forms (spanned by wedge products of $n$ $1$-forms) always has dimension $1$, so any $n$-form can again be written as $\lambda\omega$ where $\omega = e^1\wedge e^2\wedge\dots\wedge e^n$. Now applying $\omega$ to $n$ vectors, you again get the determinant of the $n\times n$ matrix whose columns are formed by the components of the vectors in the basis $e_i$. That determinant gives the $n$-dimensional volume of the parallelotope spanned by the $n$ vectors. Therefore $\omega$ is also called the volume form.
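A minimal sketch of this last claim, assuming the Leibniz (permutation-sum) definition of $\omega=e^1\wedge\dots\wedge e^n$; it checks that applying $\omega$ to $n$ vectors agrees with the determinant (helper names are illustrative):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)

def perm_sign(p):
    # Sign of a permutation, computed from its inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def omega(*vectors):
    # e^1 ∧ ... ∧ e^n applied to n vectors: Leibniz sum over permutations
    n = len(vectors)
    return sum(perm_sign(p) * np.prod([vectors[i][p[i]] for i in range(n)])
               for p in permutations(range(n)))

n = 4
vecs = [rng.standard_normal(n) for _ in range(n)]
M = np.column_stack(vecs)  # columns are the vectors' components
assert np.isclose(omega(*vecs), np.linalg.det(M))
```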


This becomes a lot simpler using some Clifford algebra. The map $f$ can be seen as taking the form

$$f(x,y,z,w) = (x \cdot y)(z \cdot w) - (x \cdot w)(y \cdot z) = (w \wedge y)\cdot (x \wedge z)$$

The wedge product is exactly the key to defining this alternating bilinear form that you're looking for. If you feel like what I wrote above just comes out of thin air, it can be derived using a geometric product and looking at scalar terms. The geometric product is associative, and we can use this to our advantage. But what comes below is merely symbol pushing to derive the identity I have presented.

$$\langle wyxz \rangle_0 = (w \cdot y)(x \cdot z) + (w \wedge y) \cdot (x \wedge z) = (y \cdot x) (w \cdot z) + w \cdot [(y \wedge x) \cdot z]$$

Here, use an analogue of the BAC-CAB rule (this is, unfortunately, a tedious derivation I will not perform here).

$$(y \wedge x) \cdot z = (x \cdot z) y - (y \cdot z) x$$

Substituting this back into the previous equation yields

$$(w \cdot y)(x \cdot z) + (w \wedge y) \cdot (x \wedge z) = (y \cdot x)(w \cdot z) + (w \cdot y)(x \cdot z) - (y \cdot z)(x \cdot w)$$

The first term on the left cancels the second term on the right, and you're done.
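As a numerical sanity check of the identity $(x\cdot y)(z\cdot w)-(x\cdot w)(y\cdot z)=(w\wedge y)\cdot(x\wedge z)$, here is a sketch that represents bivectors by their antisymmetric component matrices; the sign convention $\langle AB\rangle_0=-\tfrac12\sum_{ij}A_{ij}B_{ij}$ for the scalar product of bivectors is an assumption of this snippet:

```python
import numpy as np

rng = np.random.default_rng(4)

def wedge_matrix(a, b):
    # Components of the bivector a ∧ b: (a ∧ b)_{ij} = a_i b_j - a_j b_i
    return np.outer(a, b) - np.outer(b, a)

def biv_dot(A, B):
    # Scalar part <A B>_0 of the geometric product of two bivectors,
    # in the convention with the minus sign (an assumption here).
    return -0.5 * np.sum(A * B)

n = 3
x, y, z, w = (rng.standard_normal(n) for _ in range(4))
lhs = (x @ y) * (z @ w) - (x @ w) * (y @ z)
rhs = biv_dot(wedge_matrix(w, y), wedge_matrix(x, z))
assert np.isclose(lhs, rhs)
```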