To prove the relation, start by noticing that the space of skew-symmetric bilinear forms (also known as $2$-forms) on a $2$-dimensional vector space is one-dimensional. This is a special case of the general rule that the space of $p$-forms on an $n$-dimensional vector space has dimension $\binom{n}{p}$, but we can also see it quite directly:
Let $u,v\in V$ be nonzero vectors and $\alpha,\beta$ nonzero $2$-forms on $V$. First note that if $u$ and $v$ are linearly dependent, any $2$-form on them vanishes, because then $u=\lambda v$, and $\alpha(u,v) = \alpha(\lambda v,v) = \lambda\,\alpha(v,v)=0$. So we may assume without loss of generality that $u$ and $v$ are linearly independent. Since $V$ has dimension $2$, $u$ and $v$ then form a basis of $V$, and any vector $x$ can therefore be written as $x=\lambda u+\mu v$ with real numbers $\lambda$, $\mu$.
Now applying the two $2$-forms to the vectors $x_1=\lambda_1 u + \mu_1 v$ and $x_2=\lambda_2 u + \mu_2 v$ gives, by linearity and skew-symmetry:
$ \begin{align} \alpha(x_1,x_2) &= \lambda_1\lambda_2\,\underbrace{\alpha(u,u)}_{=0} + \lambda_1\mu_2\,\alpha(u,v) + \mu_1\lambda_2\,\underbrace{\alpha(v,u)}_{=-\alpha(u,v)} + \mu_1\mu_2\,\underbrace{\alpha(v,v)}_{=0} = (\lambda_1\mu_2-\lambda_2\mu_1)\,\alpha(u,v)\\ \beta(x_1,x_2) &= \lambda_1\lambda_2\,\beta(u,u) + \lambda_1\mu_2\,\beta(u,v) + \mu_1\lambda_2\,\beta(v,u) + \mu_1\mu_2\,\beta(v,v) = (\lambda_1\mu_2-\lambda_2\mu_1)\,\beta(u,v) \end{align} $
Since $x_1$ and $x_2$ are arbitrary vectors, and $\alpha(u,v)$ and $\beta(u,v)$ are just numbers independent of $x_1$ and $x_2$, this means that $\alpha$ and $\beta$ differ only by a constant factor.
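If you want to see this one-dimensionality concretely, here is a minimal numerical sanity check (a sketch only: it assumes $V=\mathbb{R}^2$ and represents a $2$-form $\alpha$ by a skew-symmetric matrix $A$ via $\alpha(u,v)=u^{\mathsf T}Av$; the helper names are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-form on R^2 can be represented by a skew-symmetric 2x2 matrix A,
# acting as alpha(u, v) = u^T A v.
def random_two_form():
    c = rng.standard_normal()
    return np.array([[0.0, c], [-c, 0.0]])

alpha, beta = random_two_form(), random_two_form()

# Evaluate both forms on random pairs of vectors: the ratio
# alpha(x1, x2) / beta(x1, x2) comes out as the same constant every time.
for _ in range(5):
    x1, x2 = rng.standard_normal(2), rng.standard_normal(2)
    print((x1 @ alpha @ x2) / (x1 @ beta @ x2))
```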
Now, since the space of $2$-forms has dimension $1$, it suffices to show that there are two $2$-forms $g_1$ and $g_2$ so that $f(x,y,z,w)=g_1(x,z)g_2(y,w)$.
Now $f$ is quite obviously linear in each of its arguments, i.e. multilinear. Therefore it can be written in the form $f(x,y,z,w) = \sum_k c_k\, \alpha_k(x)\, \beta_k(y)\, \gamma_k(z)\, \delta_k(w)$ where the $c_k$ are real constants and the $\alpha_k$, $\beta_k$, $\gamma_k$, $\delta_k$ are linear functions (also known as $1$-forms).
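One explicit way to obtain such a decomposition (not the only one): expand each argument in a basis $\{e_1,e_2\}$ with dual $1$-forms $e^1,e^2$ (defined below by $e^i(e_j)=\delta^i_j$) and use linearity in each slot, which gives $f(x,y,z,w) = \sum_{i,j,k,l} f(e_i,e_j,e_k,e_l)\,e^i(x)\,e^j(y)\,e^k(z)\,e^l(w)$, so the constants $c_k$ can be taken to be the $16$ numbers $f(e_i,e_j,e_k,e_l)$.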
Now we can write the product $\alpha_k(x)\gamma_k(z)$ as the sum of its symmetric and skew-symmetric parts, and the same for $\beta_k(y)\delta_k(w)$: $ \begin{align} \alpha_k(x)\gamma_k(z) &= s_{\alpha_k,\gamma_k}(x,z) + a_{\alpha_k,\gamma_k}(x,z)\\ \beta_k(y)\delta_k(w) &= s_{\beta_k,\delta_k}(y,w) + a_{\beta_k,\delta_k}(y,w)\\ s_{\eta,\zeta}(u,v) &= \frac{\eta(u)\zeta(v)+\zeta(u)\eta(v)}{2}\\ a_{\eta,\zeta}(u,v) &= \frac{\eta(u)\zeta(v)-\zeta(u)\eta(v)}{2} \end{align} $ Note that for $a_{\eta,\zeta}$ there is a standard notation: $(\eta\wedge\zeta)(u,v) = \eta(u)\zeta(v)-\zeta(u)\eta(v)$, so that $a_{\eta,\zeta} = \frac12\,\eta \wedge \zeta$. The product $\eta\wedge\zeta$ is called the wedge product of $\eta$ and $\zeta$.
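The symmetric/skew-symmetric split and the wedge product are easy to mirror in code; a small sketch (same caveats as above: $V=\mathbb{R}^2$, a $1$-form represented by its coefficient vector, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(1)

# Represent a 1-form on R^2 by its coefficient vector: eta(u) = eta @ u.
eta, zeta = rng.standard_normal(2), rng.standard_normal(2)

def s(u, v):  # symmetric part of the product eta(u) * zeta(v)
    return ((eta @ u) * (zeta @ v) + (zeta @ u) * (eta @ v)) / 2

def a(u, v):  # skew-symmetric part of the same product
    return ((eta @ u) * (zeta @ v) - (zeta @ u) * (eta @ v)) / 2

def wedge(u, v):  # (eta ∧ zeta)(u, v)
    return (eta @ u) * (zeta @ v) - (zeta @ u) * (eta @ v)

u, v = rng.standard_normal(2), rng.standard_normal(2)
print(np.isclose((eta @ u) * (zeta @ v), s(u, v) + a(u, v)))  # product = s + a
print(np.isclose(a(u, v), wedge(u, v) / 2))                   # a = (1/2) eta ∧ zeta
print(np.isclose(wedge(u, v), -wedge(v, u)))                  # skew-symmetry
```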
Using these relations, we can write $f$ as $f(x,y,z,w) = \sum_k c_k (s_{\alpha_k,\gamma_k}(x,z) + a_{\alpha_k,\gamma_k}(x,z)) (s_{\beta_k,\delta_k}(y,w) + a_{\beta_k,\delta_k}(y,w))$ or, split into four terms: $f(x,y,z,w) = \sum_k c_k\,s_{\alpha_k,\gamma_k}(x,z)s_{\beta_k,\delta_k}(y,w) + \sum_k c_k\,s_{\alpha_k,\gamma_k}(x,z)a_{\beta_k,\delta_k}(y,w) + \sum_k c_k\,a_{\alpha_k,\gamma_k}(x,z)s_{\beta_k,\delta_k}(y,w) + \sum_k c_k\,a_{\alpha_k,\gamma_k}(x,z)a_{\beta_k,\delta_k}(y,w)$ Note that the first term is symmetric under exchange of $x$ with $z$ and also under exchange of $y$ with $w$, the second term is symmetric under exchange of $x$ and $z$ but skew-symmetric under exchange of $y$ and $w$, and so on.
One easily checks that $f$ is skew-symmetric under exchange of $x$ with $z$, and also of $y$ with $w$. Therefore the first three of the terms above must vanish, because each is symmetric in at least one of those pairs. Therefore we have (now using the standard notation for the skew-symmetric products) $f(x,y,z,w) = \sum_k \frac{c_k}{4} (\alpha_k \wedge \gamma_k)(x,z) (\beta_k \wedge \delta_k)(y,w)$ However, as we have seen, since $V$ has dimension $2$, all $2$-forms are multiples of each other; therefore for an arbitrary non-zero $2$-form $\omega$, there are real numbers $\lambda_k$ and $\mu_k$ so that $ \begin{align} \alpha_k \wedge \gamma_k &= \lambda_k \omega\\ \beta_k \wedge \delta_k &= \mu_k \omega \end{align} $ Therefore we can write $f$ as $f(x,y,z,w) = \sum_k \frac{c_k}{4}\,\lambda_k\,\omega(x,z)\,\mu_k\,\omega(y,w) = \left(\sum_k \frac{c_k\lambda_k\mu_k}{4}\right)\omega(x,z)\,\omega(y,w)$ Note that the pre-factor in front of $\omega(x,z)\,\omega(y,w)$ is a pure number that doesn't depend on the arguments $x,y,z,w$.
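The skew-symmetry of $f$ used at the start of this step can also be verified numerically. The sketch below assumes $f$ is the expression from the question, $f(x,y,z,w) = \langle x,y\rangle\langle z,w\rangle - \langle x,w\rangle\langle y,z\rangle$, with the standard inner product on $\mathbb{R}^2$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed definition of f, taken from the question:
# f(x, y, z, w) = <x, y><z, w> - <x, w><y, z>
def f(x, y, z, w):
    return (x @ y) * (z @ w) - (x @ w) * (y @ z)

x, y, z, w = (rng.standard_normal(2) for _ in range(4))
print(np.isclose(f(x, y, z, w), -f(z, y, x, w)))  # skew under x <-> z
print(np.isclose(f(x, y, z, w), -f(x, w, z, y)))  # skew under y <-> w
```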
Now we define $g(u,v) = \sqrt{\left|\sum_k \frac{c_k\lambda_k\mu_k}{4}\right|}\;\omega(u,v)$. Then obviously either $f(x,y,z,w) = g(x,z)g(y,w)$ or $f(x,y,z,w)=-g(x,z)g(y,w)$.
To determine which of the two equations holds, consider the case $x=y$ and $z=w$ with $x$ and $z$ linearly independent. The left hand side then reads $\langle x,x\rangle\langle z,z\rangle - \langle x,z\rangle^2$, which by the Cauchy-Schwarz inequality is $\ge 0$, and indeed $> 0$ for linearly independent vectors. On the right hand side, the first alternative gives $g(x,z)^2 \ge 0$, while the second would give $-g(x,z)^2 \le 0$, which cannot match a strictly positive left hand side. Therefore we have proved that indeed $f(x,y,z,w) = g(x,z)g(y,w)$.
Now to explicitly write down the $g$, introduce an orthonormal basis $\{e_1,e_2\}$ of $V$. Then we can define the dual basis $\{e^1,e^2\}$ of $V^*$, the set of $1$-forms (linear functions) on $V$, through the relation $e^i(e_j)=\delta^i_j$ (where $\delta^i_j$ is the Kronecker delta, which is $1$ for $i=j$ and $0$ otherwise). Note that $e^i$ just maps a vector to its $i$-th component when written in the basis $e_i$.
Now we can write $g$ as $\lambda\, e^1\wedge e^2$, so that $f(x,y,z,w)=\lambda^2(e^1\wedge e^2)(x,z)\,(e^1\wedge e^2)(y,w)$ To determine $\lambda$, we just use $x=y=e_1$ and $z=w=e_2$. One easily checks that $f(e_1,e_1,e_2,e_2) = 1$. On the other hand, $(e^1\wedge e^2)(e_1,e_2) = e^1(e_1)e^2(e_2) - e^1(e_2)e^2(e_1) = 1$. Therefore $\lambda^2=1$, and choosing the sign $\lambda=1$, we get $g = e^1\wedge e^2$.
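As a last sanity check of the whole identity (same assumptions as before: $V=\mathbb{R}^2$ with the standard inner product and $f$ as in the question), note that $g = e^1\wedge e^2$ evaluated on two vectors is just the determinant of their component columns:

```python
import numpy as np

rng = np.random.default_rng(3)

def g(u, v):  # g = e^1 ∧ e^2: the 2x2 determinant of the component columns
    return np.linalg.det(np.column_stack([u, v]))

def f(x, y, z, w):  # again the f assumed from the question
    return (x @ y) * (z @ w) - (x @ w) * (y @ z)

x, y, z, w = (rng.standard_normal(2) for _ in range(4))
print(np.isclose(f(x, y, z, w), g(x, z) * g(y, w)))  # prints True
```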
A few remarks at the end:
If you write the vectors $u$ and $v$ in the standard basis, $(e^1\wedge e^2)(u,v)$ gives the determinant of the $2\times2$ matrix whose columns are the coefficients of $u$ and $v$. As is well known, this determinant gives the (signed) area of the parallelogram spanned by $u$ and $v$.
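A tiny numerical illustration (standard basis of $\mathbb{R}^2$ assumed):

```python
import numpy as np

u, v = np.array([3.0, 0.0]), np.array([1.0, 2.0])
# Signed area of the parallelogram spanned by u and v:
print(np.linalg.det(np.column_stack([u, v])))  # 6.0 = base 3 times height 2
```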
This can be generalized to higher dimensions. In an $n$-dimensional vector space, you need $n$ vectors to span a parallelotope (the $n$-dimensional generalization of the parallelogram and parallelepiped). In more than two dimensions, you can define wedge products of more than two linear functions (formally you could also do that in $2$ dimensions, but there all those higher products are zero). A wedge product of $k$ linear functions is a completely skew-symmetric multilinear function of $k$ vectors.
It turns out that for $n$-dimensional vector spaces, the space of $n$-forms (wedge products of $n$ $1$-forms) always has dimension $1$, so every $n$-form can again be written as $\lambda\omega$ where $\omega = e^1\wedge e^2\wedge\ldots\wedge e^n$. Applying $\omega$ to $n$ vectors, you again get the determinant of the $n\times n$ matrix whose columns are formed by the components of the vectors in the basis $\{e_i\}$. That determinant gives the (signed) $n$-dimensional volume of the parallelotope spanned by the $n$ vectors. Therefore $\omega$ is also called the volume form.
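To make this concrete, here is a sketch that expands $\omega = e^1\wedge\ldots\wedge e^n$ as the alternating sum over permutations (the Leibniz formula) and compares it with NumPy's determinant; the dimension $n=4$ is chosen arbitrarily:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(4)
n = 4
vectors = [rng.standard_normal(n) for _ in range(n)]

def sign(perm):
    # Parity of a permutation, via the number of inversions.
    inv = sum(perm[i] > perm[j]
              for i in range(n) for j in range(i + 1, n))
    return -1 if inv % 2 else 1

def omega(*vs):
    # omega(v_1, ..., v_n) = sum over permutations p of
    # sign(p) * e^{p(1)}(v_1) * ... * e^{p(n)}(v_n)
    return sum(sign(p) * np.prod([vs[i][p[i]] for i in range(n)])
               for p in permutations(range(n)))

print(np.isclose(omega(*vectors),
                 np.linalg.det(np.column_stack(vectors))))  # prints True
```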