5

Let $V$ be an $n$-dimensional vector space and let $(v_1, \dots, v_n)$ denote any oriented basis for $V$. Also, let $g$ be an inner product on $V$ and let $G$ denote the Gram matrix of inner products $G = [g(v_i, v_j)]$. I am trying to show that if $v_j = A^k_je_k$, where $(e_k)$ denotes a basis that is orthonormal with respect to $g$, then $\det{(A^i_j)} = \sqrt{\det G}$.

I believe I have found a useful intermediate result, but I'm not really sure how to close the deal. For vectors $v_i$ and $v_j$ we have:

$$ g(v_i, v_j) = g(A^k_i e_k, A^r_j e_r) = A^k_iA^r_j\, g(e_k, e_r) = A^k_iA^r_j \delta_{kr} = \sum\limits_{m=1}^n A^m_iA^m_j = \langle A_i | A_j\rangle $$

where $A_k$ denotes the $k^{th}$ column of $A$ and $\langle\cdot | \cdot\rangle$ denotes the standard Euclidean inner product. Therefore, the matrix $G$ is given by

$$ G = [\langle A_i|A_j \rangle] $$
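As a sanity check, the identity I'm trying to prove does hold numerically (a throwaway numpy sketch of my own, identifying the $g$-orthonormal basis $e_k$ with the standard basis of $\mathbb{R}^n$, so that $g$ becomes the dot product in coordinates):

```python
# Hypothetical sanity check (all names mine): build G entrywise from the
# columns of a random A and compare sqrt(det G) with |det A|.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # column j holds the coordinates of v_j

# Gram matrix entry by entry: G_ij = <A_i | A_j>
G = np.array([[A[:, i] @ A[:, j] for j in range(n)] for i in range(n)])

assert np.isclose(np.sqrt(np.linalg.det(G)), abs(np.linalg.det(A)))
```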

At this point, I'm not sure what to do. I'm thinking there's some essential fact I need to know in order to continue.

So, my question is: am I on the right track, and if so, what should my next step be?

Edit: I updated this question to replace the assumption that the $e_i$ are the standard basis vectors with the assumption that the $e_i$ are orthonormal with respect to $g$.

  • 0
    Two nitpicks: First, it's the [Gram determinant](http://en.wikipedia.org/wiki/Gramian_matrix#Gram_determinant) (after [Jørgen Pedersen Gram](http://en.wikipedia.org/wiki/Jørgen_Pedersen_Gram)), second: it is much better to use `\langle` and `\rangle` for inner products: compare $\langle x,y \rangle = < x, y >$ (since `<` and `>` are relations, this results in awkward spacing).2011-06-23
  • 0
    @Theo Buehler Thanks for fixing those things2011-06-23

1 Answer

7

In fact, you're almost done.

You have calculated that $G = A^{T}A$ (in coordinates with respect to the $g$-orthonormal basis $e_k$, $g$ is just the standard inner product!). Now $\det{G} = \det{(A^TA)} = \det{(A^T)}\det{(A)} = \det{(A)}^2$, and taking square roots gives what you want.
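If you want to see this step concretely, here is a minimal numeric illustration (my own numpy sketch, not part of the proof):

```python
# det(A^T A) = det(A^T) det(A) = det(A)^2, so sqrt(det G) recovers |det A|.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
G = A.T @ A                       # the Gram matrix in g-orthonormal coordinates

assert np.isclose(np.linalg.det(G), np.linalg.det(A) ** 2)
assert np.isclose(np.sqrt(np.linalg.det(G)), abs(np.linalg.det(A)))
```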


The (square root of the) Gram determinant is uniquely characterized by the following properties.

Let $v_{k} : V^{k} = V \times \cdots \times V \to \mathbb{R}_{\geq 0 }$ be a map satisfying

  1. $v_k (a_{1},\ldots, a_{i-1}, \lambda a_{i}, a_{i+1}, \ldots, a_{k}) = |\lambda| v_k(a_{1},\ldots, a_{k})$ for all $(a_{1},\ldots,a_k) \in V^{k}$, all $i$ and all $\lambda \in \mathbb{R}$.
  2. $v_{k}(a_{1}, \ldots, a_{i-1}, a_{i} + a_{j}, a_{i+1}, \ldots, a_k) = v_{k}(a_{1}, \ldots, a_{k})$ for all $i \neq j$.
  3. $v_{k}(a_{1},\ldots,a_{k}) = 1$ if the $a_{i}$ are orthonormal.

Then $v_{k}(A) = \sqrt{\det{(A^{T}A)}}$, where $A$ is the $n \times k$ matrix with columns $(a_1,\ldots, a_k)$. Clearly, the expression $A \mapsto \sqrt{\det{(A^TA)}}$ has the desired properties.
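The three properties are easy to spot-check numerically (an illustrative numpy sketch of my own; not part of the argument):

```python
# Spot check of properties 1-3 for v_k(A) = sqrt(det(A^T A)),
# with k = 3 vectors in R^5 (all names mine).
import numpy as np

def v_k(A):
    return np.sqrt(np.linalg.det(A.T @ A))

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))

# 1. scaling a column by lambda scales the value by |lambda|
B = A.copy(); B[:, 1] *= -2.5
assert np.isclose(v_k(B), 2.5 * v_k(A))

# 2. adding one column to another leaves the value unchanged
C = A.copy(); C[:, 0] += C[:, 2]
assert np.isclose(v_k(C), v_k(A))

# 3. an orthonormal system has value 1
Q, _ = np.linalg.qr(A)            # columns of Q are orthonormal
assert np.isclose(v_k(Q), 1.0)
```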


Added: At 3Sphere's request, I'm sketching an argument.

First of all, note that a function satisfying the three properties above allows us to perform the following operations while keeping track of the value of $v_{k}$:

  • Multiply a column by a scalar.
  • Add one column to another.

These are the two operations one needs to perform Gaussian elimination and Gram-Schmidt. If the vectors $a_1, \ldots, a_k$ are linearly dependent, then one column is a linear combination of the others; using properties 1. and 2. we can subtract that combination to make the column zero, hence $v_k (a_1,\ldots, a_k) = 0$ (by property 1. with $\lambda = 0$). Now Gram-Schmidt tells us that there is a way of transforming $(a_1,\ldots,a_k)$ into an orthonormal system spanning the same $k$-dimensional subspace, so $v_k (a_1, \ldots, a_k)$ is determined uniquely by property 3.
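Concretely, QR factorization is Gram-Schmidt in matrix form, and it exhibits the value that the properties force (again a numpy sketch of my own):

```python
# A = QR with Q's columns orthonormal and R upper triangular, so
# det(A^T A) = det(R^T R) = det(R)^2 and the properties force
# v_k(A) = |r_11 * ... * r_kk|.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3))
Q, R = np.linalg.qr(A)

assert np.isclose(np.sqrt(np.linalg.det(A.T @ A)),
                  abs(np.prod(np.diag(R))))
```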

Since the expression $v_k(A) = \sqrt{\det{(A^TA)}}$ has the three desired properties, we see that $v_k$ exists and is uniquely determined by these properties.

Of course, $v_k(A)$ is nothing but the $k$-dimensional volume of the parallelepiped spanned by $(a_1,\ldots,a_k)$, and the proof is essentially the same as one of the proofs of the existence and uniqueness of the determinant.
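For instance, for $k = 2$ vectors in $\mathbb{R}^3$, $v_2$ is the area of the parallelogram they span, which can be cross-checked against the cross product (my own numeric sketch):

```python
# sqrt(det(A^T A)) for two columns a1, a2 equals ||a1 x a2||,
# the area of the parallelogram spanned by a1 and a2.
import numpy as np

a1 = np.array([1.0, 2.0, 0.5])
a2 = np.array([-1.0, 0.0, 3.0])
A = np.column_stack([a1, a2])

gram_area = np.sqrt(np.linalg.det(A.T @ A))
cross_area = np.linalg.norm(np.cross(a1, a2))
assert np.isclose(gram_area, cross_area)
```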

The proof appears this way in the very nice German textbook Analysis 2 by K. Königsberger, but I don't think that there is a translation into other languages.

  • 0
    @Theo Buehler I guess what I missed (am missing) is that the matrix $G$ I have computed is actually $A^TA$. I'll have to ponder that a bit.2011-06-23
  • 0
    Well, $\langle x | y \rangle = x^T y$, no? Unfortunately I can't elaborate my answer further because the rendering problems make this virtually impossible for me at the moment. By the way, feel free to call me Theo (and @Theo suffices for pings).2011-06-23
  • 0
    @Theo No need for elaboration on this point; I see this now. Thanks for your help!2011-06-23
  • 0
@Theo Umm...well, now that I read over my entire argument again, I realized that I assume that $g(e_i, e_j) = \delta_{ij}$ and I'm not sure this is justified; even though the $e_i$'s are orthonormal, $g$ is an arbitrary inner product. How can I justify this step?2011-06-23
  • 0
Wait a minute, I misread this: I assumed that the $e_i$'s are *orthonormal with respect to* $g$ and that $A$ is given with respect to the $e_i$.2011-06-23
  • 0
If the basis $e_i$ is not $g$-orthonormal, let $f_i$ be a $g$-orthonormal basis and express the $e_i$ in terms of the $f_i$ via a matrix $F$. Apply the argument to $AF$ and you'll see that the result gets multiplied by $|\det{F}|$ (see the numeric sketch after this comment thread).2011-06-23
  • 0
@Theo I think actually what makes this work is that, as you indicate, it must be assumed that the $e_i$'s are orthonormal with respect to $g$ and not that they are the standard basis vectors.2011-06-23
  • 0
    That's exactly what I'm saying in my last comment. I think we agree now.2011-06-23
  • 0
@Theo Yep; I updated the question to reflect the clarification2011-06-23
  • 0
    @Theo As an aside, I've never before encountered the definition you give for a Gram determinant. Can you provide a reference that treats it from this perspective so I can read up on it? Thanks.2011-06-24
  • 0
    @3sphere: unfortunately, no, I don't know a reference off the top of my head. I'll add some details a bit later, okay?2011-06-24
  • 0
    @Theo Sounds good, thanks.2011-06-24
  • 0
    @3Sphere: Okay, done.2011-06-24
  • 0
    @Theo Thank you, sir!2011-06-24
  • 0
Dear Theo, a) I too am a great admirer of the two-volume textbook *Analysis* by the late Königsberger. I met him only once, in Oberwolfach, a very long time ago, and he looked like a nice, shy person. Did you know him? b) How do you know that you added 1183 characters to your edit? I often edit my posts and haven't noticed where the software gives you that information. Will you tell me the secret?2011-06-24
  • 0
    Dear @Georges, a) No, unfortunately I never met Königsberger personally. I made my first steps in understanding Analysis from working through his books and it is still one of the most elegant, precise and concise textbooks that I've ever read. b) The secret is to be a lazy person and not fill in the "Edit Summary" field, then the software takes over and provides this information "automagically". I don't know of another way that would give you this information on this site than submitting the post.2011-06-24
  • 0
    Thanks a lot, Theo. I had never considered not filling the "Edit Summary", but now...2011-06-24
  • 0
    @Georges: If you want to provide this information together with a further, more descriptive summary, one solution would be to submit your changes without filling out that field. You then have a five minute window given by the software to improve your edits that don't count as further editing. So you could immediately edit again and provide further information in the summary. This is not elegant and requires some further effort on your part, but nobody else would notice and the curious people would have the benefit of having both descriptions, the automatic one and yours.2011-06-24
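A quick numeric check of the change-of-basis factor discussed in the comments above (an illustrative numpy sketch; the matrix $F$ here maps $e$-coordinates to $f$-coordinates, a convention I chose for the sketch, and the factor $|\det F|$ comes out the same either way):

```python
# If the e_i are not g-orthonormal, let F carry e-coordinates to
# coordinates in a g-orthonormal basis f_k. Then the matrix of g in
# e-coordinates is F^T F, the Gram matrix is G = A^T (F^T F) A = (FA)^T (FA),
# and sqrt(det G) = |det F| * |det A|.
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))   # v_j in e-coordinates
F = rng.standard_normal((n, n))   # e-coordinates -> f-coordinates

G = A.T @ (F.T @ F) @ A
assert np.isclose(np.sqrt(np.linalg.det(G)),
                  abs(np.linalg.det(F)) * abs(np.linalg.det(A)))
```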