
I am trying to understand the proof of the following which comes from "Matrix Groups for Undergraduates" by Kristopher Tapp.

Let $G \subset GL_n(\mathbb K)$ be a matrix group with Lie algebra $\mathfrak g \subset gl_n(\mathbb K)$. Then for all $X \in \mathfrak g$, $e^X \in G$.

The proof starts as follows: let $\{ X_1,\ldots,X_k \}$ be a basis of $\mathfrak g$. For each $i=1,\ldots,k$, choose a differentiable path $\alpha_i: (-\epsilon,\epsilon)\to G$ with $\alpha_i(0)=I$ and $\alpha_i'(0)=X_i$. Define $F_\mathfrak{g}:(\text{neighborhood of } 0 \text{ in } \mathfrak{g}) \to G$ by $F_\mathfrak{g}(c_1X_1+\cdots+c_kX_k)=\alpha_1(c_1)\cdot\alpha_2(c_2)\cdots\alpha_k(c_k)$. Notice that $F_\mathfrak{g}(0)=I$, and $d(F_\mathfrak{g})_0$ is the identity map: $d(F_\mathfrak{g})_0(X)=X$ for all $X\in\mathfrak{g}$, as is easily verified on basis elements.

EDIT to include use of inverse function theorem in response to Bill

Choose a subspace $\mathfrak p\subset M_n(\mathbb K)$ complementary to $\mathfrak g$: complete the set $\{X_1,\ldots,X_k\}$ to a basis of all of $M_n(\mathbb K)$ and define $\mathfrak p$ as the span of the added basis elements. Then $M_n(\mathbb K)=\mathfrak g\oplus \mathfrak p$.

Choose a function $F_{\mathfrak p}: \mathfrak p \to M_n(\mathbb K)$ with $F_{\mathfrak p}(0) = I$ and with $d(F_{\mathfrak p})_0(V)=V$ for all $V\in \mathfrak p$. For example, $F_{\mathfrak p}(V)=I+V$ works. Next define the function $F:(\text{neighborhood of 0 in }\mathfrak g \oplus \mathfrak p = M_n(\mathbb K)) \to M_n(\mathbb K)$ by the rule $F(X+Y)=F_{\mathfrak g}(X)\cdot F_{\mathfrak p}(Y)$ for all $X\in \mathfrak g$ and $Y\in \mathfrak p$. Notice that $F(0)=I$ and $dF_0$ is the identity function: $dF_0(X+Y)=X+Y$.
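One way to see the last claim: since $F(X+Y)=F_{\mathfrak g}(X)\cdot F_{\mathfrak p}(Y)$ and both factors equal $I$ at the origin, the product rule at $0$ gives

$$dF_0(X+Y) = d(F_{\mathfrak g})_0(X)\cdot F_{\mathfrak p}(0) + F_{\mathfrak g}(0)\cdot d(F_{\mathfrak p})_0(Y) = X\cdot I + I\cdot Y = X+Y.$$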

By the inverse function theorem, $F$ has a local inverse defined on a neighborhood of $I$ in $M_n(\mathbb K)$.

My question is: how does one see that $d(F_\mathfrak{g})_0(X)=X$, given that $F_\mathfrak{g}$ is a function from matrices to matrices, whereas the Jacobian is normally defined for functions of the form $f:\mathbb R^n \to \mathbb R^m$? And how would one go about computing $d(F_\mathfrak{g})_0(X)=X$ efficiently?

  • Matrices are just $\mathbb R^{n^2}$, written in a different way (a square matrix rather than a row or column). So you have a function from $\mathbb R^{n^2}$ (or an open subset of $\mathbb R^{n^2}$) to itself, and you can compute its Jacobian. (2012-01-11)
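To illustrate the identification in the comment above, here is a small numerical sketch. The particular map $A\mapsto A^2$ and the size $2\times 2$ are arbitrary choices for illustration: flattening matrices into vectors turns any matrix-valued map of matrices into an ordinary map $\mathbb R^{n^2}\to\mathbb R^{n^2}$, whose Jacobian can be approximated by finite differences.

```python
import numpy as np

# Identify M_2(R) with R^4: the matrix map f(A) = A @ A becomes a map R^4 -> R^4.
def f_flat(v):
    A = v.reshape(2, 2)          # R^4 -> M_2(R)
    return (A @ A).reshape(-1)   # M_2(R) -> R^4

# Numerical 4x4 Jacobian at A = I by central differences.
v0 = np.eye(2).reshape(-1)
h = 1e-6
J = np.column_stack([
    (f_flat(v0 + h * e) - f_flat(v0 - h * e)) / (2 * h)
    for e in np.eye(4)
])

# At A = I, the derivative of A -> A^2 is H -> H*I + I*H = 2H,
# so in flattened coordinates the Jacobian is 2 times the 4x4 identity.
print(np.allclose(J, 2 * np.eye(4)))  # → True
```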

1 Answer


You can view the map $F_{\mathfrak{g}}$ as a map from a neighborhood of $0$ in $\mathbb{R}^k$: $(x_1,x_2,\dots,x_k) \mapsto \alpha_1(x_1)\cdots \alpha_k(x_k)$. Then $$D_{x_i}[F_{\mathfrak{g}}](x_1,\dots,x_k)=\alpha_1(x_1)\cdots \alpha_{i-1}(x_{i-1})\, \alpha_i'(x_i)\, \alpha_{i+1}(x_{i+1})\cdots \alpha_k(x_k),$$ so the $i^{\text{th}}$ partial derivative at $0$ is $$\alpha_1(0)\cdots \alpha_{i-1}(0)\, \alpha_i'(0)\, \alpha_{i+1}(0)\cdots \alpha_k(0) = I\cdots I\cdot X_i \cdot I\cdots I=X_i.$$

Thus the Jacobian is $[X_1 \; X_2\; \cdots \; X_k]$. So to get the derivative at $0$ in the direction $X=c_1X_1+\cdots+c_kX_k$, multiply the Jacobian by $[c_1\;c_2\;\cdots\;c_k]^T$ to get $c_1X_1+\cdots+c_kX_k=X$ (as desired).
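The computation above can be sanity-checked numerically. The following sketch makes the assumption that $G = SO(3)$ with the standard basis of $\mathfrak{so}(3)$, and uses the convenient paths $\alpha_i(t)=e^{tX_i}$ (any paths with $\alpha_i(0)=I$, $\alpha_i'(0)=X_i$ would do); it then compares a finite-difference directional derivative of $F_{\mathfrak g}$ at $0$ against $X$ itself.

```python
import numpy as np

def expm(A, terms=40):
    # Matrix exponential via truncated power series (adequate for small A).
    E, T = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

# Standard basis of so(3), the 3x3 skew-symmetric matrices (assumed example).
X1 = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 0]])
X2 = np.array([[0., 0, -1], [0, 0, 0], [1, 0, 0]])
X3 = np.array([[0., 0, 0], [0, 0, -1], [0, 1, 0]])
basis = [X1, X2, X3]

def F(c):
    # F_g(c1 X1 + c2 X2 + c3 X3) = alpha_1(c1) alpha_2(c2) alpha_3(c3),
    # using the paths alpha_i(t) = exp(t X_i).
    P = np.eye(3)
    for ci, Xi in zip(c, basis):
        P = P @ expm(ci * Xi)
    return P

c = np.array([0.3, -0.2, 0.5])
X = sum(ci * Xi for ci, Xi in zip(c, basis))

# Central-difference directional derivative of F_g at 0 along X.
h = 1e-6
dF = (F(h * c) - F(-h * c)) / (2 * h)
print(np.allclose(dF, X, atol=1e-8))  # → True: d(F_g)_0(X) = X
```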

  • I edited my original post to include the usage of the inverse function theorem, which I am unclear about. (2012-01-11)