5

The Kronecker delta can be defined like this:

$\delta_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \ne j \end{cases}$

The dual space $V^*$ is a vector space that can be defined for any $\mathbb{K}$-vector space $V$. Its elements are the linear transformations $\Phi: V \rightarrow \mathbb{K}$.

According to my lecture notes and Wikipedia, a basis of the dual space can be found quite easily: if $X = \{x_i\}_{i = 1, 2, ..., n}$ is a basis of $V$, then $X^* = \{x_i^*\}_{i = 1, 2, ...,n}$ with $x_i^*(x_j) = \delta_{ij}$ is a basis of $V^*$.

Now I chose $V = \mathbb{R}^1$ with $\mathbb{K} = \mathbb{R}$ to get a really simple example. Then $X = \{1\}$ and $X^* = \{x_1^*\}$ with $x_1^* (x) = \begin{cases} 1, & \text{if } x = 1 \\ 0, & \text{if } x \neq 1 \end{cases}$. So this should be a basis consisting of linear transformations. But every linear transformation $\Phi : V \rightarrow W$ has to fulfil two conditions:

  1. $\forall x, y \in V: \Phi(x+y) = \Phi(x) + \Phi(y)$
  2. $\forall a \in \mathbb{K}: \forall x \in V: \Phi(a \cdot x) = a \cdot \Phi(x)$

But $x_1^*(1 + (-1)) = x_1^*(0) = 0 \neq 1 = 1 + 0 = x_1^*(1) + x_1^*(-1)$. So $x_1^*$ is not an element of the dual space, yet it is supposed to be part of its basis?! Can you please tell me where I went wrong?

  • It seems as if I got this wrong. I thought only $x_1^*(1)$ would be $1$ and everything else would be $0$. (2012-03-26)

3 Answers

6

Remember that if $\mathbf{V},\mathbf{W}$ are vector spaces, $\mathbf{v}_1,\ldots,\mathbf{v}_n$ is a basis of $\mathbf{V}$, and $f\colon\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}\to \mathbf{W}$ is any function, then $f$ uniquely determines a linear transformation $T\colon\mathbf{V}\to\mathbf{W}$ such that $T(\mathbf{v}_i) = f(\mathbf{v}_i)$. That is: we can define a linear transformation by specifying what it does to a basis, and then "extending linearly"; the definition of $T$ is that given any $\mathbf{v}\in \mathbf{V}$, we express $\mathbf{v}$ in terms of the basis, $\mathbf{v} = \alpha_1\mathbf{v}_1+\cdots+\alpha_n\mathbf{v}_n$, and then let $T(\mathbf{v}) = \alpha_1f(\mathbf{v}_1)+\cdots+\alpha_nf(\mathbf{v}_n).$

In the definition you give, we are specifying the function $x_i^*$ by saying what it does on the basis $\{x_1,\ldots,x_n\}$. The actual function will be given by linearity.

So given an element $\mathbf{v}\in\mathbf{V}$, the way to compute $x_i^*(\mathbf{v})$ is as follows:

  1. First, express $\mathbf{v}$ in terms of the basis $x_1,\ldots,x_n$: $\mathbf{v}=\alpha_1x_1+\cdots+\alpha_nx_n$ for some (uniquely determined) scalars $\alpha_1,\ldots,\alpha_n$.

  2. Then we compute $x_i^*$ on $\mathbf{v}$ by the rule: $x_i^*(\mathbf{v}) = \alpha_1x_i^*(x_1)+\alpha_2x_i^*(x_2)+\cdots + \alpha_nx_i^*(x_n).$

This gives: $x_i^*(\mathbf{v}) = \alpha_1\delta_{i1} + \alpha_2\delta_{i2}+\cdots+\alpha_n\delta_{in} = \alpha_i.$
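As a concrete illustration of this two-step recipe (a minimal numerical sketch, not part of the original answer; the helper name `dual_basis_functional` and the use of numpy are my own choices), here is how $x_i^*(\mathbf{v}) = \alpha_i$ can be computed for $\mathbf{V}=\mathbb{R}^n$ when the basis vectors are the columns of an invertible matrix:

```python
import numpy as np

def dual_basis_functional(basis, i):
    """Return x_i^* for the basis of R^n given by the columns of `basis`.

    x_i^*(v) is the i-th coordinate of v with respect to that basis,
    i.e. entry i of the solution alpha of  basis @ alpha = v.
    (Indexing is 0-based: i = 0 corresponds to x_1^*.)
    """
    def x_i_star(v):
        alpha = np.linalg.solve(basis, v)   # step 1: express v in the basis
        return alpha[i]                     # step 2: x_i^*(v) = alpha_i
    return x_i_star

# A non-standard basis of R^2: x_1 = (1, 1), x_2 = (0, 2) as columns.
X = np.array([[1.0, 0.0],
              [1.0, 2.0]])
x1_star = dual_basis_functional(X, 0)

# Kronecker delta property: x_1^*(x_1) = 1, x_1^*(x_2) = 0
print(x1_star(X[:, 0]), x1_star(X[:, 1]))                    # 1.0 0.0

# Linearity: x_1^*(v + w) == x_1^*(v) + x_1^*(w)
v, w = np.array([3.0, 4.0]), np.array([-1.0, 0.5])
print(np.isclose(x1_star(v + w), x1_star(v) + x1_star(w)))   # True
```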

(Instead, you seem to think that the way to compute it is by taking $x_i^*(\mathbf{v})$, and the value will be $1$ if $\mathbf{v}=x_i$ and $0$ if $\mathbf{v}\neq x_i$. You are absolutely correct that such a function is not linear, let alone a linear functional, but luckily that is not the definition of $x_i^*$...)

In the case of $V=\mathbb{R}$ and $x_1 = 1$, the functional $x_1^*$ is completely determined by its value at $1$, which is $1$. However, the actual definition of the function as a linear transformation is:

Given $\alpha\in\mathbb{R}$, write $\alpha$ as a linear combination of $x_1$, $\alpha = \alpha x_1$. Then $x_1^*(\alpha) = \alpha x_1^*(1) = \alpha$.

So your computation is incorrect: $x_1^*(1+(-1)) = x_1^*(0) = x_1^*(0\cdot x_1) = 0x_1^*(x_1) = 0(1) = 0,$ as you compute, but to compute $x_1^*(1)$ and $x_1^*(-1)$, you need to first express these vectors in terms of the basis $\{x_1\}$ (which you did not do). You get: $x_1^*(1) + x_1^*(-1) = x_1^*(1x_1) + x_1^*(-1x_1) = 1x_1^*(x_1)+(-1)x_1^*(x_1) = 1\delta_{11}+(-1)\delta_{11} = 0$ as well.
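The same point can be checked numerically (a throwaway sketch under the assumption $V=\mathbb{R}$ with basis $x_1 = 1$): with the correct definition $x_1^*(\alpha)=\alpha$, both sides of the disputed equation are $0$, while the mistaken "indicator" reading from the question fails linearity.

```python
# V = R with basis x_1 = 1: the correct dual functional is x_1^*(alpha) = alpha.
x1_star = lambda alpha: alpha * 1   # alpha * x_1^*(x_1), and x_1^*(x_1) = 1

print(x1_star(1 + (-1)))            # x_1^*(0)  -> 0
print(x1_star(1) + x1_star(-1))     # 1 + (-1)  -> 0, so linearity holds

# The mistaken "indicator" reading is not linear:
indicator = lambda x: 1 if x == 1 else 0
print(indicator(1 + (-1)), indicator(1) + indicator(-1))   # 0 vs 1
```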

2

You should think of $x^*_i$ as the $i$-th coordinate function for the basis $X$, in other words the map that associates to any vector $v$ the coefficient of $x_i$ when $v$ is expressed on the basis $X$. You may write this map as $x^*_i: v=v_1x_1+\cdots+v_nx_n\mapsto v_i.$ Now it should be clear that when you take $v=x_j$, you find that $x^*_i(x_j)=1$ if $j=i$, and $x^*_i(x_j)=0$ otherwise. That is more succinctly expressed by $x_i^*(x_j) = \delta_{i,j}$.

1

You got it wrong. If $\dim V=1$, a basis is given by any choice of $0\neq v\in V$. The dual basis then consists of the single form $v^*\in V^*$ such that $v^*(v)=1.$ It is now clear that $v^*(kv)=kv^*(v)=k$ for all $k\in K$.

Kronecker's delta is a shorthand to define the dual basis when $\dim V\geq 2$. For instance, if $\dim V=2$ with basis $\{v,w\}$, the dual basis consists of the forms $\{v^*,w^*\}$, where $v^*$ is the unique linear form on $V$ such that $v^*(v)=1,\qquad v^*(w)=0$ and $w^*$ is the unique linear form on $V$ such that $w^*(v)=0,\qquad w^*(w)=1.$ You should convince yourself that these conditions define $v^*$ and $w^*$ unambiguously and that they form a basis for $V^*$. Indeed, if $\lambda\in V^*$ is such that $\lambda(v)=a$ and $\lambda(w)=b$, it turns out that $\lambda=av^*+bw^*.$ The general case is not essentially different.
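A quick numerical check of that last identity (a sketch with an arbitrarily chosen basis of $\mathbb{R}^2$ and an arbitrary functional; the vectors, the random seed, and the helper `dual` are my own illustration, not part of the answer): if $\lambda(v)=a$ and $\lambda(w)=b$, then $\lambda$ and $av^*+bw^*$ agree on every test vector.

```python
import numpy as np

# An arbitrary basis {v, w} of R^2 and an arbitrary linear functional,
# represented as a row vector acting by the dot product: lambda(x) = lam @ x.
v, w = np.array([1.0, 1.0]), np.array([0.0, 2.0])
lam = np.array([2.0, -3.0])

B = np.column_stack([v, w])            # columns are the basis vectors

def dual(i):
    # v^* (i = 0) or w^* (i = 1): the i-th coordinate with respect to {v, w}
    return lambda x: np.linalg.solve(B, x)[i]

v_star, w_star = dual(0), dual(1)
a, b = lam @ v, lam @ w                # a = lambda(v), b = lambda(w)

# lambda agrees with a*v^* + b*w^* on a handful of random vectors
for x in np.random.default_rng(0).normal(size=(5, 2)):
    assert np.isclose(lam @ x, a * v_star(x) + b * w_star(x))
print("lambda == a*v^* + b*w^* on all test vectors")
```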