
Take a $d$-dimensional vector space $V$ over the field $R$.

A typical linear transformation $V \to V$ from linear algebra can be represented by a $d \times d$ matrix $A$ acting on vectors: for $v \in V$, the image is $w = Av \in V$.

I'm learning about tensors, and I understand that a $(1,1)$ tensor $T$ is a bilinear map $V^* \times V \to R$. I've read that such a $(1,1)$ tensor is equivalent to such a matrix.

However, I find it very difficult to imagine what $V^*$ (the dual space, i.e. the set of all linear maps $V\to R$) has to do with a simple linear transformation from $R^d$ to $R^d$.

Moreover, the tensor components apparently are defined as $T^i_{\;j}=T(\epsilon_i, e^j)$, where $e^j$ and $\epsilon_i$ are basis vectors of $V$ and $V^*$ respectively. This means that if we wrote $T$ as a two-dimensional array, it would seem to have nothing to do with a matrix as in linear algebra.

So how are these two concepts connected?


This post is related to my question, but it doesn't really go into the difference between the matrix and tensor form.

2 Answers


Take $v\in V$ and $w_*\in V^*$. You have a natural bijection $i$ from $V^*$ to $V$ (because the dimension is finite), hence you can obtain the corresponding vector $i(w_*)=w\in V$.

Consider the map $Q:V\times V\to \Bbb R$ given by $$Q(x,y) = T(i^{-1}(x),y).$$ Clearly $Q$ is a bilinear map, hence it can be given by a matrix $M$: $$Q(x,y) = x^TMy.$$ We will call this matrix, by extension, the matrix of $T$.
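As a quick numerical sanity check, here is a NumPy sketch of the identification $Q(x,y) = x^TMy$; the dimension $d$ and the matrix $M$ are arbitrary illustrative choices, not data from the answer.

```python
import numpy as np

# A minimal sketch: d and M are arbitrary illustrative choices
# standing in for the matrix of the tensor T.
rng = np.random.default_rng(0)
d = 3
M = rng.standard_normal((d, d))

def Q(x, y):
    """Bilinear form Q(x, y) = x^T M y."""
    return x @ M @ y

x = rng.standard_normal(d)
y = rng.standard_normal(d)
a, b = 2.0, -1.5

# Bilinearity: Q is linear in each argument separately.
assert np.isclose(Q(a * x + b * y, y), a * Q(x, y) + b * Q(y, y))
assert np.isclose(Q(x, a * y + b * x), a * Q(x, y) + b * Q(x, x))
```

Any bilinear map on a finite-dimensional space arises this way: the entries of $M$ are exactly the values of $Q$ on pairs of basis vectors.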

  • Ok, good, so now I know how to make a $V\times V \to R$ version of a given $V\to V$ transformation. How do I do it the other way around? I.e. we have a certain tensor $T$, s.t. $T(v,w) = \sum_i \sum_j v^i w_j\, T(e_i, e^j)$. This is probably embarrassingly simple, but I don't seem to know how to "remove" the "dot product" that's in that equation and just turn it into a map from $v$ to $w$. (2017-02-08)
  • If you have a bilinear form $T(v,w)$ over $V\times V$, you can obtain a corresponding linear operator $S:V\to V$. Take the map $v\mapsto T(v,\cdot)$. The object $T(v,\cdot)$ is a linear functional over $V$, hence it is, up to some isomorphism, an element of $V$ itself. Therefore the linear map $v\mapsto T(v,\cdot)$ can be seen as a map $V\to V$, and therefore a matrix. (2017-02-08)
  • If you consider $T(v,w)$ you will not get a map from $v$ to $w$; it is by considering the action of $T$ on these vectors that you obtain an associated linear operator. (2017-02-08)
  • I don't agree with this. Even when $V$ is finite dimensional there's not a *natural* isomorphism with its dual. And the answer doesn't need one. (2017-02-08)
  • @OscarCunningham Hm, Riesz representation theorem + isomorphism $V\leftrightarrow \Bbb R^d$? Also, for the sake of future readers, could you add the version without the isomorphism? (2017-02-08)
  • I added my own answer. There definitely are isomorphisms between a finite dimensional vector space and its dual, but none of them is canonical. If the vector space is equipped with an inner product then this picks out a particular isomorphism as being natural (cf. the Riesz representation theorem). But in the case of a general vector space there isn't a unique Hilbert space structure (maybe the field isn't even $\Bbb R$ or $\Bbb C$). (2017-02-08)
  • @OscarCunningham Thanks! The author explicitly stated that $V$ was a linear space over $\Bbb R$, so we can safely equip $V$ with an inner product. The question of "canonicity" is, after all, a matter of taste; it is sufficient that this isomorphism exists (as in $V^{**}\cong V$ for the finite-dimensional case, or $V'\cong V$ for Hilbert spaces). (2017-02-08)

Given a linear map $\alpha:V\to V$ we can construct a bilinear form $\tau:V^*\times V\to R$, by taking $\tau(f,v)=f(\alpha v)$. (Note that $f(\alpha v)$ makes sense because $v\in V$ and $\alpha:V\to V$ so $\alpha v\in V$, and then $f\in V^*$ means $f:V\to R$, so $f(\alpha v)\in R$.)

Similarly given a bilinear form $\tau':V^*\times V\to R$ we can construct a map $\alpha': V\to V$ by noting that if $v\in V$ then $\tau'(-,v):V^*\to R$, and hence $\tau'(-,v)\in V^{**}$. Since $V$ is finite dimensional we have $V^{**}\cong V$ and hence we can define $\alpha'(v)$ to be the element of $V$ corresponding to $\tau'(-,v)$ in $V^{**}$. This means that $f(\alpha' v)=\tau'(f,v)$.
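The reverse construction above can be sketched numerically. In NumPy, working in the standard basis of $\Bbb R^d$, the element of $V$ corresponding to $\tau'(-,v)\in V^{**}$ is read off by evaluating $\tau'$ on the dual basis covectors; the matrix $A$ and the dimension here are arbitrary illustrative assumptions.

```python
import numpy as np

# Sketch of the reverse direction: A and the standard basis are
# illustrative assumptions, not part of the answer.
rng = np.random.default_rng(1)
d = 3
A = rng.standard_normal((d, d))  # matrix of a linear map alpha: v -> A @ v

def tau(f, v):
    """The bilinear form built from alpha: tau(f, v) = f(alpha v)."""
    return f @ (A @ v)

eps = np.eye(d)  # rows: the standard dual basis covectors

def alpha_prime(v):
    # tau(-, v) lies in V**; reading off its components against the dual
    # basis identifies it with a vector of V (finite-dimensional case).
    return np.array([tau(eps[i], v) for i in range(d)])

v = rng.standard_normal(d)
assert np.allclose(alpha_prime(v), A @ v)  # translating back recovers alpha
```

The final assertion is the "translate back and forth and end up where we started" claim: building $\tau$ from $\alpha$ and then recovering $\alpha'$ gives back the original map.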

Hence given a map $V\to V$ we get a map $V^*\times V\to R$ and given a map $V^*\times V\to R$ we get a map $V\to V$ (and furthermore if we translate back and forth we end up where we started). So we can view maps $V\to V$ as "the same as" maps $V^*\times V\to R$.

Finally, let's check that the matrices are the same. Given a map $\alpha:V\to V$ its matrix is defined by $A^i_{\;j}=\epsilon_j(\alpha e^i)$, and given a map $\tau:V^*\times V\to R$ its matrix is defined by $T^i_{\;j}=\tau(\epsilon_j,e^i)$. So if we have $\tau(f,v)=f(\alpha v)$ then $T^i_{\;j}=\tau(\epsilon_j,e^i)=\epsilon_j(\alpha e^i)=A^i_{\;j}$.
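This entry-by-entry agreement can also be verified numerically. A NumPy sketch, using the standard basis of $\Bbb R^d$ and its dual, with an arbitrary matrix $M$ standing in for $\alpha$ (both are illustrative assumptions):

```python
import numpy as np

# M is an arbitrary illustrative matrix implementing alpha as v -> M @ v.
rng = np.random.default_rng(2)
d = 3
M = rng.standard_normal((d, d))

def alpha(v):
    return M @ v

def tau(f, v):
    return f @ alpha(v)  # tau(f, v) = f(alpha v)

e = np.eye(d)    # rows: basis vectors e^i of V
eps = np.eye(d)  # rows: dual basis covectors epsilon_j

# A^i_j = epsilon_j(alpha e^i)  and  T^i_j = tau(epsilon_j, e^i)
A = np.array([[eps[j] @ alpha(e[i]) for j in range(d)] for i in range(d)])
T = np.array([[tau(eps[j], e[i]) for j in range(d)] for i in range(d)])

assert np.allclose(A, T)  # the two component arrays agree entry-by-entry
```

Both arrays are computed purely from evaluations on basis vectors, mirroring the component definitions in the answer, so their equality is exactly the identity $T^i_{\;j}=A^i_{\;j}$.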