Take a $d$-dimensional vector space $V$ over the field $R$.
In linear algebra, a typical linear transformation $V \to V$ can be represented by a $d \times d$ matrix $A$ acting on vectors: for any $v \in V$, we get $Av = w$ for some $w \in V$.
I'm learning about tensors, and I understand that a $(1,1)$ tensor $T$ is a bilinear map $V^* \times V \to R$. I've read that such a $(1,1)$ tensor is equivalent to such a matrix.
However, I find it very difficult to see what $V^*$ (the dual space, i.e. the set of all linear maps $V \to R$) has to do with a simple linear transformation from $R^d$ to $R^d$.
Moreover, the tensor components are apparently defined as $T^i_{\ \ j} = T(\epsilon^i, e_j)$, where $e_j$ and $\epsilon^i$ are the $d$ basis vectors of $V$ and the $d$ dual basis covectors of $V^*$ respectively. This suggests that if we were to write $T$ as a 2-dimensional array, it would have nothing to do with a matrix as in linear algebra.
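To make my confusion concrete, here is a small NumPy sketch of how I understand the component definition. I take an arbitrary matrix $A$, build the $(1,1)$ tensor it supposedly corresponds to via $T(\phi, v) = \phi(Av)$ (this correspondence is my assumption from the reading), and evaluate the components $T^i_{\ \ j} = T(\epsilon^i, e_j)$ in the standard basis; all names here are my own:

```python
import numpy as np

d = 2
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # arbitrary linear map on V = R^2

def T(phi, v):
    """The (1,1) tensor I assume A induces: T(phi, v) = phi(A v),
    with phi a covector (row of components) and v a vector."""
    return phi @ (A @ v)

# Standard basis e_j of V and dual basis eps^i of V*, so eps^i(e_j) = delta^i_j.
e = np.eye(d)    # column e[:, j] is the basis vector e_j
eps = np.eye(d)  # row eps[i, :] is the dual basis covector eps^i

# Components T^i_j = T(eps^i, e_j), laid out as a 2-dimensional array
components = np.array([[T(eps[i, :], e[:, j]) for j in range(d)]
                       for i in range(d)])
print(components)
```

Is this array of components the object that is supposed to coincide with the matrix $A$, and if so, why does feeding in dual basis covectors recover it?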
So how are these two concepts connected?
This post is related to my question, but it doesn't really address the difference between the matrix and tensor forms.