Suppose you have a linear transformation $T\colon\mathbb{R}^n\to\mathbb{R}^m$. If you know what happens to the standard basis $\mathbf{e}_1,\ldots,\mathbf{e}_n$ of $\mathbb{R}^n$, then you know what happens to every vector in $\mathbb{R}^n$, because given any vector $\mathbf{v}=(a_1,\ldots,a_n)$, we have: $T\mathbf{v} = T\Bigl(a_1\mathbf{e}_1+\cdots+a_n\mathbf{e}_n\Bigr) = a_1T\mathbf{e}_1+\cdots+a_nT\mathbf{e}_n.$ So if you know $T\mathbf{e}_i$ for each $i$, you can get $T\mathbf{v}$ for every $\mathbf{v}$.
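As a quick numerical sanity check, here is a sketch of that linearity argument in NumPy; the particular map $T(x,y,z) = (x+2y,\; 3z-x)$ is a made-up example, chosen only for illustration:

```python
import numpy as np

# Hypothetical linear map T: R^3 -> R^2, T(x, y, z) = (x + 2y, 3z - x).
# Any linear map would do; this one is just for illustration.
def T(v):
    x, y, z = v
    return np.array([x + 2 * y, 3 * z - x])

v = np.array([5.0, -1.0, 2.0])   # v = 5*e1 + (-1)*e2 + 2*e3
basis = np.eye(3)                # rows are the standard basis e1, e2, e3

# T(v) equals the same linear combination of the images T(e_i).
combo = sum(a * T(e) for a, e in zip(v, basis))
assert np.allclose(T(v), combo)
```

The assertion passes for any choice of $\mathbf{v}$, which is exactly the statement that $T$ is determined by its values on the basis.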
The standard matrix of $T$ is a way of keeping track of precisely this information, and of making the computation above easy to perform. What we are using is the fact that if $A$ is a matrix with columns $\mathbf{a}_1,\ldots,\mathbf{a}_n$, that is, $A = (\mathbf{a}_1\;|\;\cdots\;|\;\mathbf{a}_n),$ then multiplying $A$ by an $n\times 1$ column vector gives the corresponding linear combination of the columns; to wit, if $\mathbf{v} = (a_1,\ldots,a_n)$, then: $A\mathbf{v}^t = A\left(\begin{array}{c}a_1\\\vdots\\a_n\end{array}\right) = a_1\mathbf{a}_1 + \cdots + a_n\mathbf{a}_n$ (where $\mathbf{v}^t$ is the transpose of $\mathbf{v}$).
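You can check this column-combination view of the matrix-vector product directly; the matrix below is arbitrary, picked only so there is something concrete to multiply:

```python
import numpy as np

# An arbitrary 2x3 matrix, purely for illustration.
A = np.array([[1.0, 2.0, 0.0],
              [-1.0, 0.0, 3.0]])
v = np.array([5.0, -1.0, 2.0])

# A @ v is the linear combination a1*(column 1) + a2*(column 2) + a3*(column 3),
# where A[:, i] is the i-th column of A.
combo = sum(v[i] * A[:, i] for i in range(3))
assert np.allclose(A @ v, combo)
```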
That means that if we make a matrix $A$ such that $\mathbf{a}_i$ is $(T\mathbf{e}_i)^t$, then we have $A\mathbf{v}^t = a_1(T\mathbf{e}_1)^t + \cdots + a_n(T\mathbf{e}_n)^t = (T\mathbf{v})^t,$ so we can compute $T\mathbf{v}$ by multiplying $\mathbf{v}^t$ by the matrix $A$. The matrix $A$ is the "standard matrix of $T$" (with respect to the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$).
So you have computed $A$; you know $\mathbf{v}$. Now you just need to multiply $A$ by $\mathbf{v}^t$ to get the (transpose of the) image of $\mathbf{v}$ under $T$.
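Putting the whole recipe together in NumPy (again with a made-up map $T(x,y,z) = (x+2y,\; 3z-x)$, used only as an example): build $A$ column by column from the images $T\mathbf{e}_i$, then apply $T$ by matrix multiplication.

```python
import numpy as np

# Hypothetical linear map T: R^3 -> R^2, for illustration only.
def T(v):
    x, y, z = v
    return np.array([x + 2 * y, 3 * z - x])

# Build the standard matrix of T: column i is T(e_i).
A = np.column_stack([T(e) for e in np.eye(3)])

# Now computing T(v) is just a matrix-vector product.
v = np.array([5.0, -1.0, 2.0])
assert np.allclose(A @ v, T(v))
```

Note that NumPy's 1-D arrays blur the row/column distinction, so the explicit transposes from the discussion above disappear in code: `A @ v` directly plays the role of $A\mathbf{v}^t$.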