My understanding, from multiple sources (here, books, articles, etc.), is that the columns of the matrix of a linear mapping correspond to the images of the basis vectors of the domain, expressed in terms of the basis vectors of the codomain. That is, if $ \phi:E \rightarrow F $ is a linear map,
$ \mathcal{X} = \{x_1, \dots, x_n \} $
is a basis for $E$ and
$ \mathcal{Y} = \{y_1, \dots, y_m\} $
is a basis for $F$, then the $m \times n$ matrix that represents $\phi$ has columns determined by
$ [\phi(x_1)]_\mathcal{Y}, \dots, [\phi(x_n)]_\mathcal{Y}. $
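For concreteness, here is a small example of my own (not taken from any of the sources). Take $n = 2$, $m = 3$, and suppose
$ \phi(x_1) = y_1 + 2y_3, \qquad \phi(x_2) = 3y_2. $
Under this convention the matrix representing $\phi$ is the $3 \times 2$ matrix
$ \begin{pmatrix} 1 & 0 \\ 0 & 3 \\ 2 & 0 \end{pmatrix}, $
whose columns are $[\phi(x_1)]_\mathcal{Y}$ and $[\phi(x_2)]_\mathcal{Y}$.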
I have encountered a definition, if I am reading it correctly, that seems to turn this definition on its head. The following excerpt comes from Greub's Linear Algebra, 3rd ed:
Consider two linear spaces $E$ and $F$ of dimensions $n$ and $m$ and a linear mapping $\phi: E \rightarrow F$. With the aid of bases $x_{\nu}\;(\nu = 1 \dots n)$ and $y_{\mu}\;(\mu = 1 \dots m)$ every vector $\phi x_{\nu}$ can be written as a linear combination of the vectors $y_{\mu}$,
$ \phi x_{\nu} = \sum_{\mu} {\alpha}^{\mu}_{\nu} y_{\mu}. $
In this way, the mapping $\phi$ determines an $n \times m$ matrix $({\alpha}^{\mu}_{\nu})$, where $\nu$ counts the rows and $\mu$ counts the columns.
To me, it looks like what he's actually defining is the matrix that represents the transpose of the mapping, rather than the mapping itself. Since this book is supposed to be the "gold standard" of linear algebra, though, I'm more inclined to believe that I'm misinterpreting his definition. On the other hand, he clearly states that the matrix is $n \times m$, and this can only be the case if it represents the transpose, at least according to the "accepted" definition of this matrix as stated at the beginning of this post.
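To make the discrepancy concrete, here is a quick numerical sketch of the two conventions (the example and the variable names are my own, not Greub's):

```python
# A small check of the two conventions (my own example, not Greub's).
# Let phi: E -> F with n = 2, m = 3, and
#   phi(x_1) = y_1 + 2*y_3,   phi(x_2) = 3*y_2.

# "Usual" convention: the m x n (3 x 2) matrix whose nu-th COLUMN is
# [phi(x_nu)]_Y, the coordinate vector of phi(x_nu) in the basis Y.
A_usual = [[1, 0],
           [0, 3],
           [2, 0]]

# Greub's convention: the n x m (2 x 3) matrix (alpha^mu_nu) whose nu-th ROW
# holds the coefficients in phi(x_nu) = sum_mu alpha^mu_nu * y_mu.
A_greub = [[1, 0, 2],
           [0, 3, 0]]

def transpose(matrix):
    """Return the transpose of a matrix given as a list of rows."""
    return [list(row) for row in zip(*matrix)]

# The coefficient array arranged Greub's way is exactly the transpose
# of the matrix produced by the convention quoted at the top.
print(A_greub == transpose(A_usual))  # True
```

In other words, the two arrangements always differ by a transpose; the question is whether that makes Greub's $({\alpha}^{\mu}_{\nu})$ "the matrix of $\phi$" in a different sense or the matrix of a different map.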
Can anyone shed some light on this?