Let me back up a bit. Remember what the point is of the "coordinate matrix of a linear transformation with respect to a basis."
If $T\colon\mathbf{V}\to\mathbf{V}$ is a linear transformation, and $\beta=[\mathbf{v}_1,\ldots,\mathbf{v}_n]$ is a basis for $\mathbf{V}$, then the value of $T$ at every vector in $\mathbf{V}$ is completely determined by its values on $\mathbf{v}_1,\ldots,\mathbf{v}_n$. Why? Because given any $\mathbf{x}\in\mathbf{V}$, there exist unique scalars $c_1,\ldots,c_n$ such that $\mathbf{x}=c_1\mathbf{v}_1+\cdots+c_n\mathbf{v}_n$, so $T(\mathbf{x}) = T(c_1\mathbf{v}_1+\cdots+c_n\mathbf{v}_n) = c_1T(\mathbf{v}_1)+\cdots+c_nT(\mathbf{v}_n).$ So if you know what $T(\mathbf{v}_1),\ldots,T(\mathbf{v}_n)$ are, you know what $T(\mathbf{x})$ is for every $\mathbf{x}\in\mathbf{V}$.
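A quick numerical sketch of this "determined by its values on a basis" fact, using a hypothetical map on $\mathbb{R}^2$ and a made-up (non-standard) basis:

```python
import numpy as np

# Hypothetical example in R^2: basis beta = [v1, v2] (not the standard basis),
# and a linear map T known ONLY through its values on v1 and v2.
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])
Tv1 = np.array([2.0, 1.0])   # assumed value of T(v1)
Tv2 = np.array([0.0, 3.0])   # assumed value of T(v2)

def T(x):
    """Recover T(x) for ANY x: solve x = c1*v1 + c2*v2, then use linearity."""
    B = np.column_stack([v1, v2])   # basis vectors as columns
    c = np.linalg.solve(B, x)       # the unique scalars c1, c2
    return c[0] * Tv1 + c[1] * Tv2  # T(x) = c1*T(v1) + c2*T(v2)

x = np.array([3.0, 2.0])            # here x = 1*v1 + 2*v2
print(T(x))                         # 1*Tv1 + 2*Tv2 = [2. 7.]
```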
Now, since $\beta$ is a basis, each $T(\mathbf{v}_j)$ can be expressed as a linear combination of the vectors in $\beta$ in a unique way; that is, there are scalars $a_{11}, a_{12},\ldots,a_{nn}$ such that $\begin{align*} T(\mathbf{v}_1) &= a_{11}\mathbf{v}_1 + a_{21}\mathbf{v}_2 + \cdots +a_{n1}\mathbf{v}_n\\ T(\mathbf{v}_2) &= a_{12}\mathbf{v}_1 + a_{22}\mathbf{v}_2 + \cdots + a_{n2}\mathbf{v}_n\\ &\vdots\\ T(\mathbf{v}_n) &= a_{1n}\mathbf{v}_1 + a_{2n}\mathbf{v}_2 + \cdots + a_{nn}\mathbf{v}_n. \end{align*}$ Then $\begin{align*} T(\mathbf{x}) &= c_1T(\mathbf{v}_1) + \cdots + c_nT(\mathbf{v}_n)\\ &= c_1\left(a_{11}\mathbf{v}_1 + a_{21}\mathbf{v}_2 + \cdots +a_{n1}\mathbf{v}_n\right)\\ &\qquad+c_2\left(a_{12}\mathbf{v}_1 + a_{22}\mathbf{v}_2 + \cdots + a_{n2}\mathbf{v}_n\right)\\ &\qquad+\cdots+c_n\left(a_{1n}\mathbf{v}_1 + a_{2n}\mathbf{v}_2 + \cdots + a_{nn}\mathbf{v}_n\right)\\ &= \Bigl(c_1a_{11} + c_2a_{12} + \cdots + c_na_{1n}\Bigr)\mathbf{v}_1\\ &\qquad +\Bigl(c_1a_{21} + c_2a_{22} + \cdots + c_na_{2n}\Bigr)\mathbf{v}_2\\ &\qquad+\cdots+ \Bigl(c_1a_{n1}+c_2a_{n2}+\cdots+c_na_{nn}\Bigr)\mathbf{v}_n\\ &= \left(\begin{array}{cccc} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n\end{array}\right) \left(\begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n}\\ a_{21} & a_{22} & \ldots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{array}\right) \left(\begin{array}{c}c_1\\c_2\\\vdots\\c_n\end{array}\right). \end{align*}$
The column vector $\left(\begin{array}{c}c_1\\c_2\\\vdots\\c_n\end{array}\right)$ represents the vector $\mathbf{x}$; it is the "coordinate vector of $\mathbf{x}$ with respect to $\beta$". The matrix $\left(\begin{array}{ccc} a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \cdots & a_{nn}\end{array}\right)$ is the matrix that "codifies" the images of $\mathbf{v}_i$ under $T$ in terms of $\beta$: the $i$th column is the coordinate vector of $T(\mathbf{v}_i)$ relative to $\beta$. This is the "coordinate matrix of $T$ relative to $\beta$."
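To make this concrete, here is a small sketch that builds the coordinate matrix column by column, for a hypothetical $T$ (multiplication by a matrix $M$ on $\mathbb{R}^2$) and a made-up basis $\beta$; the names $M$, $B$, $A$ are my own, not anything standard:

```python
import numpy as np

# Hypothetical data: T is multiplication by M on R^2; beta = [v1, v2] is
# NOT the standard basis, so the coordinate matrix A differs from M.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.column_stack([np.array([1.0,  1.0]),    # v1
                     np.array([1.0, -1.0])])   # v2

# The i-th column of A is the coordinate vector of T(v_i) w.r.t. beta,
# i.e. the solution of B @ a = T(v_i).  Doing all columns at once gives
# A = B^{-1} (M B).
A = np.linalg.solve(B, M @ B)

# Sanity check: A maps [x]_beta to [T(x)]_beta for any x.
x = np.array([3.0, 2.0])
coords_x  = np.linalg.solve(B, x)        # [x]_beta
coords_Tx = np.linalg.solve(B, M @ x)    # [T(x)]_beta
print(np.allclose(A @ coords_x, coords_Tx))   # True
```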
What happens if $\mathbf{v}_1$ is an eigenvector corresponding to $\lambda_1$? Then $T(\mathbf{v}_1) = \lambda_1\mathbf{v}_1 = \lambda_1\mathbf{v}_1 + 0\mathbf{v}_2 + 0\mathbf{v}_3 + \cdots + 0\mathbf{v}_n.$ That is, the first column of the coordinate matrix is just $\left(\begin{array}{c} \lambda_1 \\ 0 \\ 0\\ \vdots \\ 0\end{array}\right).$ What if $\mathbf{v}_2$ is an eigenvector corresponding to $\lambda_2$? Then $T(\mathbf{v}_2)=\lambda_2\mathbf{v}_2$. To write it as a linear combination of $\beta$, we just put $\lambda_2$ for the coefficient of $\mathbf{v}_2$, and $0$s for every other coefficient: $T(\mathbf{v}_2) = \lambda_2\mathbf{v}_2 = 0\mathbf{v}_1 + \lambda_2\mathbf{v}_2 + 0\mathbf{v}_3 + \cdots + 0\mathbf{v}_n.$ So the second column of the coordinate matrix is just $\left(\begin{array}{c}0\\\lambda_2\\0\\\vdots\\0\end{array}\right).$ And so on: if $\mathbf{v}_i$ is an eigenvector corresponding to $\lambda_i$, then $T(\mathbf{v}_i) = \lambda_i\mathbf{v}_i = 0\mathbf{v}_1+0\mathbf{v}_2+\cdots+0\mathbf{v}_{i-1} + \lambda_i\mathbf{v}_i + 0\mathbf{v}_{i+1} + \cdots+0\mathbf{v}_n,$ so the $i$th column of the coordinate matrix will have $0$s everywhere except in the $i$th row, where it will have $\lambda_i$.
And so, if the entire basis is made up of eigenvectors, then the coordinate matrix will be such that the $i$th column has zeros everywhere except (perhaps) in the $i$th row, where it has the eigenvalue of $\mathbf{v}_i$; that is, we get a diagonal matrix $\Lambda = \left(\begin{array}{cccc} \lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n \end{array}\right).$ And that is the coordinate matrix of $T$ with respect to $\beta$.
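The same phenomenon can be checked numerically: take a basis of eigenvectors and the coordinate matrix comes out diagonal. A sketch with a hypothetical $2\times 2$ matrix $M$ (my own example, with eigenvalues $2$ and $3$):

```python
import numpy as np

# Hypothetical M with eigenvalues 2 and 3.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])

lam, P = np.linalg.eig(M)            # columns of P: a basis of eigenvectors
Lambda = np.linalg.solve(P, M @ P)   # coordinate matrix of T w.r.t. that basis

# Lambda is diagonal, with the eigenvalues of M on the diagonal.
print(np.round(Lambda, 10))
```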