I've been doing functional programming for a long time. In that discipline, which is grounded in the lambda calculus and therefore in function application and composition, the juxtaposition of f and x, as in f x, is called "the application of f to x," just as f(x) is.
Matrix multiplication seems to follow a similar syntactic intuition: multiplying a matrix on the left by another matrix (or a vector) on the right yields another matrix or vector, with the left matrix's entries serving as the coefficients of a linear function applied to the right-hand side.
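To make that concrete, here is a small sketch of what I mean (my own illustration in plain Python, no libraries; `apply` is a name I made up): "multiplying" a matrix by a vector is just evaluating the linear function the matrix encodes at that vector.

```python
def apply(matrix, vector):
    """Multiply a matrix (a list of rows) by a vector: each entry of the
    result is the dot product of one row with the vector."""
    return [sum(a * x for a, x in zip(row, vector)) for row in matrix]

# A matrix that scales the first coordinate by 2 and the second by 3 --
# i.e. the linear function (x, y) |-> (2x, 3y).
M = [[2, 0],
     [0, 3]]

print(apply(M, [1, 1]))  # -> [2, 3], M acting as a function on (1, 1)
```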
Specifically, multiplying one matrix by another is even called matrix "composition," which happens to be the same term used for composing one function with another.
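Again as my own sketch (plain Python; `apply` and `multiply` are names I made up): the product of matrices A and B is exactly the matrix of the composed linear function, so applying A·B to a vector agrees with applying B first and then A, and swapping the order generally changes the result.

```python
def apply(matrix, vector):
    """Matrix-times-vector: dot each row with the vector."""
    return [sum(a * x for a, x in zip(row, vector)) for row in matrix]

def multiply(A, B):
    """Matrix product: entry (i, j) is row i of A dotted with column j of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[0, -1],
     [1, 0]]   # rotate 90 degrees counterclockwise
B = [[2, 0],
     [0, 1]]   # stretch the first coordinate by 2

x = [1, 1]
print(apply(multiply(A, B), x))          # -> [-1, 2]: the composed map...
print(apply(A, apply(B, x)))             # -> [-1, 2]: ...equals B then A
print(multiply(A, B) == multiply(B, A))  # -> False: order matters
```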
So, given that similarity to function application, the fact that matrix multiplication involves more than just elementwise multiplication, and the fact that it isn't commutative the way every operation previously called "multiplication" presumably was, is it simply for historical reasons that matrix multiplication isn't called "matrix application"? Is there some characteristic of matrix multiplication that thinking of it as multiplication, rather than as function application, helps to understand?