I vaguely remember that the Frobenius matrix norm ($\|A\|_F = \sqrt{\sum_{i,j} |a_{i,j}|^2}$) was somehow considered unsuitable for numerical analysis applications. All I actually recall, however, is that it is not a subordinate matrix norm, and that the only reason given was that it does not take the identity matrix to $1$. It seems this latter problem could be fixed by a rescaling, though. My numerical analysis text did not consider the norm any further after introducing this fact, which for some reason seemed to be its death knell.
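(Concretely, since $\|I_n\|_F = \sqrt{n}$, I assume the rescaling in question would be something like

$$\|A\| := \frac{1}{\sqrt{n}}\,\|A\|_F,$$

which does take the identity to $1$.)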
The question, then: for fixed $n$, when looking at $n \times n$ matrices, are there any weird gotchas, deficiencies, oddities, etc., when using the (possibly rescaled) Frobenius norm? For example, is there some weird sequence of matrices $A_i$ whose Frobenius norms approach zero while their $\ell_2$-subordinate norms do not converge to zero? (It seems like that cannot happen: the $\ell_2$ norm is the square root of the largest eigenvalue of $A^*A$, while the squared Frobenius norm is $\operatorname{tr}(A^*A)$, i.e. the sum of all the (nonnegative) eigenvalues of $A^*A$, so $\|A\|_2 \le \|A\|_F$.)
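For what it's worth, here is a quick numerical sanity check of that bound (a sketch, assuming NumPy; `np.linalg.norm(A, 2)` returns the largest singular value of a matrix, and `np.linalg.norm(A, 'fro')` its Frobenius norm):

```python
import numpy as np

# Sanity check: the l2-subordinate (spectral) norm never exceeds the
# Frobenius norm, so Frobenius-norm convergence to zero forces
# spectral-norm convergence to zero as well.
rng = np.random.default_rng(0)
n = 5
for _ in range(1000):
    A = rng.standard_normal((n, n))
    spectral = np.linalg.norm(A, 2)       # sqrt of largest eigenvalue of A^T A
    frobenius = np.linalg.norm(A, "fro")  # sqrt of sum of squared entries
    assert spectral <= frobenius + 1e-12  # small tolerance for rounding
print("||A||_2 <= ||A||_F held for all samples")
```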