In most books on numerical methods and finite difference methods, the error is measured in the discrete $L^2$ norm. I was wondering whether people ever do this in a discrete Sobolev norm instead. I have never seen it done and I would like to know why.
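To fix notation (this is just the standard choice I have in mind, on a uniform grid with spacing $h$): by discrete $L^2$ norm I mean
$$\|e\|_{2,h} = \Big(h \sum_i |e_i|^2\Big)^{1/2},$$
and by discrete $H^1$ (Sobolev) norm the analogue where the derivative is replaced by a difference quotient,
$$\|e\|_{1,h} = \Big(\|e\|_{2,h}^2 + \|D_h^+ e\|_{2,h}^2\Big)^{1/2}, \qquad (D_h^+ e)_i = \frac{e_{i+1} - e_i}{h}.$$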
To be more specific, consider a problem $Au = f$, where $A_h$ is some approximation of $A$ and $U$ is the numerical solution of the discrete system $A_h U = f$. If we plug the exact solution $u$ into the discrete operator and subtract, we get $A_h(u - U) = \tau$, where $\tau$ is the local truncation error. Thus I have the error equation $e = A_h^{-1}\tau$ with $e = u - U$. What problems would I face if I measured $e$ in a discrete Sobolev norm instead of the discrete $L^2$ norm?
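For concreteness, here is a small sketch (my own toy example, not taken from any particular book) where $A$ is $-d^2/dx^2$ on $(0,1)$ with homogeneous Dirichlet conditions, $A_h$ is the standard second-order centered-difference matrix, and the error is measured in both norms defined above:

```python
import numpy as np

# Model problem: -u'' = f on (0,1), u(0) = u(1) = 0,
# with exact solution u(x) = sin(pi x), so f(x) = pi^2 sin(pi x).
def solve_and_measure(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)          # interior grid points

    # Second-order centered-difference approximation A_h of -d^2/dx^2.
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2

    f = np.pi**2 * np.sin(np.pi * x)
    U = np.linalg.solve(A, f)                # numerical solution of A_h U = f
    e = np.sin(np.pi * x) - U                # nodal error e = u - U

    # Discrete L^2 norm: ||e||_{2,h} = sqrt(h * sum e_i^2).
    l2 = np.sqrt(h * np.sum(e**2))

    # Discrete H^1 norm: add the discrete L^2 norm of the forward
    # difference quotient D_h^+ e (zero boundary values appended so the
    # quotient is defined at both ends).
    ee = np.concatenate(([0.0], e, [0.0]))
    de = np.diff(ee) / h
    h1 = np.sqrt(l2**2 + h * np.sum(de**2))
    return l2, h1

for n in (32, 64, 128):
    l2, h1 = solve_and_measure(n)
    print(f"n={n:4d}  L2 error={l2:.3e}  H1 error={h1:.3e}")
```

Running this and comparing the two columns as $h$ is refined is exactly the experiment I have in mind; my question is what goes wrong (theoretically or practically) when the $H^1$ column is the one reported.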