This is Lemma 6.1 from Gilbarg and Trudinger. It states: "Let $\textbf{P}$ be a constant matrix which defines a nonsingular linear transformation $y=x\textbf{P}$ from $\mathbb{R}^n \rightarrow \mathbb{R}^n$. Letting $u(x) \rightarrow \tilde{u}(y)$ under this transformation one verifies easily that $A^{ij}D_{ij}u(x) = \tilde{A}^{ij}D_{ij}\tilde{u}(y)$, where $\tilde{\textbf{A}} = \textbf{P}^t\textbf{A}\textbf{P}$."
Here the summation convention is in force, and $\textbf{A} = [A^{ij}]$ is a constant symmetric matrix, i.e. $A^{ij} = A^{ji}$.
I have no idea where this identity comes from. First, the book calls it an easy verification, but even trying it directly with $2 \times 2$ matrices gives me a huge mess (I sketch my attempt below). Second, what exactly does $u(x) \rightarrow \tilde{u}(y)$ mean? Is $\tilde{u}$ the same function, just written in the new variable? If so, why not simply call it $u(y)$? Any help is appreciated.
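For what it's worth, here is how far I get, under my (possibly wrong) guess that $\tilde{u}$ is defined by $\tilde{u}(y) = u(x)$ whenever $y = x\textbf{P}$, i.e. $u(x) = \tilde{u}(x\textbf{P})$. Writing $y_k = x_i P_{ik}$, the chain rule seems to give
$$
D_i u(x) \;=\; \frac{\partial y_k}{\partial x_i}\,(D_k\tilde{u})(y) \;=\; P_{ik}\,(D_k\tilde{u})(x\textbf{P}),
$$
but when I differentiate a second time and try to match the result against $\tilde{A}^{ij}D_{ij}\tilde{u}$ entry by entry for a concrete $2 \times 2$ example, the terms pile up and I cannot see how the clean formula $\tilde{\textbf{A}} = \textbf{P}^t\textbf{A}\textbf{P}$ is supposed to drop out.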