
This is Lemma 6.1 from Gilbarg and Trudinger. It states: "Let $\textbf{P}$ be a constant matrix which defines a nonsingular linear transformation $y=x\textbf{P}$ from $\mathbb{R}^n \rightarrow \mathbb{R}^n$. Letting $u(x) \rightarrow \tilde{u}(y)$ under this transformation one verifies easily that $A^{ij}D_{ij}u(x) = \tilde{A}^{ij}D_{ij}\tilde{u}(y)$, where $\tilde{\textbf{A}} = \textbf{P}^t\textbf{A}\textbf{P}$."

Here, we are using the summation convention, and $A^{ij}$ is a constant matrix with $A^{ij} = A^{ji}$.

I have no idea where this comes from. First, it says it is an easy verification, but even trying it with $2 \times 2$ matrices gives a huge mess. Second, what exactly does $u(x) \rightarrow \tilde{u}(y)$ mean? Is $\tilde{u}$ the same function, just in a different variable? Why not call it $u(y)$ then? Any help is appreciated.

2 Answers


The idea is to perform a "rotation and stretching" $(P)$ of coordinates which transforms $u$ (defined on $\Omega$) into a function $\tilde{u}$ defined on $P(\Omega)$ so that $\tilde{u}$ satisfies a nice equation.

Computationally, we have $u(x) = \tilde{u}(Px)$. The general formula (by the Chain Rule) for $D^2u$ is $D^2u(x) = P^T D^2\tilde{u}(Px) P.$
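Written out in components, in case the chain rule step is not obvious (with $y = Px$, so $y_k = P_{ki}x_i$, and using the summation convention):

$$D_i u(x) = P_{ki}\, D_k\tilde{u}(Px), \qquad D_{ij} u(x) = P_{ki} P_{lj}\, D_{kl}\tilde{u}(Px),$$

which is exactly the matrix identity $D^2u(x) = P^T D^2\tilde{u}(Px)\,P$ above.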

Thus, if $u$ solves $A^{ij}D_{ij}u = 0$, then writing the equation as a trace and using the cyclic property of the trace, $0 = \operatorname{tr}(A\,D^2u(x)) = \operatorname{tr}(A\,P^T D^2\tilde{u}(Px)\,P) = \operatorname{tr}(P A P^T\, D^2\tilde{u}(Px)),$ i.e. $\tilde{u}$ satisfies the analogous equation with coefficient matrix $PAP^T$.
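For comparison with the quoted lemma, which uses the row-vector convention $y = x\textbf{P}$ (so $\partial y_k/\partial x_i = P_{ik}$), the same chain-rule computation in index notation reads

$$A^{ij}D_{ij}u(x) = A^{ij} P_{ik} P_{jl}\, D_{kl}\tilde{u}(y) = (\textbf{P}^t\textbf{A}\textbf{P})^{kl} D_{kl}\tilde{u}(y),$$

which is exactly $\tilde{\textbf{A}} = \textbf{P}^t\textbf{A}\textbf{P}$; the $P$ in this answer plays the role of the book's $\textbf{P}^t$.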

For a simple example try $u_{xx} + u_{xy} + u_{yy} = 0$, say defined on $B_1$. By rotating coordinates to the $(1,1)$ and $(1,-1)$ directions we can write the equation without mixed derivatives, and by stretching in one direction and squeezing in the other we obtain harmonic $\tilde{u}$ defined on some rotated ellipse.
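Spelled out for that example: the equation $u_{xx} + u_{xy} + u_{yy} = 0$ has symmetric coefficient matrix

$$A = \begin{pmatrix} 1 & \tfrac12 \\ \tfrac12 & 1 \end{pmatrix},$$

with eigenvalues $\tfrac32$ and $\tfrac12$ in the directions $(1,1)$ and $(1,-1)$. Rotating to $s = \tfrac{x+y}{\sqrt2}$, $t = \tfrac{x-y}{\sqrt2}$ diagonalizes $A$ and gives $\tfrac32\tilde{u}_{ss} + \tfrac12\tilde{u}_{tt} = 0$, and the further substitution $s = \sqrt{3/2}\,\sigma$, $t = \sqrt{1/2}\,\tau$ makes both coefficients equal to $1$, so the resulting function is harmonic.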

The idea of changing coordinates is also very useful in scaling arguments for PDE, where we have some estimate in $B_1$ which we would like to apply at all scales.
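A standard instance of this (just an illustration, not taken from the lemma): if $\Delta u = f$ in $B_r$ and we set $\tilde{u}(y) := u(ry)$ for $y \in B_1$, then

$$\Delta\tilde{u}(y) = r^2\,\Delta u(ry) = r^2 f(ry) \quad \text{in } B_1,$$

so an estimate known on $B_1$ can be applied to $\tilde{u}$ and then rescaled back to $u$ on $B_r$.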

  • Why is $0 = \operatorname{tr}(A\,D^2u(x))$? (2013-02-24)

The definition is $ \tilde{u}(xP)=u(x), \qquad\textrm{or equivalently,}\qquad \tilde{u}(y)=u(yP^{-1}). $ Then apply the chain rule.
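If you would rather sanity-check the lemma symbolically than by hand, here is a small SymPy sketch; the particular $A$, $P$, and test function $u$ below are arbitrary illustrative choices, not from the book. It verifies $A^{ij}D_{ij}u(x) = (P^t A P)^{ij}D_{ij}\tilde{u}(y)$ at $y = xP$:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# Illustrative choices (not from the book): a symmetric A and an invertible P,
# using the lemma's row-vector convention y = x P.
A = sp.Matrix([[2, 1], [1, 3]])
P = sp.Matrix([[1, 2], [0, 1]])

# A concrete test function u(x).
u = x1**3 * x2 + x1 * x2**2

# Definition from this answer: u_tilde(y) = u(y P^{-1}), so that u_tilde(x P) = u(x).
x_of_y = sp.Matrix([[y1, y2]]) * P.inv()
u_tilde = u.subs({x1: x_of_y[0], x2: x_of_y[1]})

# A^{ij} D_ij u written as a trace against the Hessian, and likewise for u_tilde.
lhs = (A.T * sp.hessian(u, (x1, x2))).trace()
A_tilde = P.T * A * P
rhs = (A_tilde.T * sp.hessian(u_tilde, (y1, y2))).trace()

# Evaluate the right-hand side at y = x P and compare with the left-hand side.
y_of_x = sp.Matrix([[x1, x2]]) * P
rhs_at_xP = rhs.subs({y1: y_of_x[0], y2: y_of_x[1]})
print(sp.simplify(lhs - rhs_at_xP))  # prints 0
```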