
Context: 'A Theory of Regularity Structures' by Martin Hairer, example 3.1 (page 10).

Define $$\Gamma_h X^k := (X-h)^k$$ and $$(\Pi_x X^k)(y) := (y-x)^k.$$

Also define $$\Gamma_{xy} := \Gamma_{y-x}.$$

We need to check that $$\Gamma_{xy}\Gamma_{yz} = \Gamma_{xz}$$ and that $$\Pi_x \Gamma_{xy} = \Pi_y.$$

The first identity is easy to check: $$\Gamma_{xy}\Gamma_{yz}(X^k) = \Gamma_{xy}(X-(z-y))^k = (X-(y-x)-(z-y))^k = (X-(z-x))^k = \Gamma_{xz}(X^k).$$

The second one's giving me some trouble: $$(\Pi_x \Gamma_{xy} X^k)(z) = (\Pi_x (X-(y-x))^k)(z).$$

How do I proceed to show that this is equal to $(\Pi_y(X^k))(z)$?

2 Answers


I am not sure what kind of proof you are looking for. If you want to proceed formally, we have to consider the polynomial regularity structure. That is, we take $T$ to be the free vector space generated by the symbols $X^k$. At this level $X^k$ could be replaced by stars and ducks, or just by a general basis $e_k$.

Now we need to understand what the maps $\Gamma$ and $\Pi$ do. The definition you gave works fine if you think of the intuition behind these objects. Formally (and please note: here we use $\Gamma_h(X^k) = (X + h)^k$, so there is a sign change w.r.t. your notation): $$\Gamma_h(X^k) = \sum_{l=0}^k {{k}\choose{l}}h^lX^{k-l} \in T$$ and $$\Pi_{u}(X^k)(\cdot) = (\cdot - u)^k \in L^1_{loc}(\mathbb{R}) \subset \mathcal{D}'(\mathbb{R}).$$ These maps are then extended by linearity to the whole of $T$. Finally we define (here, due to our different definitions, we invert $x$ and $y$): $$\Gamma_{xy} = \Gamma_{x-y}.$$

Now it is quite straightforward to check the property you need. Let us fix two points $u,w$, and, so that notations do not collide, let us avoid using the letter $x$. We want to check that $$\Pi_u \Gamma_{u w} = \Pi_w.$$ By linearity it is clear that we only need to check this equality on each element $X^k$ of our basis for $T$. We get: $$\Pi_u \Gamma_{u w}(X^k) (\cdot) = \Pi_u \left( \sum_{l=0}^k {{k}\choose{l}}(u - w)^lX^{k-l} \right) = \sum_{l=0}^k {{k}\choose{l}}(u - w)^l(\cdot - u)^{k-l} = (\cdot - w)^k = \Pi_w(X^k).$$
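The binomial computation above can be checked symbolically. Here is a small sketch of my own (not from the paper) using sympy, with $\cdot$ replaced by a symbol $y$:

```python
# Symbolic check of Pi_u Gamma_{uw}(X^k) = Pi_w(X^k) for small k,
# using the answer's conventions Gamma_h(X^k) = (X+h)^k, Gamma_{uw} = Gamma_{u-w}.
from sympy import symbols, binomial, expand

u, w, y = symbols("u w y")

for k in range(5):
    # Pi_u applied termwise to Gamma_{uw}(X^k) = sum_l C(k,l)(u-w)^l X^{k-l}:
    lhs = sum(binomial(k, l) * (u - w)**l * (y - u)**(k - l) for l in range(k + 1))
    rhs = (y - w)**k          # (Pi_w X^k)(y)
    assert expand(lhs - rhs) == 0

print("identity holds for k = 0..4")
```

The assertion is exactly the binomial theorem applied to $\big((y-u)+(u-w)\big)^k = (y-w)^k$.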

Now we come to the intuition. A general element $\tau \in T$ is just a set of coefficients: say, the first $m$ derivatives of a function at a point $p$. The elements $X^k$ are just symbols, which stand for abstract monomials.

If you want to pass from this "jet" $\tau$ to a concrete polynomial you have to choose a base point. This is done through the map $\Pi$. Note that if you have the derivatives at a point $p,$ the only reasonable thing to do is apply $\Pi_p.$

Another thing you can do is recenter $\tau$ at a point different from $p.$ This is done by $\Gamma_{q,p}$ in the abstract space $T.$ Then you can build the concrete polynomial by applying $\Pi_q.$ What we have shown is that the distribution (here just a polynomial) you get is always the same.
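The "jet" picture above can be made concrete. The following sketch (my own illustration, with arbitrary choices of polynomial and base points) treats a jet as the list of Taylor coefficients at a base point, and checks that rebuilding the polynomial from the jet gives the same answer regardless of where it is based:

```python
# A "jet" is the list of Taylor coefficients of a polynomial at a base point.
# Pi rebuilds the concrete polynomial from a jet; re-basing the jet at a
# different point and rebuilding gives the same polynomial.
from sympy import symbols, diff, factorial, expand

x = symbols("x")
p, q = 2, 5                      # two arbitrary base points

P = x**3 - 4*x + 1               # the concrete polynomial we start from

def jet(f, a, m=3):
    """Taylor coefficients f^(k)(a)/k! for k = 0..m."""
    return [diff(f, x, k).subs(x, a) / factorial(k) for k in range(m + 1)]

def Pi(coeffs, a):
    """Rebuild the concrete polynomial from a jet based at a."""
    return sum(c * (x - a)**k for k, c in enumerate(coeffs))

# The rebuilt polynomial does not depend on the choice of base point:
assert expand(Pi(jet(P, p), p) - P) == 0
assert expand(Pi(jet(P, q), q) - P) == 0
print("same polynomial from either base point")
```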

  • Thanks for the effort you put into typing this answer, really appreciate it. (2017-01-29)
  • No problem :) I'm starting to study these things as well, so I'm happy to write down some very explicit answers. (2017-01-29)
  • Have you had a look at section 7 from that paper? If so, do you mind if I start a new question and tag you in it? (2017-04-23)
  • Unfortunately I have not read that section. But you can still ask the question :) (2017-04-26)

I looked at the linked text but couldn't see how page 11 related to the question, and as far as I can tell there is no example 3.1.

That said, it is unclear exactly what the objects you have defined are, so I will assume we are talking about polynomials and maps defined by polynomials.

In particular, for a field $k$ and an element $h\in k$ we define $\Gamma_h :k[X] \to k[X]$ by $\Gamma_hP(X)=P(X-h)$ for every polynomial $P\in k[X]$. We will denote by $k[X]^*$ the set of polynomials considered as functions $P(X):k\to k$. We then define $\Pi_x:k[X]\to k[X]^*$ by $\Pi_xP(X)(y)=P(y-x)$, i.e., it is $\Gamma_xP(X)$ but considered as a function on $k$.

Now with these definitions we have that \begin{align*} \Gamma_{xy}\Gamma_{yz}P(X)=&\Gamma_{xy}P(X-(z-y)) \\ =&P(X-(z-y)-(y-x)) \\ =&P(X-(z-x)) \\ =&\Gamma_{xz}P(X). \end{align*}

Now for the second equality, we can easily check that we have $\Gamma_{x}\Gamma_{xy} = \Gamma_{y}$:

\begin{align*} \Gamma_{x}\Gamma_{xy}P(X)=&\Gamma_{x}P(X-(y-x)) \\ =&P(X-(y-x)-x) \\ =&P(X-y) \\ =&\Gamma_{y}P(X). \end{align*}

But then $\Pi_{x}\Gamma_{xy}=\Pi_y$ follows since all we are doing is taking these polynomials now as functions on $k$.
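The whole chain $\Pi_x\Gamma_{xy}=\Pi_y$ can also be verified symbolically. The sketch below is my own (the choice of test polynomial is arbitrary) and uses the question's conventions $\Gamma_h P(X)=P(X-h)$, $\Gamma_{xy}=\Gamma_{y-x}$, and $(\Pi_x P)(z)=P(z-x)$:

```python
# Symbolic check of Pi_x Gamma_{xy} = Pi_y via substitution, using
# Gamma_h P(X) = P(X-h), Gamma_{xy} = Gamma_{y-x}, (Pi_a P)(z) = P(z-a).
from sympy import symbols, expand

X, x, y, z = symbols("X x y z")

P = X**4 - 3*X + 7               # an arbitrary test polynomial

Gamma = lambda f, h: f.subs(X, X - h)          # Gamma_h
Pi = lambda f, a, pt: f.subs(X, pt - a)        # (Pi_a f)(pt)

# (Pi_x Gamma_{xy} P)(z) versus (Pi_y P)(z):
lhs = Pi(Gamma(P, y - x), x, z)
rhs = Pi(P, y, z)
assert expand(lhs - rhs) == 0
print("Pi_x Gamma_{xy} P == Pi_y P")
```

Both sides reduce to $P(z-y)$, which is exactly the substitution argument of the answer.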

  • Thanks for your answer - I corrected the hyperlink, now the correct paper is linked and you can find the example on page 10, sorry about my oversight. (2017-01-29)