
I have the following Schröder functional equation:

$f(h(s)) = c \cdot f(s)$

where $f, h: \mathbb{C} \to \mathbb{C}$; here $f$ is not analytic, $h$ is analytic, and $c \in \mathbb{R}$.

My question is: how can we solve this equation, i.e. find the form of $f$ and its domain of definition? We can take $h(s)=1-s$ or $h(s)=s-1$, and in both cases we can take $c=-1$.

Some motivation is available here:

http://en.wikipedia.org/wiki/Schr%C3%B6der%27s_equation

1 Answer


I solve such questions using the Carleman-matrix concept (which is also mentioned in the Wikipedia article). A Carleman matrix contains in its rows the coefficients of the power-series representation of a function and of its powers. So the Carleman matrix, say $C$, contains the coefficients of $f(x)^0$ (which is the constant $1$), of $f(x)$, of $f(x)^2$, ... in its rows.
In your case, where $f(x)=1-x$, we have
$$ C= \begin{bmatrix} 1&.&.&. \\ 1&-1&.&. \\ 1&-2&1&. \\ 1&-3&3&-1 \\ \end{bmatrix} \qquad \text{for} \qquad \begin{array} {rcl} f(x)^0 &=&1 \\ f(x)&=&1-x \\ f(x)^2&=&1-2x+x^2 \\ f(x)^3&=&1-3x+3x^2-x^3 \\ \end{array} $$
such that with a "Vandermonde"-type column vector $V(x) = [1,x,x^2,x^3,...]$ of appropriate size we have $ C \cdot V(x) = V(f(x)) $.
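A minimal numerical sketch of this construction, assuming Python with numpy is acceptable: build the truncated $4 \times 4$ Carleman matrix of $f(x)=1-x$ and check the relation $C \cdot V(x) = V(f(x))$.

```python
# Minimal sketch (assumes numpy): truncated 4x4 Carleman matrix of f(x) = 1 - x.
from math import comb
import numpy as np

n = 4
# Row k holds the power-series coefficients of f(x)^k = (1 - x)^k.
C = np.array([[comb(k, j) * (-1)**j for j in range(n)]
              for k in range(n)], dtype=float)

V = lambda t: np.array([t**j for j in range(n)])   # "Vandermonde"-type column V(x)
x = 0.3
print(C)                                   # matches the matrix shown above (last entry -1)
print(np.allclose(C @ V(x), V(1 - x)))     # True: C . V(x) = V(f(x))
```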


Then if you find a diagonalization of $C$ such that $M^{-1}\cdot D \cdot M = C$, where $D$ is diagonal and $M$ and $M^{-1}$ are triangular, the second row of $M$ contains the coefficients of the Schröder function and the second row of $M^{-1}$ those of its inverse. In our case
$$ M^{-1} =\begin{bmatrix} 1 & . & . & . \\ 1/2 & 1/2 & . & . \\ 1/6 & 1/2 & 1/3 & . \\ 0 & 1/4 & 1/2 & 1/4 \end{bmatrix}, \qquad D=\operatorname{diag}([1,-1,1,-1]), \qquad M = \begin{bmatrix} 1 & . & . & . \\ -1 & 2 & . & . \\ 1 & -3 & 3 & . \\ -1 & 4 & -6 & 4 \end{bmatrix} $$
is a possible solution. (Here $M^{-1}$ can be recognized as the set of coefficients of the integrals of the Bernoulli polynomials when we extend the dimension of the matrix to infinity.)
In general, the eigenvector matrix $M$ can be understood as the limit of the $n$'th power of $C$, scaled by the reciprocal of the $n$'th power of $f'(0)$, as $n$ goes to infinity, and thus the Schröder function as the limit of the $n$'th iterate of $f(x)$ divided by $f'(0)^n$ as $n \to \infty$; it can further be scaled by an arbitrary constant factor $\gamma \ne 0$.
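As a sanity check, a short sketch (again assuming numpy) that verifies $M^{-1}\cdot D\cdot M = C$ for the matrices quoted above, and that the second row of $M$, read as the polynomial $\sigma(x) = -1 + 2x$, satisfies the Schröder equation $\sigma(1-x) = -\sigma(x)$:

```python
# Sketch (assumes numpy): check the quoted diagonalization and the Schröder property.
import numpy as np

C    = np.array([[1,0,0,0],[1,-1,0,0],[1,-2,1,0],[1,-3,3,-1]], dtype=float)
Minv = np.array([[1,0,0,0],[1/2,1/2,0,0],[1/6,1/2,1/3,0],[0,1/4,1/2,1/4]])
D    = np.diag([1.,-1.,1.,-1.])
M    = np.array([[1,0,0,0],[-1,2,0,0],[1,-3,3,0],[-1,4,-6,4]], dtype=float)

print(np.allclose(Minv @ D @ M, C))        # True
print(np.allclose(Minv @ M, np.eye(4)))    # True: M and Minv are inverses

sigma = lambda x: -1 + 2*x                 # coefficients from the second row of M
x = 0.7 + 0.2j                             # any complex s works
print(np.isclose(sigma(1 - x), -sigma(x))) # True: Schröder's equation with c = -1
```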

Remark: Your example, which requires only a matrix size of $n \times n = 2 \times 2$, is much easier, but then one wouldn't see the general principle (and the relation to the Bernoulli polynomials), so I used the bigger matrix size $n=4$ here.
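For completeness, a sketch of that $2 \times 2$ truncation (same numpy assumption); it already reproduces the Schröder coefficients $[-1, 2]$ and their inverse $[1/2, 1/2]$:

```python
# Sketch (assumes numpy): the 2x2 truncation already suffices for f(x) = 1 - x.
import numpy as np

C    = np.array([[1., 0.], [1., -1.]])     # Carleman matrix of f(x) = 1 - x, order 2
M    = np.array([[1., 0.], [-1., 2.]])     # second row: Schröder function -1 + 2x
Minv = np.array([[1., 0.], [0.5, 0.5]])    # second row: its inverse (1 + x)/2
D    = np.diag([1., -1.])

print(np.allclose(Minv @ D @ M, C))        # True
```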

@rh1: hmm, what do you mean? Unfortunately I mixed up your given $h(s)$ with my usual $f(x)$, so it overlaps with your $f(s)$; sorry, perhaps I should adapt this. In my example, the function $f(s)$ in your sense is $f(s)=-1+2s$ and $f^{-1}(s)=\frac12(1+s)$, so $h(s)=f^{-1}(c \cdot f(s))= \frac12(1+(-1)(-1+2s))=\frac12(2-2s)=1-s$ for any complex $s$. The $m$'th iterates are then expressible simply by $m$'th powers of $c$; and, taking care of the problem of non-unique solutions for fractional or complex powers of $c=-1$, one can even define fractional- and complex-valued iteration heights $m$. (2012-12-31)
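A hedged illustration of this comment's formula in plain Python: the $m$'th iterate of $h(s)=1-s$ written as $h^{[m]}(s)=f^{-1}(c^m \cdot f(s))$ with $f(s)=-1+2s$. The branch of $(-1)^m$ is an assumption here, taken as $\exp(i\pi m)$; other branches give other solutions.

```python
# Sketch (standard library only): m'th iterate of h(s) = 1 - s via the Schröder conjugation
# h^[m](s) = f_inv(c**m * f(s)) with f(s) = -1 + 2s, f_inv(s) = (1 + s)/2, c = -1.
import cmath

def iterate_h(s, m):
    f       = lambda z: -1 + 2*z                # Schröder function from the answer
    f_inv   = lambda z: (1 + z) / 2             # its inverse
    c_pow_m = cmath.exp(1j * cmath.pi * m)      # assumed branch of (-1)**m
    return f_inv(c_pow_m * f(s))

s = 0.3 + 0.1j
print(iterate_h(s, 1))                          # ~ 1 - s
print(iterate_h(s, 2))                          # ~ s  (h is an involution)
print(iterate_h(iterate_h(s, 0.5), 0.5))        # ~ 1 - s: two half-iterates give one full step
```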