
I've been searching for an approximation that will let me solve for $p$ in $y=\frac{\log(p-1)}{\log p}$. Since the expression is so "obviously" $1 - \frac1{(\text{something})}$ for larger $p$, I thought this would be simple, but I have run into infinities with the substitutions I have tried so far. For my purposes I need an approximation with the following properties:

  • It is invertible so I can have a simple function for $p$
  • It is guaranteed smaller than the true expression

Since I am working with integers, I was thinking that an expansion "around infinity" would work well, though I could probably use an expansion around $p=2$ if it converged to $1$ for larger numbers. I've considered either creating a series for the whole expression, or creating a series for the numerator only, using a substitution that would give powers of $\log$, and then dividing through by the denominator.

So is there an obvious effective substitution that I am missing?

  • This is really similar to [my question right here](http://math.stackexchange.com/questions/257455/inverse-function-of-y-frac-lnx1-ln-x). Maybe these answers can help. (2013-01-05)

3 Answers


This does not address your bullet points, so I'm not sure it will help you, but it does accomplish the overall goal of inverting $y=\frac{\log(p-1)}{\log(p)}$ to arbitrary accuracy.

If $y=\frac{\log(p-1)}{\log(p)}$, then $p^y = p-1$. So for fixed $y$, let $f(p)=p^y-p+1$. From here, Newton's method gives a hassle-free way to solve $f(p)=0$ for $p$ approximately.

Since $y<1$, $f$ is concave down and decreasing near its root, so Newton's method will leave you with an approximation that is larger than the true root, which is not what you asked for.
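
For concreteness, here is a minimal sketch of this approach in Python (my illustration, not part of the original answer); the starting guess and tolerances are arbitrary choices.

```python
from math import log

def solve_p(y, p0=10.0, tol=1e-12, max_iter=100):
    """Newton's method for f(p) = p**y - p + 1 = 0 with 0 < y < 1.

    Sketch only: p0, tol and max_iter are arbitrary choices.
    """
    p = p0
    for _ in range(max_iter):
        f = p ** y - p + 1
        fp = y * p ** (y - 1) - 1          # f'(p)
        step = f / fp
        p -= step
        if abs(step) < tol:
            break
    return p

# sanity check: y built from p = 11 should recover a root near 11,
# approached from above after the first step (per the remark above)
y = log(10) / log(11)
print(solve_p(y))
```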

  • If instead you apply regular Newton's method to $f(p)^2$, then as long as your starting value $p_0$ is close to the true root $p_r$ and below it, the iterates will approach $p_r$ from below, as in the sketch after this comment. This works because $f(p)^2 = c(p-p_r)^2 + O((p-p_r)^3)$ near the root. (2012-12-26)
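
A small variant of the same sketch, following the comment above (again just an illustration under the same assumptions): the Newton step for $f(p)^2$ collapses to half the ordinary Newton step, so a starting value below and near the root stays below it.

```python
from math import log

def solve_p_from_below(y, p0, tol=1e-10, max_iter=200):
    """Newton's method applied to g(p) = (p**y - p + 1)**2.

    The Newton step for g equals f/(2 f'), half the ordinary step, so with
    p0 below and near the root the iterates stay below it.
    Sketch only; tolerances are arbitrary.
    """
    p = p0
    for _ in range(max_iter):
        f = p ** y - p + 1
        fp = y * p ** (y - 1) - 1
        if f == 0 or fp == 0:
            break
        step = f / (2 * fp)
        p -= step
        if abs(step) < tol:
            break
    return p

y = log(10) / log(11)
print(solve_p_from_below(y, p0=10.5))   # converges toward 11 from below
```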

Write $\log (p-1)=\log p + \log(1-\frac 1p) = \log p - \frac{1}p + O(1/p^2)$

Then $\frac{\log(p-1)}{\log p} = 1-\frac{1}{p\log p} + O\left(\frac{1}{p^2\log p}\right)$

This is not going to be easily invertible in closed form, however. Note also that the terms hidden in the $O(\dots)$ are all negative, so the truncation $1-\frac{1}{p\log p}$ actually sits slightly above the true expression rather than below it, and on its own it does not quite meet your second condition.

  • Yeah, I'm just not sure how easily the Lambert $W$ function is computed... (2012-12-24)
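
To make the Lambert remark concrete, here is a hedged sketch (my addition, assuming SciPy is available): dropping the $O(\dots)$ term gives $y \approx 1-\frac{1}{p\log p}$, so $p\log p = \frac{1}{1-y}$ and $p = e^{W(1/(1-y))}$, which `scipy.special.lambertw` evaluates directly.

```python
from math import exp, log
from scipy.special import lambertw   # assumed dependency

def p_estimate(y):
    """Invert the truncated expansion y ~ 1 - 1/(p log p) via Lambert W."""
    c = 1.0 / (1.0 - y)              # p log p = c
    return exp(lambertw(c).real)     # p = e^{W(c)}, equivalently c / W(c)

# example: y built from p = 11 gives an estimate a bit below 11 (about 10.6),
# consistent with the truncation sitting slightly above the true expression
y = log(10) / log(11)
print(p_estimate(y))
```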

After unsuccessfully trying some more substitutions in the hope that they would look like $1-\frac1{(\text{something})}$, I decided to take the direct approach. Calling the original quantity $f(x)$, I defined $f(x)=1-\frac1{g(x)}$ and calculated a Taylor series for $g(x)=\frac1{1-f(x)} = \frac{\log x}{\log x - \log (x-1)}$. The derivatives were messy, but fortunately evaluating at $x=2$ removes a lot of terms because the $\log (x-1)$ factors vanish there.

To first order, $g(x) = 1 + \frac{x-2}{\log 2}$, which when inverted gives $x=2-\log 2 + \frac{\log 2}{1-f}$. This meets my criterion of erring on the large side for $x$ (equivalently, on the small side for $f(x)$); it is a fair bit off, but it does provide a bound, as the sketch below illustrates. The second-order approximation is much closer, but unfortunately the series appears to alternate in sign from there on, so the second-order estimate errs to the wrong side of the function. Of course third order would be pretty accurate, but very messy to invert.
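
A quick numerical illustration of that first-order bound (my addition; the sample points are arbitrary): writing the approximation back in terms of $f$, it stays below the true value but is noticeably loose.

```python
from math import log

def f_true(x):
    return log(x - 1) / log(x)

def f_first_order(x):
    # f(x) = 1 - 1/g(x) with g replaced by its tangent at x = 2
    return 1 - 1 / (1 + (x - 2) / log(2))

for x in (3, 5, 11, 101):            # arbitrary sample points
    print(x, f_true(x), f_first_order(x))
# at each point the first-order value comes out below f_true: the bound
# holds but is "a fair bit off", as described above
```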

Are there any techniques for adjusting the second-order function so that it is guaranteed smaller than the original function? (Hmm - I just remembered that I like rational polynomials because a ratio of linear functions gives an extra degree of freedom while still keeping the pieces linear - might have to work through that approach.)

I might have to expand around $x=5$ or so to get a closer match at first order and just accept numerical coefficients instead of exact expressions. Or I can give in and do a lookup table or search with the forward function. Since I'm only interested in the values of the original function at the primes that is feasible - I always knew that would work but wanted a one-step calculation.

Update

In the course of writing the preceding I recalled the handy properties of Padé approximants, so I generated some first-order expressions. I found that I got better results by choosing three points to send the function through than by expanding around a single point using derivatives. Given that success, I decided to go for it and generated a ratio of quadratics with numerical coefficients. Requiring the fit to match the function's values at $2$, $3$, $5$, $7$, and $\infty$, I got $$f(x) \approx \frac{3.025072 + 0.861544\, x - 1.18704\, x^2}{1 + 0.547267\, x - 1.18704\, x^2}$$
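
For reference, here is one way such a fit can be generated (my reconstruction, not necessarily the procedure actually used): share the quadratic coefficient between numerator and denominator to force a limit of $1$ at infinity, and then the conditions at $x=2,3,5,7$ become a small linear system.

```python
import numpy as np
from math import log

xs = [2, 3, 5, 7]
ys = [log(x - 1) / log(x) for x in xs]

# fit f(x) ~ (a0 + a1*x + c*x^2) / (1 + b1*x + c*x^2); the shared c forces
# the value 1 at infinity, and each interpolation condition
# a0 + a1*x + c*x^2 = y*(1 + b1*x + c*x^2) is linear in (a0, a1, b1, c)
A = np.array([[1.0, x, -y * x, (1.0 - y) * x ** 2] for x, y in zip(xs, ys)])
b = np.array(ys)
a0, a1, b1, c = np.linalg.solve(A, b)
print(a0, a1, b1, c)   # should land close to the coefficients quoted above
```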

This function is very close to the true value. I have not attempted to prove that it always stays below the true function, but a numeric check shows this to be the case through $300$, which is enough for my purposes. The sharp reader may note that the fit exceeds the function at some non-prime reals below $7$ (for example at $4$ and $6$), but since I am only interested in the values at the primes this is not a problem.
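
Here is a small check of that numeric claim (my addition): evaluate the quoted fit against the true value at every prime up to $300$ and report the worst case, which should be nonpositive up to the rounding of the printed coefficients.

```python
from math import log

def f_true(p):
    return log(p - 1) / log(p)

def f_fit(p):
    return (3.025072 + 0.861544 * p - 1.18704 * p ** 2) / \
           (1 + 0.547267 * p - 1.18704 * p ** 2)

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, ok in enumerate(sieve) if ok]

# largest value of (fit - true) over the primes up to 300, and where it occurs;
# it should be <= 0 up to rounding of the printed coefficients
print(max((f_fit(p) - f_true(p), p) for p in primes_up_to(300)))
```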