
I think I may have found something new, because it is giving correct results. I'm using the functional roots of $\log x$ to calculate super-logarithms. You can read this post of mine to get the idea: How to find a function $f(x)$ such that $f(f(x))=\log_a x$?

By the functional square root, I mean a function $f(x)$ such that $f(f(x))=\log x$; I have not introduced this as a new term. By the $n^{th}$ functional root of a given function, I mean the function which has to be applied $n$ times to obtain the given function.

$\operatorname{slog}_a x$ is defined as the number of times the logarithm with base $a$ must be applied to $x$ to reach 1. But this definition fails when no integral number of logarithms takes $x$ to 1. The idea is that we don't necessarily have to apply whole logarithms: if applying the $n^{th}$ functional root of $\log_a x$ to $x$ exactly $k$ times gives 1, then $\operatorname{slog}_a x=\frac{k}{n}$. This is in agreement with what I've calculated.

First, start with, say, $3^3=27$, i.e. ${}^{2}3=27$, which gives $3={}^{0.5}27$,

or $\operatorname{slog}_{27}3=0.5$.

This is analogous to $\log_{a^n}a=\frac{1}{n}$.
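For what it's worth, the ordinary-logarithm analogy is easy to check numerically; a minimal sketch in Python:

```python
import math

# Check the ordinary-logarithm identity log_{a^n}(a) = 1/n,
# e.g. log_27(3) = 1/3, analogous to slog_27(3) = 1/2 above.
a, n = 3, 3
print(math.log(a, a**n))  # ≈ 0.3333
```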

Now, we can't apply an integral number of base-27 logarithms to 3 to reach 1, because applying the logarithm even once takes us below 1. But if the functional square root of $\log_{27}x$, evaluated at $x=3$, gives 1, then I'm correct. Here's how I'm calculating the approximate functional square root of $\log_{27}x$ for $x$ close to 3 (since we need to apply that function to 3):

The linear approximation of $\log_{27}x$ around $x=2.9$ is: $$\log_{27}x\approx 0.0196+\frac{x}{9.557}.\tag{1}$$

Now, if we assume the functional square root has the form $f(x)=ax+b$, then $f(f(x))=a^2x+ab+b$. Comparing like powers of $x$ between this and equation (1), we get $a=0.323$ and $b=0.0148$. So the functional square root is approximately: $$f(x)=0.323x+0.0148.$$ This function, evaluated at $x=3$, should come very close to 1; its value there is $0.9838$, which is very close to 1. So I think I'm correct. I've assumed $f(x)=ax+b$; if we assume an $f(x)$ of higher degree, we'll have more terms to compare, and I think the functional square root will be more accurate. The exact functional square root would get us exactly to 1.
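The coefficient-matching step above can be sketched in a few lines of Python. This is my own recomputation of the base-27 example; tiny differences from the quoted digits come from rounding:

```python
import math

# Linearize log_27(x) around x0 = 2.9: log_27(x) ≈ c0 + c1*x  (equation (1))
base, x0 = 27.0, 2.9
c1 = 1.0 / (x0 * math.log(base))      # slope: d/dx log_base(x) at x0
c0 = math.log(x0, base) - c1 * x0     # intercept

# Ansatz f(x) = a*x + b, so f(f(x)) = a^2*x + (a + 1)*b.
# Matching coefficients with c0 + c1*x: a^2 = c1 and (a + 1)*b = c0.
a = math.sqrt(c1)
b = c0 / (a + 1.0)

print(a, b)          # ≈ 0.323 and 0.0148
print(a * 3 + b)     # f(3) ≈ 0.985, close to 1 as claimed
```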

THIS is my question: is there anything wrong with this extension of super-logarithms? The extensions on Wikipedia don't sound as natural as mine.

UPDATE: I tried to do the same thing with $\operatorname{slog}_{16}2$, which should equal $\frac{1}{3}$ because $^{3}2=16$. This means the functional cube root of $\log_{16}x$, applied once to 2, should give 1. Assuming the functional cube root is $f(x)=ax+b$, we get $f(f(f(x)))=a^3x+a^2b+ab+b$. After comparing this with the first two terms of the Taylor series of $\log_{16}x$ around $x=2$, the approximate functional cube root of $\log_{16}x$ is $0.5649x-0.0587$, which evaluates to $1.07$ at $x=2$. That is very close to 1, so the super-logarithm equals $\frac{1}{3}$ as expected, since $^{3}2=16$.
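The same construction works for any $n$. Here is a small helper that generalizes the calculation (`linear_functional_root` is a name I made up for this sketch): it fits the ansatz $f(x)=ax+b$ so that its $n$-fold composition matches the linearization of $\log_{\text{base}}x$ at a chosen point.

```python
import math

def linear_functional_root(base, n, x0):
    """Fit f(x) = a*x + b so that its n-fold composition matches the
    linearization of log_base(x) around x0 (hypothetical helper)."""
    c1 = 1.0 / (x0 * math.log(base))      # slope of log_base at x0
    c0 = math.log(x0, base) - c1 * x0     # intercept
    # Composing f with itself n times gives a^n*x + b*(a^(n-1) + ... + a + 1),
    # so a^n = c1 and b * sum(a^k, k = 0..n-1) = c0.
    a = c1 ** (1.0 / n)
    b = c0 / sum(a**k for k in range(n))
    return a, b

# Functional cube root of log_16, fitted at x0 = 2
a, b = linear_functional_root(16, 3, 2.0)
print(a, b)        # ≈ 0.565 and -0.0587
print(a * 2 + b)   # ≈ 1.07, close to 1
```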

If we try the same thing for $\operatorname{slog}_{65536}2$, which should equal $\frac{1}{4}$ because $^{4}2=65536$, then applying the functional 4th root of $\log_{65536}x$ to 2 gives $0.91$, which isn't very close to 1. But I assumed the functional 4th root to be $ax+b$, so I think the 4th root just isn't accurate enough. If we instead assume it to be $ax^2+bx+c$, or of higher degree, then after solving for the constants, evaluating that function at $x=2$ should get us closer to 1.

UPDATE: If we build the functional 4th root from the Taylor series of $\log_{65536}x$ around $x=1.5$, then even with the ansatz $ax+b$ the 4th root comes out to $0.495x-0.0172$, which evaluates to $0.97$ at $x=2$, very close to 1. Last time, I had used the approximation at $x=2$, which gave me $0.91$.
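Running the same linear-ansatz fit for the base-65536 case shows how sensitive the result is to the expansion point. This is my own recomputation (using the made-up helper from the previous sketch), and the exact digits differ slightly from those quoted above, but the direction agrees: moving the expansion point from $x_0=2$ to $x_0=1.5$ brings $f(2)$ noticeably closer to 1.

```python
import math

def linear_functional_root(base, n, x0):
    # a^n matches the slope of log_base at x0; b*(a^(n-1)+...+1) the intercept
    c1 = 1.0 / (x0 * math.log(base))
    c0 = math.log(x0, base) - c1 * x0
    a = c1 ** (1.0 / n)
    b = c0 / sum(a**k for k in range(n))
    return a, b

# Value of the fitted functional 4th root of log_65536 at x = 2,
# for two choices of expansion point
for x0 in (2.0, 1.5):
    a, b = linear_functional_root(65536, 4, x0)
    print(x0, a * 2 + b)
```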

  • What's the motivation for this? (2017-02-10)
  • @selfawareuser: Logarithms can be defined in a similar way, as the number of times $x$ is divided by $a$ to get to 1. I tried to extend this definition when $x$ is not of the form $a^n$. That gave me the motivation to extend the definition of super-logarithms in a similar way. Read this post of mine: http://math.stackexchange.com/questions/2134589/how-to-find-a-function-fx-such-that-ffx-log-ax?noredirect=1#comment4391136_2134589 (2017-02-10)
  • @Yves Daoust: I've not given any new definitions. I've given a way to extend the definition. Is there anything wrong with this extension? (2017-02-10)
  • @Yves Daoust: I've given a way to calculate functional roots using polynomials, which is in agreement with my calculations and isn't nearly as stressful as calculating Kneser's functions. (2017-02-10)
  • @Yves Daoust: Try evaluating the approximate functional cube root of $\log_{3^{3^3}}x$ using my polynomial method. I bet applying that function once to 3 will get you to 1. (2017-02-10)
  • @Yves Daoust: I've added it in the post. I did it with $2^{2^2}$ because $3^{3^3}$ is a very large number to work with. (2017-02-10)
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/53378/discussion-between-dove-and-yves-daoust). (2017-02-10)
  • @YvesDaoust: On Wikipedia, they haven't even provided a method to evaluate Abel's function; all that's written is "Abel's function could be determined". I've extended the "apply logarithms repeatedly" definition of $\operatorname{slog} x$ in the same way the "divide $x$ repeatedly by $a$" definition of $\log_a x$ can be extended. Please read the post linked in my question to get the idea. (2017-02-10)
  • @YvesDaoust: Why aren't you replying? I gave you the formula. (2017-02-10)
  • @Dove: All the questions you are asking are addressed in my paper http://tetration.org/IF.pdf . It has undergone an initial peer review by three experts in tetration and is now in peer review with the Annals of Mathematics. I have developed a complete theory of continuously iterated functions and proven their existence and uniqueness. If you research the history of the tetration article on Wikipedia, you will discover that I have been an active editor of the article for over ten years. (2017-02-10)
  • @Daniel Geisler: I don't know advanced mathematics, so I didn't understand your paper much. But I've given a way here to approximate the $n^{th}$ functional root by polynomials. At least that must be my own discovery. (2017-02-10)
  • @DanielGeisler: I don't know how Abel's function works or what it is. It must be great, but my polynomial method is an easier way to get a functional root as a polynomial to as much accuracy as one wants. (2017-02-10)
  • @Dove My paper provides a completely self-contained treatment of continuously iterated functions. The mathematics is accessible to undergraduates; if the mathematics of the paper is beyond you, then you will have serious problems contributing to the understanding of continuously iterated functions. My primary concern is that I developed a general theory at the turn of the millennium, yet people are only interested in having their own more specific work validated. (2017-02-10)
  • @Daniel Geisler: Please don't get me wrong. I'm just in high school, so I understood very little of it. I'm sure your work must be great, and I'll try to understand it. I'm just asking if I've done anything new about approximating functional roots. (2017-02-10)
  • @Dove Algorithms are irrelevant if they are not based on sound mathematics. Schroeder wrote the first paper on iterated functions in the early 1870s. It was often remarked that the paper provided a very complete treatment of the subject, and that researchers who published on the subject without understanding Schroeder's paper invariably just reinvented pieces of it. For example, a correct treatment of fixed points allows one to quickly disprove many ideas about iterated functions. Does your method guarantee that fixed points remain fixed by your polynomial? (2017-02-10)

0 Answers