
Path compression is used to optimize the Find operation (the operation that returns the representative of the set a node belongs to) in a disjoint-set structure. Analysis of the operation approximately yields amortized complexity $\mathcal{O}\left(1\right)$.
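
A minimal sketch of the structure I have in mind (illustrative Python; the class and names are my own, not taken from any reference), with Find doing path compression and Union using union by rank:

```python
class DisjointSet:
    """Union-find with path compression in find() and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))  # every node starts as its own representative
        self.rank = [0] * n           # upper bound on the height of each tree

    def find(self, x):
        # First pass: locate the representative (root) of x's set.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        # Second pass (path compression): point every node on the path at the
        # root, so later Finds on these nodes take a single step.
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        # Union by rank: attach the shallower tree below the deeper one.
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```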

Wikipedia and a variety of other sources provide brief explanations; most identify the relation between path compression and the inverse Ackermann function, but they do not provide a compendious description of the reasoning behind the relation.

Why is the approximated amortized complexity of path compression in Find operations $\mathcal{O}\left(1\right)$?

  • @RahulNarain The inverse Ackermann function is an *extremely* slow-growing function. For $n$ less than $2^{65536}$ (in other words, all practical situations), $\alpha(n)$ is at most $6$. Therefore, for all realistic inputs, $\mathcal{O}(\alpha(n))$ is *approximately* $\mathcal{O}(1)$. – 2012-07-20

1 Answer


I don't think that "...approximately yields amortized complexity $O(1)$" is a good statement. There is a very clear notion of what approximation means when speaking about algorithms. It is better to phrase it as "... the amortized running time per operation is effectively a small constant", as done on Wikipedia. This means that for all practical values of $n$ the inverse Ackermann function $\alpha(n)$ can be bounded by a constant, since it grows very slowly.
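
To make the "bounded by a constant" point concrete, here is a rough sketch (my own illustrative Python; one common variant of the definition of $\alpha$, other variants differ by small additive constants) of $\alpha(n)$ as the smallest $k$ with $A(k,k) \ge n$:

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)

@lru_cache(maxsize=None)
def ackermann(m, n):
    # Textbook two-argument Ackermann function; it explodes extremely fast.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def inverse_ackermann(n):
    # alpha(n) = smallest k with A(k, k) >= n.  Since A(4, 4) is a tower of
    # exponentials far beyond anything representable, the loop is capped at 3:
    # for every n you could ever store, the answer is at most 4.
    k = 0
    while k <= 3 and ackermann(k, k) < n:
        k += 1
    return k

print(inverse_ackermann(10 ** 100))  # 4 -- already saturated
```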

If you want to get an idea of why $\alpha(n)$ shows up, I recommend the very readable article by Seidel and Sharir. The paper first shows an $O(\log n)$ bound and then improves it to $O(\log^* n)$. The same idea can then be applied again to the $O(\log^* n)$ bound, giving an even better bound, and the repeated improvements culminate in the $\alpha(n)$ bound.
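
To get a feel for the intermediate $O(\log^* n)$ bound: the iterated logarithm is already tiny for every realistic input. A quick sketch (again my own illustrative Python, using bit_length() as an integer $\lfloor\log_2\rfloor$ so that huge integers pose no problem):

```python
def log_star(n):
    # Iterated logarithm: how many times floor(log2(.)) must be applied
    # before the value drops to 1.
    count = 0
    while n > 1:
        n = n.bit_length() - 1   # floor(log2(n)) for positive integers
        count += 1
    return count

print(log_star(2 ** 16))     # 4
print(log_star(2 ** 65536))  # 5 -- still single-digit for an astronomically large n
```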