The post below answers your question, in perhaps too much detail.
The equation $x^3-2x-5=0$ has a solution in the interval $[2,3]$. Let $f(x)=x^3-2x-5$, as you did. Then $f(2)=-1<0$, and $f(3)=16>0$. So by the Intermediate Value Theorem, there is a solution of $f(x)=0$ somewhere between $x=2$ and $x=3$. More informally, the curve $y=f(x)$ is below the $x$-axis when $x=2$ and above the $x$-axis when $x=3$, so since $f(x)$ is continuous, the curve must cross the $x$-axis somewhere between $2$ and $3$.
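In case it helps, here is the sign check in a few lines of Python (my choice of language, not from the question):

```python
# Verify the sign change that the Intermediate Value Theorem needs.
def f(x):
    return x**3 - 2*x - 5

# f is negative at x = 2 and positive at x = 3, so a root lies between them.
print(f(2))  # -1
print(f(3))  # 16
```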
Since $f(2)$ is much closer to $0$ than $f(3)$, it is reasonable to expect that there is a root much closer to $2$ than to $3$. For fun, let's calculate $f(2.1)$. It is about $0.061$, already positive, so there is a root between $2$ and $2.1$.
The equation has only one root. There are various ways to show this. For example, note that $f'(x)=3x^2-2$. So our function is increasing until $x=-\sqrt{2/3}$, then decreasing until $x=\sqrt{2/3}$, then increasing. Since $f(-\sqrt{2/3})<0$, it follows that there is at most one root, and it is $>\sqrt{2/3}$.
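You can confirm numerically that the local maximum at $x=-\sqrt{2/3}$ is below the axis (a quick check, not a proof):

```python
import math

def f(x):
    return x**3 - 2*x - 5

# The local maximum of f occurs at x = -sqrt(2/3); since f is negative
# there, f stays negative on the whole increasing-then-decreasing part,
# so the only root lies to the right of sqrt(2/3).
x_max = -math.sqrt(2/3)
print(f(x_max))  # about -3.91, comfortably negative
```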
But we can get all this, at least informally, by asking a program to graph the curve.
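Short of an actual plot, a crude numeric "graph" already tells the story: sample $f$ on a grid and look for sign changes (a sketch; the grid and interval are my choices):

```python
# Sample f on [-3, 3] and report every interval where the sign flips.
def f(x):
    return x**3 - 2*x - 5

xs = [i / 10 for i in range(-30, 31)]  # steps of 0.1
sign_changes = [(a, b) for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0]
print(sign_changes)  # exactly one interval, between 2.0 and 2.1
```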
Now to the question you are really asking, about fixed point iteration. There are many ways to use fixed point iteration to solve the equation $f(x)=0$. For instance, the famous Newton Method is actually a form of fixed point iteration.
Your choice of the equivalent equation $g(x)=x$, where $g(x)=\sqrt[3]{2x+5}$, is good. We will soon see why.
Note that $g'(x)=\frac{2}{3}(2x+5)^{-2/3}$. Informally, we can be sure that the fixed point iteration $x_{k+1}=g(x_k)$ converges to a root if there is a non-negative constant $c<1$ such that $|g'(x_k)|\le c$ for all $k$. For then the distance of $x_{k+1}$ from the root is less than $c$ times the distance of $x_k$ from the root.
So if our initial estimate $x_0$ has error $e$, then the error in the estimate $x_k$ is less than $ec^k$.
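The bound $ec^k$ shrinks very fast. Plugging in the numbers used later in this answer ($e=1$, $c=0.155$, both assumptions made explicit below):

```python
# Error bound e * c**k after k steps of the iteration.
# e = 1 and c = 0.155 are the values used later in this answer.
e, c = 1.0, 0.155
bounds = [e * c**k for k in range(8)]
print(bounds)  # drops below 1e-5 at k = 7
```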
Let's not work too hard in finding an initial estimate. The choice of $x_0=2$ is fine, and the choice of $x_0=2.1$ is much better. Let's make the not so good choice $x_0=2$. And let's be sloppy about the error estimate. The root is between $2$ and $3$, so the initial error is less than $1$.
In the interval $[2,3]$, $g'(x)$ reaches a maximum at $x=2$. The value of $g'(2)$ is $(2/3)(9^{-2/3})$, which is a little less than $0.155$. We will be sucked towards the root, so $g'(x_k)$ will be positive and less than $0.155$ throughout the calculation.
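A one-liner confirms both claims, that $g'$ is decreasing on $[2,3]$ and that its value at $2$ is just under $0.155$:

```python
# g'(x) = (2/3)(2x+5)^(-2/3), which is decreasing for x >= 2,
# so its maximum on [2, 3] is at the left endpoint x = 2.
gp = lambda x: (2/3) * (2*x + 5) ** (-2/3)
print(gp(2))  # about 0.1541, a little less than 0.155
```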
Now calculate. I get $x_0=2$, $x_1=g(x_0)=2.0800838$, $x_2=2.0923507$, $x_3=2.094217$, $x_4=2.0945007$, $x_5=2.0945438$, $x_6=2.0945503$, $x_7=2.0945513$.
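The iteration itself is a few lines of code (a sketch; the cutoff of seven steps matches the table above):

```python
# Fixed point iteration x_{k+1} = g(x_k) with g(x) = (2x + 5)^(1/3),
# starting from the not-so-good estimate x_0 = 2.
def g(x):
    return (2*x + 5) ** (1/3)

x = 2.0
for k in range(1, 8):
    x = g(x)
    print(k, round(x, 7))
```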
The error of $x_k$ is less (by quite a bit) than $(1)(0.155)^k$. That estimate shows that $k=7$ does the job. (We can do this by experimentation, or by solving $(0.155)^k<10^{-5}$ using logarithms.) But the error in $x_5$ is already less than $10^{-5}$, and if we had started with $x_0=2.10$, we would have gotten close enough with quite a bit less work.
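The logarithm route, for the record: taking logs of $(0.155)^k<10^{-5}$ gives $k>\log(10^{-5})/\log(0.155)\approx 6.18$, so $k=7$ is the first integer that works.

```python
import math

# Solve (0.155)**k < 1e-5 for k by taking logarithms.
k_min = math.log(1e-5) / math.log(0.155)  # smallest real k meeting the bound
print(k_min)             # about 6.18
print(math.ceil(k_min))  # 7
```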
Remark: Note that $g'(x)<0.155$ for all $x\ge 2$. So if we had started with a truly awful first estimate, like $x_0=100$, the procedure would still converge.
There is somewhat less leeway with bad first estimates that are smaller than $2$. We find that $g'(x)=1$ at about $x=-2.2278$, so $g'(x)$ is positive and less than $1$ for $x>-2.2278$. So even if we make a bad initial estimate like $x_0=-1$, the fixed point iteration converges to the root.
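Both of those bad starting points can be tried directly (a sketch; thirty iterations is an arbitrary but generous cutoff):

```python
# Even awful starting points such as x0 = 100 or x0 = -1 converge,
# since 0 < g'(x) < 1 along the whole path to the root.
def g(x):
    return (2*x + 5) ** (1/3)

results = []
for x0 in (100.0, -1.0):
    x = x0
    for _ in range(30):
        x = g(x)
    results.append(x)
print(results)  # both entries land near 2.0945515
```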
But remember, much work is saved if our initial estimate is close, so it is always worthwhile to invest some effort to choose $x_0$ well!