The statement is:
1. Let $x_0, y_0 \in \mathbb R^n$ and let $J_0, I_0$ be neighborhoods of $x_0$ and $y_0$, respectively. Let $\mathbf f : J_0 \to I_0$ be bijective and differentiable at $x_0$ with $y_0 = \mathbf f(x_0)$, and let $d\mathbf f_{x_0} : \mathbb R^n \to \mathbb R^n$ denote the unique linear map such that, for $\mathbf h \in \mathbb R^n$ small enough, we may write $$\mathbf f(x_0 + \mathbf h) = \mathbf f(x_0) + d\mathbf f_{x_0}(\mathbf h) + |\mathbf h|\,\mathbf q(\mathbf h) \tag 1$$ for some $\mathbf q : \mathbb R^n \to \mathbb R^n$ with $\lim_{\mathbf h\to \mathbf 0}\mathbf q(\mathbf h) = \mathbf 0$. Then, if $d\mathbf f_{x_0}$ is an isomorphism, $\mathbf f^{-1}$ is differentiable at $y_0$, and we have $$d(\mathbf f^{-1})_{y_0} = (d\mathbf f_{x_0})^{-1}. \tag 2$$
The converse, that is,
2. If $\mathbf f^{-1}$ is differentiable at $y_0$, then $(2)$ holds and $d\mathbf f_{x_0}$ is an isomorphism
is true and can be proved using the chain rule, i.e. the general result on the differentiability of composite functions.
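For completeness, that chain-rule argument is short: differentiating the identities $\mathbf f^{-1} \circ \mathbf f = \operatorname{id}_{J_0}$ at $x_0$ and $\mathbf f \circ \mathbf f^{-1} = \operatorname{id}_{I_0}$ at $y_0$ (both composites being differentiable by hypothesis) gives $$d(\mathbf f^{-1})_{y_0} \circ d\mathbf f_{x_0} = \operatorname{id}_{\mathbb R^n}, \qquad d\mathbf f_{x_0} \circ d(\mathbf f^{-1})_{y_0} = \operatorname{id}_{\mathbb R^n},$$ so $d\mathbf f_{x_0}$ is an isomorphism with inverse $d(\mathbf f^{-1})_{y_0}$, which is exactly $(2)$.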
My real analysis textbook does provide an ingenious counterexample showing that statement 1. is false, and hence that further hypotheses (specifically, continuity of $\mathbf f^{-1}$ at $y_0$) are needed to make it true. However, the counterexample is very elaborate, and even the author of the textbook admits that it took him a while to find it.
Are there more direct examples that show that 1. is false?