Thanks joriki and Patrick. Frankly, I prefer joriki's answer because it exploits the specific constraint, while Patrick's answer applies to more general problems. For the record, I happen to have another solution, which I think is very similar to joriki's. I'd appreciate any comments on the method.
The idea is to use a rotation to satisfy the constraint $\|x_{k+1}\|=\|x_k\|$: $x_{k+1}=R_kx_k$, where $R_k$ is a rotation matrix ($R_k^TR_k=I, \det R_k=1$). Note that a rotation only changes the direction of a vector, not its length. To determine $R_k$, I use the axis-angle representation of a rotation matrix (aka Rodrigues' formula): $R_k=I+[\omega_k]_{\times}\sin\theta_k+[\omega_k]_{\times}^2(1-\cos\theta_k)$, where $\omega_k$ is the rotation axis (a unit vector, as Rodrigues' formula assumes), $\theta_k$ is the rotation angle, and $[\omega]_{\times}$ is the skew-symmetric matrix associated with $\omega$. The rotation axis can be chosen as $\omega_k=x_k\times n_k$ (normalized), and the step size $\theta_k$ can be determined by a one-dimensional search.
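A minimal numerical sketch of this construction (the helper names `skew` and `rodrigues` are mine, and I take $n_k$ to be the normalized tangent projection of $-g$, as in joriki's answer; the cost gradient `g` here is just an illustrative vector):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]_x, so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0,  -w[2],  w[1]],
                     [w[2],  0.0,  -w[0]],
                     [-w[1], w[0],  0.0]])

def rodrigues(axis, theta):
    """R = I + [w]_x sin(theta) + [w]_x^2 (1 - cos(theta)),
    with the axis normalized to unit length (Rodrigues' formula assumes this)."""
    w = axis / np.linalg.norm(axis)
    K = skew(w)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Example: rotate x_k toward a tangent direction n_k without changing ||x_k||.
x = np.array([3.0, 0.0, 4.0])             # ||x|| = 5
g = np.array([1.0, -2.0, 0.5])            # an illustrative gradient of p at x
t = -(g - (g @ x) / (x @ x) * x)          # -g projected onto the tangent plane
n = t / np.linalg.norm(t)                 # unit tangent direction, n.T @ x = 0
R = rodrigues(np.cross(x, n), theta=0.1)  # axis omega_k = x_k x n_k
x_next = R @ x

print(np.linalg.norm(x_next))             # still 5 (up to floating point)
```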
EDIT: $$R_kx_k=x_k+\sin\theta_k (\omega_k\times x_k)+(1-\cos\theta_k)(\omega_k\omega_k^T-I)x_k$$ $$=x_k+\sin\theta_k (\omega_k\times x_k)-(1-\cos\theta_k)x_k =\sin\theta_k (\omega_k\times x_k)+\cos\theta_k x_k$$ In the above equation, we use two facts: $[\omega]_{\times}^2=\omega\omega^T-\|\omega\|^2I$ (with $\|\omega_k\|=1$, so $[\omega_k]_{\times}^2=\omega_k\omega_k^T-I$) and $\omega_k^Tx_k=0$ (note $\omega_k\times x_k\ne 0$). Choosing $\omega_k$ such that $\omega_k\times x_k=n_k$, i.e., $\omega_k=x_k\times n_k$ (up to normalization), the above equation is exactly the same as joriki's answer.
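The simplification can be checked numerically (a sketch under the assumptions above: $\omega_k$ unit and orthogonal to $x_k$; the vectors chosen here are illustrative):

```python
import numpy as np

def skew(w):
    # Skew-symmetric matrix [w]_x such that skew(w) @ v == np.cross(w, v)
    return np.array([[0.0,  -w[2],  w[1]],
                     [w[2],  0.0,  -w[0]],
                     [-w[1], w[0],  0.0]])

x = np.array([3.0, 0.0, 4.0])
n = np.array([0.0, 1.0, 0.0])            # unit tangent direction: n.T @ x = 0
axis = np.cross(x, n)
omega = axis / np.linalg.norm(axis)      # unit axis, omega.T @ x = 0
theta = 0.3

K = skew(omega)
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

full = R @ x                                                     # matrix form
closed = np.sin(theta) * np.cross(omega, x) + np.cos(theta) * x  # simplified form
print(np.allclose(full, closed))                                 # True
```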

EDIT again: a proof that this is gradient descent. For an update $x_{k+1}=x_k+v$, it is a descent step as long as $v^T(-g)>0$; then $p(x)$ is guaranteed to decrease for a sufficiently small step. Can we apply this to our problem? In our case, $x_{k+1}=x_k+\underset{v_k}{\underbrace{\sin\theta_k (\omega_k\times x_k)-(1-\cos\theta_k)x_k}},$ so we need to check whether $-g^Tv_k>0$. It is very interesting that, unlike the common case, the update direction depends on the step size $\theta_k$. Recall that gradient descent is only valid (mathematically) for a sufficiently small step size. When $\theta_k$ is very small, $\sin\theta_k\approx\theta_k$ and $1-\cos\theta_k\approx0$ (second order), so we have $v_k\approx\theta_k n_k$. Obviously $-g^T(\theta_k n_k)>0$, so it is gradient descent.
Summary: in order to satisfy the constraint, we use $v_k=\sin\theta_k (\omega_k\times x_k)-(1-\cos\theta_k)x_k$ (complicated); but in order to prove the descent property, we use $v_k\approx\theta_k n_k$ (simple and essential).
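Putting the pieces together, here is a sketch of the full iteration under the same assumptions (an illustrative linear cost $p(x)=a^Tx$, whose gradient is $a$ everywhere, and a fixed small step $\theta$ instead of a one-dimensional search; with a fixed step the last few values may oscillate around the minimizer rather than decrease monotonically):

```python
import numpy as np

def skew(w):
    # Skew-symmetric matrix [w]_x such that skew(w) @ v == np.cross(w, v)
    return np.array([[0.0,  -w[2],  w[1]],
                     [w[2],  0.0,  -w[0]],
                     [-w[1], w[0],  0.0]])

a = np.array([1.0, -2.0, 0.5])
p = lambda x: a @ x                       # illustrative cost; its gradient is a
x = np.array([3.0, 0.0, 4.0])             # ||x|| = 5, preserved by every rotation
theta = 0.05                              # fixed small step instead of a line search

history = [p(x)]
for _ in range(100):
    g = a
    t = -(g - (g @ x) / (x @ x) * x)      # tangent part of -g
    if np.linalg.norm(t) < 1e-12:         # stationary point on the sphere
        break
    n = t / np.linalg.norm(t)
    w = np.cross(x, n)
    w /= np.linalg.norm(w)                # unit rotation axis omega_k
    K = skew(w)
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    x = R @ x                             # rotate: ||x|| never drifts
    history.append(p(x))

print(np.isclose(np.linalg.norm(x), 5.0))  # True: constraint maintained exactly
print(history[-1] < history[0])            # True: the cost has decreased
```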