In high-dimensional task scheduling, it is common to use a Hilbert-curve ordering. Given a set of points $\{p_i\}_{i=1}^N \subset \mathbb{R}^d$, the goal is to order them linearly so that points that are nearby in space are also nearby in the order. The heuristic behind Hilbert curve scheduling is that ordering by distance along a Hilbert curve tends to preserve locality.
In practice, you divide space into sufficiently small dyadic boxes, round each point to the center of its box, and then order the points by their distance along the finite-order Hilbert curve approximation that passes through each box exactly once, as demonstrated on the blue points in the following image.
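For concreteness, here is a minimal 2D sketch of the procedure I mean (the order `p` and the sample points are made up for illustration; `xy2d` is the standard bit-manipulation conversion from grid coordinates to a Hilbert index):

```python
def xy2d(n: int, x: int, y: int) -> int:
    """Index of grid cell (x, y) along the Hilbert curve on an
    n-by-n grid, where n is a power of two."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the remaining bits into the sub-square's frame.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d


def hilbert_sort(points, p=10):
    """Order points in [0, 1)^2 by the index of their dyadic box
    along the order-p Hilbert curve approximation (2^p boxes per side)."""
    n = 1 << p

    def box_index(pt):
        ix = min(int(pt[0] * n), n - 1)  # round to the containing box,
        iy = min(int(pt[1] * n), n - 1)  # clamping the right/top edges
        return xy2d(n, ix, iy)

    return sorted(points, key=box_index)


pts = [(0.82, 0.17), (0.11, 0.93), (0.12, 0.91), (0.80, 0.20)]
print(hilbert_sort(pts))  # here, each spatially close pair ends up adjacent in the order
```

Since disjoint dyadic boxes correspond to disjoint segments of the curve, points falling in the same small box always end up contiguous in the output.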
On the other hand, since $[0,1]$ is not homeomorphic to $[0,1]^2$, the fully infinite Hilbert curve $H\colon [0,1] \to [0,1]^2$ cannot be 1-1: it is continuous and onto, and a continuous bijection from a compact space to a Hausdorff space would be a homeomorphism. Thus it seems to me that Hilbert curve ordering is not a well-defined concept: a point with more than one preimage has no clearly defined "Hilbert distance" along the curve.
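To make the multiple-preimage issue concrete, here is a small numerical check reusing `xy2d` from above. The point $(1/2, 1/4)$ (an illustrative choice) lies on the boundary between the lower-left and lower-right quadrants at every order, and the normalized finite-order index converges to a different limit depending on the side from which you approach:

```python
# Approach (1/2, 1/4) from the left and from the right, and watch
# the normalized finite-order Hilbert index.
for p in range(4, 21, 4):
    n = 1 << p
    eps = 1.0 / (2 * n)              # small enough to stay in the boxes adjacent to x = 1/2
    for x in (0.5 - eps, 0.5 + eps):
        ix = min(int(x * n), n - 1)
        iy = int(0.25 * n)
        t = xy2d(n, ix, iy) / (n * n)  # normalized "Hilbert distance" in [0, 1)
        print(f"p={p:2d}  x={x:.10f}  t={t:.8f}")
# The two columns of t converge to two *different* limits, i.e. the
# limiting curve passes through (1/2, 1/4) at two distinct parameter values.
```

So on the dyadic boundaries the "Hilbert distance" depends on the direction of approach, which is exactly the ambiguity I'm worried about.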
Is this right, or is there something I'm missing? I think the above reasoning is pretty bulletproof, but I haven't seen the issue even mentioned in the computational literature.