In this MSR Technical Report, I came across a definition of the manifold learning problem:
> Given a set of $k$ unlabelled observations $\{v_1,v_2,\dots,v_i,\dots,v_k\}$ with $v_i \in \mathbb{R}^d$ we wish to find a smooth mapping $f:\mathbb{R}^d \rightarrow \in \mathbb{R}^{d'}$, $f(v_i) = v_i'$ such that $d' << d$ and that preserves the observations' relative geodesic distances.
I understand the notation $f:\mathbb{R}^d \rightarrow \mathbb{R}^{d'}$; what's the significance of the $\in$ after the arrow?
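For concreteness, here is a minimal sketch of the kind of mapping the definition describes, using scikit-learn's Isomap (my own illustration, not from the report; the data and parameter values are arbitrary):

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
# k = 100 unlabelled observations v_i in R^d with d = 5
V = rng.normal(size=(100, 5))

# f : R^5 -> R^2, i.e. d' = 2 << d = 5; Isomap attempts to
# preserve the observations' relative geodesic distances
f = Isomap(n_neighbors=10, n_components=2)
V_prime = f.fit_transform(V)  # V_prime[i] is v_i'
print(V_prime.shape)
```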