I think you are overthinking this (unless you need some sort of smoothly decaying distribution that puts more probability mass on the smaller elements). If you just need to distinguish the first 50 elements from the last 50, with no within-set distinctions, just hand-specify a discrete distribution without caring about the functional form too much.
If you want there to be a probability $p$ that the values come from $\{1,2,\cdots,50\}$ and probability $(1-p)$ that values come from $\{51,\cdots,100\}$, then just make a discrete distribution that is $p/50$ for any value in $\{1,2,\cdots,50\}$ and $(1-p)/50$ for any value in $\{51,\cdots,100\}$. Then $p$ tunes how lopsided the distribution is towards the first set, without making any individual element of that set more likely than any other. By putting $p$ close to 1, you can make the occurrence of elements from $\{51,\cdots,100\}$ as infrequent as you'd like for your purposes.
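As a concrete sketch of this construction in NumPy (the value $p = 0.9$ here is just an illustrative choice, not something from the question):

```python
import numpy as np

p = 0.9  # probability mass assigned to the first block; tune as needed

# Probability mass function over the values 1..100: the first 50 values
# share mass p equally, the last 50 share mass (1 - p) equally.
pmf = np.concatenate([
    np.full(50, p / 50),        # values 1..50, each with probability p/50
    np.full(50, (1 - p) / 50),  # values 51..100, each with probability (1-p)/50
])

# Sanity check: a valid pmf must sum to 1.
assert np.isclose(pmf.sum(), 1.0)
```

Pushing $p$ toward 1 shrinks the per-element probability $(1-p)/50$ of the second block, making draws from $\{51,\cdots,100\}$ as rare as you like.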
To implement this in software, you'll need a pseudorandom number generator that produces draws from a standard uniform distribution on $(0,1)$. Compute the cumulative sum vector for the discrete distribution, $F(i) = \sum_{k\leq i}P(k)$. Define $F(0) = 0$, and then by construction you'll get $F(100) = 1$.
For a given uniform draw $u$, find the index $j$ such that $F(j) \leq u < F(j+1)$, and return $j+1$ as the drawn number. Languages such as Python (NumPy), MATLAB, and C++ (Boost) provide this kind of user-defined discrete distribution, with built-in sampling functions. But it's often a good exercise to write your own discrete simulator at least once.
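A minimal hand-rolled version of this inverse-CDF sampler might look like the following (again assuming the illustrative choice $p = 0.9$; the function name `draw` is mine):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def draw(pmf, rng):
    """Inverse-CDF sampling: return a value in 1..len(pmf)."""
    F = np.cumsum(pmf)           # F[j] = P(X <= j+1), with F[-1] == 1
    u = rng.uniform()            # standard uniform draw on (0, 1)
    j = np.searchsorted(F, u)    # smallest 0-based index with F[j] >= u
    return j + 1                 # convert index to the value 1..100

# Two-block distribution from above, with p = 0.9.
p = 0.9
pmf = np.concatenate([np.full(50, p / 50), np.full(50, (1 - p) / 50)])

samples = [draw(pmf, rng) for _ in range(10_000)]
# With p = 0.9, roughly 90% of samples should land in 1..50.
```

NumPy's built-in equivalent is `rng.choice(np.arange(1, 101), p=pmf)`, which does the same lookup internally.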