Base case
For $_{N}C_{M}$ where $M = 1$, T := RandInt(1, J)
will execute with $J = N-M+1 = N$, randomly selecting one item $I$ of the $N$ items, as is expected for $_{N}C_{1}$. In other words, the trivial base case is $P_{chosen}(I_{i\in1..N}|_NC_1) = {1 \over N}$, which I'll denote using
$P_{N,1}(I_i) = {1 \over N}$
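For reference, here is a minimal Python sketch of the selection loop as the proof describes it. The collision rule (keep $J$ itself when RandInt() lands on an already-chosen item) is inferred from the inductive step below, so treat it as an assumption rather than the definitive implementation; with $M = 1$ the loop body runs exactly once with $J = N$, so each item is selected with probability ${1 \over N}$.

```python
import random

def sample_combination(n, m):
    """Select M of the items 1..N (sketch, collision rule assumed)."""
    chosen = set()
    for j in range(n - m + 1, n + 1):        # J = N-M+1 .. N
        t = random.randint(1, j)             # T := RandInt(1, J)
        chosen.add(j if t in chosen else t)  # T already chosen: keep J instead
    return chosen
```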
Inductive hypothesis
$P_{N,M}(I_i) = {M \over N}$ for choosing $M$ items from $N$
Inductive step
The inductive step is to solve for
$P_{N',M'}(I_i)$ when $N'=N+1$ and $M'=M+1$
The inductive step is executed in the algorithm at every additional loop iteration.
For items $I_{i\in1..N}$
$P_{N',M'}(I_{i\in1..N}) = P_{N,M}(I_i) + \neg P_{N,M}(I_i) * {1 \over N'}$
$P_{N',M'}(I_{i\in1..N})$ = "The chance that item $I_i$ (any item except $I_{N'}$) has been chosen after the additional iteration"
$P_{N,M}(I_i)$ = "The chance that $I_i$ was already chosen in an earlier iteration"
$\neg P_{N,M}(I_i)$ = "The chance that $I_i$ was not yet chosen in an earlier iteration"
$1 \over N'$ = "The chance that $I_i$ is chosen in the additional iteration by RandInt()"
Substitute in the hypothesis and simplify:
$P_{N',M'}(I_{i\in1..N}) = {M \over N} + (1-{M \over N}) * {1 \over N+1} = {M(N+1) + (N - M) \over N(N+1)} = {N(M+1) \over N(N+1)}$
$P_{N',M'}(I_{i\in1..N}) = {M+1 \over N+1}$
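As a quick numeric spot-check, take $N = 4$ and $M = 2$: ${2 \over 4} + (1 - {2 \over 4}) * {1 \over 5} = {1 \over 2} + {1 \over 10} = {3 \over 5}$, which is indeed ${M+1 \over N+1}$.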
For the last item $I_{N'}$ added in the additional iteration
$P_{N',M'}(I_{i=N+1}) = {1 \over N'} + {M \over N'} = {M+1 \over N+1}$
${M \over N'}$ = "The chance that RandInt() lands on one of the $M$ items already chosen earlier, in which case the algorithm selects $I_{N'}$ instead"
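Here is a short Monte Carlo sketch of just this additional iteration. Only the fact that exactly $M$ items, none of them $I_{N'}$, were chosen earlier matters for this formula, so the earlier iterations are modeled by a fixed $M$-item set; the function name and trial count are arbitrary.

```python
import random

def p_new_item(n, m, trials=200_000):
    """Estimate the chance that I_{N'} (= n + 1) ends up selected."""
    chosen_before = set(range(1, m + 1))     # any M items chosen earlier
    hits = 0
    for _ in range(trials):
        t = random.randint(1, n + 1)         # T := RandInt(1, N')
        # I_{N'} is selected either directly (T = N') or via a collision
        if t == n + 1 or t in chosen_before:
            hits += 1
    return hits / trials

print(p_new_item(9, 3), 4 / 10)              # ~0.4, i.e. (M+1)/(N+1)
```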
Now we know that, after the additional iteration, for all items $I_i$
$P_{N',M'}(I_i) = {M+1 \over N+1} = {M' \over N'}$
which completes the inductive step.
This proof works because for any $N'$ and $M'$, the inductive step reduces the problem to the sub-problem with $N=N'-1$ and $M=M'-1$. Eventually, $M$ will be reduced to $1$, giving us the base case, $_NC_1$.
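That reduction maps directly onto a recursive evaluation of the recurrence. The sketch below (exact arithmetic via `fractions`, helper name mine) unwinds $P_{N,M}(I_i)$ for one of the original items down to the base case and confirms it equals ${M \over N}$:

```python
from fractions import Fraction

def p_chosen(n, m):
    """P_{N,M}(I_i) for one of the original items, via the recurrence."""
    if m == 1:
        return Fraction(1, n)               # base case: P_{N,1} = 1/N
    p = p_chosen(n - 1, m - 1)              # sub-problem N = N'-1, M = M'-1
    return p + (1 - p) * Fraction(1, n)     # already chosen, or chosen now

assert p_chosen(10, 4) == Fraction(4, 10)   # equals M/N
```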
Since the algorithm guarantees both that
$P(I)={M \over N}$ for all items $I$
and that exactly $M$ of the $N$ items will be selected,
we know the algorithm correctly selects a random combination of $M$ items from a set of $N$ items.
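Finally, a small end-to-end simulation (the same sketch of the loop as in the base case, with the assumed collision rule) that checks both guarantees empirically; the parameters are arbitrary:

```python
from collections import Counter
import random

def sample_combination(n, m):
    """Same sketch as above: T := RandInt(1, J), keep J on a collision."""
    chosen = set()
    for j in range(n - m + 1, n + 1):
        t = random.randint(1, j)
        chosen.add(j if t in chosen else t)
    return chosen

N, M, TRIALS = 10, 4, 100_000
counts = Counter()
for _ in range(TRIALS):
    picked = sample_combination(N, M)
    assert len(picked) == M                              # exactly M of the N items
    counts.update(picked)
for item in range(1, N + 1):
    print(item, round(counts[item] / TRIALS, 3), M / N)  # each frequency ≈ M/N
```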