I suspect you do not need a theory of random matrices to prove this.
In an arbitrary basis, each component of a random unit vector has a distribution depending on $N$: for example, with $N=3$ the components are uniformly distributed on $[-1,1]$. As $N$ increases, the probability that these values are close to $0$ slowly increases towards $1$; the density is $\dfrac{\Gamma\left(\frac{N}{2}\right)}{\Gamma\left(\frac{N}{2}-\frac{1}{2}\right)\Gamma\left(\frac{1}{2}\right)}(1-x^2)^{(N-3)/2}$ for $x \in [-1,1]$.
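As a quick sanity check of that density (a numpy sketch, not part of the proof): its mean is $0$ and its variance is $1/N$, since the squared components of a unit vector sum to $1$ and are exchangeable. Sampling unit vectors by normalising Gaussian vectors should reproduce that:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_vectors(num, N):
    """Draw `num` uniformly random unit vectors in R^N by normalising Gaussians."""
    v = rng.standard_normal((num, N))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# A single coordinate (in a fixed basis) has density proportional to
# (1 - x^2)^((N-3)/2) on [-1, 1]: uniform for N = 3, and concentrating
# near 0 as N grows, with variance exactly 1/N.
for N in (3, 10, 100):
    x = unit_vectors(100_000, N)[:, 0]
    print(N, x.var())  # should be close to 1/N in each case
```

For $N=3$ the empirical variance comes out near $1/3$, matching the uniform distribution on $[-1,1]$.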
But the distribution of these values for given $N$ is also the distribution of the cosine of the angle between any two random unit vectors in $V$ (imagine choosing the basis so that the first vector is $(0,0,1,0,0,\ldots,0)^T$ and then taking the dot products of the other vectors with it). So the distribution of the angles themselves is such that, as $N$ increases, the probability that they are close to $\frac{\pi}{2}$ or $90^\circ$ slowly increases towards $1$. In this sense $V$ approaches orthonormality as $N$ increases, provided $n$ remains constant ($n=N$ would be a different question).
Actually doing the simulation, the convergence is indeed fairly slow. It seems that to have a probability greater than $90\%$ that the angle between a pair of random unit vectors is between $85^\circ$ and $95^\circ$, i.e. their dot product is between $\cos\left(\frac{19}{36} \pi\right)$ and $\cos\left(\frac{17}{36} \pi\right)$, you need $N$ to be about $357$.
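That figure is easy to check by simulation (a sketch; the trial count and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pair_dots(num, N):
    """Dot products of `num` independent pairs of random unit vectors in R^N."""
    u = rng.standard_normal((num, N))
    v = rng.standard_normal((num, N))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return np.sum(u * v, axis=1)

N = 357
d = random_pair_dots(20_000, N)
angles = np.degrees(np.arccos(d))
p = np.mean((angles > 85) & (angles < 95))
print(p)  # close to 0.90
```

Heuristically this matches a normal approximation: the dot product has variance $1/N$, and requiring $|{\cos\theta}| < \sin 5^\circ \approx 0.0872$ with probability $0.9$ needs $0.0872\sqrt{N} \approx 1.645$, i.e. $N \approx 356$.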
It would need to be higher still if $n \gt 2$ and you wanted all the pairwise angles in this range: those angles will not be quite independent.
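A small extension of the same simulation illustrates the point for $n=3$ (again a sketch; the trial count is an arbitrary choice). With three vectors there are three pairwise angles, and the probability that all of them land in $[85^\circ, 95^\circ]$ drops well below $90\%$, roughly in line with the $0.9^3 \approx 0.73$ you would expect if the angles were independent:

```python
import numpy as np

rng = np.random.default_rng(2)

N, n, trials = 357, 3, 10_000
V = rng.standard_normal((trials, n, N))
V /= np.linalg.norm(V, axis=2, keepdims=True)

# Gram matrices: entry (i, j) of each is the dot product of vectors i and j.
G = V @ V.transpose(0, 2, 1)
iu = np.triu_indices(n, k=1)
dots = G[:, iu[0], iu[1]]  # the n(n-1)/2 pairwise dot products per trial

# |cos(angle)| < cos(85 deg)  <=>  angle lies strictly between 85 and 95 degrees
ok = np.abs(dots) < np.cos(np.radians(85))
p_all = np.mean(ok.all(axis=1))
print(p_all)  # noticeably below 0.90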
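(For completeness, the same check can be run for larger $n$ by changing one constant; the all-pairs probability keeps falling as $n$ grows at fixed $N$, since the number of pairwise constraints grows like $n^2$. This is a sketch under the same arbitrary-seed assumptions as above.)

```python
import numpy as np

rng = np.random.default_rng(3)

def prob_all_angles_near_90(N, n, trials=10_000, lo_deg=85.0):
    """Estimate P(all pairwise angles of n random unit vectors in R^N
    lie strictly between lo_deg and 180 - lo_deg degrees)."""
    V = rng.standard_normal((trials, n, N))
    V /= np.linalg.norm(V, axis=2, keepdims=True)
    G = V @ V.transpose(0, 2, 1)
    iu = np.triu_indices(n, k=1)
    dots = G[:, iu[0], iu[1]]
    return np.mean((np.abs(dots) < np.cos(np.radians(lo_deg))).all(axis=1))

p2 = prob_all_angles_near_90(357, 2)
p4 = prob_all_angles_near_90(357, 4)
print(p2, p4)  # p4 is markedly smaller than p2
```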