I am working on a signal processing problem where I want to model the measurement sample covariance matrix (SCM) as a random matrix and hence use results from Random Matrix Theory (RMT). Let $\mathbf{X}$ be a $p\times n$ matrix with i.i.d. complex Gaussian entries with zero mean and unit variance. The sample covariance matrix is
$\mathbf{S_n} = \frac{1}{n}\mathbf{X}\mathbf{X^H}$
which is a Wishart matrix.
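For concreteness, here is a minimal sketch in NumPy of how such an SCM is formed (the dimensions $p = 50$, $n = 200$ are just illustrative values I picked, not from any particular application):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 200  # illustrative dimensions only

# Circularly-symmetric complex Gaussian entries with zero mean and
# unit variance: real and imaginary parts each have variance 1/2.
X = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)

# Sample covariance matrix S_n = (1/n) X X^H, a p x p Wishart matrix.
S = (X @ X.conj().T) / n

# S is Hermitian and positive semi-definite by construction,
# so its eigenvalues are real and nonnegative.
eigvals = np.linalg.eigvalsh(S)
```

Since the true covariance here is the identity, the eigenvalues of $\mathbf{S_n}$ would all equal $1$ if $n \to \infty$ with $p$ fixed; the spread of `eigvals` away from $1$ is exactly the finite-sample effect that RMT describes.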
Then Random Matrix Theory tells us the limiting eigenvalue distribution of $\mathbf{S_n}$ as the matrix dimensions grow infinitely large, i.e. as $n,p \to \infty$ with $\frac{n}{p} \to c$, the empirical eigenvalue distribution converges to a deterministic distribution function $F^S(x)$.
I am unable to understand how the results of RMT apply to the SCM, which is essentially limited to finite dimensions in any real-world problem. Is it sufficient that the matrix dimensions $n$ and $p$ are large? (How large?)
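To give a rough feel for "how large", here is a small simulation sketch (NumPy, with illustrative values $p = 200$, $n = 1000$; note I use the aspect ratio $c = p/n$, which is one common convention). For this zero-mean, unit-variance model the limiting law $F^S$ is the Marchenko–Pastur distribution, whose support is $[(1-\sqrt{c})^2,\,(1+\sqrt{c})^2]$, and already at these moderate dimensions essentially all empirical eigenvalues fall inside that support:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 200, 1000          # finite but moderate dimensions
c = p / n                 # aspect ratio, here 0.2

# One realization of the SCM for i.i.d. complex Gaussian data.
X = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
S = (X @ X.conj().T) / n
ev = np.linalg.eigvalsh(S)

# Marchenko-Pastur support edges for unit variance and c < 1.
a, b = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2

# Fraction of eigenvalues inside the limiting support (with a small
# tolerance for edge fluctuations, which shrink as O(n^{-2/3})).
frac_in = np.mean((ev > a - 0.1) & (ev < b + 0.1))
```

In my runs `frac_in` is essentially $1$ for dimensions of this order, which is one empirical way to see that the asymptotic predictions kick in well before $n$ and $p$ are astronomically large; of course this is just a sketch for one model, not a general answer to the question.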
I have referred to papers where people apply RMT to array processing problems, but it is not clear how the infinite-dimensional limiting results of RMT are applicable in finite cases.
Here is a paper I read; in the last sentence on page 8, the authors suggest that "when the number of explanatory variables is very large compared to $n$, it is natural to formulate it as a case where both $n$ and $p$ are tending to infinity."
This statement seems to contrast with what I understand of RMT, where both dimensions of the matrix grow infinitely large.
I would like to understand this issue from those who are working with RMT. Please help me out.