Because $M$ is a correlation matrix, we know the diagonal elements $m_{ii} = 1 \ \forall i$. Computing the eigenvalues and eigenvectors of $M$ is equivalent to performing principal component analysis on the rescaled data (each column variable standardized to unit variance).
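If it helps to see this concretely, here is a minimal NumPy sketch (the data `X`, seed, and dimensions are made up for illustration). It checks that the correlation matrix of the raw data equals the covariance matrix of the z-scored data, so the two eigendecompositions coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # toy data with correlated columns

# Eigendecomposition of the correlation matrix M
M = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(M)  # eigh since M is symmetric

# PCA on rescaled data: the covariance matrix of the z-scored columns *is* M,
# so its eigenpairs are exactly the eigenpairs of M
Z = (X - X.mean(axis=0)) / X.std(axis=0)
print(np.allclose(M, np.cov(Z, rowvar=False, ddof=0)))  # True
```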
The quantity $\delta_j=\frac{|\lambda_j|}{\sum_{i=1}^{N} |\lambda_i|}$ represents the proportion of variation in your data set explained by the $j$th eigenvector. (Since $M$ is positive semidefinite, its eigenvalues are nonnegative, so the absolute values are redundant but harmless.) Statisticians often order the eigenvalues of the correlation (or covariance) matrix by decreasing magnitude and plot them against the component index; this is called a scree plot, and a quick Google query will provide many examples. A closely related plot shows the cumulative variation explained, starting with the largest eigenvalue and adding the next largest until all are exhausted.
The utility of plotting cumulative sums of $\delta_j$ is that one can visualize the marginal explanatory power gained from including an additional principal component in a set of linear factors modeling the data.
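A quick sketch of both plots, assuming the `M` computed in the snippet above (matplotlib is used only for the plotting):

```python
import numpy as np
import matplotlib.pyplot as plt

# Reuses M (a correlation matrix) from the sketch above
eigvals = np.linalg.eigvalsh(M)[::-1]            # sorted by decreasing magnitude
delta = np.abs(eigvals) / np.abs(eigvals).sum()  # the delta_j defined above
cumulative = np.cumsum(delta)
ks = np.arange(1, len(delta) + 1)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(ks, delta, "o-")       # scree plot: delta_j vs. component index
ax1.set(xlabel="component $j$", ylabel=r"$\delta_j$")
ax2.plot(ks, cumulative, "o-")  # cumulative variation explained
ax2.set(xlabel="components included", ylabel="cumulative proportion")
plt.tight_layout()
plt.show()
```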
We know that if $\delta_1 \approx 1$, then all columns of the data are approximately the same up to sign and scale: $M$ is nearly rank one. Conversely, $\delta_j \approx 0$ means $\lambda_j \approx 0$, so the $j$th component captures a near-exact linear dependence among the columns and contributes essentially no new variation (in a linear sense).
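To see the rank-one extreme numerically, here is a hypothetical example with near-duplicate columns (note that $\sum_i \lambda_i = \operatorname{tr}(M) = N$ because the diagonal entries are all 1):

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(size=500)
# Four columns that are near-copies of one another plus small noise
X = np.column_stack([base + 0.01 * rng.normal(size=500) for _ in range(4)])

M = np.corrcoef(X, rowvar=False)
delta = np.linalg.eigvalsh(M)[::-1] / M.trace()  # trace(M) = N = sum of eigenvalues
print(delta)  # delta_1 is close to 1, the rest are close to 0
```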
Researching how statisticians choose the number of principal components to retain may prove useful for your purposes, as a great deal has been written on this topic.
Similarly, operations researchers and applied mathematicians often study the column subset selection problem, which may also bear relevant fruit.