First some preamble:
Let A be a finite set, and let A' and A'' be k-element subsets of A. As k approaches |A|, each x in A' becomes increasingly likely to also appear in A''; in the limit k = |A|, both subsets equal A, so |A' ∩ A''| approaches |A|. (Indeed |A' ∩ A''| ≥ 2k − |A|, so some overlap is forced as soon as k > |A|/2.)
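To put a number on that chance-level overlap: if A' and A'' are drawn independently and uniformly at random, the expected intersection size is k²/|A| (the mean of a hypergeometric distribution). A quick simulation sketch (my own illustration, not part of the original setup) confirms it:

```python
import random

def mean_overlap(n, k, trials=20000):
    """Empirical mean of |A' ∩ A''| for two independent uniform
    random k-subsets of {0, ..., n-1}."""
    pool = list(range(n))
    total = 0
    for _ in range(trials):
        a1 = set(random.sample(pool, k))
        a2 = set(random.sample(pool, k))
        total += len(a1 & a2)
    return total / trials

# Under chance, the expectation is k*k/n; e.g. n=20, k=10 gives 5.0.
```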
On to the issue at hand:
Let’s say we have two different graphs G1 and G2, each with n vertices labeled 1 through n, and each with a set of non-random edges.* Let B be the vertex set of G1 and C the vertex set of G2, each written as a sequence ordered by decreasing vertex degree. Let B' and C' be subsets of B and C respectively, with |B'| = |C'| = k, where each subset takes the first k elements of its ordered superset, left to right (see example below). As k approaches n, B' and C' therefore take in vertices of lesser and lesser degree.
Here’s a general example of the subset selection process: B = {1,4,2,5,3} C = {4,2,3,1,5}
k = 1 -> B' = {1}, C' = {4}
k = 2 -> B' = {1,4}, C' = {4,2}
k = 3 -> B' = {1,4,2}, C' = {4,2,3}
k = 4 -> B' = {1,4,2,5}, C' = {4,2,3,1}
k = n -> B' = {1,4,2,5,3}, C' = {4,2,3,1,5}
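The selection process above amounts to intersecting prefixes of the two orderings. A small sketch of the k versus |B' ∩ C'| curve, using the example orderings:

```python
def overlap_curve(B, C):
    """Given two orderings B and C of the same vertex set,
    return [|B' ∩ C'| for k = 1..n], where B' and C' are the
    first k elements of B and C respectively."""
    seen_b, seen_c = set(), set()
    curve = []
    for b, c in zip(B, C):
        seen_b.add(b)
        seen_c.add(c)
        curve.append(len(seen_b & seen_c))
    return curve

B = [1, 4, 2, 5, 3]
C = [4, 2, 3, 1, 5]
print(overlap_curve(B, C))  # -> [0, 1, 2, 3, 5]
```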
If I were to plot k vs |B' ∩ C'|, the shape of the curve would be attributable both to the effect of increasing k, as described in my preamble, but ALSO, and especially, to the effect of the ordering (in this case, the ordering by degree). How do I determine the significance of the ordering's contribution?
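One concrete point of comparison (a possible baseline, not necessarily the definitive test) is the curve obtained when the degree ordering is destroyed: shuffle both orderings many times, recompute the curve each time, and see where the observed curve sits in that null distribution. A sketch, assuming the prefix-intersection definition above:

```python
import random

def overlap_curve(B, C):
    # |first k of B ∩ first k of C| for k = 1..n
    sb, sc = set(), set()
    out = []
    for b, c in zip(B, C):
        sb.add(b)
        sc.add(c)
        out.append(len(sb & sc))
    return out

def null_curves(B, C, trials=1000):
    """Chance-level overlap curves: same vertex sets, but the
    orderings are replaced by random permutations."""
    curves = []
    for _ in range(trials):
        Bs, Cs = B[:], C[:]
        random.shuffle(Bs)
        random.shuffle(Cs)
        curves.append(overlap_curve(Bs, Cs))
    return curves

# The observed curve can then be compared pointwise to the null
# distribution, e.g. via an empirical p-value at each k.
```

Under this null, the expected overlap at each k is k²/n, matching the chance-level expectation from the preamble.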
*By non-random edges I mean that the connectivity of each graph is determined by some underlying factor. In my real-world case, these graphs represent brain connectivity at different intervals of time: connected nodes represent neurons with highly correlated activity.