Markov matrices $A$ are matrices with nonnegative entries whose rows sum to $1$*. A probability vector $p$ is a row vector with nonnegative entries that sum to $1$. Multiplying a probability vector on the left by a Markov matrix (as $pA$) yields another probability vector $q$. Here $p$ describes the initial distribution on the state space and $q$ describes the distribution after one step.
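As a minimal sketch of one step, here is the product $q = pA$ computed in plain Python; the $2 \times 2$ matrix and the initial vector are invented purely for illustration:

```python
def step(p, A):
    """One Markov step: q_j = sum_i p_i * a_ij (row vector times matrix)."""
    n = len(A)
    return [sum(p[i] * A[i][j] for i in range(n)) for j in range(n)]

# Hypothetical two-state Markov matrix: each row is nonnegative and sums to 1.
A = [[0.9, 0.1],
     [0.2, 0.8]]
p = [0.5, 0.5]  # initial probability vector

q = step(p, A)  # q = [0.55, 0.45] up to rounding; entries still sum to 1
```

Note that each entry of $q$ is a convex combination of the rows' entries, which is why nonnegativity and the sum-to-$1$ property are preserved.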
In a migration model, the state space is some collection of locations, and the entry $a_{ij}$, for two locations $i$ and $j$, is the probability that a migrant goes from location $i$ to location $j$ (possibly $i = j$, in which case the "migrant" does not migrate). Implicit in this description is that the probability that the migrant goes from $i$ to $j$ does not depend on their previous history. This is called the Markov property, and it is not particularly realistic for migration, especially of humans.
From here one could use the machinery of Markov chains to determine, among other things, how people will be distributed around the region of interest in the long run.
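One way to approximate that long-run distribution is simply to apply the matrix repeatedly until the distribution stops changing (for nice chains this converges to a stationary distribution $\pi$ satisfying $\pi = \pi A$). The two-location migration matrix below is invented for illustration:

```python
def step(p, A):
    """One Markov step: q_j = sum_i p_i * a_ij."""
    n = len(A)
    return [sum(p[i] * A[i][j] for i in range(n)) for j in range(n)]

# Hypothetical migration matrix: row i gives the probabilities of a
# migrant in location i ending up in each location after one period.
A = [[0.95, 0.05],
     [0.10, 0.90]]
p = [1.0, 0.0]  # everyone starts in location 0

# Iterate until the distribution is (numerically) unchanged by a step.
for _ in range(10000):
    q = step(p, A)
    if max(abs(q[j] - p[j]) for j in range(len(p))) < 1e-12:
        break
    p = q

# p now approximates the stationary distribution, here [2/3, 1/3]:
# solving pi = pi * A gives 0.05 * pi_0 = 0.10 * pi_1.
```

For this matrix the limit is the same regardless of the starting vector, which is the typical behavior for chains whose states all communicate.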
*Some authors instead require that the columns sum to $1$ and that probability vectors are column vectors, but the row convention is more common.