Here's an intuitive overview:
What is a matrix? A matrix is a representation of a linear transformation between two vector spaces. We get this representation by recording how the transformation acts on basis vectors. If we know where the transformation sends every basis vector, we know where it sends any vector, by expressing that vector as a linear combination of basis vectors.
So we define a matrix as a list of the transformed basis vectors (the columns of the matrix), and we define matrix multiplication as taking the appropriate combination of those transformed basis vectors (tell me if this needs clarification).
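To make "columns are the transformed basis vectors" concrete, here is a small sketch with a made-up $2\times 2$ matrix (the specific entries are just an illustration, not from the discussion above):

```python
import numpy as np

# Hypothetical example: the columns of A are the images of the
# standard basis vectors e1, e2 under some linear transformation T.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# T(e1) is the first column of A, T(e2) is the second column.
assert np.allclose(A @ e1, A[:, 0])
assert np.allclose(A @ e2, A[:, 1])

# A vector v = 4*e1 + 5*e2 is sent to the same combination of the
# transformed basis vectors: T(v) = 4*T(e1) + 5*T(e2).
v = 4 * e1 + 5 * e2
assert np.allclose(A @ v, 4 * A[:, 0] + 5 * A[:, 1])
```

So matrix-vector multiplication is exactly "combine the columns using the coordinates of the input vector."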
Consider an eigenspace $E_\lambda$ of a linear transformation $T$. We know there is a basis for $E_\lambda$ with $\dim E_\lambda$ vectors. A basis is linearly independent, and a standard result says that any linearly independent set can be extended, by adding more vectors, to a basis for the entire vector space.
Now, we use our basis for $E_\lambda$, together with the vectors we added, to create a matrix representation of $T$. What we get this time is probably a different, but equally valid, representation of $T$ than the one we originally had. When we apply $T$ to a basis vector of $E_\lambda$, we get a scalar multiple ($\lambda$ times) of that vector. This is why the first $\dim E_\lambda$ columns of the matrix (each column being a transformed basis vector) are all 0s except for the eigenvalue $\lambda$ on the diagonal. When we compute the characteristic polynomial $\det(xI - T)$ from this representation, the factor $(x-\lambda)$ shows up at least $\dim E_\lambda$ times.
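To spell out that last step: writing $k = \dim E_\lambda$ and $n$ for the dimension of the whole space, the matrix of $T$ in this basis has the block form

$$[T] = \begin{pmatrix} \lambda I_k & B \\ 0 & C \end{pmatrix},$$

where $B$ and $C$ are whatever the added basis vectors happen to produce. The characteristic polynomial then factors along the blocks:

$$\det(xI_n - [T]) = \det\begin{pmatrix} (x-\lambda) I_k & -B \\ 0 & xI_{n-k} - C \end{pmatrix} = (x-\lambda)^k \det(xI_{n-k} - C),$$

so $(x-\lambda)^k$ divides the characteristic polynomial, no matter what $C$ is.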
But it might show up more times, since $\det(xI - T)$ can contribute extra factors of $(x-\lambda)$. Therefore the algebraic multiplicity (the number of times $(x-\lambda)$ divides the characteristic polynomial) is greater than or equal to the dimension of the eigenspace ($\dim E_\lambda$).
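A quick sanity check that the inequality can be strict, using a made-up $2\times 2$ example (a Jordan block, not a matrix from the discussion above):

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical example: a 2x2 Jordan block with eigenvalue 2.
A = sp.Matrix([[2, 1],
               [0, 2]])

# Characteristic polynomial det(xI - A) = (x - 2)^2,
# so the algebraic multiplicity of lambda = 2 is 2.
p = (x * sp.eye(2) - A).det()
assert sp.expand(p) == sp.expand((x - 2)**2)

# The eigenspace E_2 is the null space of (A - 2I), which is
# spanned by a single vector, so dim E_2 = 1.
E2 = (A - 2 * sp.eye(2)).nullspace()
assert len(E2) == 1

# Algebraic multiplicity (2) > dim E_2 (1): the inequality is strict here.
```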
Did that help?