I'm going to use the notation used in the paper.
I'll assume that, in the paper's notation, $B = [\mathbf{u}_1\;\mathbf{u}_2\;\ldots\;\mathbf{u}_{M'}]$ holds the eigenfaces and $A = [\mathbf{\Phi}_1\;\mathbf{\Phi}_2\;\ldots\;\mathbf{\Phi}_M]$ holds the mean-subtracted training images. You want to implement a k-NN classifier that operates in the "face space", i.e. the $M'$-dimensional subspace of the space of all possible images.
Since images are represented by $N^2$-dimensional vectors, "projecting" an image $\mathbf{\Phi}_A$ onto another image $\mathbf{\Phi}_B$ in the image space works just as it does with 2- or 3-dimensional vectors: you use the inner (dot) product. Mathematically, the scalar projection is $\frac1{\lVert \mathbf{\Phi}_B \rVert}\mathbf{\Phi}_A^T\mathbf{\Phi}_B$. Or, if you want a vector, multiply that scalar by the unit vector along $\mathbf{\Phi}_B$: $\left(\frac{\mathbf{\Phi}_A^T\mathbf{\Phi}_B}{\lVert \mathbf{\Phi}_B \rVert}\right)\frac{\mathbf{\Phi}_B}{\lVert \mathbf{\Phi}_B \rVert}$. The $\lVert \mathbf{\Phi}_B \rVert$ factors disappear when $\mathbf{\Phi}_B$ is normalized (e.g. when dealing with an orthonormal basis, as in our construction of face space).
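As a quick sanity check, here is a minimal NumPy sketch of both projections; the two vectors are made-up stand-ins for flattened images, not data from the paper:

```python
import numpy as np

# Hypothetical stand-ins for flattened images Phi_A and Phi_B.
phi_a = np.array([3.0, 1.0, 2.0])
phi_b = np.array([0.0, 4.0, 3.0])

# Scalar projection of phi_a onto phi_b: (phi_a . phi_b) / ||phi_b||
scalar_proj = (phi_a @ phi_b) / np.linalg.norm(phi_b)

# Vector projection: the scalar projection times the unit vector along phi_b.
vector_proj = scalar_proj * phi_b / np.linalg.norm(phi_b)

print(scalar_proj)   # → 2.0
print(vector_proj)   # → [0.  1.6 1.2]
```

If `phi_b` were already a unit vector, both norm divisions would drop out, which is exactly the simplification we get with an orthonormal eigenface basis.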
For our case, we want to describe a face by its "coordinates" in the eigenface space. To find them, we project the adjusted input face image $\mathbf{\Phi} = \mathbf{\Gamma} - \mathbf{\Psi}$ onto each "axis" (hence "projecting the image onto the eigenface space"), i.e. onto each eigenface vector, exactly as you would in 3-D space. These "coordinates" are
$$\mathbf{\Omega}^T=[\omega_1\;\omega_2\;\ldots\;\omega_{M'}]$$ where
$$\omega_k = \mathbf{u}_k^T\mathbf{\Phi}$$
More compactly, in your notation, $B^T A$ is a matrix whose columns are the $\mathbf\Omega$ vectors, one column per image.
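In code, assuming `B` and `A` are NumPy arrays with eigenfaces and mean-subtracted images as columns (random placeholders below, with made-up sizes), the whole coordinate computation is a single matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, m_train, m_prime = 100, 6, 4  # hypothetical N^2, M, M'

# B: columns are orthonormal eigenfaces u_k (QR gives us an orthonormal basis
# for illustration); A: columns are mean-subtracted images Phi_i.
B = np.linalg.qr(rng.standard_normal((n_pixels, m_prime)))[0]
A = rng.standard_normal((n_pixels, m_train))

# Omega[k, i] = u_k^T Phi_i — column i is the face-space coordinate
# vector of image i.
Omega = B.T @ A  # shape (M', M)
```

Note that no explicit loop over $k$ is needed: the matrix product computes all $M' \times M$ inner products at once.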
After doing this for all our images, each image has "coordinates" $\mathbf\Omega$ in our $M'$-dimensional face space. The distance between two "face coordinates" $\mathbf\Omega_A$ and $\mathbf\Omega_B$ is simply $\lVert\mathbf\Omega_A-\mathbf\Omega_B\rVert$, i.e.
$$\text{distance} = \sqrt{(\omega_{A,1}-\omega_{B,1})^2+(\omega_{A,2}-\omega_{B,2})^2+\cdots+(\omega_{A,M'}-\omega_{B,M'})^2}$$
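Putting it all together, a minimal k-NN classifier over face-space coordinates might look like the sketch below; `knn_classify`, the toy coordinates, and the labels are all made-up for illustration:

```python
import numpy as np

def knn_classify(omega_query, Omega_train, labels, k=3):
    """Classify a face by majority vote among the k nearest training
    faces in face space, using Euclidean distance between Omega vectors."""
    # Omega_train has one coordinate column per training image.
    dists = np.linalg.norm(Omega_train - omega_query[:, None], axis=0)
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy face-space coordinates (M' = 2), two images per person.
Omega_train = np.array([[0.0, 0.0, 10.0, 10.0],
                        [0.0, 1.0, 10.0, 11.0]])
labels = ["alice", "alice", "bob", "bob"]

print(knn_classify(np.array([0.2, 0.5]), Omega_train, labels, k=3))
# → alice
```

With `k=1` this reduces to nearest-neighbor matching, which is essentially what the paper's recognition step does with its distance threshold.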