Think of $A$ as a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^k$, and $B$ as a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$. Write $L_M$ for the linear transformation defined by multiplication by the matrix $M$.
For the matrix $C$ to exist, you must have a linear transformation $L$ from $\mathbb{R}^k$ to $\mathbb{R}^m$ such that $L\circ L_A = L_B$. Then you can define $C$ to be the standard matrix representation of $L$.
A necessary condition for this to be possible is that $\mathbf{N}(L_A) \subseteq \mathbf{N}(L_B)$ (if there is $\mathbf{v}\in\mathbf{N}(L_A)$ that is not in $\mathbf{N}(L_B)$, then $L\circ L_A(\mathbf{v})=\mathbf{0}\neq L_B(\mathbf{v})$ and you're sunk).
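To see the necessity argument concretely, here is a tiny numerical illustration (the matrices are a toy example of mine, not from the problem): a vector lies in $\mathbf{N}(L_A)$ but not in $\mathbf{N}(L_B)$, so no $C$ with $CA=B$ can exist.

```python
import numpy as np

A = np.array([[1.0, 0.0]])   # L_A : R^2 -> R^1, nullspace spanned by (0, 1)
B = np.array([[0.0, 1.0]])   # L_B : R^2 -> R^1
v = np.array([0.0, 1.0])     # v is in N(L_A) but not in N(L_B)

print(A @ v)   # [0.] : so (C @ A) @ v = 0 for every choice of C
print(B @ v)   # [1.] : but B v is nonzero, so C @ A = B is impossible
```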
In fact, this condition is sufficient: find a basis $\mathbf{v}_1,\ldots,\mathbf{v}_s$ for the nullspace of $A$, and then extend it to a basis $\mathbf{v}_1,\ldots,\mathbf{v}_n$ of $\mathbb{R}^n$. The images $A\mathbf{v}_{s+1},\ldots,A\mathbf{v}_n$ are linearly independent in $\mathbb{R}^k$ (this is essentially the Rank-Nullity Theorem), so you can complete $A\mathbf{v}_{s+1},\ldots,A\mathbf{v}_n$ with vectors $\mathbf{w}_1,\ldots,\mathbf{w}_{k-n+s}$ to a basis of $\mathbb{R}^k$. Define $L\colon\mathbb{R}^k\to\mathbb{R}^m$ by letting $L(A\mathbf{v}_j) = B\mathbf{v}_j$ for $j=s+1,\ldots,n$, and arbitrarily on the $\mathbf{w}_i$ (e.g., $L(\mathbf{w}_i)=\mathbf{0}$). Then let $C$ be the standard matrix for this linear transformation. Indeed, $CA\mathbf{v}_j = B\mathbf{v}_j$ for $j=s+1,\ldots,n$ by construction, and $CA\mathbf{v}_j = \mathbf{0} = B\mathbf{v}_j$ for $j=1,\ldots,s$ since $\mathbf{N}(L_A)\subseteq\mathbf{N}(L_B)$; the two maps agree on a basis of $\mathbb{R}^n$, so $CA=B$.
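If you want to carry the construction out numerically, here is one way to sketch it with NumPy (this is my own sketch, not part of the argument above: the name `factor_left`, the tolerance `tol`, and the use of the SVD to produce the bases are all my choices). The right singular vectors of $A$ with nonzero singular value play the role of $\mathbf{v}_{s+1},\ldots,\mathbf{v}_n$, the remaining ones give a nullspace basis, and taking the minimum-norm least-squares solution amounts to defining $L=\mathbf{0}$ on the orthogonal complement of the column space of $A$:

```python
import numpy as np

def factor_left(A, B, tol=1e-10):
    """Return C with C @ A = B, assuming null(A) is contained in null(B).

    A is k x n, B is m x n, and the returned C is m x k.
    """
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol))   # rank of A; the nullity is n - r
    V1 = Vt[:r].T              # columns play the role of v_{s+1}, ..., v_n
    V0 = Vt[r:].T              # columns are a nullspace basis v_1, ..., v_s
    # Necessary condition: B must kill every nullspace vector of A.
    if V0.size and np.linalg.norm(B @ V0) > tol:
        raise ValueError("null(A) is not contained in null(B); no such C exists")
    # A @ V1 has linearly independent columns, and we need C @ (A @ V1) = B @ V1.
    # The system is consistent, so the minimum-norm least-squares solution is
    # exact; it corresponds to choosing L = 0 off the column space of A.
    Ct, *_ = np.linalg.lstsq((A @ V1).T, (B @ V1).T, rcond=None)
    return Ct.T
```

For instance, if the nullspace of `A` is spanned by the third standard basis vector and `B` also kills it, `factor_left(A, B) @ A` reproduces `B` up to rounding; if `B` fails to kill the nullspace of `A`, the function raises.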