The formula for the determinant of an $n$ by $n$ matrix given by expansion by minors involves $n!$ terms. As such, computing the determinant of a given matrix with integer entries via expansion by minors takes a number of steps bounded below by $n!$. (In practice the number of steps required also depends on the size of the matrix entries.)
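To make the count concrete, here is a naive sketch of what I mean by expansion by minors (my own Python, nothing clever); the recursion spawns $n$ subproblems of size $n-1$, so it performs on the order of $n!$ multiplications:

```python
def det_by_minors(A):
    """Determinant by cofactor expansion along the first row (exponential time)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Delete row 0 and column j to form the minor.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # Each term requires a determinant of size n-1, giving ~n! work overall.
        total += (-1) ** j * A[0][j] * det_by_minors(minor)
    return total
```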
However, the determinant of such a matrix can also be computed by Gaussian elimination: we know how each elementary row operation affects the determinant, so if we keep track of the row operations performed in the course of the elimination, we can almost immediately reconstruct the determinant of the original matrix from this data. The number of steps needed to row reduce a matrix with integer entries is $O(n^3)$, and so this method of computing the determinant takes $O(n^3)$ steps.
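Here is a sketch of what I have in mind (again my own Python, using exact rational arithmetic so that "keeping track of the row operations" is visible): a row swap flips the sign, adding a multiple of one row to another changes nothing, and the determinant comes out as the signed product of the pivots.

```python
from fractions import Fraction

def det_by_elimination(A):
    """Determinant via Gaussian elimination: O(n^3) arithmetic operations."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    det = Fraction(1)
    for k in range(n):
        # Find a nonzero pivot; a row swap flips the sign of the determinant.
        pivot = next((i for i in range(k, n) if M[i][k] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]
            det = -det
        det *= M[k][k]
        # Adding a multiple of one row to another leaves the determinant unchanged.
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= factor * M[k][j]
    return det

# det_by_elimination([[2, 1], [5, 3]]) == Fraction(1)
```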
Juxtaposing the two methods above, one sees that the computational problem under discussion looks very time consuming from one perspective but much faster from another. In this respect the problem is analogous to computing a partial sum of a telescoping series. And yet it's not at all clear to me how to see the "implicit cancellation" in the formula that comes from expansion by minors. Indeed, the formula bears a strong superficial similarity to the formula for the permanent of a matrix, the computation of which is #P-complete. The fact that half of the terms of the determinant carry negative signs makes the formula for the determinant look somewhat more like a telescoping sum than the formula for the permanent does, but not by much.
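Spelling the comparison out: fully expanding both formulas over permutations shows that they differ only in the signs,
$$\det(A)=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^{n} a_{i,\sigma(i)},\qquad \operatorname{perm}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^{n} a_{i,\sigma(i)}.$$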
Is there a way to see, directly from the formula given by expansion by minors, that the determinant can be computed in a number of steps bounded by a polynomial in the dimension of the matrix and the size of its entries?