Probably not what you meant, but also an answer to your question. Fast integer multiplication uses algorithms altogether different than the "high school" algorithm. You can find a description of some of these algorithms on Wikipedia or in the documentation of GMP, which is a library for fast exact arithmetic.
The simplest of these, Karatsuba multiplication, is similar to Strassen's well-known matrix multiplication algorithm. To multiply two numbers $x$ and $y$, first divide both numbers into two roughly equal halves $x = x_H x_L$, $y = y_H y_L$ (concatenation of digits). There's a trick that allows computation of $xy$ given only three multiplications of half-sized numbers, at the cost of a few more additions (the trivial algorithm requires four: $x_Hy_H, x_Hy_L, x_Ly_H, x_Ly_L$). When applied recursively, the running time is $O(n^{\log_2 3}) \approx O(n^{1.585})$ rather than $O(n^2)$. The idea generalizes to the Toom-Cook algorithm, which splits the numbers into more than two parts.
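The trick is that $x_Hy_L + x_Ly_H = (x_H+x_L)(y_H+y_L) - x_Hy_H - x_Ly_L$, so the middle term comes for the price of one extra multiplication. A minimal Python sketch of the recursion (splitting at half the decimal length; the function name `karatsuba` is just for illustration):

```python
def karatsuba(x, y):
    # Base case: single-digit factors use built-in multiplication.
    if x < 10 or y < 10:
        return x * y
    # Split both numbers at m decimal digits: x = xH * B + xL, etc.
    m = max(len(str(x)), len(str(y))) // 2
    B = 10 ** m
    xH, xL = divmod(x, B)
    yH, yL = divmod(y, B)
    # Three recursive multiplications instead of four.
    a = karatsuba(xH, yH)                          # x_H * y_H
    c = karatsuba(xL, yL)                          # x_L * y_L
    b = karatsuba(xH + xL, yH + yL) - a - c        # x_H*y_L + x_L*y_H
    return a * B * B + b * B + c
```

A production implementation would split on machine words rather than decimal digits, but the structure of the recursion is the same.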
The Karatsuba algorithm is only worthwhile for sufficiently large numbers (high school multiplication outperforms it for small ones). For even larger numbers, FFT methods are used. The idea is to think of integer multiplication as polynomial multiplication (after all, an integer in base $B$ notation is a polynomial in $B$!), and then use the FFT for multiplying the polynomials (apply the FFT, multiply the results pointwise, apply the inverse FFT). This improves the asymptotics to $\tilde{O}(n\log n)$; the classical Schönhage-Strassen algorithm achieves $O(n\log n\log\log n)$.
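The pipeline (FFT, pointwise product, inverse FFT, then evaluate the product polynomial at the base) can be sketched in Python using a textbook recursive Cooley-Tukey FFT over the complex numbers. Real implementations such as Schönhage-Strassen instead use a number-theoretic transform to avoid floating-point error; this is only a toy illustration, and the function names are mine:

```python
import cmath

def fft(a, invert=False):
    # Recursive Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_multiply(x, y):
    # Base-10 digits, least significant first, are the coefficients.
    a = [int(d) for d in str(x)[::-1]]
    b = [int(d) for d in str(y)[::-1]]
    # Pad to a power of two large enough to hold the product.
    n = 1
    while n < len(a) + len(b):
        n *= 2
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    # Transform, multiply pointwise, transform back (normalize by n).
    fc = [u * v for u, v in zip(fft(a), fft(b))]
    c = [round(v.real / n) for v in fft(fc, invert=True)]
    # The product polynomial evaluated at B = 10 is the product.
    return sum(d * 10 ** i for i, d in enumerate(c))
```

The pointwise multiplication step is where the savings come from: convolving two length-$n$ coefficient lists directly costs $O(n^2)$, while the two transforms cost only $O(n\log n)$ each.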