Since $B$ is symmetric, $ABA^T$ is also symmetric (indeed, $(ABA^T)^T = AB^TA^T = ABA^T$), so you only need to determine the entries on and above the diagonal. Thus, instead of doing two full matrix multiplications, you can make do with about one and a half.
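To make the "one and a half multiplications" concrete, here is a minimal pure-Python sketch (the function names `matmul` and `aba_t` are just illustrative): compute $M = AB$ in full, then compute only the upper triangle of $MA^T$ and mirror it.

```python
def matmul(X, Y):
    """Plain O(n^3) product of two matrices stored as lists of lists."""
    m, p = len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(len(X))]

def aba_t(A, B):
    """Compute A B A^T, assuming B is symmetric (so the result is too)."""
    n = len(A)
    M = matmul(A, B)                 # the one full multiplication
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):        # only entries on/above the diagonal
            # row i of M times row j of A, i.e. (M A^T)[i][j]
            C[i][j] = sum(M[i][k] * A[j][k] for k in range(len(M[0])))
            C[j][i] = C[i][j]        # mirror across the diagonal
    return C
```

For example, with $A = \begin{pmatrix}1&2\\3&4\end{pmatrix}$ and the symmetric $B = \begin{pmatrix}2&1\\1&3\end{pmatrix}$, `aba_t(A, B)` returns `[[18, 40], [40, 90]]`, matching the full product $ABA^T$.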
However, without anything else special about $A$ or $B$, I am skeptical that any speedup you get will be more than a constant factor. In particular, it is very unlikely that computing $ABA^T$ can be done faster than computing $AB$. Algorithmically, this means that you won't change the order of magnitude of the run time of whatever you are using this for, so unless you are doing a great many such multiplications, looking for a clever optimization is unlikely to net you large gains.
However, on a related note, something that will speed up the multiplication, at least for large matrices, is Strassen's algorithm. It multiplies $2\times 2$ block matrices with 7 multiplications instead of 8, and applying this recursively multiplies large matrices in $O(n^{\log_2 7+o(1)})$ time instead of $O(n^3)$.
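For illustration, here is a sketch of the recursive scheme in pure Python, assuming $n$ is a power of two (real implementations pad to a power of two and fall back to the naive product below some cutoff size):

```python
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(X, Y):
    """Multiply n x n matrices, n a power of two, via Strassen's recursion."""
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    # Split X into quadrants [[A, B], [C, D]] and Y into [[E, F], [G, H]].
    A = [r[:h] for r in X[:h]]; B = [r[h:] for r in X[:h]]
    C = [r[:h] for r in X[h:]]; D = [r[h:] for r in X[h:]]
    E = [r[:h] for r in Y[:h]]; F = [r[h:] for r in Y[:h]]
    G = [r[:h] for r in Y[h:]]; H = [r[h:] for r in Y[h:]]
    # Strassen's seven products (instead of the naive eight).
    p1 = strassen(A, sub(F, H))
    p2 = strassen(add(A, B), H)
    p3 = strassen(add(C, D), E)
    p4 = strassen(D, sub(G, E))
    p5 = strassen(add(A, D), add(E, H))
    p6 = strassen(sub(B, D), add(G, H))
    p7 = strassen(sub(A, C), add(E, F))
    # Reassemble the four quadrants of the product.
    top_left = add(sub(add(p5, p4), p2), p6)
    top_right = add(p1, p2)
    bot_left = add(p3, p4)
    bot_right = sub(sub(add(p1, p5), p3), p7)
    return ([l + r for l, r in zip(top_left, top_right)] +
            [l + r for l, r in zip(bot_left, bot_right)])
```

The constant hidden in the recursion is large, which is why the naive product (or a tuned BLAS routine) is usually faster until the matrices are quite big.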