Say the basis vectors in $B_1$ and $B_2$ are $e_{B_1}^1, e_{B_1}^2, \dots, e_{B_1}^n$ and $e_{B_2}^1, e_{B_2}^2, \dots, e_{B_2}^n$ respectively.
From your definition of the matrix $S = (s_{ij})$, the first column is $e_{B_2}^1$ expressed in $B_1$, or in general, the $j$-th column is $e_{B_2}^j$ expressed in $B_1$, thus:
$Se_{B_2}^j = \sum_{i = 1}^n s_{ij} e_{B_1}^i$
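For concreteness, here is a small example (the bases are chosen purely for illustration, they are not from your question): take $\mathbb{R}^2$, let $B_1$ be the standard basis, and let $B_2$ consist of $e_{B_2}^1 = (1, 1)$ and $e_{B_2}^2 = (1, -1)$. Then $e_{B_2}^1 = 1 \cdot e_{B_1}^1 + 1 \cdot e_{B_1}^2$ and $e_{B_2}^2 = 1 \cdot e_{B_1}^1 + (-1) \cdot e_{B_1}^2$, so $S = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$.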
Say you have the $B_2$-coefficients of a vector $v = \sum_{k=1}^n \alpha_k e_{B_2}^k$. Applying $S$ to this vector yields:
$$\begin{align*} Sv &= S \left( \sum_{k=1}^n \alpha_k e_{B_2}^k \right) = \sum_{k=1}^n S (\alpha_k e_{B_2}^k) = \sum_{k=1}^n \sum_{i=1}^n \alpha_k s_{ik} e_{B_1}^i \\ &= \sum_{i=1}^n \sum_{k=1}^n \alpha_k s_{ik} e_{B_1}^i = \sum_{i=1}^n \left( \sum_{k=1}^n \alpha_k s_{ik} \right) e_{B_1}^i, \end{align*}$$ and you get the coefficients $\sum_{k=1}^n \alpha_k s_{ik}$ for the vector in the basis $B_1$.
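If you want to check this numerically, here is a minimal sketch in Python/NumPy, reusing the toy bases from the example above (the coefficient values are arbitrary, picked just for the demonstration):

```python
import numpy as np

# Example bases for R^2 (chosen for illustration): B1 is the standard
# basis, B2 = {(1, 1), (1, -1)}. The columns of S are the B2 basis
# vectors expressed in B1.
S = np.array([[1.0,  1.0],
              [1.0, -1.0]])

# A vector given by its B2-coefficients alpha = (2, 3),
# i.e. v = 2*(1, 1) + 3*(1, -1) = (5, -1).
alpha_B2 = np.array([2.0, 3.0])

# Applying S to the B2-coefficient vector yields the B1-coefficients.
coords_B1 = S @ alpha_B2
print(coords_B1)  # [ 5. -1.]  -- matches v written in the standard basis

# S^{-1} goes the other way: B1-coefficients back to B2-coefficients.
print(np.linalg.solve(S, coords_B1))  # [2. 3.]
```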
So, yes, $S$ maps the coefficient vector of a vector in the basis $B_2$ to its coefficient vector in the basis $B_1$. By definition, $S^{-1}$ does the opposite. In $S^{-1}LS$, you first map from basis $B_2$ to basis $B_1$, then apply the linear transformation, then change back from $B_1$ to $B_2$.
Say you have the transformation $L' = S^{-1}LS$, which maps vectors expressed in $B_2$ to vectors expressed in $B_2$. If you want to feed it vectors expressed in $B_2$ but get vectors expressed in $B_1$, you can just skip the changing-back part and use $LS$. If you want to feed it vectors in $B_1$ and get vectors in $B_2$, you don't need to change basis before applying $L$, so $S^{-1}L$ will do.
You can see this as changing basis once more, before or after. You know that your change-of-basis matrix is $S$, and $L'$ takes vectors expressed in $B_2$ and gives you vectors expressed in $B_2$. If you want the output to be in $B_1$, apply $S$ after $L'$, i.e. use $SL' = SS^{-1}LS = LS$.
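The same toy example from before can be used to check all three variants; the map $L$ below (a $90^\circ$ rotation in $B_1$-coordinates) is again chosen arbitrarily for illustration:

```python
import numpy as np

# Same illustrative setup as above: S changes B2-coordinates into
# B1-coordinates. L is some linear map expressed in B1; here a
# 90-degree rotation, chosen arbitrarily for the example.
S = np.array([[1.0,  1.0],
              [1.0, -1.0]])
L = np.array([[0.0, -1.0],
              [1.0,  0.0]])
S_inv = np.linalg.inv(S)

alpha_B2 = np.array([2.0, 3.0])   # a vector given in B2-coordinates

# L' = S^{-1} L S: B2-coordinates in, B2-coordinates out.
L_prime = S_inv @ L @ S
out_B2 = L_prime @ alpha_B2

# L S: B2-coordinates in, B1-coordinates out (skip the change back).
out_B1 = (L @ S) @ alpha_B2

# Check: converting the B2 result to B1 must give the same vector.
print(np.allclose(S @ out_B2, out_B1))  # True

# S^{-1} L: B1-coordinates in, B2-coordinates out.
beta_B1 = S @ alpha_B2            # same vector, now in B1-coordinates
print(np.allclose((S_inv @ L) @ beta_B1, out_B2))  # True
```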