First, note that if $A$ is row-stochastic (i.e. every row sums to 1 and all entries are non-negative) and symmetric, then it must also be column-stochastic, hence doubly stochastic. Also, since the entries are positive, the Perron-Frobenius Theorem is very useful here. Essentially, the matrix $A$ represents a discrete-time Markov chain with a finite state space, and understanding the effect of repeatedly multiplying $B$ by $A$ comes down to understanding whether the limit $\lim_{t \to \infty} A^t$ exists. This is a well-studied problem in the theory of Markov processes (resting on results from linear algebra), and the answer is that if the matrix is irreducible and aperiodic (equivalently, if there exists some $k$ such that $A^m$ has all positive entries for every $m \geq k$) then this limit exists.
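For concreteness, here is a minimal NumPy sketch; the $3 \times 3$ matrix below is just a made-up example with the stated properties, illustrating both the doubly stochastic observation and the convergence of $A^t$:

```python
import numpy as np

# A made-up symmetric matrix with positive entries whose rows each sum to 1.
A = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

print(A.sum(axis=1))  # row sums:    [1. 1. 1.]
print(A.sum(axis=0))  # column sums: [1. 1. 1.]  (symmetric => doubly stochastic)

# All entries are positive, so A is irreducible and aperiodic (primitive),
# and high powers of A settle down to a matrix whose rows are all identical.
print(np.linalg.matrix_power(A, 50))
```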
Given that in this case $A$ has all positive entries, the limit exists and is a matrix with identical rows, each equal to the (unique) probability vector $s$ satisfying $sA = s$. In other words, $\lim_{t \to \infty} A^t$ is the $n \times n$ matrix whose every row is the left eigenvector of $A$ associated with the eigenvalue 1, normalized so that its entries sum to 1. In fact, because $A$ is doubly stochastic, this eigenvector is simply the uniform vector $s = (1/n, \dots, 1/n)$.
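A sketch of how $s$ could be computed numerically, again with a made-up matrix; for a doubly stochastic $A$ it should come out uniform:

```python
import numpy as np

A = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])  # made-up positive, symmetric, row-stochastic matrix

# s solves s A = s, i.e. s is a right eigenvector of A.T for eigenvalue 1,
# rescaled so its entries sum to 1 (a probability vector).
eigvals, eigvecs = np.linalg.eig(A.T)
s = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
s = s / s.sum()
print(s)  # [1/3, 1/3, 1/3] -- uniform, as expected for a doubly stochastic A

# The limit of A^t is the rank-one matrix with every row equal to s.
limit = np.ones((3, 1)) @ s.reshape(1, -1)
print(np.allclose(np.linalg.matrix_power(A, 100), limit))  # True
```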
So, as you multiply $B$ by $A$ repeatedly, you will find that $B_t$ converges to a matrix with identical rows, where the entry in column $i$ is a weighted average of column $i$ of $B$, with weights given by $s$. Since $s$ is uniform here, each row of the limit is simply the ordinary (column-wise) average of the rows of $B$.
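And a sketch of the effect on $B$, assuming $B_t$ stands for $A^t B$ (the $B$ below is made up, with one row per state):

```python
import numpy as np

A = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])   # made-up positive, symmetric, row-stochastic A
B = np.array([[2.0, 0.0],
              [4.0, 1.0],
              [6.0, 5.0]])        # made-up B with one row per state of the chain

# B_t = A^t B: every row converges to s @ B, the s-weighted average of the rows
# of B.  Since s is uniform here, that is just the plain column-wise mean of B.
B_t = np.linalg.matrix_power(A, 100) @ B
print(B_t)             # each row is approximately [4.  2.]
print(B.mean(axis=0))  # [4.  2.]
```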
A great reference for you would be Carl Meyer's book Matrix Analysis and Applied Linear Algebra. Alternatively, you'll find the essence of what you need in the appendix of the following journal article: Golub and Jackson, "Naive Learning in Social Networks and the Wisdom of Crowds".