This is not too bad if you set it up as a discrete Markov chain. Let the possible states of the game be "S" (for start), then 1, 2, 3, 4, 5, and "6/A" (for accept). Start means you failed to get the monotone run of numbers and have to begin the game again from scratch. Accept means you rolled the 6 only after a monotone run of the other numbers.
You want to know the long-term probability of being in state 6/A, and from this you can easily figure out the expected number of die rolls (I'll leave that to you because it's satisfying).
But to get you started, you can write down a transition matrix for this game, with rows indexed by the current state and columns by the next state. (Reading off the row for state 2, for example: a 3 advances you to state 3, a 1 drops you back to state 1, repeating the 2 you just rolled leaves you where you are, and a 4, 5, or 6 sends you back to S, which is where the 3/6 = 1/2 comes from.)
$\begin{array}{| c || c | c | c | c | c | c | c |} \hline & S & 1 & 2 & 3 & 4 & 5 & 6/A \\ \hline S & 5/6 & 1/6 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 4/6 & 1/6 & 1/6 & 0 & 0 & 0 & 0 \\ \hline 2 & 1/2 & 1/6 & 1/6 & 1/6 & 0 & 0 & 0 \\ \hline 3 & 1/2 & 1/6 & 0 & 1/6 & 1/6 & 0 & 0 \\ \hline 4 & 1/2 & 1/6 & 0 & 0 & 1/6 & 1/6 & 0 \\ \hline 5 & 1/2 & 1/6 & 0 & 0 & 0 & 1/6 & 1/6 \\ \hline 6/A & 5/6 & 1/6 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{array}$
Now it's a matter of finding the appropriate eigenvector of this matrix, which tells you the long-term probabilities, and then figuring out how to turn those long-term probabilities into an expected number of rolls. Note that most numerical eigenvector routines compute only right eigenvectors, i.e., vectors $v$ with $Pv = \lambda v$. For a row-stochastic transition matrix $P$ like this one, you want the left eigenvector for the eigenvalue 1, i.e., the row vector $\pi$ with $\pi P = \pi$. So to get it numerically, first take the transpose of the transition matrix and then feed that to your linear algebra package.
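For concreteness, here is a minimal sketch of just that transpose-and-eigenvector step in NumPy (the array, which I call `P` here, mirrors the table above; picking the eigenvalue closest to 1 is simply a guard, since `eig` returns eigenvalues in no particular order):

    import numpy as np

    # Transition matrix; rows/columns ordered S, 1, 2, 3, 4, 5, 6/A.
    P = np.array([
        [5.0/6.0, 1.0/6.0, 0,       0,       0,       0,       0      ],
        [4.0/6.0, 1.0/6.0, 1.0/6.0, 0,       0,       0,       0      ],
        [3.0/6.0, 1.0/6.0, 1.0/6.0, 1.0/6.0, 0,       0,       0      ],
        [3.0/6.0, 1.0/6.0, 0,       1.0/6.0, 1.0/6.0, 0,       0      ],
        [3.0/6.0, 1.0/6.0, 0,       0,       1.0/6.0, 1.0/6.0, 0      ],
        [3.0/6.0, 1.0/6.0, 0,       0,       0,       1.0/6.0, 1.0/6.0],
        [5.0/6.0, 1.0/6.0, 0,       0,       0,       0,       0      ],
    ])

    # Right eigenvectors of P^T are left eigenvectors of P.
    eigvals, eigvecs = np.linalg.eig(P.T)

    # Take the eigenvector for the eigenvalue (numerically) equal to 1
    # and normalize it to sum to 1: this is the stationary distribution.
    idx = np.argmin(np.abs(eigvals - 1.0))
    pi = np.real(eigvecs[:, idx])
    pi = pi / pi.sum()

    print(pi)        # long-run probabilities of S, 1, ..., 6/A
    print(1.0 / pi)  # reciprocals, i.e., expected rolls between visits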
Added
After fixing the typos I had made in the above transition matrix (which is correct now), I used the following Python code to check what the eigenvector method yielded as a solution, and I translated your Monte Carlo simulation into Python to check numerically.
The result I got for the long-term stationary distribution of the game is $[ 0.79168, 0.16667, 0.03333, 0.00667, 0.00133, 0.000267, 0.0000444],$
and the reciprocals of these entries give the mean return times, i.e., the expected number of rolls between visits to each state: $[1.26, 6.0, 30, 150, 750, 3750, 22500]$ (there could be some roundoff error depending on how good NumPy's linalg package is; I'm not the one to ask about that). Note that the second entry, 6 rolls on average to see a 1, fits the intuition for a fair six-sided die, so that's comforting.
So according to this, the "analytical" answer is that it should take about 22500 rolls, on average, to complete a monotone sequence ending with a 6. My Monte Carlo simulation agrees: it gave a mean of 22609.618 rolls over 10,000 repetitions of the game (somewhat slow in Python, but that shouldn't be an issue in C++).
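As a quick sanity check on those numbers (my own back-of-the-envelope, not part of the code): every row of the matrix has $1/6$ in the column for state 1, each of states 2 through 5 receives probability only from its predecessor and from itself, and 6/A receives probability only from state 5, so the stationary equations collapse to

$$\pi_1 = \frac{1}{6}, \qquad \pi_k = \frac{1}{6}\pi_{k-1} + \frac{1}{6}\pi_k \;\Longrightarrow\; \pi_k = \frac{\pi_{k-1}}{5} \quad (k = 2,\dots,5), \qquad \pi_{6/A} = \frac{1}{6}\pi_5,$$

which gives $\pi_{6/A} = \frac{1}{6}\cdot\frac{1}{5^4}\cdot\frac{1}{6} = \frac{1}{22500}$, in agreement with the eigenvector computation.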
Here is the code for my solution:
    import numpy as np

    # Set up the transition matrix (rows/columns ordered S, 1, 2, 3, 4, 5, 6/A).
    tmat = np.asarray([
        [5.0/6.0, 1.0/6.0, 0, 0, 0, 0, 0],
        [4.0/6.0, 1.0/6.0, 1.0/6.0, 0, 0, 0, 0],
        [3.0/6.0, 1.0/6.0, 1.0/6.0, 1.0/6.0, 0, 0, 0],
        [1.0/2.0, 1.0/6.0, 0, 1.0/6.0, 1.0/6.0, 0, 0],
        [1.0/2.0, 1.0/6.0, 0, 0, 1.0/6.0, 1.0/6.0, 0],
        [1.0/2.0, 1.0/6.0, 0, 0, 0, 1.0/6.0, 1.0/6.0],
        [5.0/6.0, 1.0/6.0, 0, 0, 0, 0, 0]]).astype('float64')

    # Compute long-run probabilities and expected rolls with the eigenvalue method.
    w, v = np.linalg.eig(np.transpose(tmat))

    # Get the stationary distribution as the eigenvector corresponding to
    # eigenvalue 1 (eig does not sort eigenvalues, so pick the one closest to 1).
    # Take the real part to get rid of the complex notation, and divide by the
    # vector sum just in case roundoff error makes it not quite sum to 1.0.
    idx = np.argmin(np.abs(w - 1.0))
    stationary_probs = np.real(v[:, idx])
    stationary_probs = stationary_probs / np.sum(stationary_probs)
    print(stationary_probs)

    # Print the element-wise reciprocals, which are the expected
    # numbers of rolls between visits to each state.
    print(1.0 / stationary_probs)

    # Monte Carlo simulation to compare results.
    total_rolls = 0
    num_repeats = 10000
    iter_rolls = np.zeros((1, num_repeats))

    for ii in range(num_repeats):
        cur_roll = np.random.randint(1, 7)
        num_rolls = 1
        if cur_roll == 1:
            next_val = 2
        else:
            next_val = 1
        while next_val <= 6:
            prev_roll = cur_roll
            cur_roll = np.random.randint(1, 7)
            num_rolls = num_rolls + 1
            if cur_roll == next_val:
                next_val = next_val + 1   # advanced one step in the run
            elif cur_roll == prev_roll:
                pass                      # repeating the last roll changes nothing
            elif cur_roll == 1:
                next_val = 2              # a stray 1 starts a new run
            else:
                next_val = 1              # anything else sends you back to start
        iter_rolls[0, ii] = num_rolls
        total_rolls = total_rolls + num_rolls
        print("Finished iteration %d of %d" % (ii, num_repeats))

    print("Monte Carlo mean: %f" % np.mean(iter_rolls[0, :]))
As an aside, this linked book chapter does a better job explaining what's going on than almost any other reference I was able to Google. In particular, the section where it defines the mean recurrence time and proves that these times are the reciprocals of the stationary distribution entries is exactly the background relevant for problems like this one.
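In symbols, that result (standard for a finite, irreducible chain; the notation here is mine) reads

$$r_i = \mathbb{E}[\text{number of steps to return to state } i \mid X_0 = i] = \frac{1}{\pi_i},$$

and applying it to state 6/A is exactly the step that turns the stationary probability $1/22500$ into the expected $22500$ rolls.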
Wikipedia's treatment of this kind of common discrete Markov chain question is paltry and disappointing. If you want to "pay back" this answer, consider editing the Wikipedia entry on Stochastic Matrix to include this example. The only example currently there has a trivial absorbing state, which is totally unhelpful for someone learning, especially someone trying to learn how to model problems with Markov chains rather than just solve the pedagogical chains handed out by teachers.