I never learned what a Markov chain is, but from googling it seems that if there are finitely many states and each state has probabilities of jumping to other states, I can use a Markov chain.
The problem I'm working on is http://projecteuler.net/problem=227, "The Chase".
"The Chase" is a game played with two dice and an even number of players.
The players sit around a table; the game begins with two opposite players having one die each. On each turn, the two players with a die roll it. If a player rolls a 1, he passes the die to his neighbour on the left; if he rolls a 6, he passes the die to his neighbour on the right; otherwise, he keeps the die for the next turn. The game ends when one player has both dice after they have been rolled and passed; that player has then lost.
In a game with 100 players, what is the expected number of turns the game lasts?
Give your answer rounded to ten significant digits.
N = 100 people sit around a table, so I name them 1, 2, ..., 100 clockwise (or counterclockwise, it doesn't matter). Persons 1 and 51 have the dice at the beginning.
From the description, given N people, state (A, B) means persons A and B have the dice. From there the game can move to state (A+1, B), state (A, B+1), state (A+1, B+1), ..., state (A-1, B-1). There are 8 distinct next states in which at least one die changes its owner. For each state I can calculate all the next states and their probabilities.
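To make that concrete, here is a rough sketch of the one-turn transitions in Python (the function name next_states and the 0-based player labels are just my choices): each die goes left with probability 1/6, right with probability 1/6, and stays put with probability 4/6, and the two dice are rolled independently.

    from fractions import Fraction

    # One die: roll 1 -> pass left, roll 6 -> pass right, otherwise keep it.
    DIE_MOVE = {-1: Fraction(1, 6), 0: Fraction(4, 6), +1: Fraction(1, 6)}

    def next_states(a, b, n=100):
        """One-turn transition probabilities from state (a, b) with n players.

        Players are labelled 0..n-1 here so the table wraps around with % n.
        The dice are independent, so the joint probability of a pair of
        moves is the product of the two single-die probabilities.
        """
        dist = {}
        for da, pa in DIE_MOVE.items():
            for db, pb in DIE_MOVE.items():
                state = ((a + da) % n, (b + db) % n)
                dist[state] = dist.get(state, Fraction(0)) + pa * pb
        return dist

Of the nine combinations, the one where both dice stay put leaves the state unchanged (probability 16/36); the other eight are the distinct states mentioned above.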
Say probability(a, b) is the probability of finishing the game when persons a and b hold the dice. If a = b, which is the condition for the game to finish, probability(a, b) = 1. If a and b are different, I made a function to find the probability:
if a = b, return 1.
if not a = b, for each next-state reachable from (a, b), add up (probability of moving to that next-state) * (probability of ending the game from that next-state).
So this function recursively searches for whether the game can be ended from each state (roughly the sketch below).
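Written out in Python (finish_probability is just the name I'm giving it, and it reuses the next_states sketch from above), this is roughly what I mean:

    def finish_probability(a, b, n=100):
        """Probability that the game eventually finishes from state (a, b).

        A literal transcription of the recursion described above: if both
        dice are with the same player the game is over, otherwise sum over
        every next state weighted by its transition probability.
        """
        if a == b:
            return 1
        total = Fraction(0)
        # Note: nothing other than a == b ever stops the recursion.
        for (na, nb), p in next_states(a, b, n).items():
            total += p * finish_probability(na, nb, n)
        return total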
My concern is that the above function could get into endless recursion. For example:
Start with (1, 10). From this state the game can't end in one turn, so I search (0, 10), (2, 11), ... and keep searching until I hit a state (a, b) with a = b. Say in the middle of the search I end up in (4, 5); the game can end by moving to (5, 5). Moving from (4, 5) to (5, 5) has probability P, but with probability (1 - P) I have to keep searching.
I think there is something I'm missing about the probabilities, or I really don't understand Markov chains. Any help would be appreciated.