
I'm asking a general question, I think, but I couldn't find the answer myself. I have a system in which the next action depends on some variables, and those variables change over time. At first I thought of modeling this as a Markov chain, but I also need the transition probabilities attached to each state to vary.

Basically, there are 11 states I can be in, and my probability of moving from one state to another depends on the choices of all the other "players".

At the beginning, I start in one state, and at every instant I check whether too many players have chosen my state. If so, with some probability I switch to another state, and so on, until I find my best state.

Is that possible to do with a Markov chain, or do I have to look for something else?

In addition, I'd also like to see when my chain reaches an equilibrium, that is, when no player changes their state anymore.
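Here is a minimal sketch, in Python, of the dynamic I have in mind; the number of players, the crowding threshold, and the switching probability are made-up placeholders:

    import random

    N_STATES = 11
    N_PLAYERS = 30   # made-up number of players
    THRESHOLD = 4    # "too many" players on one state (made-up)
    P_SWITCH = 0.5   # chance of leaving an overcrowded state (made-up)

    # every player starts in a random state
    states = [random.randrange(N_STATES) for _ in range(N_PLAYERS)]

    def crowded(states):
        """Return the set of states holding more than THRESHOLD players."""
        return {s for s in range(N_STATES) if states.count(s) > THRESHOLD}

    def step(states):
        """One instant: players on overcrowded states switch with P_SWITCH."""
        over = crowded(states)
        return [random.choice([t for t in range(N_STATES) if t != s])
                if s in over and random.random() < P_SWITCH else s
                for s in states]

    # equilibrium: no state is overcrowded, so nobody moves anymore
    rounds = 0
    while crowded(states):
        states = step(states)
        rounds += 1
    print("equilibrium after", rounds, "instants")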

2 Answers


A convenient construction of a (finite) Markov chain $(X_n)$ is to let $X_{n+1}=\xi(X_n,U_n)$, for some deterministic function $\xi$ and a given i.i.d. sequence $(U_n)$ (say, uniformly distributed on $(0,1)$). But your question asks that $X_{n+1}=\xi(X_n,Y_n,U_n)$, for some process $(Y_n)$ describing the positions of the other players.
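To make this concrete, here is a small simulation sketch in Python with a made-up three-state transition matrix: $\xi$ maps $(x,u)$ to the next state by inverting the cumulative distribution of row $x$ at $u$:

    import bisect
    import random
    from itertools import accumulate

    # made-up 3-state transition matrix; row x is the law of X_{n+1} given X_n = x
    P = [[0.5, 0.3, 0.2],
         [0.1, 0.6, 0.3],
         [0.2, 0.2, 0.6]]

    def xi(x, u):
        """Deterministic update: invert the cumulative row P[x] at u in (0,1)."""
        return bisect.bisect_left(list(accumulate(P[x])), u)

    # simulate X_0, ..., X_10 from i.i.d. uniform innovations U_n
    x, path = 0, [0]
    for _ in range(10):
        x = xi(x, random.random())
        path.append(x)
    print(path)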

Since $(Y_n)$ is not i.i.d., a common strategy to save the day is to consider the process $(Z_n)$ defined by $Z_n=(X_n,Y_n)$. Then, everything depends on the structure of the process $(Y_n)$.

If $(Y_n)$ is itself a Markov chain based on a function $\eta$ and a given i.i.d. sequence $(V_n)$ independent from $(U_n)$, that is, if $Y_{n+1}=\eta(Y_n,V_n)$, then $(Z_n)$ is a Markov chain as well. This chain is based on the sequence of innovations $(W_n)$ defined by $W_n=(U_n,V_n)$ since $Z_{n+1}=\zeta(Z_n,W_n)$ for some function $\zeta$ that I will let you write down.
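(For the record, one valid choice under the conventions above is the componentwise update $$\zeta\bigl((x,y),(u,v)\bigr)=\bigl(\xi(x,y,u),\,\eta(y,v)\bigr),$$ which indeed yields $Z_{n+1}=(X_{n+1},Y_{n+1})=\zeta(Z_n,W_n)$.)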

This is the structure of a hidden Markov process: a state process, here $(Y_n)$, which is a Markov chain, and, coupled to the state process, an observation process, here $(X_n)$, which, for a given realization of the state process, is an inhomogeneous Markov chain. Then, as we saw above, the coupled process $(Z_n)$ defined by $Z_n=(X_n,Y_n)$ is a Markov chain but the observation process $(X_n)$ alone is not.

With no further hypothesis on the structure of $(Y_n)$, it seems difficult to say anything else. Note that one could simply assume that $(Z_n)$ is a Markov chain, and that this would allow $Y_{n+1}$ to depend on $X_n$ as well. For example, the dynamical structure $X_{n+1}=\xi(X_n,Y_n,U_n)$ and $Y_{n+1}=\eta(Y_n,X_n,V_n)$ could be summarized by $Z_{n+1}=\zeta(Z_n,W_n)$ as well.
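A minimal simulation sketch of this fully coupled case (the two-state updates $\xi$ and $\eta$ below are invented for illustration):

    import random

    def xi(x, y, u):
        """Invented update for X on {0, 1}: if the other player shares
        X's state, X switches with probability 0.7; otherwise X stays."""
        return 1 - x if x == y and u < 0.7 else x

    def eta(y, x, v):
        """Invented update for Y, symmetric but with probability 0.4."""
        return 1 - y if y == x and v < 0.4 else y

    def zeta(z, w):
        """Coupled update Z_{n+1} = zeta(Z_n, W_n) with W_n = (U_n, V_n);
        both coordinates read the full current state Z_n = (X_n, Y_n)."""
        (x, y), (u, v) = z, w
        return (xi(x, y, u), eta(y, x, v))

    z = (0, 0)
    for n in range(10):
        z = zeta(z, (random.random(), random.random()))
        print(n + 1, z)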


I do not understand your question completely, but see if this simple analogy helps. Consider a game where players $A,B$ move independently on the states $S = \{1,2\}$ with transition probabilities $P= \left( \begin{array}{cc} p_{11} & p_{12} \\ p_{21} & p_{22} \\ \end{array} \right) $ and
$Q= \left( \begin{array}{cc} q_{11} & q_{12} \\ q_{21} & q_{22} \\ \end{array} \right) $ respectively. For simplicity, let us assume $A$ begins on $1$ and $B$ begins on $2$, and say the game ends when both players occupy the same state. Suppose our goal is to find the expected time for the game to end. Then one way to approach this is to consider the Markov chain on the state space $\hat{S} = \{(1,1), (1,2), (2,1), (2,2)\} $ with the transition probabilities given by $\hat{P}= \left( \begin{array}{cccc} 1 & 0& 0 & 0 \\ p_{11}q_{21} & p_{11}q_{22} &p_{12}q_{21} &p_{12}q_{22}\\ p_{21}q_{11} & p_{21}q_{12} &p_{22}q_{11} &p_{22}q_{12}\\ 0 & 0& 0 & 1 \\ \end{array} \right).$ Our goal then reduces to finding the expected time for this Markov chain to hit the set $H=\{(1,1),(2,2)\}$ starting from $(1,2)$.
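For a concrete feel, here is a small numerical sketch in Python (the entries of $P$ and $Q$ are made up): by first-step analysis, the vector $t$ of expected hitting times from the transient states $(1,2)$ and $(2,1)$ satisfies $t=\mathbf{1}+Rt$, where $R$ is the sub-matrix of $\hat{P}$ restricted to those transient states.

    import numpy as np

    # made-up transition matrices for players A and B
    P = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
    Q = np.array([[0.2, 0.8],
                  [0.5, 0.5]])

    # R restricts the product chain to the transient states (1,2), (2,1)
    R = np.array([[P[0, 0] * Q[1, 1], P[0, 1] * Q[1, 0]],   # from (1,2)
                  [P[1, 0] * Q[0, 1], P[1, 1] * Q[0, 0]]])  # from (2,1)

    # first-step analysis: t = 1 + R t, i.e. (I - R) t = 1
    t = np.linalg.solve(np.eye(2) - R, np.ones(2))
    print("expected time to hit H from (1,2):", t[0])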

  • Not completely, because A and B change their probabilities according to each other's moves. (2012-01-31)