
My probability and statistics skills are just average. However, I am working on a project and I was wondering if any of you could help me understand which process would best model the following.

So basically there is a system that transitions between $n$ states, say $4$. The transitions are "linear" (I just made this term up; I don't know if it is the standard jargon), meaning that from state 1 I cannot jump directly to state 3: I have to go to state 2 first and then to state 3. Now the point is that I don't know what makes it change states.

So I was thinking of assuming that the time it takes to go from one state to the next is somewhat random.

Now, given that the passage from one state to another takes a random time, I would like to know the distribution of the total time it takes to reach the last state (say, state $4$).

For example, suppose $t_{ij}$ is the time to go from state $i$ to state $j$, and suppose $t_{12}=1\,\mathrm{s}$, $t_{23}=20\,\mathrm{s}$, $t_{34}=0.1\,\mathrm{s}$. Then the total time would be $T_{tot}=1+20+0.1=21.1\,\mathrm{s}$.

So I guess this total time is a random variable; how can I model its distribution?

I was thinking that I studied the Poisson process/distribution, which is indeed related to time, although it counts how many events happen in a given time interval, which is basically the opposite of what I want. I also recall that the exponential distribution describes the time between events. So is there a way to construct a model of this phenomenon?

So far my best ideas are indeed the ones above: Poisson, exponential, or a random walk. However, I don't know random walks that well, so it's very hard to follow what I find on the internet.

I'm sure people have done this before, because I guess this is how queues are modelled. So can you give me a starting point and the key ideas on how to proceed? I'm a second-year maths student, so my overall maths skills are good, although my probability skills aren't that refined.

Edit

How would this change if I allowed the system to go backwards through the states as well?

Also, what if I allowed it to go backwards, but defined some states as "checkpoints", so that after the particle reaches one it cannot go back beyond that checkpoint? In all the states after it, the particle can move forward or backwards freely, until it reaches another "checkpoint".

  • 0
    Poisson's exponential can be expanded as its series: $e^{A} = 1 + A + \frac{A^{2}}{2!} + \dots + \frac{A^{n}}{n!} + \dots$ The $k$th derivative gives the $k$th moment $\mu_{k}$, where you could use $\mu_{k} \pm \sigma^{k} = P(A)$. I think you'd get a spread of averages and the respective variances. (2017-02-11)
  • 0
    @Sam I don't understand. Can you expand this into an answer? If you do answer, please be complete and specify exactly how one should construct the model. I've put 100 reputation on this; I really need a good answer. (2017-02-13)

1 Answer

A Markov process could be what you want; more specifically, a continuous-time Markov process with a finite number of states. (The Poisson process is a continuous-time Markov process with countably many states, indexed by $\mathbb{N}$, so it might not satisfy your needs, as you only want $n$ states.)

It then follows from the Markov property that the distribution of the time between state switches is memoryless and thus has to be an exponential distribution (possibly with time- and state-dependent intensities).

You also said a particle can only move up, so the number of particles in the $n$ states can be described by an $n$-variate birth-and-death process. However, as you only want to know the time for one particle starting in state 1 to reach the last state, you will have to compute a sum of exponential random variables (more details on continuous-time finite-state Markov processes, a.k.a. continuous-time Markov chains, can be found here).

Example 1 Suppose you have $n$ states $\{S_1,\dots,S_n\}$ and particles in state $S_i$ can only reach state $S_{i+1}$. The time a particle spends in state $i$ is exponentially distributed, say with rate $\lambda_i$; after that time it goes from state $i$ to state $j$ with probability $P_{ij}$ ($=1$ if $j=i+1$ and $0$ otherwise). This completely determines the continuous-time Markov chain.
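A minimal Monte Carlo sketch of Example 1 (the rates $\lambda_1=1$, $\lambda_2=0.05$, $\lambda_3=10$ are illustrative, chosen so the mean holding times match the question's $1\,$s, $20\,$s, $0.1\,$s): the total time to reach the last state is a sum of independent exponential holding times, a so-called hypoexponential distribution, with mean $\sum_i 1/\lambda_i$.

```python
import random

def total_time(rates):
    """Time for one particle to pass through all states:
    a sum of independent exponential holding times."""
    return sum(random.expovariate(lam) for lam in rates)

random.seed(0)
rates = [1.0, 0.05, 10.0]  # lambda_i for states 1..3 (illustrative)
samples = [total_time(rates) for _ in range(100_000)]

mean = sum(samples) / len(samples)
theory = sum(1 / lam for lam in rates)  # E[T] = 1 + 20 + 0.1 = 21.1
print(mean, theory)
```

Sampling `total_time` many times, as above, gives you the whole distribution empirically, not just the mean; for distinct rates the density is also available in closed form as a mixture of the individual exponentials.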

Advantages of this kind of model include (among others):

  • it's simple
  • computations are easy
  • simulation of the process is fast
  • ...

If you want to know more in this direction, read up on

  1. Markov chains
  2. Markov processes
  3. Poisson point processes (and compound Poisson processes)
  4. birth-and-death processes
  5. population models

Answer to the edit Basically you can use the same model, but you have to be more careful about the transition matrix, in the sense that it describes the transition probability of a particle from one state to another at the occurrence of an event (usually the term transition matrix is reserved for discrete-time Markov chains).

In Example 1 above you have the following transition matrix ($n=4$): $$T_1=\left(\begin{array}{cccc} 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 1 \end{array}\right)$$ which means that the probability of moving up is always one, except in the last state, where the probability of remaining in state 4 is one. If you want to 'enable' your particles to move up and down or remain, you have to fill in the zero entries appropriately.

Example 2 A system that allows particles to move up and down with equal probability (or remain or move up in state 1, and remain or move down in state 4) has the following matrix $$ T_2=\left(\begin{array}{cccc} \frac{1}{2} & \frac{1}{2} & 0 & 0\\ \frac{1}{2} & 0 & \frac{1}{2} & 0\\ 0 & \frac{1}{2} & 0 & \frac{1}{2}\\ 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{array}\right)$$
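Example 2 can be simulated in the same spirit (a sketch, assuming for illustration a common event rate $\lambda=1$): at each event the particle jumps according to the row of $T_2$ for its current state, and we record the time at which state 4 is first reached.

```python
import random

# Jump matrix T2 from Example 2 (rows: current state 1..4).
T2 = [
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.5, 0.5],
]

def hitting_time(P, lam=1.0, start=0, target=3):
    """Simulate the continuous-time chain: exponential holding
    times with rate lam, jumps according to P (self-loops allowed);
    return the time at which the target state is first reached."""
    t, state = 0.0, start
    while state != target:
        t += random.expovariate(lam)
        state = random.choices(range(len(P)), weights=P[state])[0]
    return t

random.seed(1)
times = [hitting_time(T2) for _ in range(20_000)]
mean = sum(times) / len(times)
print(mean)  # first-step analysis gives E[T] = 12 for lam = 1
```

Note that, unlike in Example 1, the hitting time here is no longer a fixed sum of exponentials, since the number of jumps is itself random; simulation (or first-step analysis on the expected values) is the easiest way in.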

In this way you can also incorporate traps in your model (allowing particles to go up but not down). The transition matrix may also depend on time, e.g. until time $t_1$ you let $T_1$ govern your system and afterwards $T_2$.

If you have read up on Markov processes in continuous time, you might encounter the notion of an infinitesimal generator. $T_1$ and $T_2$ are not the infinitesimal generators of the processes in the examples, but you can obtain the generator from them and the intensity at which events occur: $A=\lambda (T_2-I)$ is the infinitesimal generator of the process in Example 2 if the rate at which a particle jumps is $\lambda$ (equal for all states). The intensity might also depend on time and on the state of the particle; then the infinitesimal generator is the matrix-valued function $$A(t)=\big(\lambda_i(t)\,(T_{ij}(t)-\delta_{ij})\big)_{i,j\in\{1,\dots,n\}}.$$
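As a quick sanity check (a sketch, again assuming a common event rate $\lambda=1$ as in Example 2), the generator $A=\lambda(T_2-I)$ can be built directly from the jump matrix; a valid generator has non-negative off-diagonal entries and every row summing to zero:

```python
LAM = 1.0  # common event rate, assumed for illustration

T2 = [
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.5, 0.5],
]

# Infinitesimal generator A = lam * (T2 - I): subtract 1 on the diagonal.
A = [
    [LAM * (T2[i][j] - (1.0 if i == j else 0.0)) for j in range(4)]
    for i in range(4)
]

for row in A:
    print(row, sum(row))  # each row of a generator sums to 0
```

The transition probabilities over a time interval $t$ are then given by the matrix exponential $e^{tA}$, which is where the generator earns its name.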

  • 0
    I would say perfect answer. I will just wait before accepting, so that I can see other answers and study these topics some more, to see if they are exactly what I was looking for. Really good answer though, thank you! (2017-02-11)
  • 0
    @Euler_Salter Thank you. It's not a complete answer, as there should be many more possibilities.. hope to see more.. (2017-02-11)
  • 0
    (nice name by the way, ahah) I hope so. The more original/simple the better! (2017-02-11)
  • 0
    This is correct: Poisson point process to extrapolate an entropy and Markov chains to compute. (2017-02-13)
  • 0
    @Sam what do you mean by "extrapolate an entropy"? (2017-02-13)
  • 0
    There is more information here: https://www.researchgate.net/publication/1780598_Guessing_Probability_Distributions_From_Small_Samples (2017-02-13)
  • 0
    @Sam thank you, I will have a look! (2017-02-13)
  • 0
    @Fckmth I've added an edit; would you be able to give me advice on that as well? Please tell me if my edit makes sense; if not, I'll reword it. (2017-02-14)
  • 0
    @Euler_Salter I will think about it (2017-02-15)
  • 0
    @Euler_Salter I edited my answer to accommodate your edit ;) (2017-02-16)
  • 0
    @Fckmath thank you!! I will comment if I don't understand certain passages :) (2017-02-16)