
My system is a simple foxes ($I$) vs rabbits ($P$) model given by: $$ \begin{cases} \frac{\mathrm{d}I}{\mathrm{d}t}=& \alpha_I+\lambda_IP- \gamma_II -\delta_IPI;\\ \frac{\mathrm{d}P}{\mathrm{d}t}=&\alpha_P+\theta_PP-\delta_PPI \end{cases} $$

with the parameter set $$ \begin{cases} \theta_P &=0.15 \\ \delta_P&=0.01 \end{cases}\quad\text{ and }\quad \begin{cases} \alpha_I&=0.4 \\ \gamma_I&=0.1 \\ \delta_I&=0.05 \\ \lambda_I&=0.05 \end{cases} $$ and a condition on the initial introduction of $D$ rabbits ($P$) over a specific timeframe $T$:

$\alpha_P = \begin{cases} &\dfrac{D}{T} &\mbox{if } t\leq T \\ \\ &0 & \mbox{otherwise.} \end{cases}$
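In code, this introduction term is just a piecewise function of time. A minimal R sketch (the function name `alphaP` and passing $D$ and $T$ as arguments are illustrative choices, not part of the model code below):

```r
# Piecewise introduction rate: D rabbits spread uniformly over [0, T],
# nothing introduced afterwards (alphaP and its argument names are illustrative)
alphaP <- function(t, D, T) {
  ifelse(t <= T, D / T, 0)
}

alphaP(c(0.5, 2), D = 10, T = 1)   # vectorised over t
```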

METHOD I'm using is: over any small timestep $\delta t\ll 1$, one of the following events can occur:

$I\to I+1$, with probability $\alpha_I+P\lambda_I$

$I\to I-1$, with probability $I\gamma_I+PI\delta_I$

$P\to P+1$, with probability $P\theta_P$

$P\to P-1$, with probability $PI\delta_P$
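A minimal R sketch of one step of this scheme (all names here are placeholders; note that the quantities above are really *rates*, so the per-step probabilities are those rates multiplied by $\delta t$, which must be small enough that they sum to less than 1):

```r
# One fixed-timestep update: each event fires with probability rate * dt
step_fixed <- function(I, P, pars, dt) {
  # per-event rates, in the same order as the rules above
  rates <- c(pars$aI + pars$lI * P,             # I -> I + 1
             pars$gI * I + pars$dI * P * I,     # I -> I - 1
             pars$aP + pars$tP * P,             # P -> P + 1
             pars$dP * P * I)                   # P -> P - 1
  probs <- rates * dt                           # per-step probabilities
  stopifnot(sum(probs) < 1)                     # dt must be small enough
  u <- runif(1)
  cum <- cumsum(probs)
  if      (u < cum[1]) I <- I + 1
  else if (u < cum[2]) I <- I - 1
  else if (u < cum[3]) P <- P + 1
  else if (u < cum[4]) P <- P - 1
  c(I = I, P = P)                               # unchanged with prob. 1 - sum(probs)
}

pars <- list(aI = 0.4, gI = 0.1, dI = 0.05, lI = 0.05,
             tP = 0.15, dP = 0.01, aP = 0.5)
s <- step_fixed(I = 20, P = 5, pars = pars, dt = 0.01)
```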

QUESTION: But is this true when $t\leq T$?

EDIT 18/04/2013:

Consider that $P$ is actually pathogens, and $I$ is actually immune cells in the human body. A literature search finds the following (where the inoculation time is $T$):

[figure from the paper: several stochastic runs of the pathogen population $P$ over time]

Pujol 2009 - The Effect of Ongoing Exposure Dynamics in Dose Response Relationships (free access)

  • Is the time $T$ fixed or is it random? If it is fixed, then the system is just a deterministic hybrid system. If it is random, then the system is a piecewise-deterministic Markov process and the methods are a bit different. (2012-02-09)
  • That's interesting. Initially we can think it would be fixed; however, rabbits could be introduced at random intervals. Could you clarify your last statement please? (2012-02-09)
  • I changed the format a bit, hope that you're ok with it. I meant that if $T$ is a pre-specified time, how do you define the probabilities? If it is stochastic, what is the distribution? (2012-02-09)
  • I like it better now, it is much more readable, thanks! In the first case, where $T$ is predefined, can the transition probabilities not be calculated as the ones given above? E.g. $\text{Prob}(I+1|I)=\alpha_I+P\lambda_I$? Or is this incorrect? If $T$ is random, let's assume a uniform distribution $T\sim U(0,1)$. (2012-02-09)
  • Why don't you just put $\alpha_P+P\theta_P$ for the third transition probability? (2012-02-09)
  • @Raskolnikov Yes, but $P$ must increase by $\alpha_P$ every timestep during $t\leq T$, not by $+1$ each time. Do you think this is correct? (2012-02-09)
  • No, only the expected value has to increase by $\alpha_P$ every time step. The stochastic variable $P$ will make discrete jumps. (2012-02-09)

2 Answers


You are trying to make a stochastic model corresponding to the deterministic model above. The master equation of that model is

$$\begin{eqnarray}\frac{d}{dt}\mathbb{P}(I,P) & = & \mathbb{T}(I,P|I-1,P)\cdot \mathbb{P}(I-1,P) - \mathbb{T}(I+1,P|I,P)\cdot \mathbb{P}(I,P) \\ & & + \mathbb{T}(I,P|I+1,P)\cdot \mathbb{P}(I+1,P) - \mathbb{T}(I-1,P|I,P)\cdot \mathbb{P}(I,P) \\ & & + \mathbb{T}(I,P|I,P-1)\cdot \mathbb{P}(I,P-1) - \mathbb{T}(I,P+1|I,P)\cdot \mathbb{P}(I,P) \\ & & + \mathbb{T}(I,P|I,P+1)\cdot \mathbb{P}(I,P+1) - \mathbb{T}(I,P-1|I,P)\cdot \mathbb{P}(I,P) \; . \end{eqnarray}$$

where the transition probabilities are

$$\begin{eqnarray} \mathbb{T}(I+1,P|I,P) & = & \alpha_I+P\lambda_I\\ \mathbb{T}(I-1,P|I,P) & = & I\gamma_I+PI\delta_I\\ \mathbb{T}(I,P+1|I,P) & = & \alpha_P+P\theta_P\\ \mathbb{T}(I,P-1|I,P) & = & PI\delta_P \end{eqnarray}$$

It takes a bit of work, but the expectation values $\mathbb{E}(I)=\sum_{I,P} I\cdot \mathbb{P}(I,P)$ and $\mathbb{E}(P)=\sum_{I,P} P \cdot\mathbb{P}(I,P)$ can be computed by multiplying the master equation by either $I$ or $P$ and then summing over all values that $I$ and $P$ can take. This gives you the following equations:

$$\begin{cases} \frac{\mathrm{d}\mathbb{E}(I)}{\mathrm{d}t}=& \alpha_I+\lambda_I\mathbb{E}(P)- \gamma_I\mathbb{E}(I) -\delta_I\mathbb{E}(PI) \\ \frac{\mathrm{d}\mathbb{E}(P)}{\mathrm{d}t}=&\alpha_P+\theta_P\mathbb{E}(P)-\delta_P\mathbb{E}(PI) \end{cases}$$

Note however that you get the expectation of the product $PI$ and not the product of expectations.
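To see where these terms come from, take for example the $P\to P\pm 1$ contributions to the equation for $\mathbb{E}(P)$: multiplying the master equation by $P$, summing over all states, and shifting the summation index in the gain terms leaves

$$\sum_{I,P}\big[(P+1)-P\big]\,\mathbb{T}(I,P+1|I,P)\,\mathbb{P}(I,P)+\sum_{I,P}\big[(P-1)-P\big]\,\mathbb{T}(I,P-1|I,P)\,\mathbb{P}(I,P)=\mathbb{E}(\alpha_P+\theta_P P)-\mathbb{E}(\delta_P PI)=\alpha_P+\theta_P\,\mathbb{E}(P)-\delta_P\,\mathbb{E}(PI)\;,$$

while the $I\to I\pm 1$ terms cancel after the same index shift, since those transitions do not change $P$.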

EDIT:

Here's my code in R for a run of the stochastic model

    ai<-0; gi<-0.1; di<-0.05; li<-0.05; tp<-0.15; dp<-0.01; ap<-0.5; T<-10;  # parameters
    plusI<-c(1,-1,0,0); plusP<-c(0,0,1,-1);  # state change for each of the 4 events
    Time<-c(0); I<-c(20); P<-c(5);           # initial conditions
    for (k in 1:800) {
      # total event rate; the introduction term ap is only active while Time[k] < T
      freq<-ai+li*P[k]+gi*I[k]+di*P[k]*I[k]+ap*(Time[k]<T)+tp*P[k]+dp*P[k]*I[k];
      t<-rexp(1,freq);                       # exponential waiting time to the next event
      Time<-c(Time,Time[k]+t);
      # pick which event occurs, with probability proportional to its rate
      sel<-sample(c(1,2,3,4),1,prob=c(ai+li*P[k], gi*I[k]+di*P[k]*I[k],
                                      ap*(Time[k]<T)+tp*P[k], dp*P[k]*I[k]));
      I<-c(I,I[k]+plusI[sel]);
      P<-c(P,P[k]+plusP[sel]);
    }

To explain the most essential part, I draw a time from an exponential distribution with parameter the sum of all the rates of the processes. Then, I decide which process takes place by sampling with a probability vector equal to the 4 rates of the processes. This way, I manage to simulate the stochastic process. Here's a run of it:

[plot: foxes vs rabbits, one run of the stochastic model]

I haven't tried to explore the state space in detail yet, but all choices of initial conditions I have tried lead to an exponential explosion of the rabbit population and a collapse of the fox population.

  • In that case, how does one calculate $\mathbb{E}(PI)$? You see, we're trying to solve these numerically and are interested in the probability of a given $(I,P)$ combination (at $t=T$) which leads to unrestricted growth of $P$. The final value of $P$ does not depend just on the initial $(I,P)$, but also on the length of $T$. So it seems I am computing the raw $P$ and $I$ values and not $\mathbb{E}(P)$, but for one 'run' should this not be the same? (2012-02-09)
  • No, they are not the same, but if you are solving this numerically, you can take several runs for the same initial conditions and average over them to get $\mathbb{E}(P)$. As for computing $\mathbb{E}(PI)$ analytically, you would use the procedure I described for $\mathbb{E}(P)$ and $\mathbb{E}(I)$, i.e. multiply the master equation by $PI$ and sum over all possible states. But this will lead to equations involving even higher-order moments, so you will get an infinite hierarchy of equations. One way to solve that would be to break the hierarchy somewhere. (2012-02-09)
  • But since you are working numerically, I guess that should not really concern you and you could apply the same numerical technique: just generate a lot of paths and average over them. (2012-02-09)
  • Ok, let me go back a few steps then, as I think I am missing something fundamental. I am generating a random number and adding or subtracting 1 from $P$ or $I$ depending on which transition probability it falls into. Since you say that the eventual values of $I$ and $P$ at $T$ are not the expectations, how do I convert these raw $I$, $P$ values to expected $(I,P)$ values? E.g. after $T$ time steps $\mathbb{E}(P)\simeq \alpha_P$ but $P$ will be at most $=T$, not $\alpha_P$. So how do I make the conversion? (2012-02-09)
  • Yes, you got the idea. Well, just think about a coin flip. How do you get from the actual experimental results to $1/2$ heads? You just add up all the cases of heads you had in different experiments and divide by the total number of experiments. This number will approach $1/2$. Same with $P(t)$, except you don't have just one number, but a path in time. But you still generate many paths, add them up and divide by the total number of paths to get an average path. There is a catch, however: this may not necessarily lead you to $\mathbb{E}(P)$. (2012-02-09)
  • Indeed, the coin-flipping process I gave as an example is a nicely behaved process with finite expectation and variance. Take however a Cauchy distribution, and the procedure I describe will never give you an expected value, simply because the Cauchy distribution does not have one. The equivalent in your problem would arise if the variance were to show unrestricted growth, for instance. If I have some more time later, I'll edit my post and put some numerics in there. (2012-02-09)
  • I'd really appreciate that. Can I illustrate my confusion with an example: the number of rabbits introduced ($D$) may be delivered by a truckload or a few at a time for $t\leq T$. Let's say $T=10$, and I arbitrarily chose $\delta t=0.01$. I am happy about calculating the expectation once I have some paths, but I cannot see how $P$ manages to reach $D$ in the timeframe $T$ unless $\delta t$ is altered to accommodate $D$ time steps. Should this be the case? (2012-02-09)
  • I would very much appreciate it if you could take another quick look at this, because I'm still slightly confused between $E(P)$ and the stochastic $P$. I'm not sure how to replicate the new literature graph if I'm only calculating the discrete stochastic jumps... Is it just a simple transformation or are they completely different animals? (2013-04-18)
  • Wow, you like resurrecting old posts. :D I'll have a look at this, but give me some time to remember what it was all about. (2013-04-18)
  • Hello HCAI, are you talking about the graph you included in the question? That is just $P$, not $E(P)$, but for different initial conditions. I've redone some simulations and arrive at the same type of curves as in the paper. (2013-04-21)
  • Hi Raskolnikov, yes, I was interested to understand how the authors of the paper arrived at their graph. They say that 66% of runs arrive at infection using the given initial conditions, so why isn't this represented by $E(P)$ as well as $P$? I.e. when calculating $E(P)$ we always get divergence to infinity and hence a 100% infection rate. What changes did you need to make to arrive at their graph? Regards, (2013-04-21)
  • OK, I'll make a completely new answer and I'll go more deeply into the meaning of $E(P)$ and $P$. (2013-04-22)
  • +1 to both answers. Note that, as far as confusions between random variables and their expectations are concerned, one might want to avoid using the symbols $I$ and $P$ in the master equation, in the transition probabilities and in the sums giving $E(I)$ and $E(P)$ (all before the "EDIT" mention in this answer). (2013-04-22)

I'll try to clear up the confusion you have about the meaning of $P$ and $E(P)$.

First, the meaning of $P$. $P$ is a stochastic variable. This means that $P$ is really a function from the space of all possible realizations of the stochastic process to a number (or, since $P$ is time-dependent here, a real function of time).

How does this connect with the picture? Each curve corresponds to one realization of the process. Let's call those $\omega_i$, with $i$ an index to label the realizations. So what is plotted is really $P(\omega_i)$ for several different $\omega_i$.

They then talk about the percentage of infections. Apparently, infection is defined as a situation where the number of pathogens increases indefinitely. So, all you have to do to determine that percentage is to count how many $\omega_i$ will have $P(\omega_i)$ diverging. Since it's impossible to do for all possible realizations, they only use a sample of them to estimate the percentage. If the sample is large enough, the estimate will be good.
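In R, that counting could look like the sketch below. Again, `run_path` is a stripped-down single-species stand-in for a full $(I,P)$ run, and the fixed threshold used as the "divergence" test is an illustrative choice; in practice one would pick a level well above any stable region of the real model.

```r
set.seed(1)

# Placeholder simulator: linear birth-death run, returns the final P
# (stands in for a full (I, P) Gillespie run; rates are made up)
run_path <- function(P0, b, d, t_end) {
  P <- P0; t <- 0
  while (P > 0) {
    t <- t + rexp(1, (b + d) * P)
    if (t > t_end) break
    P <- P + sample(c(1, -1), 1, prob = c(b, d))
  }
  P
}

# Estimate the infection probability as the fraction of sampled
# realizations omega_i whose P(omega_i) has "diverged" (crossed a threshold)
n_paths   <- 500
threshold <- 100                 # illustrative divergence criterion
finals    <- replicate(n_paths, run_path(P0 = 5, b = 0.3, d = 0.1, t_end = 20))
p_infect  <- mean(finals >= threshold)
```

Because the birth rate exceeds the death rate here, a large fraction of paths grows past the threshold and `p_infect` estimates that fraction.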

What about $E(P)$? $E(P)$ is not a function of the realizations $\omega_i$ like $P$ is. In fact, $E(P)$ is the mean of $P$ over all possible realizations. One could formally notate this as

$$E(P)=\sum_{\omega_i}P(\omega_i)\text{Prob}(\omega_i) \; .$$

But since by simulating we don't have access to all possible realizations and the probability distribution of those, we again just make an estimate by sampling. All you have to do is take the arithmetic mean over all sampled realizations.

But now you face a problem. Since some trajectories diverge, they will dominate the mean. A smarter thing to do might be to calculate two separate means: one for diverging trajectories and one for converging trajectories.

I hope this helps clear up the confusion somewhat. If you're still stuck, I think you should look more seriously into stochastic processes. May I advise you the following book:

N. G. van Kampen: Stochastic processes in physics and chemistry, North Holland 1981, 3rd edn., 2007, ISBN 0-444-89349-0

I want to stress one point again though: $E(P)$ is not necessary to compute the percentage of infections at all!

EDIT: My slightly altered code in R:

    ai<-0.4; gi<-0.01; di<-0.005; li<-0.05; tp<-0.15; dp<-0.01; T<-1; ap<-25/T;  # parameters
    plusI<-c(1,-1,0,0); plusP<-c(0,0,1,-1);  # state change for each of the 4 events
    Time<-c(0); I<-c(2); P<-c(10);           # initial conditions
    for (k in 1:10000) {
      # total event rate; the introduction term ap is only active while Time[k] < T
      freq<-ai+li*P[k]+gi*I[k]+di*P[k]*I[k]+ap*(Time[k]<T)+tp*P[k]+dp*P[k]*I[k];
      t<-rexp(1,freq);                       # exponential waiting time to the next event
      Time<-c(Time,Time[k]+t);
      # pick which event occurs, with probability proportional to its rate
      sel<-sample(c(1,2,3,4),1,prob=c(ai+li*P[k], gi*I[k]+di*P[k]*I[k],
                                      ap*(Time[k]<T)+tp*P[k], dp*P[k]*I[k]));
      I<-c(I,I[k]+plusI[sel]);
      P<-c(P,P[k]+plusP[sel]);
    }
  • This is extremely helpful and insightful indeed! I'm very grateful. I would be interested in exploring the state space with different parameters (i.e. some experimental ones that I have found). Would you be able to post the R code you used to calculate your runs? (2013-04-22)
  • It's the same as the one before, I just changed some parameters. (2013-04-22)
  • Many thanks indeed! This is truly fantastic! I have to admit that I'm a native Matlab user, so how would you run, say, 500 runs and plot, say, 10 random ones? I love R (or at least the concept) and this code runs faster than Matlab too, even without preallocating vector sizes for I and P. (2013-04-22)