
I want to model the server under load.

I'm using the following assumptions:

  • The server serves only one request at a time, and every request takes it exactly 100 ms to process.
  • Requests that arrive while the server is processing another request are placed into an unlimited queue and then processed in FIFO order.
  • Load is generated by "users", each of whom makes a request, waits for it to be served, then waits 5–15 s (uniform distribution) before making the next request.

The parameter I am most interested in is the mean time a user waits for their request. Ideally, I want to find a function f(n), where n is the number of users and f(n) is the mean waiting time.

Modeling this for n=1 is easy (f(1) = 0.1), as it is for two users: the probability that the second user makes a request while the first is being served is 0.01, and in that case the request lands uniformly within the other user's 100 ms service window, so the expected total is 50 ms residual wait plus 100 ms of service, i.e. 0.15 s. Hence f(2) = 0.99*0.1 + 0.01*0.15 = 100.5 ms.
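
For reference, here is a minimal discrete-event simulation sketch of this model (Python; the function name `simulate_mean_wait` and the parameter choices are just illustrative, not part of the question) that reproduces these two values:

    import random

    def simulate_mean_wait(n_users, sim_time=100_000.0, service=0.1, seed=0):
        """Crude event-driven simulation of the model above: one server,
        FIFO queue, each user thinks 5-15 s (uniform) between requests."""
        rng = random.Random(seed)
        # next_request[i] is the time at which user i will issue their next request
        next_request = [rng.uniform(5, 15) for _ in range(n_users)]
        server_free_at = 0.0            # time the server finishes its current request
        total_wait, n_requests = 0.0, 0

        while True:
            # pick the earliest pending request; serving in arrival order keeps the queue FIFO
            i = min(range(n_users), key=lambda k: next_request[k])
            t = next_request[i]
            if t > sim_time:
                break
            start = max(t, server_free_at)   # queue if the server is busy
            finish = start + service
            total_wait += finish - t         # waiting time as experienced by the user
            n_requests += 1
            server_free_at = finish
            next_request[i] = finish + rng.uniform(5, 15)   # think time before the next request

        return total_wait / n_requests

    print(simulate_mean_wait(1))   # ~0.100 s
    print(simulate_mean_wait(2))   # ~0.1005 s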

But I'm stuck when I try to model more users.

Any suggestions?

  • @Didier Piau - Users make requests every 5–15 seconds, so the average is 10 s between requests, and the probability is 0.1/10 = 0.01. Oh, but wait, the mean time between requests depends on the mean time to serve a request... and this is where the problems begin.

1 Answer


Have a look at Queuing Theory. Choose a model that fits your system (here is a good list; note there are other models apart from the ones listed in the book).

For example, let's use an M/G/k queue. The average delay/waiting time can be calculated as $Eg = \frac{C^2 + 1}{2} Em$, where $C$ is the coefficient of variation of the service time distribution; in your case it would be zero, since your service time is always 100 ms.

$Em = \frac{C(c, \frac{\lambda}{\mu})}{c\mu - \lambda} + \frac{1}{\mu}$

$C(c, \frac{\lambda}{\mu})$ is the probability that an arriving customer is forced to join the queue (all servers are occupied), referred to as Erlang's C formula:

$C(c, \frac{\lambda}{\mu}) = \frac{1}{1 + (1 - \rho)\frac{c!}{(c\rho)^c}\sum_{k=0}^{c-1}\frac{(c\rho)^k}{k!}}$

where $\rho = \frac{\lambda}{c\mu}$ is the server utilization and $c$ is the number of servers.
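
A small numeric sketch of these formulas (Python, standard library only; `erlang_c` and `mean_times` are illustrative names I chose, not standard API):

    from math import factorial

    def erlang_c(c, lam, mu):
        """Erlang's C formula: probability that an arriving customer has to queue."""
        rho = lam / (c * mu)                      # server utilization
        s = sum((c * rho) ** k / factorial(k) for k in range(c))
        return 1.0 / (1.0 + (1.0 - rho) * factorial(c) / (c * rho) ** c * s)

    def mean_times(c, lam, mu, cv=0.0):
        """Return (Em, Eg): the M/M/c mean response time and the estimate above."""
        em = erlang_c(c, lam, mu) / (c * mu - lam) + 1.0 / mu
        eg = (cv ** 2 + 1.0) / 2.0 * em
        return em, eg

Note that for a single server ($c = 1$) Erlang's C formula reduces to the utilization $\rho$, which is consistent with the numbers below.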

Let $duration$ be the total time of your observation/experiment, which is the sum of the request inter-arrival times. The request arrival rate is then $\lambda = \frac{\text{count of requests}}{duration}$. Let the average time it takes the server to service a request be $Ts$; the average service rate is then $\mu = \frac{1}{Ts}$ (requests/second). In this case, I generated 100 random numbers with values between 5 and 15 to simulate your request inter-arrivals:

    11  8  9  7 12  6 14 12 11  9  8  8 11 15 14 11
    15 15  9  5 12  8 10 10  6 12 13 11 10  8  6 10
    11  7  9  8 13  5  5 14 12 10  5  5 15 14 13  7
    15 12  6 11  9 10 11  9  5 12 14 10 12 11 11 11
     8  9  5 13 12 12  5 14  6 12  6  5 15 12 12 14
     8  7  8 15 11 13 10  6  5 11 10  8  8  5  8 13
    13 11  7  5

$duration = 990$ seconds

$count=100$ requests

$\lambda = 0.10101$ requests/s

$\mu = 10$ requests/s

$\rho = 0.010101$

$C(c,\frac{\lambda}{\mu}) = 0.010101$

$Em=0.10102$ seconds

$Eg=0.0505102$ seconds
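
Plugging these numbers in reproduces the values above (a sketch reusing the `mean_times` helper defined earlier; $c = 1$ is assumed since there is a single server):

    count, duration = 100, 990.0     # the 100 inter-arrival samples above sum to 990 s
    lam = count / duration           # 0.10101 requests/s
    mu = 1.0 / 0.1                   # 10 requests/s (service time is 100 ms)
    c = 1                            # a single server
    print(lam / (c * mu))            # rho = 0.010101
    print(mean_times(c, lam, mu, cv=0.0))   # (0.10102, 0.0505102) seconds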

The number of users (your $n$) has no direct relation to the model; the parameter that changes the behavior of the queue is the request arrival distribution, which may be affected by the number of users.