
I have a real-life situation that I think can be modeled with queueing theory.
This should be easy for someone in the field; any pointers would be appreciated.

Scenario:
There is a single queue and N servers.
When a server becomes free, the task at the front of the queue gets serviced.
The mean service time is T seconds.
The mean inter-arrival time between tasks is K * T, where K is a fraction < 1.
(Assume Poisson or Gaussian distributions, whichever is easier to analyze.)
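
(If I write the arrival rate as $\lambda = 1/(K\,T)$ and the per-server service rate as $\mu = 1/T$, I believe the per-server utilization in an N-server model would be $\rho = \lambda/(N\mu) = 1/(N K)$, so a steady state presumably requires $N K > 1$.)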

Question:
At steady state, what is the expected length of the queue, in terms of N and K?

Related Question:
What is the expected delay for a task to be completed?

Here is the real-life situation I am trying to model:
I have an Apache web server with 25 worker processes.
At steady state, there are 125 requests in the queue.
I want a theoretical basis to help me optimize resources and understand quantitatively how adding worker processes affects the queue length and the delay.

I know the single-queue, single-server case with Poisson arrivals (M/M/1) is well analyzed.
I don't know the more general solution for N servers.
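
In case it helps to pin down what I'm after, here is a rough Python sketch assuming the answer is the standard Erlang C result for an M/M/N queue; the values of T and K are placeholders for illustration, not measurements from my server:

    import math

    def erlang_c(c, lam, mu):
        # Probability that an arriving task has to wait (M/M/c Erlang C formula).
        a = lam / mu                      # offered load in Erlangs
        rho = a / c                       # per-server utilization, must be < 1
        if rho >= 1:
            raise ValueError("unstable: need N*K > 1")
        s = sum(a**k / math.factorial(k) for k in range(c))
        top = a**c / math.factorial(c) / (1 - rho)
        return top / (s + top)

    def mmc_queue(c, lam, mu):
        # Mean queue length Lq, mean wait Wq, and mean total delay Wq + 1/mu.
        pw = erlang_c(c, lam, mu)
        rho = lam / (c * mu)
        lq = pw * rho / (1 - rho)         # expected number of tasks waiting
        wq = lq / lam                     # Little's law
        return lq, wq, wq + 1 / mu

    # Placeholder numbers, NOT measured: T = 1 s, K = 0.041, N = 25 workers.
    T, K, N = 1.0, 0.041, 25
    lam, mu = 1.0 / (K * T), 1.0 / T
    for n in (N, N + 5, N + 10):          # what happens if I add workers?
        lq, wq, w = mmc_queue(n, lam, mu)
        print(f"N={n}: Lq={lq:.1f}, Wq={wq:.2f}s, total delay={w:.2f}s")

Running it for a few values of N is exactly the kind of what-if analysis I want a theoretical basis for.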

thanks in advance,
-- David Jones
dxjones@gmail.com

  • For future reference, from the FAQ: "Please don't use signatures or taglines in your posts. Every post you make is already "signed" with your standard user card, which links directly back to your user page. Your user page belongs to you — fill it with interesting information about your interests, links to cool stuff you've worked on, or whatever else you like!" – 2011-01-09

2 Answers