The following problem comes from a piece of software I am currently working on, but since it's partially a mathematical problem, I'll ask the question here.
In my application I have a fixed value (e.g. 1000) which needs to be divided into multiple different values. The problem is that the algorithm (in another application) that divides the value returns non-integer values, and the nature of my current application requires integer values.
If the starting value is divided into two values, this shouldn't be a problem in most cases. E.g. if 1000 is divided into 600.6 and 399.4, then I can safely round these to 601 and 399, which still sum up to 1000.
However, if the value is divided into more than two values, I cannot simply round the results, as this may cause an overflow or a slack. E.g. if 1000 is divided into:
- 300.6
- 300.6
- 398.8
Then rounding these values gives me 301, 301 and 399, which sums up to 1001.
In my application I could decide to round everything down, which would give me 300, 300 and 398. Although the sum (998) is not 1000 anymore, in my case a smaller value (slack) is more acceptable than a larger value (overflow).
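To make the two behaviours concrete, here is a minimal sketch of what I mean (Python is just for illustration, not necessarily the language my application uses):

```python
import math

target = 1000
parts = [300.6, 300.6, 398.8]   # the example split above; sums to 1000

# Rounding each part to the nearest integer can overshoot the target (overflow):
nearest = [round(p) for p in parts]
print(nearest, sum(nearest) - target)    # [301, 301, 399]  +1

# Rounding everything down never overshoots, but leaves slack:
floored = [math.floor(p) for p in parts]
print(floored, sum(floored) - target)    # [300, 300, 398]  -2
```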
Are there any known tricks in number theory that provide better rounding mechanisms, so that this problem can be minimized? E.g. by using information about the statistical distribution of the numbers/digits?
Notice that:
- I can't change the original division algorithm
- I can't sum up the values myself and manually adjust one of the input values (in my situation the starting value could be divided into hundreds, thousands or even millions of different values)
- I can only change the rounding method
Thanks in advance