
I have a report that I have generated on some data. One particular requirement of this report is that the value in every row be rounded to the nearest dollar. My boss is checking this report against another, similar report to see whether the numbers match up, but that report's requirement was to round to the nearest penny. Everyone understands that the numbers will differ to a degree.

Is there a way to mathematically calculate the range (+/-) within which we can expect the numbers to differ? I don't want to leave it to what just "feels about right."

  • Sorry to both posters who took the time to post an answer; I did not explain my query well enough. The situation is that, given two reports on the same dataset of financial data, one rounds the value of each row to the penny and the other rounds the value of each row to the dollar. So, for example, a row with a value of 1.5145 would round to 1.51 in one report and to 2 in the other. I was looking for an easy way to estimate the expected variance (if there is one). (2011-08-31)
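For a quick feel for the size of that variance, it can be estimated by simulation. A minimal sketch (Python here, since the question names no tool; the row values below are made up for illustration):

    # Simulation sketch: how far do dollar-rounded figures drift from
    # penny-rounded ones, per row and over a summed column?
    from decimal import Decimal, ROUND_HALF_UP
    import random

    def to_penny(x):
        return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

    def to_dollar(x):
        return x.quantize(Decimal("1"), rounding=ROUND_HALF_UP)

    random.seed(0)
    rows = [Decimal(str(round(random.uniform(0, 100), 4))) for _ in range(1000)]

    diffs = [to_dollar(v) - to_penny(v) for v in rows]
    print(max(abs(d) for d in diffs))  # per-row gap: never more than about $0.50
    print(sum(diffs))                  # summed gap: typically on the order of 0.29 * sqrt(n)

Each individual row can be off by at most about fifty cents, while the gap between the two summed columns typically grows like $0.29\sqrt{n}$ dollars for $n$ rows.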

2 Answers


As Ilmari Karonen says, if you round to the nearest penny, and then to the nearest dollar, the result should be the same as if you rounded directly to the nearest dollar.

If, on the other hand, you are only checking the sums of each row, then rounding differences may become apparent, and the more terms there are in each row, the more likely they are to occur. I once wrote a note on this: "May not sum to total due to rounding: the probability of rounding errors".
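To put a rough number on it, assume the fractional parts of the $n$ terms in a row are approximately uniform and independent (an assumption for this estimate, not a claim from the note). Each dollar-rounded term then carries an error uniform on $[-\tfrac12,\tfrac12]$, so

$$\operatorname{Var}(e_i)=\int_{-1/2}^{1/2}t^2\,dt=\frac{1}{12},\qquad \sigma_{\text{sum}}=\sqrt{\frac{n}{12}}\approx 0.29\sqrt{n}\ \text{dollars},$$

and the rounded total typically lands within about $\pm 0.58\sqrt{n}$ dollars (two standard deviations) of the exact total.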

  • Thanks, Ilmari. I think that was what I was looking for. Unfortunately, most of it was over my head. LOL. (2011-08-31)

If you round to the nearest penny, and then to the nearest dollar, the result should be the same as if you rounded directly to the nearest dollar.

In other words, the reports should match if and only if the numbers are the same after rounding to the nearest dollar.

Or do I misunderstand something?
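If it helps, that check is easy to mechanize. A minimal sketch (Python assumed; the names penny_report and dollar_report are illustrative, not from the original reports):

    # Re-round each penny figure to the nearest dollar and compare it with
    # the corresponding figure from the dollar report.
    from decimal import Decimal, ROUND_HALF_UP

    def rounds_match(penny_report, dollar_report):
        return all(
            p.quantize(Decimal("1"), rounding=ROUND_HALF_UP) == d
            for p, d in zip(penny_report, dollar_report)
        )

    # The example from the comments: 1.5145 -> 1.51 (penny), and 1.51 -> 2 (dollar).
    print(rounds_match([Decimal("1.51")], [Decimal("2")]))  # True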