As the title states: is there a way to determine the precision and rounding method a system uses, based on the margin of error in a compound interest calculation?
The question is based on the following scenario: a financial system projects the APY of a certificate given the principal amount and interest rate, and can easily produce the certificate's future value. The problem: there is no hard convention on the number of decimal places or the rounding method the system uses when compounding. When I calculate the value manually, I always seem to be pennies off from what the system projects. This is further complicated by inherent problems with the data types the system may use internally (floating-point types are known to produce inaccuracies). Still, it would take forever to try every combination of methods by hand: 2 decimal places rounded up, 4 decimal places truncated, 3 decimal places rounded down, etc. Thanks in advance.
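For what it's worth, one way to avoid trying combinations by hand is to brute-force them programmatically. Below is a minimal sketch in Python using the `decimal` module (to sidestep binary floating-point error). All the concrete numbers here are placeholder assumptions: `principal`, `rate`, `periods`, and the `observed` value would need to be replaced with the real certificate terms and the figure the system actually reported. For illustration, `observed` is generated by pretending the system compounds at 4 decimal places with half-up rounding.

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN, ROUND_DOWN, ROUND_UP

def compound(principal, annual_rate, periods, places, mode):
    """Compound period by period, rounding the running balance
    to `places` decimal places with the given rounding mode."""
    quantum = Decimal(1).scaleb(-places)  # e.g. places=4 -> Decimal('0.0001')
    balance = principal
    for _ in range(periods):
        balance = (balance * (1 + annual_rate / periods)).quantize(
            quantum, rounding=mode)
    return balance

# Placeholder certificate terms -- substitute the real ones.
principal = Decimal("10000.00")
rate = Decimal("0.05")   # 5% annual
periods = 12             # monthly compounding for one year

# Placeholder for the system's reported future value. Here we fake it by
# assuming the system used 4 places, half-up (unknown in practice).
observed = compound(principal, rate, periods, 4, ROUND_HALF_UP)

# Brute-force every plausible (decimal places, rounding mode) combination
# and keep the ones whose result matches the observed value to the penny.
modes = {"half-up": ROUND_HALF_UP, "half-even": ROUND_HALF_EVEN,
         "down": ROUND_DOWN, "up": ROUND_UP}
cent = Decimal("0.01")
matches = [(places, name)
           for places in range(2, 7)
           for name, mode in modes.items()
           if compound(principal, rate, periods, places, mode).quantize(cent)
              == observed.quantize(cent)]
print(matches)
```

Several combinations may match a single observed value to the penny; comparing against a few different certificates (different principals, rates, or terms) should narrow the candidates down quickly.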