It depends largely on what exactly you are measuring and where the errors come from.
You are calling these 'margins of error', which may be colloquial usage (or may be a misuse of the term). My guess is that these errors come from uncertainty in measured quantities (that appears to be the case from your examples). In general, then, you would want to work with relative uncertainties and follow the rule that uncertainty can only grow as a calculation proceeds.
Say, for example, that you have $R = 1$ and $\delta R = 0.2$. Then the relative uncertainty is $0.2 / 1 = 20\%$. So instead of reporting the uncertainty as an absolute magnitude, you would report the value as $R = 1 \pm 20\%$.
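As a quick sketch of that conversion (variable names here are mine, not from the question):

```python
# Convert an absolute uncertainty into a relative (percentage) one.
R = 1.0        # measured value of R
dR = 0.2       # absolute uncertainty in R

rel_R = dR / R # relative uncertainty as a fraction

print(f"R = {R} \u00b1 {rel_R:.0%}")  # R = 1.0 ± 20%
```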
With this in hand, work out the arithmetic as usual, ignoring all the uncertainties. In our case (taking $V_2 = 1 \pm 0$, since its exact value is not important here), this looks like:
$$
V_1 = V_2\cdot \frac{1}{4\cdot1} = \frac{1}{4}
$$
So we have $V_1 = 0.25$. But recall that each $R$ carried a $20\%$ uncertainty. The general rule of thumb is that, for multiplication and division, relative uncertainties add, so we must be $40\%$ uncertain about the final result.
Since $0.4 \cdot \frac{1}{4} = \frac{1}{10}$, you would report $V_1 = \frac{1}{4} \pm \frac{1}{10}$.
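The whole calculation can be sketched in a few lines. This is just the add-relative-uncertainties rule of thumb from above, not a rigorous (quadrature-based) propagation; the names are mine:

```python
# Propagate relative uncertainties through the divider calculation.
R, dR = 1.0, 0.2         # each resistance: 1 ± 0.2 absolute
V2 = 1.0                 # taken as exact (± 0) for this example
rel_R = dR / R           # 20% relative uncertainty per R

V1 = V2 * 1 / (4 * R)    # nominal result: 0.25
# Rule of thumb: for multiplication/division, relative uncertainties add.
rel_V1 = rel_R + rel_R   # two uncertain R's enter -> 40%
dV1 = rel_V1 * V1        # back to an absolute uncertainty: 0.1

print(f"V1 = {V1} \u00b1 {dV1}")  # V1 = 0.25 ± 0.1
```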
For some more reading on mathematics using uncertainties check out this quick reference:
http://web.uvic.ca/~jalexndr/192UncertRules.pdf
If, in fact, you were talking about true margins of error (i.e. statistically calculated quantities, not measurement uncertainties), then this math can be ignored entirely!