After the comment thread, I believe that "the decimal byte ring" must mean arithmetic modulo 256.
This is the same as ordinary arithmetic, except that all operands and all intermediate and final results are reduced mod 256. Therefore
$ 5 + (-175+222)\times 13 = 5 + (81+222)\times 13 $ because $-175 \bmod 256 = 81$ (in contrast to the remainder operator of many programming languages, the modulus operation here always produces a number between 0 and 255);
$ \cdots = 5 + 47 \times 13 $ because $81+222=303$ and $303 \bmod 256=47$;
$ \cdots = 5 + 99 $ because $47\times 13 = 611$ and $611\bmod 256=99$;
$ \cdots = 104 $ since $104$ is already reduced mod 256.
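A short Python sketch of the same step-by-step reduction (Python's `%` conveniently maps negative operands into the range 0..255, matching the mathematical convention above):

```python
M = 256

# Reduce every operand and every intermediate result mod 256.
a = -175 % M          # 81: Python's % already returns a value in 0..255
s = (a + 222) % M     # 303 mod 256 = 47
p = (s * 13) % M      # 611 mod 256 = 99
r = (5 + p) % M       # 104
print(a, s, p, r)     # 81 47 99 104
```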
Now, instead of this we could also just use ordinary arithmetic and take mod 256 once and for all at the end:
$5+(-175+222)\times 13 = 5+47\times 13 = 5+611 = 616$ and then $616\bmod 256=104$ again. This works because of the general rules
$$ \bigl((a\bmod N)+(b\bmod N)\bigr) \bmod N = (a+b)\bmod N $$
$$ \bigl((a\bmod N)-(b\bmod N)\bigr) \bmod N = (a-b)\bmod N $$
$$ \bigl((a\bmod N)(b\bmod N)\bigr) \bmod N = ab\bmod N $$
In practice one will often just reduce mod 256 at strategic places in the computation where the intermediate results would otherwise get impractically large.
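The two strategies can be checked against each other in Python; the spot-check of the general rules below uses random inputs, which is an illustration rather than a proof:

```python
import random

M = 256

# Reduce once at the very end ...
late = (5 + (-175 + 222) * 13) % M

# ... versus reducing after every single operation.
early = (5 + ((-175 % M + 222) % M) * 13 % M) % M
print(late, early)  # both 104

# Spot-check the three general rules for random a, b.
for _ in range(1000):
    a = random.randint(-10**6, 10**6)
    b = random.randint(-10**6, 10**6)
    assert (a % M + b % M) % M == (a + b) % M
    assert (a % M - b % M) % M == (a - b) % M
    assert (a % M) * (b % M) % M == a * b % M
```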
Division is in general not defined modulo 256.
The reason why this is relevant is that we can think of it as arithmetic on numbers where we only remember the bottommost 8 bits of the binary representation. This happens fairly often in practice when we do arithmetic on 32- or 64-bit values but then store only 8 bits of the result in order to save space. Each combination of 8 remembered bits conceptually represents all of the infinitely many integers that end in those 8 bits; by convention we name this infinite set after the one member whose forgotten bits are all zeroes.
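In code, "keeping only the bottom 8 bits" is just masking with `0xFF`, which for nonnegative values is the same as reducing mod 256. A small illustration (the particular numbers are arbitrary):

```python
# Arithmetic on full-width integers, then keep only the low 8 bits.
x = 1_000_003
y = 99_991
full = x * y            # a result far wider than 8 bits
stored = full & 0xFF    # keep the bottom 8 bits, i.e. full % 256
assert stored == full % 256

# stored is the representative with all forgotten bits set to zero.
print(stored)
```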
The reason why division is not allowed is that it would bring some of the forgotten bits into view, and then the result would not be well-defined.
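This failure is easy to exhibit concretely: 4 and 260 have the same bottom 8 bits, but halving them gives results with different bottom 8 bits, so "divide by 2" is not a well-defined function of the remembered bits alone.

```python
M = 256

# 4 and 260 have the same bottom 8 bits ...
assert 4 % M == 260 % M

# ... but their halves do not: division exposes the forgotten bits.
print(4 // 2 % M, 260 // 2 % M)  # 2 vs 130 -- different residues
assert 4 // 2 % M != 260 // 2 % M
```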